Hours of research in one daily brief, on your terms.
Tell us what you need to stay on top of. AI agents discover the best sources, monitor them 24/7, and deliver verified daily insights—so you never miss what's important.
Recent briefs
Your time, back.
An AI curator that monitors the web nonstop, lets you control every source and setting, and delivers one verified daily brief.
Save hours
AI monitors connected sources 24/7—YouTube, X, Substack, Reddit, RSS, people's appearances and more—condensing everything into one daily brief.
Full control over the agent
Add/remove sources. Set your agent's focus and style. Auto-embed clips from full episodes and videos. Control exactly how briefs are built.
Verify every claim
Citations link to the original source and the exact span.
Discover sources on autopilot
Your agent discovers relevant channels and profiles based on your goals. You get to decide what to keep.
Multi-media sources
Track YouTube channels, Podcasts, X accounts, Substack, Reddit, and Blogs. Plus, follow people across platforms to catch their appearances.
Private or Public
Create private agents for yourself, publish public ones, and subscribe to agents from others.
Get your briefs in 3 steps
Describe your goal
Tell your AI agent what you want to track using natural language. Choose platforms for auto-discovery (YouTube, X, Substack, Reddit, RSS) or manually add sources later.
Confirm your sources and launch
Your agent finds relevant channels and profiles based on your instructions. Review suggestions, keep what fits, remove what doesn't, add your own. Launch when ready—you can always adjust sources anytime.
Sam Altman
3Blue1Brown
Paul Graham
The Pragmatic Engineer
r/MachineLearning
Naval Ravikant
AI High Signal
Stratechery
Receive verified daily briefs
Get concise, daily updates with precise citations directly in your inbox. You control the focus, style, and length.
🔥 TOP SIGNAL
Test suites + harnesses are becoming the real “interface” for deep agents: Cloudflare used an AI agent (Paramigen) plus Next.js tests to recreate a Next.js-compatible framework in a week, while LangChain says deep agents require multi-layer evals (single-step → full-turn → multi-turn) in clean, reproducible environments. Geoffrey Huntley’s framing clicks: the model is the “reasoning engine,” but the agent harness is the control plane that makes behavior safe and repeatable in production.
🛠️ TOOLS & MODELS
“Too-hard-for-LLM” coding tasks are drying up (gpt-5.3-codex vs opus 4.6): Theo is offering $500/problem for locally verifiable repos that current top models can’t solve, but says almost every verifiable problem sent so far gets solved by 5.3 Codex first try. If you’re sharing benchmarks, he wants a git-clone-from-scratch command sequence to reproduce the exact state.
Codex vs Claude (practitioner comparison): Peter Steinberger’s take: Codex “will read much more of your code and usually find a better solution,” while Claude is “more pleasant” but may claim it’s “100% production ready” and then bug out.
Augment’s codebase indexing → better retrieval inside Codex: Theo demos using Augment’s CLI to index a codebase, then switching to Codex where the model immediately uses Augment’s retrieval tool; he claims it finds “exactly what you need and almost nothing else” and returns results in <20 seconds vs 5–10 minutes previously.
Remote control, minus lock-in: Claude Code’s new “Remote Control” is described as Max/Pro-only and Claude-Code-only in Jason Zhou’s thread; his alternative is Tailscale + SSH for a private network workflow: “Phone terminal → SSH → dev machine,” with “no public IP / no port forwarding / no exposed services”.
Cursor cloud agent: desktop Windows VM from a single prompt: Jediah Katz says he got a Cursor cloud agent to run a Windows VM with full desktop display support “with just one prompt,” taking ~1.5 hours on a long-running harness, and then he can snapshot it as a reusable Windows base.
💡 WORKFLOWS & TRICKS
Deep-agent evaluation stack (LangChain’s production learnings)
- Write bespoke success criteria per datapoint (not one generic rubric).
- Run single-step evals to catch regressions at specific decision points.
- Add full-turn evals to validate end-to-end behavior.
- Add multi-turn evals that simulate realistic user interactions.
- Keep clean, reproducible test environments so runs are comparable.
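The layered eval stack above can be sketched as a tiny harness. Everything here is illustrative (the function names, the toy agent, and the dataset shape are assumptions for this sketch, not LangChain's API); it only shows how per-datapoint criteria and single-step vs full-turn checks fit together.

```python
# Hypothetical sketch of a layered agent-eval harness: per-datapoint
# success criteria, plus single-step and full-turn checks.

def eval_single_step(agent_step, datapoint):
    """Check one decision point (e.g. which tool the agent picked)."""
    return agent_step["tool"] == datapoint["expected_tool"]

def eval_full_turn(final_answer, datapoint):
    """Check end-to-end behavior against this datapoint's own criterion."""
    return datapoint["success_criterion"](final_answer)

def run_suite(agent, dataset):
    results = []
    for dp in dataset:
        step, answer = agent(dp["input"])  # run the agent on this datapoint
        results.append({
            "id": dp["id"],
            "single_step": eval_single_step(step, dp),
            "full_turn": eval_full_turn(answer, dp),
        })
    return results

# Toy agent + dataset, just to show the shape of the harness.
def toy_agent(query):
    step = {"tool": "search" if "find" in query else "calculator"}
    return step, f"answered: {query}"

dataset = [
    {"id": 1, "input": "find docs", "expected_tool": "search",
     "success_criterion": lambda a: a.startswith("answered")},
]
print(run_suite(toy_agent, dataset))
```

The key design point from the list above is that each datapoint carries its own `success_criterion` instead of one generic rubric.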
Treat your harness as the product (Huntley’s definition you can operationalize)
- The “agent harness” is the orchestration layer that manages prompts, tool execution, policy checks, guardrails, and loop control (continue/stop).
- Handy reference: https://latentpatterns.com/glossary/agent-harness.
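That definition can be made concrete with a minimal loop. This is a sketch under stated assumptions (`fake_model`, the tool table, and the guardrail are stand-ins, not any real framework's API): the harness, not the model, owns tool execution, policy checks, and the continue/stop decision.

```python
# Minimal agent-harness loop: the harness owns context construction,
# tool execution, guardrails, and loop control; the model only proposes.

MAX_STEPS = 5  # loop control: hard stop so the agent can't run forever

def fake_model(prompt):
    # Stand-in reasoning engine: asks for a tool once, then finishes.
    if "tool_result" not in prompt:
        return {"action": "tool", "name": "add", "args": (2, 3)}
    return {"action": "final", "answer": "2 + 3 = 5"}

TOOLS = {"add": lambda a, b: a + b}

def guardrail_ok(action):
    # Policy check before executing anything the model asked for.
    return action["action"] == "final" or action.get("name") in TOOLS

def run_harness(user_input):
    prompt = user_input
    for _ in range(MAX_STEPS):
        action = fake_model(prompt)
        if not guardrail_ok(action):
            return "blocked by policy"
        if action["action"] == "final":
            return action["answer"]
        result = TOOLS[action["name"]](*action["args"])  # tool execution
        prompt += f"\ntool_result: {result}"             # context construction
    return "stopped: step budget exhausted"

print(run_harness("what is 2 + 3?"))  # -> 2 + 3 = 5
```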
Narrow-agent teams beat “mega agents” (Riley Brown’s OpenClaw pattern)
- He reports that as he added more skills, agent dependability dropped: skill use timing got worse, context got “clouded,” and integrations/personalities got “jumbled”.
- His proposed sweet spot: teams of narrow agents with ~7–10 skills each (vs ~30+).
- Concrete coordination pattern: a journal agent in Telegram pings him ~every 30 minutes (sometimes skipping if nothing’s needed), logs useful context into Notion, and other agents read that shared journal (e.g., a newsletter agent drafting for a 300,000-person email list with its own conversion goals).
- Autonomy lever: narrow agents can run predictable cron-job loops (“three tasks every day”) because they’re optimizing for a small set of goals.
Agentic engineering tactics you can copy (Peter Steinberger)
- Treat it like a discussion, not a one-liner; guide it explicitly (“look here, look there”) and assume it starts with zero project context.
- Ask it to propose refactors/pain points when a change touches many parts of a codebase.
- After shipping a feature, ask: “Now that you built it, what would you do different?” to surface what it learned during implementation.
- Speed tricks: provide images as context when that’s faster than writing; use voice input for throughput.
Codebase hygiene under agent acceleration (Theo’s rules of thumb)
- “Tolerate nothing”: if a bad pattern makes it in, it multiplies—so delete it aggressively.
- Spend more time in “plan mode”: go back-and-forth until you have a markdown plan you can review, then tell the model to build.
- When an agent goes wrong, interrogate the path: ask what it’s doing and why, then eliminate bad examples if they’re coming from your own codebase/docs.
One-prompt infra bootstrap: Windows VM inside an agent box (Cursor cloud agent)
- Give a prompt that explicitly asks for a full Windows VM with desktop display support (not just CLI).
- Let it run under a long-running harness (reported ~1.5 hours).
- Snapshot the resulting VM as a reusable “Windows base”.
👤 PEOPLE TO WATCH
- Jediah Katz — practical proof of long-horizon agent setup work: a full desktop Windows VM inside a cloud agent, plus snapshotting for reuse.
- Theo (t3.gg) — running public, verifiable “too hard for LLM” challenges and documenting what actually breaks modern coding models (increasingly little).
- Peter Steinberger (OpenClaw) — high-signal “agentic engineering” habits + grounded tool comparison based on daily use.
- Riley Brown (vibecodeapp) — concrete multi-agent “team” design: narrow agents, shared memory via Notion, cron-based loops.
- LangChain team — pragmatic eval guidance from building/testing 4 production agents.
🎬 WATCH & LISTEN
1) Riley Brown — why “too many skills” makes agents worse (≈02:33–05:42)
He explains the failure mode (context clouding + jumbled integrations) and the practical alternative: 7–10 skills per agent, then build a team.
2) Peter Steinberger — OpenClaw project update + why he resists one-click installs (≈08:03–12:04)
He describes working to add maintainers and set up a foundation for donations/hiring, and argues that making installs too easy can hide real risks (he calls out prompt injection as unsolved).
3) Theo — codebase inertia + “slop” compounding under agent acceleration (≈18:17–20:02)
A concrete mental model: codebase quality peaks early, bad patterns spread faster than good ones, and “the models accelerate this”.
📊 PROJECTS & REPOS
- LangChain: “Evaluating deep agents — our learnings” (built + tested 4 production agents) — https://www.blog.langchain.com/evaluating-deep-agents-our-learnings/
- Agent harness glossary (Latent Patterns) — crisp definition of the orchestration layer that constructs context, executes tool calls, enforces guardrails, and controls loop continuation — https://latentpatterns.com/glossary/agent-harness
- Cloudflare’s ViNext (Next.js recreation via Paramigen + tests): reported one-week build, 1700 Vitest + 380 Playwright E2E tests, and a partial test coverage breakdown (13% dev / 20% E2E / 10% production out of 13,708 cases).
Editorial take: The leverage is shifting from “better prompts” to better harnesses + better tests—they’re what make agents reliable, repeatable, and (increasingly) portable across codebases.
Top Stories
1) U.S. government AI procurement whiplash: Claude gets targeted as OpenAI signs a classified deal
Why it matters: This is turning “AI governance” into an operational question about procurement levers (bans / supply-chain labels), contract terms, and whether technical oversight is enforceable in classified deployments.
- Anthropic was described as the first frontier lab on the Pentagon’s classified network, but refusing to budge on two safeguards: no mass domestic surveillance and no fully autonomous weapons.
- A weekend roundup claims Trump ordered federal agencies to cease using Claude, with Sec. Hegseth adding a “supply chain risk” tag.
- The same roundup also claims the U.S. military reportedly still used Claude to assist in strikes on Iran that weekend—hours after the ban—per the WSJ.
- OpenAI signed its own Pentagon/DoW deal the same night, describing a “more expansive, multi-layered approach” including cloud deployment, OpenAI personnel in the loop, and contractual protections.
- Sam Altman called the deal “definitely rushed” and said “the optics don’t look good,” while calling the Anthropic ban “a very bad decision” and urging the Pentagon to offer the same terms to all labs.
Consumer spillover (fast signal): Claude hit #1 on Apple’s App Store and Anthropic said daily signups broke records, while a “Cancel ChatGPT” movement spread across X/Reddit.
2) OpenAI’s reported $110B raise at a $730B valuation (and what it implies for infra alignment)
Why it matters: Capital scale is increasingly binding frontier model roadmaps to specific infrastructure and strategic partners.
A weekend roundup claims OpenAI raised $110B at a $730B valuation, led by Amazon ($50B) with Nvidia + SoftBank ($30B each), and that Amazon’s deal includes a $100B AWS expansion plus Trainium chip adoption; Microsoft “notably sat this one out”.
3) “GPT-5.4 is coming”: repeated Codex PR references + tiering signals
Why it matters: If release signals are accurate, OpenAI may be pairing model upgrades with new performance tiers (latency/priority) and possibly larger-context “stateful agent” behaviors—pushing more pressure onto inference/memory infrastructure.
- “GPT-5.4 is coming,” with mentions appearing for the second time in an OpenAI Codex pull request.
- Codex is expected to add a permanent standard service tier plus a premium fast tier.
- Another post says a pull request references a new fast mode enabling a priority tier (faster responses, lower latency) and may relate to a forthcoming $100 subscription tier.
- One thread frames a “GPT-5.4 leak” as 2M token context + persistent state → KV cache explosion, tying it to “Memory Wars” and hardware bifurcation (HBM/SRAM/optical interconnects).
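The "KV cache explosion" claim is easy to sanity-check with back-of-envelope arithmetic. The model shape below (80 layers, 8 KV heads, head dim 128, fp16) is an illustrative assumption, not a known GPT-5.4 configuration; even so, the per-sequence cache at 2M tokens lands in the hundreds of GiB.

```python
# Back-of-envelope KV-cache size for a long-context model.
# All model-shape numbers are illustrative assumptions.

def kv_cache_bytes(seq_len, n_layers=80, n_kv_heads=8, head_dim=128,
                   bytes_per_elem=2):
    # 2x for keys and values, cached at every layer for every token.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

for tokens in (128_000, 2_000_000):
    gib = kv_cache_bytes(tokens) / 2**30
    print(f"{tokens:>9,} tokens -> {gib:,.1f} GiB per sequence")
```

Under these assumptions the cache costs ~320 KiB per token, so a single 2M-token sequence needs roughly 610 GiB of KV storage, which is the kind of number that drives the HBM/SRAM/interconnect debate.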
4) “Democratized intelligence” in conflict: commercial satellite imagery + AI labeling military assets
Why it matters: AI-assisted analysis is expanding who can produce (and publish) high-confidence intelligence—potentially weakening secrecy around deployments.
A post describes a Chinese startup, MizarVision (Hangzhou, founded five years ago), publishing annotated commercial satellite imagery of Prince Sultan Air Base where an AI model labeled U.S. aircraft by type; the post lists the identified assets (e.g., 15 KC-135, 6 KC-46, 6 E-3 Sentry, etc.). Another thread argues this is what “democratization of intelligence” looks like, with commercial satellites photographing ramps frequently and AI labeling airframes quickly.
Separately, @jachiam0 notes that if the analysis is accurate (not independently verified), it’s evidence that secrecy is “foundationally weakening,” and calls for a debate on boundaries of the “Privacy-Productivity-Security” tradeoff.
5) The “computer use agents” wave: demos are getting operational, and the tooling is catching up
Why it matters: As agents move from demos to production, the bottlenecks are shifting to observability, evaluation, and controllable integrations.
- One post argues 2026 is shaping up to be “the year of computer use agents”.
- LangChain says production monitoring for agents needs a different playbook due to unbounded natural language input and sensitivity to subtle prompt variations; it published a guide on what to monitor and lessons from teams deploying at scale.
- LangChain also shared learnings on evaluating “deep agents” after building/testing 4 production agents, highlighting needs like bespoke per-test success criteria and single-step/full-turn/multi-turn evals in clean, reproducible environments.
Research & Innovation
Systems and inference efficiency are becoming the differentiator
Why it matters: Inference throughput, long-context handling, and agentic workloads are increasingly constrained by I/O, KV-cache movement, and serving architecture, not just model weights.
- DeepSeek DualPath (agentic inference I/O): A summary describes DualPath as addressing I/O bottlenecks in agentic inference (long contexts, many tool calls, bursty/high concurrency) by unlocking idle system capacity without new hardware. It reports nearly 2× throughput on a 660B production-scale model.
- Cognition SWE-1.6 preview: Cognition reports SWE-1.6 is post-trained on the same base as SWE-1.5, runs equally fast at 950 tok/s, and exceeds top open-source models on SWE-Bench Pro. It also says infrastructure scaling unlocked two orders of magnitude more compute than was used for SWE-1.5, and notes observed “overthinking”/excessive self-verification in dogfooding.
- vLLM-Omni v0.16.0: Released as a rebase onto upstream vLLM v0.16.0 with “major performance gains across audio, speech, image, and video inference pipelines”. Highlights include Qwen3-Omni with TTFP reduced 90% and MiMo-Audio at ~RTF 0.2 (11× faster than baseline).
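For reading numbers like "RTF 0.2": real-time factor is conventionally processing time divided by the duration of the audio produced or consumed, so values below 1 are faster than real time. A trivial sketch:

```python
# Real-time factor (RTF) as commonly defined for speech pipelines:
# processing time / audio duration. RTF < 1 means faster than real time;
# RTF 0.2 means 10 s of audio in 2 s of compute, i.e. a 5x speedup.

def rtf(processing_seconds, audio_seconds):
    return processing_seconds / audio_seconds

print(f"RTF: {rtf(2.0, 10.0)}")                       # RTF: 0.2
print(f"speedup vs real time: {1 / rtf(2.0, 10.0):.0f}x")  # 5x
```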
Developer workflow research: “AI context files” are early and decaying
Why it matters: If agentic coding depends on durable specifications, today’s OSS practice suggests the ecosystem hasn’t yet stabilized on how to maintain those artifacts.
A study scanning 10,000 repositories found only 466 (5%) adopted AI configuration/context files like AGENTS.md / CLAUDE.md / Copilot instructions. Of 155 AGENTS.md files analyzed, 50% were never modified after the initial commit and only 6% had 10+ revisions; the work notes there’s no standard structure and many files are “written once and left to decay”.
Other notable technical items
Why it matters: Many “small” kernel and loss-function improvements are aimed at cheaper, faster inference—especially for edge and throughput-sensitive deployments.
- Apple’s “cut cross entropy” is described as research that “makes a ton of sense for edge devices” (paper link provided).
- A CuTeDSL-based RMS norm kernel is reported as 2.13× faster than a Triton fused kernel for a given inference shape, in ~300 lines of code.
- ARENA curriculum update: 8 new open-source exercise sets on alignment science, interpretability, and AI safety, with hands-on content replicating key papers.
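For reference, the RMS norm op behind that kernel comparison is small enough to state exactly; the fused kernels compute the same math in a single pass over GPU memory. A minimal NumPy correctness baseline (not the CuTeDSL or Triton code):

```python
import numpy as np

# NumPy reference for RMS norm: normalize each row by its
# root-mean-square, then apply a learned per-channel scale.

def rms_norm(x, weight, eps=1e-6):
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return (x / rms) * weight

x = np.array([[1.0, 2.0, 3.0, 4.0]])
w = np.ones(4)
out = rms_norm(x, w)
print(out)  # each output row has RMS ~1 before the weight scale
```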
Products & Launches
“Memory” and switching costs: Claude adds an import workflow
Why it matters: Memory portability is becoming a product moat—and a privacy/UX question—when assistants retain long-lived user preferences.
Anthropic introduced a memory feature for Claude that lets users transfer context/preferences from other AI tools by copying a generated prompt and pasting into Claude’s memory settings; it’s available for all paid plans. Import link: https://claude.com/import-memory.
A user also shared an “export prompt” intended to ask other AIs to list stored memories/context in a single code block for migration.
Notion Custom Agents ships an open-weight model (MiniMax M2.5)
Why it matters: “Good enough” open-weight models can materially change agent economics in high-frequency workflows.
MiniMax says M2.5 is live as the first open-weight model inside Notion Custom Agents, optimized for lightweight, high-frequency agent workflows. Another post says it’s “a lot cheaper than other models” for simpler tasks.
Perplexity “Computer” continues to showcase end-to-end build execution
Why it matters: These demos indicate how quickly “agent + tool access” can compress software creation cycles—and shift value to evaluation/correctness.
- Perplexity Computer was shown autonomously building a “Pokemon cards as a finance app” concept: researching APIs, writing 5,000 lines of React + Python, debugging via browser devtools, deploying, and pushing to GitHub.
- Perplexity added GPT-5.3-Codex as a coding subagent inside Perplexity Computer.
- A marketing user claimed Perplexity Computer automated ~80% of their promo workflow (research, positioning angles, competitive scans, drafts, iterations).
Local-computer integration: “GeminiOS” bridges Google AI Studio to the OS
Why it matters: OS-level action introduces a different threat model; even with approvals, the integration surface area expands.
@matvelloso released GeminiOS, an Electron shell embedding Google AI Studio with a bridge to interact with the local OS via a simple permission system requiring user approval. Repo: https://github.com/matvelloso/GeminiOS. He explicitly warns this grants a website full access to your OS (see README disclaimers).
Tooling for agents in production
Why it matters: As agent usage grows, debugging/eval/telemetry tools become essential infrastructure.
- Opik: Open-source tool to debug, evaluate, and monitor LLM apps/RAG/agentic workflows with tracing, automated evals, and dashboards. Repo: https://github.com/comet-ml/opik.
- webhook-collector: Open-source utility that gives AI agents the ability to receive/inspect/debug webhooks; includes live site and repo.
- Qdrant relevance feedback tutorial: Incorporate lightweight feedback into similarity computation to improve search quality without retraining—positioned as useful for RAG/agents/semantic search.
Industry Moves
“Smaller teams + intelligence tools”: Block layoffs explicitly cite AI-enabled org design
Why it matters: This is a concrete example of leadership connecting AI tooling to headcount strategy—and it may become a pattern other companies copy.
Block laid off 4,000 employees (out of 10k), with Jack Dorsey stating they’re “not making this decision because we’re in trouble” and citing a shift where “intelligence tools… paired with smaller and flatter teams” enable a new way of working.
Consumer and enterprise competition: acquisitions and feature racing
Why it matters: The agent ecosystem is consolidating around “computer use,” memory, and distribution.
- Anthropic acquires Vercept.
- One post claims xAI’s MacroHard “heavily focuses on computer use”.
Robotics and “AI devices” as distribution
Why it matters: Hardware form factors can become distribution for persistent assistants (and for collecting interaction data).
- HONOR released its first humanoid robot.
- HONOR also promoted a “Robot Phone,” described as a phone that includes an AI robot where the pop-up camera acts as the AI’s eyes to enable a continuously active AI companion.
Local compute and shifting infra assumptions
Why it matters: If more serious workloads migrate to local clusters, it changes costs, privacy posture, and vendor lock-in.
One practitioner said they canceled all cloud LLM subscriptions and now run major tasks on a local cluster powered by 2× Mac Studios.
Policy & Regulation
Procurement actions are acting like regulation (without new AI laws)
Why it matters: “Bans” and “supply chain risk” labels can reshape the AI market quickly, including downstream contractors and cloud ecosystems.
- Trump reportedly ordered agencies to drop Claude, with Sec. Hegseth applying a “supply chain risk” tag.
- Altman argued enforcing the SCR designation on Anthropic would be “very bad for our industry and our country,” and said OpenAI moved quickly partly in hopes of de-escalation.
Contract language vs technical “safety stacks”: scrutiny continues, with new details highlighted
Why it matters: The debate is converging on a hard question: even if a vendor claims red lines, can they be enforced per-prompt—or only with aggregate monitoring and real authority?
- OpenAI states it reached an agreement with the Department of War to deploy advanced AI in classified environments and requested it be made available to all AI companies. OpenAI claims it has “more guardrails than any previous agreement” and links to its post: https://openai.com/index/our-agreement-with-the-department-of-war/.
- Critics argue the released excerpt is full of “escape hatches,” including conditional restrictions around autonomous weapons and surveillance language. One critique frames the snippet as effectively “all lawful use” with “window dressing,” referencing DoD Directive 3000.09 and alleging mass domestic surveillance loopholes. A former Army general counsel/undersecretary endorsed that interpretation as “right… IMO”.
- Multiple posts argue “cloud-only” does not prevent autonomous weapons use because a cloud model can still do high-level decision-making (tasking/target recommendation/mission planning) while local systems execute guidance.
A new constraint surfaced in discussion: one thread says OpenAI’s contract with the DoW excludes NSA Title 50 work (distinct from CYBERCOM Title 10), and that this legal-authority distinction is a “load-bearing” contract component affecting who can access which services.
Who can restrict government use?
Why it matters: The Anthropic/OpenAI situation is forcing more precise thinking about what’s actually possible in government contracting.
A government-contracts explainer argues AI companies can restrict government use “all the time,” depending on acquisition pathway, contract type, and terms (link provided).
Quick Takes
Why it matters: These are smaller signals that point to where capability, adoption, and governance debates may be heading next.
- Elon Musk quotes (via reposted thread): claims the AI community is off by “two orders of magnitude” on “intelligence density per gigabyte,” and predicts compounding “10x improvement per year” dynamics.
- Agent hardware mix: one post predicts CPU:GPU ratios could flip from ~1:2 or 1:4 today to 2:1, with some agentic workloads running entirely on CPUs; suggested timeline for datacenter manifestation: 12–18 months.
- Legal AI reality check: a thread argues legal AI is good for issue spotting and drafting “draft 1,” but not fine-tuned judgment where every word/comma matters; it analogizes relying on it for a multimillion-dollar deal to shipping a “10-minute vibe coded app” to the app store.
- Peer review: a post argues saving peer review from “AI slop” requires removing anonymous submissions and reviews.
- Open-source policy disagreement: one post calls lobbying against open-source models a “public good,” while another argues defenses must generalize to highly capable open-source AI anyway, since other countries will have strong models regardless.
Defense AI governance: contract language and vendor “red lines” tested
Anthropic faces a federal ban after refusing “unrestricted” military access
A TV segment reports that Anthropic’s government version of Claude has been “deeply embedded” in military intelligence and classified operations since last summer, but that the Defense Department demanded Anthropic hand over its AI without restrictions for lawful military use—and the company refused. The same segment says President Trump directed the U.S. government to halt all use of Anthropic’s AI and cancel more than $200 million in federal contracts, with Defense Secretary Pete Hegseth labeling Anthropic a “supply chain risk,” described as a first for an American company.
Anthropic CEO Dario Amodei reiterated two “red lines”: no mass surveillance of Americans and no fully autonomous weapons without human involvement, arguing current systems lack the human judgment needed to reduce risks like friendly fire or civilian harm. He called the designation “retaliatory and punitive,” said Anthropic plans legal action, and said the company remains at the negotiating table.
Why it matters: This is a high-stakes test of whether AI vendors can enforce usage boundaries when government customers demand broader latitude—and what happens when they refuse.
OpenAI’s Pentagon contract excerpt triggers scrutiny over loopholes
Commentary on an excerpt of OpenAI’s Pentagon contract says OpenAI describes three “red lines”—no mass domestic surveillance, no directing autonomous weapons, and no high-stakes automated decisions—arguing these are enforced through cloud-only deployment, a safety stack, and cleared OpenAI personnel oversight. OpenAI also claims the agreement “locks in” today’s laws/policies even if they change, though one critic notes that “freeze” language isn’t visible in the excerpt itself.
Multiple critics argue the published language contains escape hatches:
- The autonomous weapons restriction is framed as conditional on what “law/regulation/policy requires human control,” which can be revised later.
- “High-stakes” automated decisions appear restricted only when a decision already requires human approval under existing authorities.
- Surveillance prohibitions are criticized as still allowing “constrained” surveillance and broad use of public data, with key terms tied to directives/purpose and focused on private information.
- A domestic law-enforcement clause is criticized as permitting exceptions (“except as permitted…”) rather than establishing a hard ban.
Separately, one critique argues OpenAI’s “cloud-only” posture does not prevent military use: a cloud model could handle mission planning and targeting recommendations over satellite links, while a separate local system executes guidance and weapon control.
Why it matters: The debate is shifting from “principles” to exact contract wording—and whether safeguards are durable when they defer to policies and legal interpretations that can evolve.
Reliability and transparency concerns re-enter the conversation
Gary Marcus argued that the race to deploy AI widely is “grossly premature” because the technology “fundamentally lack[s] reliability”. In a separate post, he asked whether AI errors or hallucinations could be relevant when models are used for military “target identification,” and suggested the likelihood of getting “straight answers” is low.
Why it matters: As AI use expands into higher-stakes contexts, the pressure rises for both reliability and auditability—including clarity on what systems did, and why.
Safety capacity in government: UK AISI’s view from the inside
AISI: broad mandate, but few “nines” of reliability from current techniques
In an interview, UK AI Security Institute (AISI) Chief Scientist Geoffrey Irving describes an organization with close to 100 technical people (and ~250 total staff) working across research, evaluation delivery, diplomacy, policy, and operations. AISI’s mandate includes threat modeling; pre-release frontier model evaluation spanning biosecurity, cybersecurity, and loss of control; advising government on catastrophic risk reduction; funding independent frontier research; and global diplomacy.
Irving argues that theoretical understanding of ML remains “nascent,” and that no one should be highly confident in their mental models of how AI will unfold—even as models outperform many experts on security-related tasks with no clear reason to expect progress to stall. He also describes many recent “bad behaviors” as versions of reward hacking, a problem for which we lack strong theoretical or practical solutions, and says current safety techniques are unlikely to yield many “9s” of reliability (with a risk that multiple techniques could fail for correlated reasons).
Why it matters: AISI’s perspective frames a core tension: capabilities are advancing quickly, while high-confidence safety guarantees remain elusive—pushing more weight onto evaluation, red teaming, access controls, and non-model mitigations.
Red teaming reality: jailbreaking is harder, but still succeeds
Irving says it’s getting harder to jailbreak models, but AISI’s red team has never failed to do so; he also flags “eval awareness” as a growing issue. He describes voluntary cooperation with frontier developers as “working decently well,” but notes that not everyone participates.
AISI is also seeking to fund theoretical work (including information theory, complexity theory, and game theory) aimed at stronger guarantees—while noting these fields are only beginning to take AI seriously.
Why it matters: Even when safeguards improve, persistent jailbreakability and eval-awareness concerns make the case for continuous testing and for expanding the “toolbox” beyond today’s predominantly empirical methods.
Source: Situational Awareness in Government, with UK AISI Chief Scientist Geoffrey Irving — https://www.cognitiverevolution.ai/situational-awareness-in-government-with-uk-aisi-chief-scientist-geoffrey-irving
Telecom infrastructure: NVIDIA’s open telco model + AI-RAN push toward autonomy
NVIDIA releases an open telco reasoning model and agentic “blueprints”
NVIDIA announced an open, Nemotron-based large telco model (LTM) (reported as a 30B-parameter model) optimized to understand telecom terminology and reason through workflows like fault isolation, remediation planning, and change validation. NVIDIA also published a guide describing how telcos can fine-tune domain-specific reasoning models and build agents that execute network operations center workflows using structured “reasoning traces”.
Alongside the model, NVIDIA highlighted blueprints for intent-driven RAN energy efficiency (integrating VIAVI’s synthetic scenario generation and closed-loop simulation) and for telco network configuration with multi-agent orchestration (including enhancements with BubbleRAN). NVIDIA says these are released via GSMA’s Open Telco AI initiative as open resources.
Why it matters: This is a concrete “how-to” and model release for agentic operations in a heavily operational, safety-sensitive domain—where on-prem deployment, data control, and workflow reasoning are central requirements.
AI-RAN milestones and 6G positioning at Mobile World Congress
NVIDIA and Nokia announced AI-RAN collaborations with operators including T-Mobile U.S., SoftBank, and Indosat Ooredoo Hutchison (IOH), describing outdoor/over-the-air milestones in software-defined 5G using NVIDIA AI-RAN platforms. Reported highlights include an industry-first 16-layer massive MIMO trial (SoftBank) and a SynaXG demonstration described as the world’s first AI-RAN on FR2 bands, achieving 36 Gbps throughput and under 10 ms latency on a single NVIDIA GH200 server.
NVIDIA also points to ecosystem expansion (multiple vendors launching ARC-compatible products) and says it has open-sourced Aerial CUDA-accelerated RAN libraries and joined the OCUDU Ecosystem Foundation under the Linux Foundation. A related NVIDIA report says 77% of telecom respondents anticipate much faster deployment of AI-native RAN/6G architecture than the traditional 6G cycle.
Why it matters: The combination of field trials + open-source building blocks + partner hardware signals a coordinated push to make AI-native RAN a deployable platform, not just a concept-stage research area.
Sources:
- https://blogs.nvidia.com/blog/nvidia-agentic-ai-blueprints-telco-reasoning-models
- https://blogs.nvidia.com/blog/software-defined-ai-ran
Agents & “computer use” tooling: Perplexity’s Computer shows rapid end-to-end builds
Demos: from a Pokémon “finance app” to “vibe coding Notion”
Perplexity’s “Computer” agent was shown building a “Pokemon Cards Finance App,” after being prompted to build “Perplexity Finance but for Pokemon cards”. The post claims the agent independently researched APIs, wrote 5,000 lines of React + Python, debugged itself with browser devtools, and deployed/pushed the project to GitHub.
In a separate demo, a user claimed they “vibe code[d] Notion” with Perplexity Computer in “half an hour”. Perplexity CEO Aravind Srinivas added his own takeaway: “Pure software is rapidly becoming un-investable”.
Why it matters: Whether or not individual demos generalize, the emphasis is shifting to agents that can research, code, debug, and deploy in one loop—compressing time-to-prototype and challenging traditional assumptions about software effort and defensibility.
Product update: GPT‑5.3‑Codex added as a coding subagent
Perplexity announced that “GPT‑5.3‑Codex” is now available as a coding subagent inside Perplexity Computer. Srinivas also argued that the most valuable skills will be “agency” and the ability to use AI for leverage, and claimed people are already using Computer to solo-run D2C and consulting businesses.
Why it matters: Adding a dedicated coding subagent suggests “computer use” products are converging toward multi-agent toolchains, where specialized subagents take ownership of discrete parts of longer workflows.
Commentary on pace and adoption: hype, scaling, and who gets left behind
Andrew Ng: defuse AGI hype; focus on durable economic work
In a recent interview, Andrew Ng warned that excessive AI hype could lead to disappointment, a bubble collapse, and an “AI winter,” arguing that defusing AGI hype supports more sustainable growth. He said that by “any reasonable definition,” we won’t get AGI in 2026 (absent dramatically lowering the bar) and suggested we may be “decades” away.
Ng proposed a “Turing AGI test” involving a multi-day work-like evaluation: if an AI can do useful economic work as well as a skilled professional using standard tools, that would better match what the public imagines AGI to be. He also emphasized near-term value in building agentic workflows across economically important tasks (coding, compliance, legal, medical assistance, customer service).
Why it matters: Ng’s framing shifts attention from binary “AGI” claims to measurable ability to do reliable, multi-day work—and to the practical engineering of agentic workflows that deliver value before AGI arrives (if it does).
Musk and Andreessen: fast curves, changing moats, and company shape
Elon Musk argued that many in the AI community underestimate “superintelligence math,” claiming 10x yearly improvements and “two orders of magnitude” more “intelligence density per gigabyte” from algorithmic improvement alone on the same computer. Separately, Musk said Tesla’s AI4 computer is only ~1/4 the power of an H100, while still handling “the vast complexities of driving in the real world”.
Marc Andreessen (as summarized in a circulated thread) argued that AI moats are “genuinely unknown,” pointing to rapid catch-up across U.S. and Chinese companies and open source, and also suggested the “holy grail” founders are chasing is a one-person, billion-dollar outcome—rethinking what a company is.
Why it matters: Across these viewpoints, a common thread is strategic uncertainty: how fast capability compounds, where advantages accrue (models vs. apps vs. infrastructure), and how organizations—and careers—adapt as leverage per person rises.
Quick note
- Elon Musk posted a video labeled “Grok Imagine,” with no additional details in the post text.
Lenny's Podcast
Lenny Rachitsky
Brian Armstrong
Most compelling recommendation (career-shaping)
Bitcoin white paper (research paper) — Satoshi Nakamoto
- Link/URL (to the resource itself): Not provided in the source
- Recommended by: Brian Armstrong (Co-founder/CEO, Coinbase)
- Where it was recommended: My Conversation With Brian Armstrong, Co-founder & CEO of Coinbase (YouTube)
- Key takeaway (as shared): Armstrong describes reading the Bitcoin white paper in December 2010 after experiences with Argentina’s hyperinflation and seeing cross-border payout frictions at Airbnb; he says it sharpened his conviction that the world would benefit from a fast, cheap, permissionless, decentralized global financial system—and that this is the context in which he first read the white paper and began working on a Coinbase prototype nights and weekends.
- Why it matters: This is a direct example of a founder pointing to a single primary-source document as the catalyst for both a worldview shift (how broken global finance feels in practice) and an early product effort.
Additional founder recommendations (Brian Armstrong)
The Dip (book) — Seth Godin
- Link/URL: Not provided in the source
- Recommended by: Brian Armstrong
- Where it was recommended: My Conversation With Brian Armstrong, Co-founder & CEO of Coinbase (YouTube)
- Key takeaway (as shared): Armstrong describes The Dip as a clarifying lens for deciding what he cared enough about to pursue for decades; he used it to evaluate and ultimately shut down short-term efforts (e.g., real estate/other projects) and commit to tech entrepreneurship, including moving to Silicon Valley.
- Why it matters: It’s framed as a decision tool for long-horizon commitment—choosing the work you’d do even without near-term success.
PayPal Wars (book) — not specified in the source excerpt
- Link/URL: Not provided in the source
- Recommended by: Brian Armstrong
- Where it was recommended: My Conversation With Brian Armstrong, Co-founder & CEO of Coinbase (YouTube)
- Key takeaway (as shared): Armstrong points to the book’s account of early PayPal leaders (e.g., Peter Thiel, Max Levchin, David Sacks) pursuing ideas he describes as similar to Bitcoin—attempting to build a permissionless, global, internet-native form of money—before PayPal ultimately evolved into more of a checkout/credit-card alternative.
- Why it matters: It’s a historical reference for recurring attempts at internet-native money—and how business outcomes can diverge from original architectural ambitions.
The 4-Hour Workweek (book) — Tim Ferriss
- Link/URL: Not provided in the source
- Recommended by: Brian Armstrong
- Where it was recommended: My Conversation With Brian Armstrong, Co-founder & CEO of Coinbase (YouTube)
- Key takeaway (as shared): Armstrong cites Ferriss and The 4-Hour Workweek in the context of thinking about scalable/passive-income approaches; he connects it to early experiments like starting a tutoring company while in college.
- Why it matters: It’s presented as an early influence on how he approached side projects and leverage—before later committing fully to tech entrepreneurship.
The Coddling of the American Mind (book) — Jonathan Haidt
- Link/URL: Not provided in the source
- Recommended by: Brian Armstrong
- Where it was recommended: My Conversation With Brian Armstrong, Co-founder & CEO of Coinbase (YouTube)
- Key takeaway (as shared): Armstrong says he read Haidt’s book (and spoke with employees) to understand activist dynamics he perceived as moving from college campuses into the workforce—where some employees saw their role as “holding truth to power” and pushing broader societal reforms inside a company rather than focusing primarily on advancing the company mission.
- Why it matters: It’s an example of a CEO using a specific book to make sense of internal cultural dynamics and competing expectations of what “work” at a company should mean.
Design leader picks (Jenny Wen, Anthropic)
The Power Broker (book) — Robert Caro
- Link/URL: Not provided in the source
- Recommended by: Jenny Wen (Head of Design, Anthropic; ex-Figma Director of Design)
- Where it was recommended: The design process is dead. Here’s what’s replacing it. | Jenny Wen (YouTube)
- Key takeaway (as shared): Wen calls it an “aggressive recommendation” due to length (~1100 pages), but argues it’s worth reading end-to-end for long-arc thinking—seeing how someone changes over decades—and for understanding how a controversial figure (Robert Moses) “gets things done”.
- Why it matters: It’s positioned as an antidote to short attention spans and a practical study of power and execution over time.
Insomniac City (book) — Bill Hayes
- Link/URL: Not provided in the source
- Recommended by: Jenny Wen
- Where it was recommended: The design process is dead. Here’s what’s replacing it. | Jenny Wen (YouTube)
- Key takeaway (as shared): Wen recommends it as a “beautiful” and “ethereal” memoir tied to Oliver Sacks’ final years (via his partner Bill Hayes), and says it prompts reflection on mortality, love, and life.
- Why it matters: A non-work pick explicitly recommended for perspective-building rather than tactics—useful when you want a reset on what matters.
Sentimental Value (film) — Norwegian film; director also did “The Worst Person in the World” (as stated)
- Link/URL: Not provided in the source
- Recommended by: Jenny Wen
- Where it was recommended: The design process is dead. Here’s what’s replacing it. | Jenny Wen (YouTube)
- Key takeaway (as shared): Wen praises the film’s subtle pacing/writing and character relationships; she describes it as a family drama where the house functions “sort of [as] a character”.
- Why it matters: A craft-focused recommendation—highlighting storytelling mechanics (pacing, character dynamics, setting-as-character).
The Pitt — Season 2 (TV season) — not specified in the source excerpt
- Link/URL: Not provided in the source
- Recommended by: Jenny Wen
- Where it was recommended: The design process is dead. Here’s what’s replacing it. | Jenny Wen (YouTube)
- Key takeaway (as shared): Wen recommends it on the premise that “everybody just likes to watch people who are really competent at their jobs do something”.
- Why it matters: A reminder that observing high-competence work (even fiction/documentary-style entertainment) can be intrinsically motivating and instructive.
Pattern worth noting: long-arc thinking over short-term noise
Both Armstrong and Wen explicitly frame certain picks as tools for long-horizon clarity—Armstrong using The Dip to decide what to commit to for decades, and Wen recommending The Power Broker as an end-to-end study in long-arc change and execution over time.
Aakash Gupta
The community for ventures designed to scale rapidly | Read our rules before posting ❤️
Lenny Rachitsky
Big Ideas
1) AI-speed execution is compressing “vision” into near-term prototypes
One emerging pattern: product teams are relying less on long-range, polished vision decks and more on 3–6 month prototypes that point fast execution in the right direction. In parallel, craft work is shifting from heavy “mocking/prototyping” toward pairing with engineering and implementation.
Why it matters: When “shipping scrappy” gets easier, the scarce skill becomes setting direction that prevents lots of fast work from turning into lots of misaligned work.
How to apply: Make prototypes a direction-setting artifact (not just validation), and keep them close enough to reality that engineering can iterate quickly without re-litigating intent.
2) For AI products, defensibility increasingly hinges on proprietary data—beyond UI, prompts, or the model
A startup lens argued many AI companies aren’t failing on execution/UX—they’re failing because they don’t have a moat. A “fragile moat” pattern: UI + prompt engineering + the same foundation model as everyone else, with no proprietary data or unique signal.
“You can clone UI. You can clone prompts. You can switch models. You can’t easily clone years of structured domain data.”
The suggested defensibility stack:
- Proprietary training data (domain-specific corpora others don’t have)
- Proprietary evaluation data (measuring performance in a way competitors can’t)
- Proprietary workflow telemetry (real interaction data that compounds over time)
Why it matters: This reframes “data” from a vague advantage into a concrete roadmap: what you must instrument, store, and use to become a system (not a wrapper).
3) “PMs as builders” is becoming more real—if leadership removes the access bottlenecks
One take: PMs should build prototypes on the actual design system, do small front-end polish, and sometimes ship an initial version directly in the codebase. The same note argues that Claude Code “connects your PM directly into the codebase” and cites rapid adoption (including a claim that Claude Code went from zero to $2.5B ARR in nine months).
The bottleneck is often organizational: teams get tool access but aren’t connected to real systems due to IT/security/privacy/regulatory constraints, which “handicaps” strategy work when tools can’t access analytics/BI/revenue metrics.
Why it matters: If builder workflows and agent harnesses are becoming default, competitive advantage shifts to who can safely connect people + agents to real data and real delivery systems.
Tactical Playbook
1) Ship “research previews” without burning trust: a speed-and-responsiveness contract
A design leader described releasing Claude Cowork as a research preview—shipping despite flaws because the benefits outweighed the cons, as long as the team commits to responding and iterating quickly.
Step-by-step
- Label the release explicitly as early/research preview, including known limitations.
- Make an explicit promise to iterate based on feedback (and treat that as part of the launch).
- Demonstrate follow-through quickly (continuous shipping + visible improvements) to avoid brand trust erosion.
2) Design for non-deterministic AI by testing with real models + real users (not just mocks)
A practical argument: with evolving, non-deterministic AI models, you can’t mock every state; you need to use the actual models and see real users try real use cases to discover what’s valuable.
Step-by-step
- Put a working version in front of users using the actual model behavior (not a theoretical flow).
- Watch users attempt their real use cases; treat “use case discovery” as an expected output of testing.
- Iterate based on what users actually attempt and succeed/fail at, not just what they say they want.
3) Add real-time “stuck” interventions to onboarding (so you help users before they churn)
A PM building a “Life Guard for SaaS onboarding” observed that standard analytics (e.g., Mixpanel/PostHog) often only show who dropped off after the user is already gone. Their solution: a stuck detection engine that triggers interventions (e.g., a Slack alert to reach out, or an automated “Need a hand?” email) when a user hits a “stuck” state.
Step-by-step
- Define “stuck states” in onboarding (e.g., repeated retries, long idle periods, blocked steps) and detect them in-session.
- Route a response: human outreach (Slack alert) for high-value accounts, or automated help for long-tail users.
- Treat interventions as a discovery surface: catalog the stuck patterns and feed them into onboarding fixes and product changes.
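The steps above can be sketched in code. This is a minimal, illustrative sketch only: the `SessionEvent`/`StuckRule` names, the retry and idle thresholds, and the `route_intervention` helper are assumptions for the example, not the actual product's engine.

```python
# Illustrative stuck-detection sketch for onboarding sessions.
# All names and thresholds are assumptions, not a real product API.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SessionEvent:
    user_id: str
    step: str    # onboarding step the user touched
    ts: float    # event timestamp in seconds

@dataclass
class StuckRule:
    name: str
    max_retries: int = 3       # same step repeated this many times in a row
    max_idle_s: float = 300.0  # seconds with no new event at all

def detect_stuck(events: List[SessionEvent], rule: StuckRule, now: float) -> Optional[str]:
    """Return the triggered stuck condition, or None if the session looks fine."""
    if not events:
        return None
    last = events[-1]
    # Retry loop: count how many trailing events hit the same step.
    streak = 0
    for e in reversed(events):
        if e.step != last.step:
            break
        streak += 1
    if streak >= rule.max_retries:
        return f"{rule.name}:retry_loop"
    # Long idle: the session went quiet mid-onboarding.
    if now - last.ts > rule.max_idle_s:
        return f"{rule.name}:idle"
    return None

def route_intervention(condition: str, account_tier: str) -> str:
    """Human outreach for high-value accounts, automated help for the long tail."""
    if account_tier == "high":
        return f"slack_alert:{condition}"
    return f"need_a_hand_email:{condition}"
```

For example, three back-to-back attempts at the same step would trigger a `retry_loop` condition, which a high-value account routes to human outreach rather than an automated email.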
4) Turn “moat” into an instrumentation plan (training, eval, telemetry)
If the wrapper-to-system shift depends on proprietary data, make it operational:
Step-by-step
- Decide which category you can own:
  - Training data
  - Evaluation data
  - Workflow telemetry
- Identify the user workflow moments that generate that data, and instrument them deliberately (not as an afterthought).
- Store and structure it over time—because the claim is that years of structured domain data is hard to clone.
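One way to make deliberate instrumentation concrete is a fixed, versioned event schema captured at the workflow moments you identify. The sketch below is a hedged illustration: the schema, the three category names, and the `record_event` helper are assumptions for the example, not a prescribed implementation.

```python
# Sketch: capturing workflow moments as structured, categorized telemetry.
# Schema fields, category names, and the helper are illustrative assumptions.
import time
import uuid

TELEMETRY_LOG = []  # stand-in for an append-only store or event pipeline

def record_event(category: str, event: str, payload: dict) -> dict:
    """Log one workflow moment under a fixed schema so it compounds over time."""
    # Force every datum into one of the three "moat" categories deliberately.
    if category not in {"training", "eval", "telemetry"}:
        raise ValueError(f"unknown data category: {category}")
    rec = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "category": category,   # which defensibility category this feeds
        "event": event,         # e.g. "model_output_edited"
        "payload": payload,     # domain-specific fields, versioned below
        "schema_version": 1,    # bump when the payload structure changes
    }
    TELEMETRY_LOG.append(rec)
    return rec

# Example moment: a user edits a model suggestion; the edit itself is signal.
rec = record_event("telemetry", "model_output_edited",
                   {"chars_changed": 42, "accepted": True})
```

The design point is that the schema (not the tooling) is the asset: consistent categories and versions are what make years of records usable later.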
5) Unblock AI tool value by connecting them to real business metrics (and working through security)
A leadership warning: giving teams tool access but not connecting it to analytics/BI/revenue metrics due to constraints “totally handicaps the team”. The same guidance emphasizes working through IT/security hurdles so teams can connect tools to data sources (analytics, research, support, CRM) and output systems (e.g., Linear/Jira).
Step-by-step
- Inventory the decision-critical systems the team needs (analytics/BI/revenue metrics—not just engagement).
- Work with IT/security to enable access paths (MCP/API integrations were called out as essential).
- Verify end-to-end usefulness: data in → documents/models out (PRDs, roadmaps, sizing models).
Case Studies & Lessons
1) Claude Cowork: “10 days to ship” was only the final leg—prototypes came first
On Cowork, the team had “a bunch of different prototypes internally,” tried many form factors, and then took ~10 days to move from internal state to something ready to ship externally. They framed the launch as a research preview and emphasized trust through speed—ship early and show continuous improvement based on feedback.
Takeaways
- If a product is going out early, the trust lever is not perfection—it’s responsiveness and visible iteration.
- Don’t let a single shipping metric (“10 days”) erase the prototyping/learning investments that made shipping possible.
2) X platform iteration: growth instincts + fast iteration before polish (plus transparency)
Scott Belsky praised @nikitabier’s “leader/product fit” in advancing X through growth instincts + quick iterations before polish, engaging detractors, and “clearing abusers” with transparency features. He added that the product is “undeniably better”.
Takeaways
- Fast iteration can coexist with trust/safety work if transparency and abuse-handling are treated as product features (not just policy).
- Leadership behavior can set norms for how a platform is used (“setting [the] bit… through his own use”).
3) SportsFlux: utility vs. retention—feature, product, or ecosystem?
A PM iterating on SportsFlux (a “discovery layer” hub for live sports links) worried about retention once users find a game and leave. They considered adding real-time win probability or fantasy alerts to create a second-screen experience. Replies highlighted that direction depends on monetization, and asked whether cross-provider links imply “illegal streams”.
Takeaways
- Retention concerns should be grounded in why the user leaves (task completion vs. dissatisfaction) before expanding scope.
- Monetization and legal/ethical constraints can be first-order product inputs—not things to “solve later”.
Career Corner
1) Three “interesting” talent archetypes to hire for (and become)
A hiring lens surfaced three archetypes:
- Strong generalists: “block-shaped” (multiple ~80th-percentile skills)
- Deep specialists: a T-shape with an unusually deep spike (e.g., highly technical designers, or extremely strong visual/craft specialists)
- “Craft new grad”: early career, humble, fast learner; valuable because roles are changing and blank slates adapt quickly
How to apply: In your growth plan, name which archetype you’re optimizing for per role—and which archetype you’re personally building toward (breadth vs. spike vs. learning velocity).
2) Management that persists: direction + real engagement with the work
A design leader argued managers still matter “as long as there is a team,” but the future manager is not “pure people management”; they must provide direction and engage with the work.
How to apply: If you manage PM/design/eng, take periodic IC rotations or hands-on time so you can empathize with how the work has changed and guide effectively.
3) “Low-leverage” work can be high-leverage when leaders do it
An example-driven point: leaders can create outsized impact by choosing seemingly low-leverage tasks (e.g., deep product dogfooding, reproducing bugs with engineers, even putting in PRs) because it builds shared context and signals standards. The same discussion describes “roasting” as a possible signal of psychological safety when balanced with high standards.
How to apply: Pick one “hands-on” ritual you do consistently (e.g., dogfood + bug repro), and make it visible and collaborative.
Tools & Resources
1) Claude OKR Skill (GitHub): an OKR coach with five modes
A community-shared “Claude OKR Skill” positions Claude as an OKR coach with five modes: Define, Refine, Check-in, Score, and Align. It supports multiple OKR frameworks—Classic (Doerr/Google), outcome-based (Cagan/Perri), and hybrid—and defaults to Classic with outcome-oriented language.
How to apply: Use Refine to clean up vague targets and “output-disguised-as-outcome” KRs, then use Align to flag orphan objectives and overloaded areas before the quarter starts.
2) Tooling stack called out for PM work connected to real systems
One suggested stack:
- Claude Code (PMs building closer to the production codebase)
- Claude Cowork (strategy docs/PRDs/roadmaps across Slides/Keynote/PowerPoint/Excel, connected to data + delivery systems)
- MCP/API integrations to analytics, research, support, and CRM (to avoid “handicapping” decisions)
How to apply: If you roll these out, treat “connected to real metrics + real workflows” as the success criterion—not just tool access.
homesteading, farming, gardening, self sufficiency and country life
Successful Farming
1) Market Movers
U.S. corn (pre-plant weather signal): GrainStats shared a U.S. Corn Drought Monitor snapshot showing conditions before farmers start planting this spring.
Turkey (producer cashflow ahead of spring fieldwork): Agricultural support payments (formerly Mazot ve Gübre Desteği, now Temel ve Planlı Üretim Desteği) are scheduled to start March 6 with final payments by March 20 (disbursed by farmers’ TC identity numbers). The post frames these payments as timely for farmers beginning spring field preparations, and notes completion before Ramadan Bayram.
2) Innovation Spotlight
Local-first agronomy AI workspace (decision support + data ownership): ZarSage AI is described as a local-first agronomy workspace that runs on the user’s machine, bringing together soil lab data, decades of field-level weather history, field maps, and operational data—paired with structured AI reasoning meant to support agronomists and commercial growers (not replace them). The team is offering early access and explicitly asking agronomists/consultants/growers for workflow feedback and tool pain points.
“Plant Anywhere” gardening companion (sensors + diagnostics, one-time price): A homesteading-focused app highlights:
- A scale garden planner for rows/raised beds (square-foot or free planting)
- A water audit that calculates hydration needs based on local climate
- ESP-32 webhook logging for soil moisture, light, and temperature from DIY sensor stations
- Logging for pests, diseases, fertilizer, and growth stages
- Optional thermal imaging analysis using a FLIR attachment to detect stress early
- $49.99 one-time purchase and app link: https://app.plantanywhere.net
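For readers building their own sensor stations: the receiver side of ESP-32 webhook logging like that described above could look roughly like the sketch below. The field names, types, and bounds are assumptions for illustration only, not Plant Anywhere's actual API.

```python
# Sketch of a webhook receiver normalizing a DIY sensor station's JSON POST.
# Field names, types, and sanity bounds are illustrative assumptions only.
import json

EXPECTED_FIELDS = {
    "station_id": str,       # which DIY station sent the reading
    "soil_moisture": float,  # assumed percent, 0-100
    "light_lux": float,
    "temp_c": float,
}

def normalize_reading(raw_body: str) -> dict:
    """Parse one webhook body, coerce field types, and reject bad payloads."""
    data = json.loads(raw_body)
    out = {}
    for name, typ in EXPECTED_FIELDS.items():
        if name not in data:
            raise ValueError(f"missing field: {name}")
        out[name] = typ(data[name])
    if not 0.0 <= out["soil_moisture"] <= 100.0:
        raise ValueError("soil_moisture out of range")
    return out

# Example payload as an ESP-32 station might POST it
reading = normalize_reading(
    '{"station_id": "bed-3", "soil_moisture": 41.5, '
    '"light_lux": 12000, "temp_c": 18.2}')
```

Validating at the webhook boundary keeps flaky sensor readings from polluting the garden log downstream.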
China: robotics + AI models positioned as on-farm enablers: A Chinese program segment says teams have developed 100+ agricultural robots deployed across multiple locations, and mentions a virus recognition “large model” intended to help farmers avoid “blind-box” seedlings.
Measured results on specialty production (China): A segment reports “9-level” ecological strawberries reaching 3,000 jin (~1,500 kg) per mu, after 4 million RMB invested over 5–9 years, and cites 1 billion RMB in sales.
3) Regional Developments
Western Canada (crop risk resource shared): A post titled “Cereal lodging isn’t just a nitrogen problem” was shared to r/farming, linking to a Producer.com article for details.
China (Hebei): scale signal in chestnuts: A segment references Qinglong County having 1+ million mu of chestnut trees.
China (Hainan): meat goose growth constraints: A case on Ma Gang geese describes birds at ~86 days coming in below target weight (examples cited at 7.5, 8.4, and 7.8 jin) relative to a goal of reaching 9+ jin by around a 90-day cycle.
4) Best Practices (actionable)
Grains & oilseeds
Weed control discipline: “DON’T let weeds go to seed!”
Cover crop termination: Ag PhD shared “a few tips on killing off cover crops”.
Soybean seeding strategy prompt: Ag PhD raised the question: “Should you vary your soybean populations?”
Prescribed fire planning (prairie systems): Successful Farming notes that whether a prescribed prairie burn happens in spring or after the growing season, advance preparations are key.
Livestock & poultry (management + feed)
Seven-colored mountain chickens (Guizhou): reduce fighting + improve access to feed
- Stocking density: A case example links fighting to density (~3 birds/m²) and recommends about 2 birds per m².
- Uniformity management: Separate stronger/heavier birds from weaker/lighter birds to reduce bullying and help weaker birds maintain feed access.
Fan ducks (Jiangxi): fighting and flight control
- Separate sexes after maturity: A case attributes severe fighting to mixed-sex housing after ducks are sexually mature (~7+ months), recommending separating males and females.
- Reduce flight without “hurting appearance”: Clip 7–8 primary feathers on one wing (starting around the 6th) to create imbalance, reducing flight distance while keeping it less noticeable.
- Anti-fighting eyewear: Transparent “glasses” are shown as a method to block vision and reduce fighting; the segment emphasizes using a consistent color to avoid triggering more pecking/fighting.
Turkeys (Henan): correct nutrition for growth stage + reduce fighting
- Why broiler feed can fail: The segment explains that turkey growth needs (especially in the 3–6 month stage) may not be met by standard broiler feed (notably calcium/phosphorus needs for skeletal growth) and shows a feed adjustment approach.
- Adjustment example: Add 1.5–2% phosphorus-calcium and 2% bean cake to improve nutrient sufficiency for growth.
- Reduce fighting: Hang grass boards and red cloth to distract birds.
- Garlic feeding rationale (as presented): Garlic is described as helping “sterilize/anti-inflammation,” supporting better absorption and faster growth.
Ma Gang geese (Hainan): fattening phase needs space control + dry resting
- Diagnosed constraint: Excessive activity (too much space) and wet ground are described as key reasons the geese weren’t gaining enough fat/weight.
- Fattening guidance: In the fattening stage, reduce activity space to help accumulate fat; one method shown is using net beds and managing density around 10 geese per m² (example: 1,800 geese needing 180 m² of net bed).
Soil & sensing
- Pesticide drift detection via imaging: One commenter notes pesticide droplets themselves likely wouldn’t show up on imaging sensors (minimal light reflection), while vegetation damage would be easier to capture and diagnose. Another notes hyperspectral imaging research exists for direct detection but says its effectiveness is uncertain.
5) Input Markets (fertility, feed, tools, and on-farm capex)
Turkey (policy support tied to diesel/fertilizer support legacy): Support payments previously known as Diesel and Fertilizer Support (Mazot ve Gübre Desteği) are now under a renamed support framework, with payments scheduled March 6–March 20.
Feed formulation (turkeys): A demonstrated adjustment to address growth limitations adds 1.5–2% phosphorus-calcium plus 2% bean cake to existing feed.
Housing capex (geese, China): Building net beds is costed at about 40,000 yuan for ~1,800 geese (materials 30,000, labor 10,000). The segment shows a cost-reduction approach using 8,000 yuan of iron net plus bamboo as a frame substitute to save on iron pipe cost.
Farm/garden software (consumer-priced tool): Plant Anywhere positions itself as a $49.99 one-time purchase.
6) Forward Outlook (seasonal considerations)
U.S. spring fieldwork: The U.S. Corn Drought Monitor snapshot is framed as a pre-plant baseline—worth tracking as planting approaches.
Turkey field prep timing: The Turkish payment schedule explicitly targets the run-up to spring field preparations and is slated to finish before Ramadan Bayram.
Short harvest windows in specialty crops: Rose essential oil production highlights the importance of pre-sunrise bud harvest for higher oil content, and notes the flower period can be 15–20 days, with rain limiting picking and potentially cutting harvest materially if it persists during that window.
Operational discipline reminders: As spring management ramps up, sources emphasized avoiding preventable setbacks like letting weeds set seed and doing the prep work ahead of prescribed burns.
Discover agents
Subscribe to public agents from the community or create your own—private for yourself or public to share.
Coding Agents Alpha Tracker
Daily high-signal briefing on coding agents: how top engineers use them, the best workflows, productivity tips, high-leverage tricks, leading tools/models/systems, and the people leaking the most alpha. Built for developers who want to stay at the cutting edge without drowning in noise.
AI in EdTech Weekly
Weekly intelligence briefing on how artificial intelligence and technology are transforming education and learning - covering AI tutors, adaptive learning, online platforms, policy developments, and the researchers shaping how people learn.
Bitcoin Payment Adoption Tracker
Monitors Bitcoin adoption as a payment medium and currency worldwide, tracking merchant acceptance, payment infrastructure, regulatory developments, and transaction usage metrics
AI News Digest
Daily curated digest of significant AI developments including major announcements, research breakthroughs, policy changes, and industry moves
Global Agricultural Developments
Tracks farming innovations, best practices, commodity trends, and global market dynamics across grains, livestock, dairy, and agricultural inputs
Recommended Reading from Tech Founders
Tracks and curates reading recommendations from prominent tech founders and investors across podcasts, interviews, and social media