Hours of research in one daily brief, on your terms.
Tell us what you need to stay on top of. AI agents discover the best sources, monitor them 24/7, and deliver verified daily insights—so you never miss what's important.
Recent briefs
Your time, back.
An AI curator that monitors the web nonstop, lets you control every source and setting, and delivers one verified daily brief.
Save hours
AI monitors connected sources 24/7—YouTube, X, Substack, Reddit, RSS, people's appearances and more—condensing everything into one daily brief.
Full control over the agent
Add/remove sources. Set your agent's focus and style. Auto-embed clips from full episodes and videos. Control exactly how briefs are built.
Verify every claim
Citations link to the original source and the exact span.
Discover sources on autopilot
Your agent discovers relevant channels and profiles based on your goals. You get to decide what to keep.
Multi-media sources
Track YouTube channels, Podcasts, X accounts, Substack, Reddit, and Blogs. Plus, follow people across platforms to catch their appearances.
Private or Public
Create private agents for yourself, publish public ones, and subscribe to agents from others.
Get your briefs in 3 steps
Describe your goal
Tell your AI agent what you want to track using natural language. Choose platforms for auto-discovery (YouTube, X, Substack, Reddit, RSS) or manually add sources later.
Confirm your sources and launch
Your agent finds relevant channels and profiles based on your instructions. Review suggestions, keep what fits, remove what doesn't, add your own. Launch when ready—you can always adjust sources anytime.
Sam Altman
3Blue1Brown
Paul Graham
The Pragmatic Engineer
r/MachineLearning
Naval Ravikant
AI High Signal
Stratechery
Receive verified daily briefs
Get concise, daily updates with precise citations directly in your inbox. You control the focus, style, and length.
NirD
OpenAI Developers
Salvatore Sanfilippo
🔥 TOP SIGNAL
Cloudflare’s ViteNext is the clearest production-grade AI-coding playbook in public right now. Steve Faulkner says AI bots triage issues, review PRs and security, track upstream Next.js commits, and help maintain the repo, while the team relies on ported tests, weekly slop audits, an internal Engineering Codex, and humans who still review and attest every line — no vibe coding. Dane Connell adds that the project already has 50+ committers, where a committer can simply mean someone wrote the plan for an agent to implement.
🛠️ TOOLS & MODELS
- Journey — Matthew Berman’s new registry for installable agent workflows, packaged as kits with skills, tools/code, memories, services, tests, failure examples, and learnings. One example kit, Code refactoring planner v1, analyzes a codebase with static complexity metrics and uses Claude to produce a phased refactor plan; install is either agent-first or npm install -g journey-kits.
- Journey team mode — private/org kits, shared contexts/resources, and version pinning look like the interesting part. Journey points agents at existing systems such as 1Password, Supabase, or Firecrawl without storing credentials itself, and Berman says the product is free for discovering/installing kits right now.
- Codex app server + Vercel plugin — Greg Brockman is now explicitly positioning the Codex app server as the primitive for building agentic apps, and the kitty litter/Litter demo shows why: sessions, chats, skills, agents, folders, and prompts sync between desktop and phone via exposed endpoints. OpenAI Devs also shipped a Vercel plugin inside the Codex app for project-setup-to-deployment flow.
- scan-for-secrets 0.1 — Simon Willison’s new Python CLI scans folders and logs for leaked secrets before you share them, including common escaped/encoded variants. He built it with README-driven development in Claude Code.
- Security review got better with newer frontier models — Salvatore Sanfilippo relays a Linux kernel hacker’s observation that AI security reports went from mostly false positives to mostly valid after Opus 4.5 and GPT 5.2; his practical takeaway is that shipping a serious code patch without an articulated AI review is now a mistake.
- Claude Code policy caveat — if you’re building wrappers or CI around Claude Code / the Agent SDK, Matt Pocock says the current rules are still muddy around CI, distributed sandboxes, and commercial software.
💡 WORKFLOWS & TRICKS
Cloudflare’s reusable AI-maintenance loop
- Port real upstream tests and keep unit, end-to-end, and smoke coverage running against production deployments.
- Let AI bots handle issue triage, PR review, security review, and upstream change detection.
- Encode house rules in an Engineering Codex the reviewer checks automatically.
- Run recurring slop audits and feed PR mistakes back into agents.md.
- Keep humans on architecture and final line-by-line review.
File-first memory for agents
- Dump raw source material into raw/.
- Let the LLM compile a markdown wiki with summaries, backlinks, concepts, and local images.
- Start the agent from index.md or view the repo in Obsidian.
- Ask task-specific questions against the files.
- File outputs back into the wiki so every query compounds the knowledge base.
Karpathy says this worked for roughly 100 articles / 400K words without fancy RAG at that scale, and Farza says the filesystem-native wiki beat his earlier RAG setup; his Farzapedia turned 2,500 diary, Notes, and iMessage entries into 400 linked articles for the agent to crawl.
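The file-first wiki loop is simple enough to sketch. A minimal, hypothetical version in Python: the directory names (raw/, index.md) come from the workflow above, but `compile_article` is a stub standing in for the LLM compilation step.

```python
from pathlib import Path

RAW = Path("raw")    # unprocessed source material dumped here
WIKI = Path("wiki")  # compiled markdown wiki lives here

def compile_article(text: str) -> str:
    # Stub: in the real workflow an LLM writes a summary, adds
    # [[backlinks]], and extracts concepts. Here we just keep the
    # first line as a placeholder summary.
    first_line = text.strip().splitlines()[0] if text.strip() else ""
    return f"## Summary\n{first_line}\n"

def build_wiki() -> Path:
    # Compile each raw file into a wiki article, then rebuild the
    # index.md entry point the agent starts from.
    WIKI.mkdir(exist_ok=True)
    entries = []
    for src in sorted(RAW.glob("*.txt")):
        article = WIKI / f"{src.stem}.md"
        article.write_text(f"# {src.stem}\n\n{compile_article(src.read_text())}")
        entries.append(f"- [[{src.stem}]]")
    index = WIKI / "index.md"
    index.write_text("# Index\n\n" + "\n".join(entries) + "\n")
    return index
```

Filing agent outputs back into wiki/ and rerunning the build is what makes each query compound the knowledge base.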
Share idea files, not full apps — Karpathy’s next move is to publish abstract idea files that other people’s agents can build locally and customize. He published one gist for the wiki workflow here: https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f, and argues that prompt requests / prompt libraries are becoming more valuable than code repos.
Spec first, then agent, then leak check — Simon’s pattern is clean: write the README/spec by hand, hand it to Claude Code with red/green TDD, then run uvx scan-for-secrets $OPENAI_API_KEY -d logs-to-publish/ before publishing logs or transcripts. He also keeps a ~/.scan-for-secrets.conf.sh file that prints recurring secrets for bulk scans.
Use the agent to map vendor APIs before changing your abstraction — Simon had Claude Code read the Python client libraries for Anthropic, OpenAI, Gemini, and Mistral and generate raw curl examples for streaming and non-streaming cases before he changed llm to support newer features such as server-side tool execution.
Micro-context hint — Armin Ronacher says adding his email addresses to agent setup materially improved how the agent interpreted his own comments, users, and log files.
👤 PEOPLE TO WATCH
- Steve Faulkner + Dane Connell — rare public operators showing what an AI-heavy open-source repo looks like when you keep tests, policy, and human review intact.
- Andrej Karpathy — still ahead on durable context design; the interesting new twist is shifting from sharing apps to sharing idea files and prompt libraries.
- Simon Willison — shipped a genuinely useful utility today and paired it with reproducible build patterns instead of vague prompting advice.
- Matthew Berman — worth watching because Journey is not just a demo; he’s dogfooding it for a team knowledge-base workflow with shared resources and private org kits.
- Theo — loud, but useful on harness UX/performance and open-source accountability; his T3 Code numbers make the point concrete.
🎬 WATCH & LISTEN
- Matthew Berman — Journey org kits and shared contexts (11:56-16:50) — Best clip today if you want to package an agent workflow once and reuse it across a team without sharing one agent or leaking credentials. He walks through private kits, 1Password-based resource bindings, audit logs, and keeping agents pinned to the right versions.
- Theo — why T3 Code is Electron and open source (21:31-24:53) — Worth the time for one concrete takeaway: token-stream UIs are harder to make performant than they look. Theo argues Electron plus a reusable event system beat his native experiments, and open-sourcing the harness let the community fork and pressure-test the design.
📊 PROJECTS & REPOS
- T3 Code — open-source Electron agent orchestrator with a custom event system; Theo says it has about 30k users and 1.1k forks, and he explicitly views the forkability as part of the accountability model.
- ViteNext — Cloudflare’s AI-heavy open-source experiment around the Next.js API surface on Vite/Cloudflare; it already has 50+ committers, with many contributions starting as plans for agents to implement.
- scan-for-secrets — new Python CLI for secret scanning before you publish logs or transcripts. README: scan-for-secrets.
- Codex CLI / app server — Theo notes the open-source Codex CLI includes the full app server, which is why third-party harnesses can build against the same primitive the official Codex app uses. Brockman and OpenAI Devs are already using that surface for app-building and Vercel deployment flows.
Editorial take: the durable edge is moving out of the model and into external artifacts — tests, markdown files, kits, app servers, and review rules that any decent agent can operate against.
ollama
Vaibhav (VB) Srivastav
Georgi Gerganov
Top Stories
Why it matters: This cycle's clearest signals were about cost curves, post-training efficiency, the growing leverage of agent harnesses, and evidence that AI adoption itself is becoming a competitive skill.
Open models are moving from "almost there" to economically compelling
MiniMax said independent evals from LangChain show MiniMax M2.7 matching closed frontier models on core agent tasks at roughly 20× lower cost and 2–4× higher speed. In parallel, Gemma 4 E2B was shown running on-device on an iPhone 17 Pro at about 40 tokens/s with image understanding and reasoning, and llama.cpp demonstrated 300 tokens/s on Gemma 4 26B A4B Q8_0 on a Mac Studio M2 Ultra.
"Open models aren’t ‘almost there’ anymore."
Impact: Capability is starting to pair with local deployability and better economics, which matters directly for agent products and on-device use.
Apple published a very simple way to make coding models stronger
Apple Research's Simple Self-Distillation (SSD) fine-tunes a coding model on its own unfiltered sampled outputs—without a teacher model, verifier, RL, execution environment, reward model, or labels. On Qwen3-30B-Instruct, the method improved LiveCodeBench pass@1 from 42.4% to 55.3% and hard-problem pass@5 from 31.1% to 54.1%. One analysis of the paper noted that gains were larger at pass@5 than pass@1, which argues against a simple collapse in output diversity.
Impact: If reproducible, SSD lowers the cost and complexity of post-training for code models.
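The recipe, as summarized here, reduces to two steps: sample the model's own unfiltered completions, then fine-tune on them. A toy schematic of that loop — `ToyModel`, `sample`, and `fine_tune` are hypothetical stand-ins, not the paper's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class ToyModel:
    trained_on: list = field(default_factory=list)

    def sample(self, prompt: str) -> str:
        # Stand-in for sampling one completion from the model itself.
        return f"completion({prompt})"

    def fine_tune(self, pairs: list) -> None:
        # Stand-in for a supervised fine-tuning step on (prompt, output) pairs.
        self.trained_on.extend(pairs)

def simple_self_distill(model: ToyModel, prompts: list, k: int = 4) -> ToyModel:
    # No teacher, verifier, RL, execution environment, reward model,
    # or labels: the training data is just k of the model's own
    # unfiltered samples per prompt.
    data = [(p, model.sample(p)) for p in prompts for _ in range(k)]
    model.fine_tune(data)
    return model
```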
The harness layer is now a strategic control point
Anthropic said Claude subscriptions will no longer cover usage on third-party tools like OpenClaw, though users can still access those tools through discounted extra-usage bundles or an API key. Developers then highlighted unresolved edge cases around Agent SDK, CI, and claude -p usage in personal, commercial, and open-source workflows. At the same time, alternative ecosystems positioned themselves as more open: one post said ChatGPT subscriptions work with OpenClaw, OpenCode, Pi, and Cline and pointed to the open-source Codex App Server for custom interfaces, while Ollama refreshed usage limits to support heavier third-party tool demand and said existing tools will continue to work with Ollama Cloud.
Impact: Access policy, prompt caching, and developer UX are increasingly shaping who can build on top of frontier models.
A startup experiment suggests AI advantage depends on know-how, not just access
A field experiment on 515 startups found that firms shown case studies of successful AI use went on to use AI 44% more, achieve 1.9× higher revenue, and need 39% less capital.
"AI use is an emerging skill which improves businesses and unlocks entrepreneurship"
Impact: The near-term differentiator may be operational learning—how teams actually integrate AI into work—not simply whether they have model access.
Research & Innovation
Why it matters: The most useful technical work this cycle focused on better training signals, more efficient inference, and clearer explanations of why small or open models are getting more practical.
SSD's core idea is distribution shaping, not teacher replacement
Commentary on Apple's paper framed code generation as a mix of "fork" tokens, where exploration helps, and "lock" tokens, where the model should strongly prefer one next token. Apple argues SSD works by reshaping distributions context-dependently—suppressing distractors at locks while preserving diversity at forks—so the model can recover capacity that fixed greedy decoding misses. Additional commentary said the method remained robust across sampling settings, especially on hard problems and pass@5, and showed no clear degradation on other benchmarks. One reader note also highlighted an appendix experiment where even high-temperature "gibberish" training data still helped under the right evaluation temperature, suggesting the reshaped distribution may matter more than the literal content of the samples. One critic questioned whether training on poor self-outputs can generalize beyond a narrow set of models, datasets, or hyperparameters. Paper / Code
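One rough way to make the fork/lock distinction concrete: measure the entropy of the next-token distribution at each position. High-entropy positions behave like forks (several plausible continuations), low-entropy positions like locks (one token should dominate). The threshold and the toy distributions below are illustrative, not from the paper.

```python
import math

def entropy(probs: list) -> float:
    # Shannon entropy (bits) of a next-token distribution.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def classify_position(probs: list, threshold: float = 1.0) -> str:
    # High entropy -> "fork" (exploration helps); low entropy ->
    # "lock" (the model should strongly prefer one next token).
    return "fork" if entropy(probs) > threshold else "lock"
```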
Alibaba Qwen's FIPO targets better credit assignment in reasoning
Future-KL Influenced Policy Optimization (FIPO) gives more credit to tokens that make good future reasoning steps more likely, and less credit to tokens that make them less likely. The reported effect was longer reasoning traces—4K to 10K+ tokens—and higher AIME accuracy, from 50% to about 56–58%, outperforming the cited results for DeepSeek-R1-Zero-Math and o1-mini. The method also weights nearer-future tokens more heavily and clips or filters outliers for stability. Paper
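The credit-assignment idea described above can be sketched numerically: score each token by its influence on future steps, weight nearer steps more heavily, and clip for stability. The exponential decay and clip bounds here are assumptions for illustration, not the paper's exact formulation.

```python
def future_weighted_credit(influences: list, decay: float = 0.9,
                           clip: float = 5.0) -> list:
    # influences[t] lists how much token t raised (positive) or
    # lowered (negative) the likelihood of each subsequent reasoning
    # step, nearest step first. Nearer steps get more weight; the
    # summed credit is clipped for stability.
    credits = []
    for row in influences:
        credit = sum(decay ** (i + 1) * v for i, v in enumerate(row))
        credits.append(max(-clip, min(clip, credit)))
    return credits
```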
Gemma 4's efficiency story is starting to get clearer
Follow-on analysis of Gemma 4 highlighted two design choices. Shared KV cache lets later layers reuse key/value projections from earlier layers, which reduces memory and compute pressure and can help with longer sequences. Per-Layer Embeddings (PLE) add a small extra token representation at each layer—combining token identity and context-aware information—with a gate deciding when to inject that information into the residual stream. The note adds that PLE is only used in smaller Gemma 4 variants, not the 31B dense or 26B MoE models.
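As a toy illustration of the two mechanisms: a sharing map that redirects a later layer's KV lookup to an earlier layer's cache entry, and a gate that blends a per-layer embedding into the residual stream. Both the sharing map and the scalar gate are hypothetical stand-ins for the real architecture.

```python
def get_kv(layer: int, cache: dict, share_map: dict) -> tuple:
    # Shared KV cache: a later layer reuses the key/value projections
    # computed by an earlier layer instead of storing its own, so the
    # cache holds fewer distinct entries.
    return cache[share_map.get(layer, layer)]

def inject_ple(hidden: float, ple: float, gate: float) -> float:
    # Per-Layer Embeddings: a small per-layer token representation is
    # blended into the residual stream under a gate (scalars here
    # instead of vectors, for brevity).
    return hidden + gate * ple
```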
Products & Launches
Why it matters: Shipping velocity stayed high, especially around agent interfaces, deployment plumbing, and open agent ecosystems.
Codex is expanding from coding help into deployment and custom app infrastructure
OpenAI developers announced a Vercel plugin in the Codex app so users can go from project setup to deployment inside the same workflow. Separate posts highlighted the Codex App Server as a way to build custom agentic apps on top of a ChatGPT account, with synced sessions, chats, skills, agents, folders, and prompts across devices.
Cursor 3 keeps pulling agent actions into the UI
Users highlighted smart pills at the bottom of the agent window that suggest context-aware actions like checking out the right branch, including follow-up options for handling local changes. Another user said Cursor was watching a pull request and checking CI status while they were away from the keyboard.
The Hermes ecosystem shipped data, models, and security layers
- A quality-filtered Hermes Agent Reasoning Traces dataset cut 7,646 rows to 3,679, leaving 100% valid JSON tool calls, 63% self-correction, and 96% verification coverage for Stage 2 fine-tuning.
- Harmonic-Hermes-9B launched as a dedicated Stage 2 agentic model for tool calling and multi-turn workflows.
- Carnice-9b, a fine-tuned Qwen3.5-9b, was released for strong performance in the Hermes-Agent harness and can run on consumer GPUs down to 6GB in Q4_K_M.
- Hermes Katana introduced a security layer with character-level CaMeL taint tracking, an encrypted vault, and a hash-chained audit log, with the post claiming it caught 159/159 adversarial cases.
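A hash-chained audit log, the last item above, is a standard construction worth a sketch: each entry commits to the previous entry's hash, so tampering with any historical record breaks every later hash. This minimal version is illustrative, not Hermes Katana's implementation.

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> list:
    # Each entry stores the previous entry's hash and a hash over
    # (previous hash + canonical event JSON).
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev, "hash": digest})
    return log

def verify_chain(log: list) -> bool:
    # Recompute every hash from the genesis value; any edited or
    # reordered entry fails verification.
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```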
Sakana AI pushed a public consumer product in Japan
Sakana Chat is now available for anyone in Japan as a free AI chat product with web search and fast responses. It is powered by Sakana's new Namazu alpha model family, which the company says aims to retain open-model performance while reducing bias and adapting behavior for Japanese use. Recent examples showed users applying it to everyday search, programming help, and creative generation such as an abstract fish SVG.
Industry Moves
Why it matters: The business story is increasingly about infrastructure, domestic supply chains, and where labs think value will sit in the stack.
Power infrastructure—not just chips—is constraining U.S. AI build-outs
A post summarizing a Tom's Hardware report said half of planned U.S. data-center builds in 2026 are projected to be delayed or canceled because of shortages in electrical infrastructure and parts tied to China. The same summary said China supplies over 40% of U.S. battery imports and roughly 30% of key transformer and switchgear categories, while U.S. transformer lead times have stretched from 24 months pre-2020 to as much as five years. Big Tech spending from Alphabet, Amazon, Meta, and Microsoft was cited as over $650 billion, but still insufficient to close the gap.
China keeps tightening the model-to-silicon loop
Posts this cycle said DeepSeek V4 will run natively on Huawei Ascend 950PR chips, with Alibaba, ByteDance, and Tencent placing bulk orders for hundreds of thousands of those chips and prices rising 20%. One analysis argued the deeper significance is strategic: Huawei's chip line is increasingly compatible with NVIDIA-style instructions, lowering switching costs, while China moves closer to running frontier models at commercial scale on domestic silicon despite export controls. The same note also cautioned that Ascend 950PR still trails the H200 and remains production constrained.
Sakana AI is leaning harder into vertical deployment
Sakana said it is pursuing an "AI × each industry" strategy, with specific emphasis on sectors such as finance in Japan. In parallel, the company is recruiting Forward Deployed Engineers to work directly with customers and implement applications using generative AI, RAG, and autonomous agents to solve operational problems. Leadership framed this as part of a broader attempt to make Japanese AI globally competitive by attracting international researchers and engineers to Japan.
Policy & Regulation
Why it matters: The strongest governance signals were not new laws, but access controls, compliance ambiguity, and broader questions about public-sector capacity and accountability.
Anthropic's third-party subscription change now has compliance implications
Anthropic's policy change means Claude subscriptions no longer cover usage in tools like OpenClaw, though users can still access such tools with discounted extra-usage bundles or a Claude API key. Developers then surfaced unresolved questions about whether the Agent SDK or claude -p is allowed in CI, in commercial software, or in open-source tools distributed to others. Anthropic acknowledged it is working on making the rules more explicit.
Proposed U.S. science cuts would hit core research institutions
A summary linking to Nature said the Trump administration has again proposed massive cuts across U.S. science, affecting agencies from NASA to the NIH, and eliminating the NSF's social, economic, and behavioral sciences directorate. A separate comment argued that, if AI timelines extend, cuts like these could leave too few early-career researchers in the U.S. pipeline.
AI is becoming a tool for civic legibility
Karpathy argued that AI can help citizens analyze public material that is technically public but practically unreadable at scale—such as 4,000-page omnibus bills, budgets, FOIA responses, and lobbying disclosures. He listed use cases including spending analysis, legislation diffs, voting patterns, lobbying and influence graphs, procurement, campaign finance, and local-government records like zoning, policing, and schools. He acknowledged dual-use risks but said he is broadly optimistic that more participation and transparency can improve democratic accountability.
Quick Takes
Why it matters: These smaller items point to where capability, tooling, and user behavior are moving next.
- Multiple posts claimed OpenAI's GPT-Image-2 / image gen v2 has leaked or is close to release, pointing to stronger world knowledge, text rendering, Arena code names such as maskingtape-alpha, gaffertape-alpha, and packingtape-alpha, and claims that the model is coming soon.
- Early Gemma 4 commentary called out 84% GPQA and strong Codeforces ELO / HLE 20%, but also warned that LM Arena ELO can be gamed by markdown and response length, making it a weak standalone eval.
- A third-party deep dive said Qwen3.6-Plus made a major jump in programming, outperforming Sonnet 4.5, GLM-5.0, and MiniMax M2.5 in general frontend, backend, and web work, while still lagging in niche domains.
- Farzapedia showed a concrete personal-wiki implementation: an LLM turned 2,500 diary entries, Apple Notes, and iMessages into 400 linked articles that an agent can crawl from index.md for design, writing, and product tasks. Karpathy highlighted the approach as explicit, local, file-based, and BYOAI.
- A user described ChatGPT shared projects with live document syncing as essential for organizing a family health issue across doctors' messages, documents, and scans, while Claude handled iMessage ingestion and text extraction from HEIC scans.
- One post pointed to a 1-bit LLM that reportedly fits in 1.15GB of memory.
- UnslothAI said it has started uploading preliminary experimental dynamic MLX quants for models including Gemma-4, using methods similar to its GGUF work.
"The most underrated AI metric isn’t benchmark score. It’s: ‘did the job actually get done?’"
20VC with Harry Stebbings
Matt Mullenweg
Today's signal
Today's resource picks lean toward long-form viewing: one speech Harry Stebbings says he has replayed extensively, and one documentary Matt Mullenweg flagged as a weekend watch.
Most compelling recommendation
General McRaven's commencement speech
- Content type: Speech / video
- Author/creator: General McRaven
- Link/URL: Not provided in the source material
- Who recommended it: Harry Stebbings
- Key takeaway: Stebbings called it one of the greatest speeches on the internet, said he has listened to it "probably a thousand times," and used it at the start of every run because its 16-minute length matched the hardest opening stretch
- Why it matters: This is the strongest pick today because it comes with repeated personal use and a specific lesson. In the same conversation, Andrew Dudum referenced the "make your bed" idea as an example of how one tactical action can create momentum for the rest of the day, and Stebbings identified McRaven's commencement speech as the source
"One of the greatest speeches I think recorded and on the Internet. I've listened to it probably a thousand times."
Also notable
Turn Every Page
- Content type: Documentary
- Author/creator: Not provided in the source material; it is described here as featuring Robert Caro and Robert Gottlieb
- Link/URL: https://ma.tt/2026/04/turn-every-page/
- Who recommended it: Matt Mullenweg
- Key takeaway: Mullenweg named it his "weekend watch"
- Why it matters: The recommendation is brief, but it is explicit and easy to follow because he shared a direct link alongside it
andrew chen
Ryan Hoover
Big Ideas
1) Open-ended framing creates better solution space
Ryan Hoover notes that when he was a junior PM, narrowly prescribing solutions to engineering limited ideas to his own. He sees the same pattern with AI: specific prompts constrain output, while open-ended prompts can surface novel solutions when they are properly contextualized.
Why it matters: PMs increasingly work through both human collaborators and AI systems. In both cases, better framing can expand the option set before the team commits.
How to apply: Start with the problem and context, then ask for approaches instead of prescribing a single solution path too early.
2) The hard part of AI products is often everything around the model
Andrew Chen argues that the AI wrapper critique misses the real work: distribution without infinite CAC, AI-native UX, brand and trust, ecosystem/community, network effects, customer service, and the usual company-building decisions around pricing, hiring, and fundraising. His conclusion is simple: these are not easy.
Why it matters: PMs evaluating AI opportunities need to judge more than model access. Experience design, trust, distribution, and service can all be core parts of the product advantage.
How to apply: Review AI products as full businesses and full experiences. Ask whether the team has a credible plan for acquisition, retention, trust, and support—not just model integration.
3) PMF is easier to find when product and distribution are designed together
The startup discussion argues that solving a problem you know well is a strong starting point, but PMMF—product-market-marketing fit—may be the sharper early test because a great product without distribution is dead. One example: a chemical startup spent $10k on billboards along a plant manager's commute and landed a contract worth millions. PMF itself becomes visible when customers are thrilled and do not push back on pricing.
Why it matters: PMs can be right about the product idea and still fail if they have not designed a path to the right audience.
How to apply: Pressure-test both sides early: whether customers light up around the problem, and whether you know exactly how to reach them.
4) Strong visions start from the future, then work back
Teresa Torres argues that the best company visions are built around where you want to be in two, five, or ten years, not only around what is possible today.
“Dream without limits, then align those dreams with reality. At some point, they intersect—and that’s where the real building begins.”
Why it matters: Strategy can get trapped by present-day constraints if teams never articulate the future state they actually want.
How to apply: Separate visioning from feasibility. Define the future state first, then identify the part of that ambition that can be built now.
Tactical Playbook
1) Use a three-step AI briefing pattern
- Put the problem and relevant context on the table first
- Ask for open-ended approaches instead of dictating the answer
- Review the novel options that emerge before narrowing
Why it matters: This keeps discovery open long enough for better options to surface, whether you are working with engineers or AI.
2) Run pre-build demand checks in the market
- Write down the feature set you think matters
- Call 25 prospects and ask whether they would want to learn more about a product with those features
- If you have to chase people hard, treat that as a warning that the pain may not be strong enough
- Shift your research toward places where users are already complaining or looking for help
Why it matters: This gives you evidence on problem intensity before you spend time building.
3) Build PMMF into discovery
- Choose a problem you know well or have experienced yourself
- Explain the problem in the customer's language, not only your own
- Pair the product with a specific customer-acquisition plan from day one
- If needed, start as a service or agency to get to initial revenue and validate the niche before turning it into product
Why it matters: The notes make the trade-off explicit: great product idea plus weak distribution is still failure.
4) Turn long-horizon vision into near-term strategy
- Articulate the desired state two, five, or ten years out
- Let the team dream without limits before filtering for practicality
- Find the intersection between that ambition and today's reality
- Build from that intersection, not from today's constraints alone
Why it matters: This creates a strategy that stays ambitious without detaching from execution.
Case Studies & Lessons
1) A $10k billboard bought precision, not scale
In one PMMF example, a chemical startup bought $10k of billboards along the exact commute of a plant manager it wanted to reach and won a contract worth millions.
Lesson: A narrow, expensive channel can outperform broad, cheap reach when the buyer is highly specific and high value.
How to apply: Define the exact person who feels the problem most, then choose distribution based on precision and relevance.
2) Internal adjacency created a path into PM
A fintech PM says they landed their first PM role without prior PM experience by networking, getting mentorship, and excelling in internal roles across customer support, help content, and internal training while working closely with PMs and stakeholders. They describe the result as six months in the role, supported by book clubs, an AI hour, and company-covered training.
Lesson: Product judgment can be built from adjacent work that exposes you to user problems, documentation, operations, and cross-functional decision-making.
How to apply: If you want to move into PM, look for roles that increase proximity to PMs, customers, and stakeholders, then pair that exposure with mentors and structured learning.
Career Corner
1) Internal credibility still opens doors
This PM transition shows that internal mobility can work even without a formal PM background when it is backed by visible execution, networking, and mentorship.
Why it matters: The same PM explicitly notes that the role is changing with AI and the broader tech job market, yet this route still produced an entry point into product.
How to apply: Build a track record in PM-adjacent work, ask for mentors, and make your cross-functional contributions legible to product leaders.
2) Treat AI fluency as part of PM development
The same team supports growth through book clubs, an AI hour, and company-funded training, and the PM sees AI as one of the forces changing the role.
Why it matters: PM development now includes both craft fundamentals and the ability to adapt to AI-driven changes in tools and workflows.
How to apply: Join recurring learning loops inside your company—or create them yourself—so AI practice becomes a habit rather than a one-off experiment.
Tools & Resources
1) The 2/5/10-year vision prompt
What it is: A simple planning prompt: define where you want the company or product to be in two, five, or ten years.
Why explore it: It forces strategy to start from desired outcomes rather than present-day constraints.
Try it: Run the exercise first without feasibility limits, then map where that ambition intersects with current reality.
2) The 25-call validation script
What it is: A lightweight pre-build test: write the feature list, then make 25 phone calls to see whether prospects want to learn more.
Why explore it: It is a fast way to test interest before spending time building.
Try it: Watch how hard you have to push; if response is weak, move closer to users who are already voicing the problem.
3) Complaint-led discovery
What it is: A research heuristic: look for places where users are already complaining instead of relying only on cold outreach.
Why explore it: It helps identify problems that are already painful enough to motivate action.
Try it: Start your next discovery pass in communities where the target user naturally discusses frustrations.
4) Open-ended AI prompt brief
What it is: A prompting rule of thumb: use open-ended prompts with enough context instead of specific prompts that only restate your idea.
Why explore it: It can widen the solution space when you want AI to surface options you did not already have.
Try it: Rewrite one existing prompt so it explains the problem and context without locking the model into a single answer.
5) Internal learning loops
What it is: Book clubs, an AI hour, and company-covered training used as recurring development infrastructure inside one product team.
Why explore it: These formats make PM skill-building continuous, even when a team has limited outside PM experience.
Try it: Create a recurring cadence around shared reading, AI practice, and sponsored training instead of relying only on ad hoc learning.
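Item 4's open-ended prompting rule can be made concrete with a hypothetical before/after pair (both prompts are invented for illustration, not taken from the source):

```python
# Hypothetical example of the two prompt styles from item 4.

# Specific prompt: restates the idea, so the model mostly echoes it back.
specific = "Write a spec for adding a CSV export button to the reports page."

# Open-ended prompt: states the problem and context, leaving the solution open.
open_ended = (
    "Our analytics users keep copying report tables into spreadsheets by hand. "
    "We have a small team and one release cycle to improve this. "
    "What approaches could address the underlying need, and what are the trade-offs?"
)
```

The second prompt leaves room for the model to propose exports, integrations, or scheduled reports instead of merely restating the button idea.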
The strongest signal today: AI know-how is becoming a differentiator
Showing startups how to use AI changed behavior — and outcomes
A field experiment covering 515 startups found that firms shown AI case studies used AI 44% more, generated 1.9x higher revenue, and needed 39% less capital. The takeaway highlighted alongside the paper is that AI’s main constraint may be less about access than about knowing how to apply it.
“AI use is an emerging skill which improves businesses and unlocks entrepreneurship”
Why it matters: This is unusually concrete evidence that practical AI adoption guidance can materially change startup performance.
Meta open-sourced a production-tested tool for subgroup calibration
Meta released MCGrad, a Python package for multicalibration, to address a common production problem: a model can look calibrated overall while remaining miscalibrated inside identifiable subgroups or feature intersections. Meta says its gradient-boosted approach improved log loss and PRAUC on 88% of more than 100 production models while substantially reducing subgroup calibration error.
Why it matters: For teams shipping models, reliability is increasingly about performance across slices of users and contexts, not just the average case.
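MCGrad's own API is not shown in the source, but the failure mode it targets is easy to reproduce. Here is a toy sketch (all names and numbers are hypothetical, and this is not MCGrad code) in which a model's average prediction matches the overall outcome rate yet is badly off inside each subgroup:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: the model predicts p = 0.5 for everyone, but the
# true positive rate is 0.3 in subgroup A and 0.7 in subgroup B.
group = rng.integers(0, 2, size=n)                  # 0 = subgroup A, 1 = subgroup B
p_hat = np.full(n, 0.5)                             # model's predicted probabilities
y = rng.random(n) < np.where(group == 0, 0.3, 0.7)  # observed binary outcomes

def calib_gap(pred, label):
    """Gap between mean predicted probability and observed positive rate."""
    return abs(pred.mean() - label.mean())

overall = calib_gap(p_hat, y)            # near 0: the model looks calibrated on average
per_group = [calib_gap(p_hat[group == g], y[group == g]) for g in (0, 1)]
# each per-group gap is near 0.2: miscalibrated within both subgroups
```

Multicalibration methods post-process predictions so that gaps like `per_group` shrink across many such slices simultaneously, not just on average.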
Reliability is still the limiting factor
Microsoft’s Copilot warning landed against a backdrop of user over-trust
Tom’s Hardware reported that Microsoft says Copilot is for “entertainment purposes only” and should not be relied on for important advice. Separately, research summarized by Techmeme said that across 1,372 participants and more than 9,000 trials, most subjects showed minimal AI skepticism and accepted faulty AI reasoning.
Why it matters: Consumer AI distribution is still running ahead of dependable performance, and many users do not appear to be calibrating their trust accordingly.
Computer vision progress is real, but general-purpose performance still looks limited
Joseph Nelson of Roboflow said computer vision remains roughly where language models were three years ago, with persistent failures in grounding, spatial reasoning, precision, and latency. On Roboflow’s RF100VL benchmark, the best multimodal model reached 12.5% zero-shot across 100 real-world tasks, and few-shot prompting improved results by about 10% at best.
Why it matters: The near-term production path still appears to be narrower, task-specific systems. Roboflow says it has productized that approach with RF-DETR, using neural architecture search on Meta’s DINOv2 backbone to create N-of-1 models for custom datasets.
A strategic warning worth keeping in view
Clement Delangue warned that frontier APIs may become less dependable
Hugging Face CEO Clement Delangue said he would not be surprised if frontier labs eventually cut their APIs entirely in a compute-constrained world, prioritizing their own direct products and customers, and he called it “scary and unsustainable” to build only on top of those APIs.
Why it matters: For builders, the message is simple: dependency on a single frontier API may be a strategic risk, not just a technical choice.
Market Movers
- United States — row-crop margins: Farm Journal said row-crop profits are expected to stay near half of their 2021-22 peak because of high input costs. Net farm income is being supported more by government payments and cattle than by row crops.
- United States / China — soybean demand: China remains a wildcard for soybean acreage, and U.S. exports to China in 2025-26 rank as the second-lowest in eight years.
- United States — eggs and beef: USDA said egg-laying hen inventory is up 7% in 2026. Separately, Joel Salatin said the U.S. beef herd is the smallest since 1950, while arguing that market structure keeps farmers from fully capturing high beef prices.
Innovation Spotlight
- Guangdong, China — medicinal intercropping under oil tea: In Huadu District, He Shou Wu is being intercropped under oil tea to expand production on limited mountain land. The reported system uses planting holes about 20 cm deep, 3,000-3,200 seedlings per mu (a mu is roughly 667 m², or one-fifteenth of a hectare), and a 2-3 year harvest window in shady, moist understory conditions. Reported output is 1,000-1,500 jin (500-750 kg) per mu, adding about 6,000-9,000 yuan per mu without reducing oil tea yield; the company also provides technical guidance and guaranteed buy-back agreements.
- Virginia, United States — low-input livestock system: Joel Salatin described Polyface as a compost-based, non-chemical system that was unaffected by the 400% fertilizer jump he associated with the Ukraine invasion. Operationally, chickens are moved to fresh pasture daily, with 600 birds covering an acre over several weeks; pigs rotate through grass, forest, and acorn paddocks; cattle are grass-finished with an emphasis on pasture diversity.
- Uttar Pradesh, India — bio-fertilizer substitution: A farmer in Gonda district said he replaced chemical NPK 12-32-16 with a Zaytonic bio-fertilizer package on maize, wheat, and soy, reporting better water-holding capacity, lower disease and insect pressure, more flowers and fruits, and stronger production and long-term soil health. He said the mycorrhiza product helps solubilize phosphorus and activate NPK in the soil.
Regional Developments
- United States: High fertilizer costs and availability concerns remain important enough that Farm Journal raised them as a possible driver of 2026 acreage shifts. That comes alongside weak row-crop returns and softer China pull on soybeans.
- Guangdong, China: The medicinal-crop base tied to this intercropping model has expanded to more than 3,000 mu under a company-plus-base-plus-farmer structure, pointing to broader commercialization of understory planting in mountain areas.
- Hebei, China: A sheep farm in Kangbao County saw weak-lamb rates rise to 15-20%, versus a normal rate below 5%, showing how quickly housing moisture and feed quality can cut lamb performance.
Best Practices
- Lamb housing in humid conditions: In the Hebei case, the main correction was to reduce manure bedding from nearly 20 cm to under 5 cm, turn it regularly, and add 5-8 cm dry grass pads along walls where lambs rest. The rationale was that overly thick bedding trapped moisture and created cold stress, diarrhea, and poor digestion.
- Feed inspection for pregnant ewes: Advisors in the same case said even localized mold in hay can be hard to catch at scale, but chronic intake by pregnant ewes can weaken nutrient absorption and increase weak-lamb risk. Their recommendation was tighter hay inspection and, for recently bred ewes, fetal-protection herbal supplements containing ingredients such as angelica, perilla, and yellow celery.
- Pasture and soil management: Salatin cited plant diversity as the key determinant of beef nutrient density in a Bionutrient Food Association study, which is why his system pushes more diverse swards while keeping cattle grass-finished and poultry moving daily. In India, the Gonda farmer's bio-fertilizer program across maize, wheat, and soy was reported to improve water retention and reduce pest and disease pressure.
- Barn monitoring: New sheep barns in the Hebei case are adding real-time monitoring so small growth changes can be detected earlier and checked before they turn into larger losses.
Input Markets
- United States — fertilizer and energy: Farm Journal flagged fertilizer pricing and availability as open questions for acreage decisions, noting that farmers responded during a period of high oil and fertilizer prices that stayed elevated. Successful Farming separately noted recent increases in energy and fertilizer prices since early March and pointed readers to longer-run input inflation trends.
- United States — margin effect: High inputs are already visible in farm economics, with row-crop profits projected near half of 2021-22 peaks and net farm income relying heavily on payments and cattle.
- Input substitution: Two contrasting responses stood out. Polyface said its compost-based system was insulated from a past 400% fertilizer spike, while the Gonda farmer said bio-fertilizers delivered better soil function and crop response than chemical NPK on his farm.
Forward Outlook
- United States — acreage remains sensitive to both costs and demand: Farm Journal said soybeans may carry a larger question mark than some other crops because China is still a wildcard, while high corn stocks keep risk broad rather than crop-specific.
- United States — profit mix matters: With ad hoc payments and cattle doing much of the cushioning, producers should watch whether crop returns improve on their own or remain dependent on outside support.
- Farm software and data tools are becoming more accessible: Bushel's 2026 State of the Farm report points to more younger farmers and wider tech use, even though agronomists, lenders, and peers still shape key decisions. In parallel, Nick Horob is opening another AI on Your Farm cohort and building tutorials on using AI coding tools with the John Deere Ops Center API to create custom farm applications and databases.
- Small dairy operations remain an underserved software segment: One U.S. dairy operator is testing demand for a simple herd-management app for farms under 100 head, including auto-generated Schedule F summaries for about $9/month.
Discover agents
Subscribe to public agents from the community or create your own—private for yourself or public to share.
Coding Agents Alpha Tracker
Daily high-signal briefing on coding agents: how top engineers use them, the best workflows, productivity tips, high-leverage tricks, leading tools/models/systems, and the people leaking the most alpha. Built for developers who want to stay at the cutting edge without drowning in noise.
AI in EdTech Weekly
Weekly intelligence briefing on how artificial intelligence and technology are transforming education and learning - covering AI tutors, adaptive learning, online platforms, policy developments, and the researchers shaping how people learn.
Bitcoin Payment Adoption Tracker
Monitors Bitcoin adoption as a payment medium and currency worldwide, tracking merchant acceptance, payment infrastructure, regulatory developments, and transaction usage metrics
AI News Digest
Daily curated digest of significant AI developments including major announcements, research breakthroughs, policy changes, and industry moves
Global Agricultural Developments
Tracks farming innovations, best practices, commodity trends, and global market dynamics across grains, livestock, dairy, and agricultural inputs
Recommended Reading from Tech Founders
Tracks and curates reading recommendations from prominent tech founders and investors across podcasts, interviews, and social media