Your intelligence agent for what matters

Tell ZeroNoise what you want to stay on top of. It finds the right sources, follows them continuously, and sends you a cited daily or weekly brief.

Set up your agent
What should this agent keep you on top of?
Discovering sources...
Syncing sources 0/180...
Extracting information
Generating brief

Your time, back

An AI curator that monitors the web nonstop, lets you control every source and setting, and delivers verified daily or weekly briefs.

Save hours

AI monitors connected sources 24/7—YouTube, X, Substack, Reddit, RSS, people's appearances and more—condensing everything into one daily brief.

Full control over the agent

Add/remove sources. Set your agent's focus and style. Auto-embed clips from full episodes and videos. Control exactly how briefs are built.

Verify every claim

Citations link to the original source and the exact span.

Discover sources on autopilot

Your agent discovers relevant channels and profiles based on your goals. You get to decide what to keep.

Multi-media sources

Track YouTube channels, podcasts, X accounts, Substack newsletters, subreddits, and blogs. Plus, follow people across platforms to catch their appearances.

Private or Public

Create private agents for yourself, publish public ones, and subscribe to agents from others.

3 steps to your first brief

1

Describe your goal

Tell your AI agent what you want to track using natural language. Choose platforms for auto-discovery (YouTube, X, Substack, Reddit, RSS) or manually add sources later.

Weekly report on space exploration and electric vehicle innovations
Daily newsletter on AI news and research
Startup funding digest with key venture capital trends
Weekly digest on longevity, health optimization, and wellness breakthroughs
Auto-discover sources

2

Review and launch

Your agent finds relevant channels and profiles based on your instructions. Review suggestions, keep what fits, remove what doesn't, add your own. Launch when ready; you can adjust sources anytime.

Discovering sources...
Sam Altman (Profile)
3Blue1Brown (Channel)
Paul Graham (Account)
The Pragmatic Engineer (Newsletter)
r/MachineLearning (Community)
Naval Ravikant (Profile)
AI High Signal (List)
Stratechery (RSS)

3

Get your briefs

Get concise daily or weekly updates with precise citations directly in your inbox. You control the focus, style, and length.

Stitch’s Series A, Vertical AI Standouts, and the New AI Infrastructure Squeeze
May 15
6 min read
874 docs
Elad Gil
Sarah Guo
+18
Stitch led the funding news, while YC and indie builders surfaced credible vertical AI teams in fintech, healthcare, construction, and internal operations. The strongest macro signals centered on data-center politics, open-source geopolitics, and where value may accrue as software shifts toward reasoning layers above systems of record.

1) Funding & Deals

  • Stitch raised a $25M Series A led by a16z, with Arbor, COTU, Raed, and SVC participating; a16z said it is the firm's first investment in Saudi Arabia. Stitch is building an API-first operating system for financial institutions that unifies ledgers, primitives, and workflows, and a16z frames it as a next-generation global fintech infrastructure company. Reported operating data: more than $5B transacted in the last six months, with customer count up 10x and revenue up 20x in 2025.

  • Furientis launched with a $5M pre-seed to build next-generation interceptor missiles optimized for scalability, rapid iteration, design simplicity, and low cost. The market case is explicit: current U.S. systems use roughly $4M missiles to intercept $40k drones, with only about 200 interceptors delivered annually. Founders are Brody Franzen and Aris Simsarian.

  • Flick raised $6M for AI-native filmmaking tools and says 14 filmmakers have already produced 13 films on the platform. The founding team pairs filmmaker Zoey Zhang with Instagram Stories founding engineer Rui Cromwell; backers include True Ventures, GV, YC, Lightspeed, Pioneer Fund, Formosa Capital, Olive Tree, and N1. Garry Tan called it one of the best new creative startups of the year.

2) Emerging Teams

  • PLAN0 turns architectural plans into construction cost estimates and analytics in minutes, and says $20B of projects have already run through the platform. Founders: @abaratiiii, Shervin, and Dimitris.

  • Gigacatalyst lets software companies ship missing product features by talking to an AI. The early signal is strong: in just 6 weeks, it helped customers unblock $1M in pipeline and ship 800 features. Founder: @namanyayg.

  • Clara is an AI-powered primary care doctor that reads a patient's medical history, then diagnoses and treats, with licensed clinicians reviewing every decision. The pedigree matters: the team previously built Circle Medical to nearly 1 million visits per year. Founders are George Favvas, Zeeshan, and Caitlin.

  • Astraea automates clinical trial biometrics, turning raw study data into CDISC-compliant datasets, TFLs, and FDA-ready outputs in days. It targets a specialized drug-development workflow with heavy formatting and submission requirements. Founders: @joshwqngsr and Sanmay.

  • Outside YC, Runik AI reported early product-usage signals worth noting. The product builds a business system from conversation in under five minutes, and two weeks after launch it had 28 signups, 12 active businesses, and 12,000+ operations across industries such as construction, poultry farming, auto parts, HR, and software project management. It also added WhatsApp-based data queries and a Claude MCP connector, and the founder says the more important signal is that users were running real workflows without training.

  • Other YC vertical-software launches in the batch include TakeCareOS for home-care agency back-office automation and Auxos for simulated customer decision testing across messaging, pricing, and positioning.

3) AI & Tech Breakthroughs

  • A notable technical release in the batch is Datadog's Toto 2.0. The company released a family of open-weight time-series foundation models from 4M to 2.5B parameters under Apache 2.0, and said each size outperforms the last using a single hyperparameter configuration while leading the BOOM, GIFT-Eval, and TIME benchmarks. Clément Delangue framed the larger implication as time-series models finally getting predictable scaling laws similar to language and vision, linking compute, data, parameters, and downstream performance.

  • PerfectBit is attacking training data quality at the source. Aimed at LLMs, robotics, and AI for Science, its data is designed to be correct by construction and verified against physics simulators, scientific databases, and formal proof systems. The technical idea is to ground training data in external verification rather than post-hoc filtering.

  • ArcGate is a useful security pattern for agent builders. Its LangChain callback blocks prompt injection by treating the problem as unauthorized instruction-authority transfer, meaning webpages, emails, tool outputs, and retrieved documents can provide data but cannot override agent instructions. The implementation takes one line to test, and community feedback already points to real-world issues such as nested-encoding bypasses and false-positive risk on legitimate user input.
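The instruction-authority idea can be sketched in a few lines. This is an illustrative Python sketch of the general pattern only, not ArcGate's actual API (which the batch does not show); the function name, patterns, and data markers are all assumptions:

```python
import re

# Hypothetical sketch of the instruction-authority-transfer defense:
# external content (webpages, emails, tool outputs, retrieved docs) is
# treated strictly as data and scanned for text that tries to claim
# instruction authority before it reaches the agent.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"disregard .{0,40}rules",
]

def guard_tool_output(text: str) -> str:
    """Reject likely injection attempts; otherwise mark content as data.

    Raises ValueError when the text contains phrasing that tries to
    override agent instructions; otherwise returns the text wrapped in
    an explicit data-only marker for downstream prompts.
    """
    lowered = text.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError(f"possible prompt injection: matched {pattern!r}")
    # Keep the instruction/data boundary explicit in the prompt.
    return f"<untrusted-data>{text}</untrusted-data>"
```

Keyword matching like this is exactly where the nested-encoding bypasses and false positives flagged in the feedback come from; real systems pair it with classifier checks and strict prompt-level separation of instructions from data.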

4) Market Signals

  • AI infrastructure is becoming a permitting and power story, not just a capex story. Garry Tan highlighted a federal bill from Sanders and AOC to pause AI data center construction, plus 300+ local bills that he says are putting half of planned 2026 data centers at risk of delay or cancellation. He linked that debate to large local economic multipliers, citing 4.7M total jobs nationally in 2023, a 7.5x employment multiplier, and Brookings estimates of 2,000-4,000 jobs per county from a single large data center. In separate commentary, he argued that blaming data centers obscures deeper grid and baseload policy problems.

  • Open-source AI's center of gravity may be shifting east. Clément Delangue said China is now the strongest open-source contributor and that many U.S. startups and academic groups are already using Chinese open models such as DeepSeek, Qwen, and Kimi. In parallel, Suhail flagged that China is allocating up to 72K GPUs each to 10 companies, a sign of more coordinated compute buildup. Delangue also warned that, if there is a bubble, it may be in API-distributed closed-source LLMs given aggressive data-center buildouts and uncertain long-term margins.

  • The GTM stack is being reframed around a system of intelligence. a16z's thesis is that the next valuable layer sits above the database: a reasoning layer that pulls from systems of record via APIs, becomes the user's one-stop place for context and action, and may capture most of the next decade's enterprise value in GTM software.

  • Voice AI already looks overbuilt at the wrapper layer. An operator on a voice AI platform says they tracked 40+ launches in the last year, with most either stalled or shut down. Their takeaway: durable winners go deep on a specific vertical and operational integrations, while weak wrappers get displaced quickly and can see brutal churn after a single bad call. The more attractive wedge may be tooling for the wrappers themselves, including evals, analytics, compliance, and voice ops.

  • AI referral traffic is becoming measurable earlier than many founders realize. Zen Reports says it has crossed 200 connected GA4 properties, and the median tracked site gets more traffic from Perplexity than from Bing.

5) Worth Your Time

  • From System of Record to System of Intelligence — the clearest essay in the batch on how value may migrate from legacy systems of record to reasoning layers above them.

  • Data center NIMBYs are killing $1T in AI infrastructure — worth reading alongside today's infrastructure debate if you want the investor case that local permitting is becoming a strategic AI constraint.

  • 20VC on parallel agents and compute consolidation — the best clip in the batch for the combined demand-side and supply-side view: token growth could scale far above consensus as workflows move from sequential usage to parallel agents, while frontier labs continue locking up scarce capacity.

  • Pax Silica on the Philippines industrial base — useful if you are tracking non-chip bottlenecks in the AI supply chain, especially reducers, motors, rare earth magnets, actuators, and robotics inputs.

  • Clément Delangue on open source and robotics — useful for investors thinking about open-source geopolitics and early distribution in robotics; Delangue argues China now leads open-source contributions and says Hugging Face has shipped almost 10,000 Ricci Mini robots with 300+ apps already built.
Codex Goes Mobile and Practical Agent Loops Get Sharper
May 15
5 min read
135 docs
Thibault Sottiaux
Mike Krieger
Sualeh Asif
+15
OpenAI's Codex mobile preview was the clearest workflow shift today: coding agents are becoming remote operators you can steer from anywhere. Inside: Riley Brown's deploy-on-every-change setup, Thibault Sottiaux's recurring Codex workflows, and the key releases and harness signals worth tracking.

🔥 TOP SIGNAL

  • Codex just moved from "terminal tool" to "remote operator." OpenAI's preview puts Codex inside the ChatGPT mobile app so you can start work, review outputs, steer execution, and approve next steps from iOS/Android while the agent keeps running on your laptop, Mac mini, or devbox. Greg Brockman calls it a "huge step forward for universal usage of agents," and Riley Brown's day-one demo makes that concrete with voice prompting, long-running task notifications, and deploy-on-every-change app iteration from the phone.

⚡ TRY THIS

  • Set up Codex mobile as a real remote console. Riley Brown's exact sequence: update Codex desktop and ChatGPT iOS, restart Codex, keep both on the same Wi-Fi, connect from the mobile prompt, and authorize the same account so chats sync. Then switch to Chats first for one-off/non-coding tasks, keep Projects for coding, use voice mode for hands-free prompting, and leave notifications on because some agents run 10-30 minutes on longer jobs. Plugins are available via @; skills do not show up in the mobile picker yet, but Brown says natural-language requests still hit desktop-side skills, and if you're comfortable with it, he recommends full-access permissions instead of approving every action.

  • Create a deploy-on-every-change skill for phone-based app iteration. Brown's setup: enable the Vercel plugin in Codex desktop, then prompt: make a skill called YOLO Mode ... every single change is deployed to prod ... the public link is sent. After that, add "please YOLO it" to prompts; he shows it creating a landing page, returning a public link, and reusing the same link after a dark-mode revision request. He also demos create a full notes app on YOLO mode. Mobile optimized. Use Supabase for DB... like Trello and gets a deployed app with auth and persisted data.

  • Schedule a daily "chief of staff" agent. OpenAI Codex lead Thibault Sottiaux says he hands off 100+ tasks/day and runs a recurring automation: have Codex go through Gmail, Notion, and calendar, summarize the day, flag what is at risk, and schedule it for 9am daily so the report lands in the inbox. This is a good pattern if your real bottleneck is launch coordination, on-call visibility, or project drift rather than writing the next function.

  • Prompt like a manager; review like a database engineer. Sottiaux says the biggest lift comes from defining what "good" and "solved" look like, including exact output structure. Mike Krieger says he now hashes out the spec with Claude before it writes code so the model converges on a clear North Star. But Simon Eskildsen still manually reviews every line touching TurboPuffer's database, and Peter Steinberger's looped codex /review skill explicitly stops short of architecture decisions — good reminder to use agent loops for local cleanup, not irreversible system design.

📡 WHAT SHIPPED

  • Codex in ChatGPT mobile app (preview). Start work, review outputs, steer execution, and approve next steps from mobile while Codex runs on laptop/Mac mini/devbox; Greg Brockman called it a "huge step forward for universal usage of agents," and Romain Huet says it's live on iOS and Android. OpenAI post.
  • OpenClaw v2026.5.12. OpenAI setup now defaults to Codex login; runtime fallbacks and stalled-stream recovery were added; Telegram polling survives stalls; installs/startup got leaner/faster. Steipete says the team has been pushing performance, reliability, security, stability, new crabbox automation flows, and automated video QA. Release notes.
  • CodexBar 0.26.0. New integrations for Kiro, Antigravity, OpenRouter, Kimi; calmer menus + keyboard nav; better Codex/Claude limits and cost scoping; macOS asset and CLI/Homebrew fixes. Release.
  • mcporter 0.11.0. Steipete says he now uses it mainly as a more stable browser-automation CLI and for agents testing MCPs without restarts; he expects it to matter less as code mode spreads across harnesses. Release.
  • Harness quality signal from Theo. His current ranking is rough on Claude Code: he calls it the worst harness, says OpenCode has better UX, multi-model support, and cheaper/faster tool-call pruning, says Cursor performs better with Opus, and notes that most of his own T3 Code usage is with Codex 5.5 low/fast anyway.
  • xAI Grok Build (early beta). New CLI for coding, app building, and workflow automation for SuperGrok Heavy subscribers; xAI says the beta is meant to improve from user feedback, and Theo's immediate take was "fast and flicker-free". Try it.
  • LangChain Deep Agents 0.6. New harness profiles for open models, code interpreter inside the loop, streaming typed projections, DeltaChannel checkpoints, and ContextHubBackend for skills/policies/memories. Blog.

🎬 GO DEEPER

  • 08:35-09:17 — Riley Brown: YOLO Mode setup. Best short clip if you want phone-native vibe coding today: Brown shows the exact Vercel-plugin + custom-skill prompt that makes every change auto-deploy to a public URL.

  • 17:57-19:05 — Thibault Sottiaux: Codex as daily chief of staff. Good clip for anyone thinking beyond codegen. He walks through handing Codex recurring coordination work across Gmail, Notion, and calendar, then scheduling the summary for 9am daily.

  • 27:36-28:37 — Thibault Sottiaux: define "done" precisely. Short, reusable prompting lesson: make the output shape explicit and help Codex evaluate its own success instead of giving a fuzzy objective.

  • 46:40-47:53 — Mike Krieger: spec before code. Worth watching if your agents keep producing something technically correct but strategically off. Krieger's fix is to collaborate on the spec first, then let Claude implement against a crisp North Star.

  • Repo to study — steipete's codex-review skill. Tiny repo, important pattern: iterative review loops are cheap now, but the author explicitly warns they do not replace architecture judgment.

  • Release notes to study — OpenClaw v2026.5.12. Read this if you care about real-world agent reliability; the changelog is mostly about recovery paths, defaults, and startup friction, which is where production agent systems actually leak time.

Editorial take: the durable edge right now is not "more autonomous" by itself — it's better remote control, tighter review loops, and much clearer task definitions.

Codex Goes Mobile, Figure Extends Humanoid Runtime, and Autonomous Agents Beat a Human Baseline
May 15
4 min read
847 docs
Anthropic
Dan Nystedt
clem 🤗
+20
Codex went mobile, Figure extended humanoid runtime past a full day, and PrimeIntellect showed autonomous coding agents beating a human nanoGPT baseline. The brief also covers diffusion decoding speedups, time-series scaling laws, enterprise data agents, Anthropic’s Gates partnership, and the latest U.S.-China compute tensions.

Top Stories

Why it matters: today’s strongest signal is that AI agents are becoming more persistent, more physical, and more capable at open-ended technical work.

  • OpenAI put Codex on the phone. Codex is now in preview inside the ChatGPT mobile app, letting users start work, review outputs, steer execution, and approve next steps from iOS and Android while jobs keep running on a laptop, Mac mini, or devbox; OpenAI also made Remote SSH generally available for managed remote environments. Commentators called it a major unlock for remote agent work and broader day-to-day agent usage.

  • Figure pushed humanoid uptime from a shift demo to around-the-clock operation. Figure said its F.03 robots moved from an original 8-hour target to more than 24 hours of continuous autonomous package sorting without failure, and later crossed 30 hours with no downtime. The company says the robots are now around human parity at roughly 3 seconds per package, run entirely onboard via Helix-02 with no teleoperation, and have processed more than 38,000 packages.

  • Autonomous coding agents beat the human baseline on nanoGPT optimization. PrimeIntellect let Claude Code (Opus 4.7) and Codex (GPT-5.5) run autonomously on the nanoGPT speedrun optimizer track using idle compute, totaling about 10,000 runs, 14,000 H200 hours, and 23.9B tokens. Opus reached 2930 steps and Codex 2950, both ahead of the 2990 human baseline; PrimeIntellect framed the work as a step toward automating AI research.

Research & Innovation

Why it matters: the most notable technical updates were about cheaper inference, clearer scaling laws, and better understanding of what models are doing internally.

  • Zyphra’s diffusion language model targets the decoding bottleneck. ZAYA1-8B-Diffusion-Preview, trained on AMD hardware, claims a 4.6-7.7x decoding speedup over autoregressive LLMs with minimal quality degradation by generating 16-token blocks in parallel. The company argues this matters because autoregressive inference is memory-bandwidth bound, while diffusion removes that bottleneck.

  • Datadog’s Toto 2.0 makes the case that time-series models scale cleanly. The open-weights family ranges from 4M to 2.5B parameters, with each size outperforming the previous one under a single hyperparameter configuration and leading BOOM, GIFT-Eval, and TIME. Datadog’s framing is that time series now shows the kind of predictable scaling behavior long seen in language and vision.

  • Goodfire found a “geometric calculator” inside Llama models. The mechanism encodes numbers as positions on multiple circles, handles arithmetic as well as weekday and month reasoning, and was tested by steering the circles and watching answers change. Goodfire says this kind of neural-geometry work could improve debugging, control, and model design.
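The memory-bandwidth argument behind Zyphra's decoding claim is easy to sanity-check with rough arithmetic. The numbers below are illustrative assumptions (an 8B-parameter bf16 model on H100-class memory bandwidth), not Zyphra's own figures:

```python
# Why parallel block decoding helps a memory-bandwidth-bound decoder.
# All numbers are illustrative assumptions, not measured values.
weight_bytes = 8e9 * 2      # 8B parameters at 2 bytes each (bf16)
bandwidth = 3.35e12         # ~3.35 TB/s of HBM bandwidth (H100-class)

# Autoregressive decoding: each generated token re-reads all weights once,
# so token throughput is capped by (bandwidth / model size).
ar_tokens_per_s = bandwidth / weight_bytes

# Block diffusion: one weight pass can emit a 16-token block, so the same
# bandwidth budget covers 16 tokens per pass.
block = 16
diff_tokens_per_s = ar_tokens_per_s * block

# Upper bound is the block size (16x here); the claimed 4.6-7.7x is lower
# because diffusion needs multiple denoising passes per block.
speedup_bound = diff_tokens_per_s / ar_tokens_per_s
```

The point of the sketch is only that the claimed 4.6-7.7x sits comfortably below the bandwidth-limited ceiling, which is what makes the claim plausible on its face.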

Products & Launches

Why it matters: new tools keep turning agents from isolated assistants into systems that can work across design, data, and the browser itself.

  • MagicPath 2.0 is now a multiplayer canvas for humans and agents such as Codex and Claude Code, with real-time shared context and fully functional browser-based prototypes built from real code. It also supports design-to-repo and repo-to-design round trips through external agents.

  • Perplexity Computer now connects to Snowflake. The product can run end-to-end workflows on live warehouse data and return answers with SQL, source tables, filters, and metrics, while admins retain control over access and shared business logic.

  • Kimi Web Bridge brings browser actions to major agent stacks. The extension lets agents search, scroll, click, type, fill spreadsheets, and turn repeated browser work into reusable skills; it supports Kimi Code CLI, Claude Code, Cursor, Codex, Hermes, and more.

Industry Moves

Why it matters: major firms are pairing frontier models with real distribution, public-interest deployment, and international expansion.

  • Anthropic partnered with the Gates Foundation on a $200M package of grants, Claude credits, and technical support across global health, life sciences, education, agriculture, and economic mobility.

  • Runway is expanding to Japan with a Tokyo base. The company says Japan is already its third-largest market, its fastest-growing self-serve market in Asia, and has seen 300% enterprise customer growth over the last 12 months.

Policy & Regulation

Why it matters: AI geopolitics still turns on compute, and approvals matter less than actual hardware movement.

  • U.S.-China chip controls remain unresolved in practice. Reuters-reported approvals cover roughly 10 Chinese firms buying Nvidia H200s, but no chips have shipped yet. Separate analysis this week argued Chinese labs remain compute-constrained and continue renting or smuggling Nvidia-designed chips from third countries, so the real signal is deliveries, not approvals.

Quick Takes

Why it matters: these smaller updates point to where the next wave of tooling, governance, and specialty models is heading.

  • Ahead of Google I/O, a leak described Gemini Spark as an always-on agent with access to Gmail, Calendar, location, tasks, and personal context.
  • arXiv now imposes a one-year ban for hallucinated references in submissions.
  • Baseten says it serves Qwen3-TTS on vLLM-Omni at $3 per 1M characters, about 90% lower than comparable closed-source TTS APIs.
  • Intern-S2-Preview, a 35B open scientific multimodal model, claims performance comparable to the trillion-scale Intern-S1-Pro on core scientific tasks and launched with day-0 vLLM support.
Agents Move Into Workflows While AI Strategy Broadens Beyond the Model Race
May 15
4 min read
224 docs
Sarah Guo
Elad Gil
Thibault Sottiaux
+10
AI’s biggest moves today were about operationalization: OpenAI and xAI pushed agents deeper into real workflows, while Anthropic, Pax Silica, and Datadog highlighted how AI competition is expanding into philanthropy, industrial policy, and domain-specific research.

The dominant pattern

Today’s clearest signal is operationalization: AI is moving from demo surfaces into working surfaces. The biggest updates were about agents becoming easier to run, supervise, and connect to real systems, while policy and strategy news focused on who will control the infrastructure and alliances around AI.

Agents become more operational

OpenAI pushes Codex beyond coding — and into mobile control

OpenAI rolled out a preview that lets users start new work, review outputs, steer execution, and approve next steps for Codex from the ChatGPT mobile app while jobs continue running on a laptop, Mac mini, or devbox. In a separate discussion, OpenAI said Codex has expanded into general-purpose knowledge work: most tasks are now non-coding, with the agent gathering context from repositories, documents, and Slack, plus access to 100+ plugins and enterprise controls such as sandboxing, read-only permissions, and an auto-review agent for risky actions. OpenAI also added hooks and scoped programmatic access tokens for Business and Enterprise teams, extending Codex into CI and internal automations.

"Huge step forward for universal usage of agents." (Greg Brockman)

Why it matters: OpenAI is positioning Codex less as a code generator and more as a long-running work agent that can be supervised from anywhere and embedded into existing workflows.

xAI opens an early Grok Build beta for terminal-native agents

xAI released an early beta of Grok Build, describing it as an agentic CLI for coding, building apps, and automating workflows for SuperGrok Heavy subscribers. xAI said the product will improve based on user feedback, while Elon Musk asked users to list the most important features to improve, fix, or add; he also noted that the default uses vim keybindings.

Why it matters: The coding-agent race is now clearly multi-player, and the competition is shifting toward full workflow execution inside the terminal rather than autocomplete alone.

AI strategy broadens beyond product launches

Anthropic commits $200M to a Gates Foundation partnership

Anthropic said it is partnering with the Gates Foundation and committing $200 million in grants, Claude credits, and technical support for programs in global health, life sciences, education, agriculture, and economic mobility. The company linked the announcement to an official write-up on its site.

Why it matters: This is a notable public-interest deployment signal: frontier-model access is being paired with funding and implementation support for real-world sectors rather than only sold as software.

AI competition is being framed as an allied supply-chain problem

Jacob Helberg described Pax Silica as a 14-country economic security coalition focused on the AI supply chain, and said its first major project is a 4,000-acre economic security zone in the Philippines for manufacturing vital AI inputs. He argued that the supply chain extends well beyond chips to inputs such as precision reducers, servo motors, rare earth magnets, and actuators, with particular interest in the China-dominated robotics supply chain. He contrasted the approach with state-run infrastructure models by emphasizing commercially viable platforms built with private-sector participation. Separately, Anthropic published a paper on AI competition between the US and China and said the US and democratic allies currently hold the lead in frontier AI.

Why it matters: The strategic conversation is widening from model benchmarks to industrial capacity, allied coordination, and control of the inputs that AI systems depend on.

Product and research signals worth watching

OpenAI says ImageGen 2.0 is becoming workflow infrastructure

In a podcast episode, OpenAI said ImageGen 2.0 lifted usage by more than 50% in its first two weeks and is now generating more than 1.5 billion images a week in ChatGPT. The company highlighted improvements in text rendering, multilingual output, photorealism, arbitrary aspect ratios, and large multi-object composition, and said more than half of internal presentation slides now use ImageGen. OpenAI also framed the next step as a creative agent and pointed to emerging workflows that combine ImageGen with Codex for website and app design.

Why it matters: The story here is no longer just better image quality; OpenAI is treating image generation as an everyday productivity layer and as a building block for broader agents.

Datadog’s Toto 2.0 suggests time-series models may be entering a scaling-law phase

Datadog released Toto 2.0 as an Apache 2.0 family of open-weights time-series foundation models ranging from 4M to 2.5B parameters on Hugging Face, with every size outperforming the last on the BOOM, GIFT-Eval, and TIME benchmarks under a single hyperparameter configuration. Clément Delangue highlighted the more interesting claim: unlike prior time-series model families that showed flat performance across sizes, Toto 2.0 appears to follow scaling laws more like language and vision models.

Why it matters: If that pattern holds, time-series modeling may become more predictable to scale, which could matter for forecasting and monitoring use cases where open weights are especially valuable.

Garry Tan’s Research Stack on Data Center Employment Effects
May 15
2 min read
167 docs
Garry Tan
Today's strongest organic recommendations came from Garry Tan, who shared a small research stack on data center economics. Brookings stood out for causal evidence on employment effects, while PwC added national-scale context on multipliers and job growth.

What stood out

Today's highest-signal recommendations came as a compact research stack from Garry Tan rather than a single standalone book or podcast. The common thread was clear: evaluate data centers by their broader employment multiplier and downstream ecosystem effects, not only by direct headcount at each site.

Most compelling recommendation

New Evidence on Data Center Employment Effects

  • Content type: Research article
  • Author/creator: Brookings Institution
  • Link/URL: https://www.brookings.edu/articles/new-evidence-on-data-center-employment-effects/
  • Who recommended it: Garry Tan
  • Key takeaway: Tan highlighted Brookings' estimate that a single large data center can add 2,000-4,000 total jobs per county within five to six years, along with an 11% construction employment boost, a 22% increase in information-sector employment, and a 4-5% lift in total private employment
  • Why it matters: This was the strongest resource in today's set because Tan emphasized its synthetic-control analysis across 770 data centers in 93 counties, making it the clearest methodology-backed starting point for readers who want more than anecdotal claims

Companion resource

Economic Contributions of Data Centers

Why this cluster was useful

"To be precise: the multiplier effect is the point, not headcount per site."

Tan tied that argument to downstream effects from fiber buildout, power infrastructure, and supplier networks, and cited Virginia data centers supporting 78,140 jobs and $31.4 billion in economic output in 2023 as a regional example.

For readers evaluating data center buildouts, this was the clearest organic recommendation set of the day: a compact group of sources that can be read together to assess claims about economic impact from multiple angles.

Prototype-First PM Work and the Rise of Builder ICs
May 15
4 min read
107 docs
Product Management
Ravi on Product
Sachin Rekhi
+2
This brief focuses on a prototype-first shift in product management, the rise of customer-context-driven PRDs, and what lean PM orgs imply for careers and execution. It also includes a practical prototyping loop, ambiguity-management tactics, and a short tool stack to explore.

Big Ideas

1) PM work is shifting from long specs to fast prototypes

AI looks strongest on upstream inputs and prototyping, not final prioritization. Sachin Rekhi says AI is highly useful for customer and data insights that shape roadmaps, but less helpful for roadmaps themselves because prioritization is still more artful than simple request-counting, and AI-written specs miss competitive nuance; he is also spending less time on specs as prototyping becomes easier. Ravi Mehta makes the same case from a different angle: fast prototypes can turn product guessing into validation. In SaaStr’s Alloy demo, a PM captures an existing screen, prompts a new workflow, demos it live, then moves to codebase-connected changes that can be reviewed and pushed to GitHub.

  • Why it matters: Speed is moving from document production to decision validation.
  • How to apply: Use AI first to synthesize inputs, then build a disposable prototype before writing a full spec.

2) PM artifacts are getting fed by real customer context

Glyphic’s “commercial brain” centralizes conversations, support tickets, emails, calls, and CRM data, then uses that context to create PRDs and surface repeated feature requests to product teams. A PM on Reddit described Glean in similar terms as enterprise search across Slack, Jira, Drive, and the codebase.

  • Why it matters: Discovery improves when PRDs and prioritization draw from live customer evidence instead of scattered notes.
  • How to apply: Start by centralizing searchable customer, support, and sales context, even if the first win is better retrieval rather than full automation.

Tactical Playbook

1) When a UX debate stalls, build both

A repeatable AI prototyping loop:

  1. Generate an editable PRD and first-pass variants.
  2. Pull in the live PRD, design system, and brand voice as Documents.
  3. Define reusable Skills for recurring transformations like copy tone.
  4. Test variants, then sync the winning changes back into the PRD so the spec stays current.
  • Why it matters: Working software reveals options discussion misses.
  • How to apply: Treat prototypes as sketches—fast, disposable, and built to learn, not to ship.

2) In high-ambiguity roles, protect throughput before process

Practitioners in chaotic TPM environments recommend ruthless prioritization, aggressive focus blocks, short stakeholder intros, and LLMs for summarizing docs, structuring thinking, drafting PRDs, and turning notes into action items. They also advise cutting non-essential meetings when calendars become overloaded.

  • Why it matters: The main PM failure mode in chaos is reacting to everything.
  • How to apply: Stack-rank only what matters this week, block deep-work time first, and use AI to compress inputs—not to make the call for you.

Case Studies & Lessons

1) AI news reader onboarding: prototype the cold start

One startup had to choose between a collapsible topic tree and a long scrolling list for onboarding across 50+ topics in 8 categories. AI-generated working variants let the team test real interactions; they ultimately chose the collapsible version because the expanded list felt overwhelming, and the prototypes let them measure topic selection and overwhelm instead of arguing abstractly.

  • Lesson: For novel UX problems, prototype first and let users break the tie.

2) Whatnot’s lean PM model raises the bar for IC leverage

Whatnot runs its product surface with 20 PMs across 1,200+ employees; PMs map to cross-team problems rather than engineering managers, and everybody ships. In 2025, the company ran 750 experiments—about 3 ship/don’t-ship decisions per day—and estimated that making each decision 3 days faster compounds to $1.1B in incremental seller earnings over two years. A related pattern is the “High-Impact IC”: a builder who takes a project from problem to production with far less coordination overhead.

  • Lesson: AI increases the value of PMs who can own a problem end to end, not just coordinate around it.

Career Corner

Technical literacy for PMs now includes AI-assisted codebase reading. A Reddit discussion argues PMs should have read access so they can inspect flows, permissions, validation logic, feature flags, integrations, and other product constraints that often live in code. The boundary is clear: use it to ask better questions and understand constraints earlier, not to bypass engineers or dictate implementation. For non-technical PMs feeling behind, peers recommend learning Claude, getting comfortable in the terminal, and building small side projects while leaning on business expertise that engineering already values.

Tools & Resources

  • Dazl: spec-driven AI prototyping with live Documents, reusable Skills, and PRD sync in both directions.
  • Alloy: permissionless screen capture and codebase-connected AI app building for fast idea exploration, live demos, and GitHub handoff.
  • Figma Make / Claude Design / Glean: on-brand prototypes via integrated design systems, fast high-fidelity mockups, and enterprise search across Slack, Jira, Drive, and code.

Start with signal

Each agent already tracks a curated set of sources. Subscribe for free and start getting cited updates right away.

Coding Agents Alpha Tracker avatar

Coding Agents Alpha Tracker

Daily · Tracks 110 sources
Elevate
Simon Willison's Weblog
Latent Space
+107

Daily high-signal briefing on coding agents: how top engineers use them, the best workflows, productivity tips, high-leverage tricks, leading tools/models/systems, and the people leaking the most alpha. Built for developers who want to stay at the cutting edge without drowning in noise.

AI in EdTech Weekly avatar

AI in EdTech Weekly

Weekly · Tracks 92 sources
Luis von Ahn
Khan Academy
Ethan Mollick
+89

Weekly intelligence briefing on how artificial intelligence and technology are transforming education and learning - covering AI tutors, adaptive learning, online platforms, policy developments, and the researchers shaping how people learn.

VC Tech Radar avatar

VC Tech Radar

Daily · Tracks 120 sources
a16z
Stanford eCorner
Greylock
+117

Daily AI news, startup funding, and emerging teams shaping the future

Bitcoin Payment Adoption Tracker avatar

Bitcoin Payment Adoption Tracker

Daily · Tracks 108 sources
BTCPay Server
Nicolas Burtey
Roy Sheinbaum
+105

Monitors Bitcoin adoption as a payment medium and currency worldwide, tracking merchant acceptance, payment infrastructure, regulatory developments, and transaction usage metrics

AI News Digest avatar

AI News Digest

Daily · Tracks 114 sources
Google DeepMind
OpenAI
Anthropic
+111

Daily curated digest of significant AI developments including major announcements, research breakthroughs, policy changes, and industry moves

Global Agricultural Developments avatar

Global Agricultural Developments

Daily · Tracks 86 sources
RDO Equipment Co.
Ag PhD
Precision Farming Dealer
+83

Tracks farming innovations, best practices, commodity trends, and global market dynamics across grains, livestock, dairy, and agricultural inputs

Recommended Reading from Tech Founders avatar

Recommended Reading from Tech Founders

Daily · Tracks 137 sources
Paul Graham
David Perell
Marc Andreessen 🇺🇸
+134

Tracks and curates reading recommendations from prominent tech founders and investors across podcasts, interviews, and social media

PM Daily Digest avatar

PM Daily Digest

Daily · Tracks 100 sources
Shreyas Doshi
Gibson Biddle
Teresa Torres
+97

Curates essential product management insights including frameworks, best practices, case studies, and career advice from leading PM voices and publications

AI High Signal Digest avatar

AI High Signal Digest

Daily · Tracks 1 source
AI High Signal

Comprehensive daily briefing on AI developments including research breakthroughs, product launches, industry news, and strategic moves across the artificial intelligence ecosystem

Frequently asked questions

Choose the setup that fits how you work

Free

Follow public agents at no cost.

$0

No monthly fee

Unlimited subscriptions to public agents
No billing setup

Plus

14-day free trial

Get personalized briefs with your own agents.

$20

per month

$20 of usage each month

Private by default
Any topic you follow
Daily or weekly delivery

$20 of usage during trial

Supercharge your knowledge discovery

Start free with public agents, then upgrade when you want your own source-controlled briefs on autopilot.