PM Daily Digest
by avergin · 100 sources
Curates essential product management insights including frameworks, best practices, case studies, and career advice from leading PM voices and publications
20VC with Harry Stebbings
Andrew Chen
Elena Verna
Big Ideas
1) Tab count is a fast AI opportunity filter
Andrew Chen's heuristic is simple: the number of browser tabs or alt-tabs in a workflow is a proxy for how much AI can compress that work into a single experience. His example is person/company research, which used to require LinkedIn, X, Google, notes, and Slack, but can now be collapsed into one prompt in about 10 seconds. He says the biggest opportunities sit in workflows where users alt-tab 20+ times per task, especially in sales, recruiting, research, compliance, and procurement.
Why it matters: it gives PMs a concrete way to prioritize AI work around workflow compression rather than novelty.
How to apply: audit a few high-frequency jobs your users perform, count tabs and copy-paste loops, and prioritize the flows with the most context-switching first.
"AI doesn’t need to be superintelligent to be wildly useful. It just needs to be good enough to close the tabs."
2) AI monetization needs flexibility, not pricing dogma
Elena Verna argues current monetization models are not right for every AI company because many teams are still passing through expensive LLM costs to users. She expects LLM costs to fall and says monetization will need to move toward outcomes as models commoditize. She is also explicit that subscription-only monetization is a poor fit for bursty usage; at Lovable, adding top-ups on top of subscription increased monetization capture and improved retention.
Why it matters: if usage is uneven and model costs are moving, pricing becomes part of product strategy, not a one-time packaging decision.
How to apply: test ad hoc purchases alongside subscription for bursty use cases, and make pricing changes operationally easy instead of treating them as annual events.
3) For productivity tools, meaningful frequency beats intensity
Verna frames activation around product engagement: define the aha moment, the steps to reach it, and the early habit loops that bring users back. She argues intensity can be an anti-metric for simple productivity tools, because more time may mean users are stuck, while daily or weekly usage sits in the habitual zone and monthly usage drifts into the forgettable zone. She also warns against login-based metrics and prefers value-creating actions instead.
Why it matters: teams often mistake activity for value.
How to apply: choose one or two actions that clearly represent user value, then track repeat frequency on a daily or weekly basis rather than visits or logins.
4) "Minimum lovable" is part of the product bar
Verna argues teams should aim for a minimum lovable product in every feature, because software is increasingly judged by the emotion, trust, and connection it creates, not just by basic functionality. In her framing, the progression is: it works, users trust it, then users connect with it.
Why it matters: she argues personality and emotional connection are becoming a minimum bar to kickstart growth.
How to apply: during reviews, evaluate not just whether a feature works, but whether it creates trust and a recognizable product feel.
Tactical Playbook
1) Run a tab-count audit before you scope an AI feature
Use this sequence:
- List the tabs, docs, and tools a user opens to finish one job; Chen's core idea is that tab count signals compressibility.
- Mark every copy-paste handoff; Chen says eliminating 6+ tabs and a copy-paste loop is immediately useful to users.
- Prioritize jobs with extreme context switching; he highlights workflows with 20+ alt-tabs per task.
- Prototype the whole flow as one AI-native experience; his example collapses LinkedIn, X, Google, notes, and Slack into a single prompt-driven workflow.
Why it matters: this turns abstract AI brainstorming into a concrete prioritization method; a rough scoring sketch follows.
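As an illustration, the audit can be reduced to a few lines of Python. The weights, thresholds, and example workflows below are assumptions layered on Chen's heuristics, not anything he published:

```python
# Hypothetical tab-count audit: score each job-to-be-done by how much
# context-switching it forces, then rank by compressibility.
# The 20+ alt-tab and 6+ tabs-plus-copy-paste flags follow Chen's
# heuristics quoted above; weights and field names are illustrative.

from dataclasses import dataclass

@dataclass
class Workflow:
    job: str            # the job-to-be-done
    tabs: int           # tabs/tools opened to finish one run
    copy_pastes: int    # manual copy-paste handoffs per run
    runs_per_week: int  # how often a user performs the job

def compression_score(w: Workflow) -> int:
    # Weight context switches by frequency: a weekly 25-tab job beats
    # a monthly 30-tab one.
    return (w.tabs + 2 * w.copy_pastes) * w.runs_per_week

workflows = [
    Workflow("person/company research", tabs=25, copy_pastes=6, runs_per_week=10),
    Workflow("compliance evidence pull", tabs=12, copy_pastes=4, runs_per_week=2),
    Workflow("weekly status report", tabs=5, copy_pastes=2, runs_per_week=1),
]

for w in sorted(workflows, key=compression_score, reverse=True):
    flag = "PRIORITIZE" if w.tabs >= 20 or (w.tabs >= 6 and w.copy_pastes > 0) else "monitor"
    print(f"{w.job}: score={compression_score(w)} -> {flag}")
```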
2) Redefine activation around value, not logins
A practical setup from Verna's framework:
- Write down the user's aha moment and the steps required to get there.
- Decide which action proves value; at Lovable, examples include building an app or receiving traffic on a published app.
- Track whether that action repeats daily or weekly, because that is the habitual zone Verna wants to see.
- Treat raw logins as a vanity metric and be careful with time-spent metrics if your product is supposed to feel simple.
Why it matters: it aligns your core metric with value creation instead of mere presence. A minimal counting sketch follows.
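One way to operationalize this, as a toy sketch: the event names and the two-days-a-week threshold are illustrative assumptions, not Verna's or Lovable's definitions:

```python
# Minimal sketch of value-action tracking: count distinct days on which
# each user performed a value-creating action, ignoring logins entirely.

from collections import defaultdict
from datetime import date

VALUE_ACTIONS = {"published_app", "app_received_traffic"}  # not "login"

events = [  # (user_id, action, day) - would come from your event store
    ("u1", "login", date(2024, 5, 1)),
    ("u1", "published_app", date(2024, 5, 1)),
    ("u1", "published_app", date(2024, 5, 3)),
    ("u2", "login", date(2024, 5, 2)),  # present, but created no value
]

value_days = defaultdict(set)
for user, action, day in events:
    if action in VALUE_ACTIONS:
        value_days[user].add(day)

for user in sorted({u for u, _, _ in events}):
    n = len(value_days[user])
    zone = "habitual" if n >= 2 else "at risk"
    print(f"{user}: {n} value-creating day(s) this week -> {zone}")
```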
3) Use a two-speed launch system
Lovable's operating rhythm suggests a clear playbook:
- Ship customer-facing improvements daily, not just bug fixes.
- Let the people closest to the work share releases; Lovable encourages engineers to post launches socially and then "beeswarms" those posts for amplification.
- Reserve major narrative effort for bundled launches every 1-2 months, when multiple capabilities add up to a story and a step-function change.
- Treat ongoing visibility as part of retention and resurrection, not just acquisition; Verna says the constant noise brings people back because the product feels alive and evolving.
Why it matters: it separates release velocity from storytelling cadence without losing either.
4) Treat freemium as a marketing channel with its own metric
Verna's framing is unusually direct: a free user has value if they get delighted and then market the product on your behalf. Lovable tracks this with a "lovable score" that measures how often users refer the product to someone else.
How to apply:
- Define what a successful free experience looks like before conversion.
- Track referral behavior explicitly, not just free-to-paid conversion.
- Protect the parts of the free experience most likely to create delight and sharing.
Why it matters: it gives PMs a clearer way to value free usage in products where word of mouth matters.
Case Studies & Lessons
1) Lovable turned shipping cadence into retention infrastructure
At Lovable, engineering releases improvements every day, employees post about those releases on social, the company amplifies them internally, and marketing concentrates on bigger tier-one launches every 1-2 months. Verna says that constant noise is part of retention and resurrection because users feel the product is "living, breathing" and worth revisiting.
Key takeaway: if your category is moving quickly, consistent visible improvement can be part of the product experience, not just a marketing layer.
2) A Romanian accountant SaaS validated the workflow before polishing the brand
One founder started with a very specific problem: accountants were spending 3-5 hours each month chasing invoices, bank statements, and receipts over WhatsApp. Validation was lightweight and direct: they messaged about 50 Romanian accountants on WhatsApp, got repeated confirmation, and built the MVP in 2 weeks. The product itself stayed close to the workflow: each client gets a personal upload link with no account or onboarding, and the accountant sees a dashboard showing who sent documents and who did not. On day one, the product saw 172 visitors, 18 users reaching signup, 2 registered accounts, 2 Stripe checkout visits, and a 59% bounce rate.
A commenter highlighted the strongest decision: the product started from a real workflow rather than "cool tech," and recommended 5-10 Zoom walkthroughs of actual month-end work to surface edge cases before chasing more traffic. The founder's own lesson was that niche, non-English B2B can be slow, but each signup is more likely to be a real customer than a curiosity click.
Key takeaway: tight workflow validation plus narrow positioning can produce higher-signal early learning than broad top-of-funnel traffic.
3) Lovable used top-ups to fit bursty AI usage
Verna says Lovable introduced ad hoc top-ups on top of subscription and the response was "absolutely wild". Her claim is that this kind of purchase adds incrementally rather than cannibalizing recurring revenue, and that retention improves when users get this flexibility.
Key takeaway: when usage comes in bursts, a hybrid pricing model can capture more value than subscription alone.
Career Corner
1) Compare roles by daily work loop, not just by title
In one Product Management community thread, the choice was between an IT Requirements Engineer role in IAM and a Product Owner role in another area. The IT Requirements Engineer description centered on gathering requirements for identity and access management systems and translating business needs into technical specifications, while the Product Owner role centered on stakeholder work, product requirements, backlog prioritization, and guiding development teams.
Why it matters: the titles sound adjacent, but the day-to-day work is different.
How to apply: evaluate career options against growth, compensation, job security, work-life balance, domain interest, and longevity, not title prestige alone.
2) Use community signals carefully when assessing AI exposure
In the same thread, one commenter said an IT Requirements Engineer sounds closer to a Business Analyst role. Another suggested IAM may be more repetitive, but also less likely to be handed over to AI than a Product Owner role.
Why it matters: job security discussions are already being filtered through assumptions about which work AI will and will not absorb.
How to apply: treat this as community signal, not settled fact, and stress-test any role by asking which parts of the job are domain-heavy, stakeholder-heavy, or easy to standardize.
3) Pricing and engagement design are becoming stronger PM differentiators in AI products
Across Verna's interview, two recurring responsibilities stand out: defining meaningful engagement signals instead of vanity metrics and building the infrastructure to test monetization model changes quickly as AI costs and economics shift.
Why it matters: these are product problems that cannot be solved by feature delivery alone.
How to apply: if you want to broaden your scope, volunteer for activation metric design or pricing and packaging experiments rather than limiting yourself to backlog management.
Tools & Resources
- Andrew Chen's tab-count post — a compact framework for identifying AI opportunities by counting tabs, alt-tabs, and copy-paste loops in a workflow.
- Tab-count worksheet — create a simple table with columns for job-to-be-done, tabs opened, copy-paste handoffs, and whether the flow could be collapsed into one AI-native experience.
- Elena Verna: How Lovable Launches Product & Hacks Social to Go Viral — useful for PMs working on launch cadence, activation metrics, freemium, and AI monetization design.
- Meaningful action scorecard — document the aha moment, the action that proves value, the target frequency, and the anti-metric you want to avoid, such as logins or excessive time spent.
- Romanian accountant workflow-first case study — a useful teardown of direct problem validation, narrow MVP scope, simple pricing, and day-one funnel metrics in a niche B2B market.
Scott Belsky
Tony Fadell
John Cutler
Big Ideas
1) Taste at speed is becoming a real PM advantage
Aakash Gupta’s framing is that the emerging skill is taste at speed: the ability to evaluate working software quickly, kill most of it, and ship the survivors. In that model, AI does not just speed building; it changes the bottleneck from “can we build it?” to “should we ship it?”. The workflow contrast is sharp: the older path runs from idea to PRD to design to engineering to QA to ship in 8-12 weeks, while the AI-era loop described here is idea → 5 prototypes → evaluate → kill 4 → spec the survivor → ship in 1-2 weeks.
“The spec didn’t disappear. It moved from step 2 to step 6.”
- Why it matters: the leverage comes from filtering, not from shipping everything faster; the cited 80% kill rate is the point.
- How to apply: for ambiguous problems, require multiple working prototypes, review them against empathy, simulation, strategy, taste, and creative execution, then spec only the winner.
2) Context and rituals are becoming the real operating leverage
John Cutler argues that the first place to inspect a product organization is its rituals: the daily and weekly interactions people have through meetings, dashboards, Slack, and other tools. The weak spot is often the layer between front lines and leadership, where information has to move fluidly across the organization. He also warns that AI makes it easy to generate documentation, but more documentation does not create intentionality. Leah Tharin makes the complementary product point: context is the real value, not the model, and a jobs lens like “I want to listen to music on the go” opens a much broader solution space than a demographic profile.
“Frameworks are models and all models are useful but wrong.”
- Why it matters: without shared context and deliberate rituals, faster output just creates faster drift. Cutler also points to a collective memory problem where teams keep re-documenting old issues because context is co-created over time.
- How to apply: build living context around recurring customer challenges, not just one-off deliverables, and deliberately design how information moves up, down, and across the org.
3) AI speed makes alignment and work-shape clarity non-negotiable
Scott Belsky argues AI creates a stronger case for talent density and far more alignment than usual because teams can now move very quickly in the wrong direction. Tony Fadell’s reminder is simpler: knowing the destination helps people self-prioritize and decide what and how to build. Cutler adds that most organizations have parallel motions at once—some work is large, coordinated, and governance-heavy, while other work is highly iterative—and pretending everything should run through one process is damaging.
- Why it matters: more prototyping capacity increases the cost of fuzzy goals and one-size-fits-all process.
- How to apply: make the destination explicit, separate high-coordination work from iterative work, and align your operating rhythm to each motion rather than forcing one template across both.
Tactical Playbook
1) Run a prototype-first decision loop
- Start with multiple working options. The model here is five fast prototypes, not one polished plan.
- Evaluate against five lenses. Check empathy, simulation, strategy, taste, and creative execution while looking at working software.
- Kill aggressively. An 80% kill rate is framed here as a feature, not a failure.
- Write the spec after you have a winner. In this flow, the spec follows the prototype, not the reverse.
- Keep a human gate before production. Anthropic still requires an engineer to approve changes before anything goes live.
- Why it matters: it compresses false certainty early and moves discussion onto concrete artifacts.
- How to apply: pilot this on one fuzzy feature before greenlighting a full PRD or roadmap commitment.
2) Use challenge memory in discovery
- State the customer challenge in job terms. Prefer “listen to music on the go” over a demographic profile.
- Capture the surrounding context. Tharin’s argument is to build memory around the challenge, not just a single job statement.
- Feed better context into the system. More correct context improves the odds of better decisions and of spotting bad data early.
- Revisit old knowledge before reopening old problems. Cutler points to teams repeatedly documenting the same issue because collective memory is weak.
- Why it matters: it broadens solution space and reduces rediscovery waste.
- How to apply: keep one artifact per problem area with the job to be done, prior evidence, edge cases, and what the team already learned.
3) Repair the operating system through rituals
- Map the current rituals. Start with the daily and weekly interactions people actually have, not the official process deck.
- Design information cadence intentionally. Cutler’s advice is to get information moving up, down, and across the org deliberately; documentation alone is not enough.
- Name parallel motions. Separate large, coordinated efforts from fast iterative streams so each gets the right governance.
- Label relationships honestly. Do not force work into fake linear hierarchies when frontline teams can sometimes move business metrics directly.
- Treat new habits as repeated experiments. One kickoff meeting or spreadsheet rarely survives without sustained reps.
- Why it matters: many execution problems are really information-flow and habit-formation problems.
- How to apply: redesign one recurring meeting and one update channel before adding another template or framework.
Case Studies & Lessons
1) Anthropic’s Claude Code workflow pairs extreme speed with explicit review gates
Boris Cherny is described as shipping 20-30 PRs a day using five parallel Claude instances, with a third of his code potentially started from the iOS app and 100% of his own code written with Claude Code. This sits inside an unusually technical culture where everyone shares the title Member of Technical Staff and PMs, designers, data scientists, and even finance code. Company-wide, Claude Code writes about 80% of code, and productivity per engineer is cited as up 200% since launch even as Anthropic tripled headcount. The process is not review-free: every PR is first reviewed by Claude Code, which catches about 80% of bugs, and a human engineer still does the second pass and approves anything before production.
- Lesson: the interesting move is not just AI-assisted coding; it is AI-assisted coding plus automated review plus human approval.
- Boundary condition: Gupta notes this setup fits a small, senior team with deep shared context, where the product is the AI tool itself, though the prototype-first discipline can translate beyond Anthropic.
2) Anthropic’s product work uses volume to improve judgment, not just output
The reported iteration counts are unusually high: agent teams went through “probably hundreds of versions” before shipping; condensed file view saw about 30 prototypes followed by a month of internal dogfooding; the terminal spinner had roughly 50-100 iterations, with about 80% not shipping. One example is plugins: Daisy reportedly used a swarm of a couple hundred agents over a weekend, producing about 100 tasks and an implementation that became “pretty much the version of plugins that we shipped”.
“And it’s a filtering function, not an acceleration function. The 80% kill rate is the whole point.”
- Lesson: faster tools matter most when the team is willing to discard most versions.
3) PM hiring loops are being redesigned because old homework signals got cheaper to fake
Andrew Chen says homework has become a common interview step for PMs and other knowledge roles because it can surface real work output. His update is that, in recent weeks, these responses have been flooded with “AI slop”—long, meandering documents instead of a short, high-signal point of view. His proposed fixes are a recorded presentation, where candidates sign off on what they wrote and can be questioned later, and a true work trial reserved for the end of the funnel. He also cautions that overly structured formats can alienate top-end talent.
- Lesson: when a signal becomes easy to mass-produce, move evaluation closer to live reasoning or real work.
Career Corner
1) Build taste reps now, before the gap compounds
The clearest career signal in this set is volume of evaluation. A PM who reviews 15 prototypes a week builds judgment faster than one reviewing one spec a month; over six months, Gupta argues that becomes a widening taste gap and then a career gap. He also says PMs who start building these reps now will have a massive head start.
- Why it matters: the compounding advantage comes from pattern-matching on working software.
- How to apply: keep a log of prototypes you killed, what you learned, and which evaluation lens changed your mind.
2) Nontraditional backgrounds still map well to core PM work
In community discussion, PMs pointed to several transferable skills from psychology, behavior analysis, and research backgrounds: observing people use a product, noticing nonverbal cues, asking better questions, and using surveys and statistics to understand how broad a problem is. One experienced PM described these as among the most important things PMs do, while a former cognitive neuroscience researcher said they successfully switched into product and enjoy the work. Another poster said the technical side of a specific product or industry looked like the stimulating challenge, not a blocker.
- Why it matters: PM hiring still rewards human observation and research judgment, not just AI fluency.
- How to apply: translate prior work into PM language: user observation, hypothesis formation, research design, and statistical interpretation.
3) Candidates and hiring managers both need an AI-era interview upgrade
For candidates, the implication of Chen’s post is straightforward: concise thinking and live defense now matter more than polished take-homes alone. For hiring managers, recorded walkthroughs can scale better than full work trials, while the most realistic evaluation should still happen late in the funnel. Chen’s warning against overly structured formats is also a reminder not to optimize the process so tightly that you filter out strong candidates.
- Why it matters: interview loops are becoming tests of reasoning ownership, not document generation.
- How to apply: if you’re a candidate, practice explaining trade-offs live; if you’re hiring, keep one live defense step in the loop.
Tools & Resources
- There’s a New PM Skill. It’s Called Taste at Speed — the clearest source here on prototype-first PM work, spec-after-prototype sequencing, and Anthropic’s reported metrics
- Claude Code / Cowork — examples of AI-assisted building workflows to watch; Cowork is described as a full product built in about 10 days, part of Anthropic’s push to bring this style of building to non-engineers
- The missing layer in government tech? A real operating system. — John Cutler on rituals, information flow, and why a real operating system is more than a framework
- 10 print Hello World — Leah Tharin on context as value, jobs framing, and building memory around customer challenges
- Second Axis — a community example of tools targeting the “messy middle” between idea and execution by generating docs, tickets, and edge cases from a feature idea; treat this as a category to watch rather than a vetted recommendation
Aakash Gupta
Product Management
Lenny Rachitsky
Big Ideas
1) The scarce skill shifted from building to deciding
Creation is easier; judgment is now the differentiator. Hiten Shah frames the new bottleneck as deciding what should exist, who it is for, what problem it solves, and whether it should be built at all. Rekhi makes the operational version of the same point for PMs: AI has sped up delivery enough that discovery and design can become the constraint, increasing the risk of feature factories if teams stop validating with customers.
The bottleneck moved.
- Why it matters: faster output does not automatically create better products.
- How to apply: raise the bar on problem selection before prompting or prototyping starts, and require a clear statement of what is being built and why.
2) AI should speed discovery, not replace product intuition
Rekhi's core caution is to stay in the loop. PMs still need to read customer feedback, sample interview recordings, and digest research themselves to build intuition and empathy. In his experience, end-to-end workflows that ask AI to synthesize feedback, invent the feature, and write the spec produce poor results; the better pattern is to let AI surface pain points and summaries, then apply human judgment to the solution.
- Why it matters: many PM decisions still happen without full information, so intuition remains a real operating asset.
- How to apply: ask for verbatims alongside summaries, and keep solutioning as a human step.
3) Discovery is becoming a continuous operating system
The notable change in Rekhi's workflow is not one tool but a connected system: continuous surveys, feedback rivers, AI-generated interview guides, interview synthesis, AI-moderated interviews, functional prototypes with analytics, and natural-language data analysis. He describes moving from quarterly NPS work with a marketing team to continuous collection and weekly automated reporting.
- Why it matters: PMs can keep pace with faster engineering without waiting on long research or analytics queues.
- How to apply: replace one-off studies with a recurring loop: collect, synthesize, validate, instrument, and revisit.
4) Some products may need to design for agent-to-agent use
Aakash Gupta argues that many teams still optimize for a single interaction model - human opens app, human types, AI responds - while the next surface may be agent-to-agent, where a user's assistant contacts the product directly. His suggested PM questions shift to what the product exposes to other agents, what permissions it grants, and where it escalates when an AI cannot answer.
- Why it matters: onboarding flows, nudges, and empty states assume a human is watching.
- How to apply: inventory which actions, permissions, and escalation paths would still work if another agent were the caller; a sketch of that inventory follows.
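As a starting point for that inventory, here is a Python sketch with illustrative placeholder entries, not a real API:

```python
# Sketch of the agent-surface inventory: enumerate what your product
# would expose if another agent, rather than a human, were the caller.
# All action names, auth models, and policies below are invented.

AGENT_SURFACE = {
    "actions": {
        "search_availability": {"auth": "api_key", "io": "structured JSON"},
        "create_booking": {"auth": "oauth_scoped", "io": "structured JSON"},
    },
    "permissions": {
        "create_booking": "requires an explicit user grant, scoped per agent",
    },
    "escalation": {
        "payment_dispute": "hand off to human support with full context",
    },
}

# Audit: every agent-callable action should declare auth and structured I/O.
for action, spec in AGENT_SURFACE["actions"].items():
    missing = [k for k in ("auth", "io") if k not in spec]
    status = "agent-ready" if not missing else "missing " + ", ".join(missing)
    print(f"{action}: {status}")
```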
Tactical Playbook
1) Build a continuous signal stack
- Run surveys continuously. In Rekhi's example, Claude calculated an overall NPS of 41 from 2,600 responses, showed 58% promoters, visualized monthly trends, and supported deeper segmentation work.
- Automate the reporting layer. His agent checks the CSV, runs numerical analysis and verbatim themes, then generates both an HTML report and a Gamma presentation.
- Add a feedback river. Tools in this category pull from sources such as App Store, Google Play, G2, Zendesk, cancellation surveys, and NPS, then group feedback into themes with trend lines and counts. In his example, one theme showed 138 complaints about Evernote import issues.
- Keep the customer voice attached. Review exact verbatims and use the linked identities or contact records when you need follow-up conversations.
Why it matters: this turns customer signal from a quarterly project into a weekly operating rhythm.
Start here: automate one recurring survey report, then add one support or review source next. A minimal NPS sketch follows.
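For the survey step, a minimal sketch of the core calculation (not Rekhi's actual agent; the sample data is invented, and the promoter/detractor cutoffs are the standard NPS definition):

```python
# Compute NPS from 0-10 survey scores:
# NPS = % promoters (9-10) minus % detractors (0-6).

def nps(scores: list[int]) -> float:
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# In a real pipeline these would be parsed from the recurring survey CSV.
scores = [10, 9, 9, 8, 7, 6, 3, 10, 9, 2]

print(f"NPS: {nps(scores):.0f} from {len(scores)} responses")
```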
2) Upgrade interview work end to end
- Seed AI with strong interviewing principles. Rekhi uses a summary of actionable best practices from The Mom Test plus a research brief to generate better interview guides.
- Transcribe and summarize recordings against a template. His workflow extracts takeaways, pain points, workflow/tools, feature requests, and direct quotes from each interview.
- Ask for cross-interview patterns. The same workflow can cluster themes across a batch and count how often pain points appear across 10 interviews.
- Use AI-moderated interviews when speed matters. Rekhi says these tools can run async concept interviews, probe with follow-up questions, and return summarized results by the next morning.
- Still watch some interviews yourself. He explicitly keeps sampling recordings to build customer and product intuition.
Why it matters: scripting, transcription, and synthesis compress from hours or days into an overnight workflow.
Start here: standardize one interview template before automating anything else.
3) Use prototypes as research instruments
- Build a functional prototype, not just a mockup. Rekhi used Bolt to create a working Ask AI concept.
- Instrument it like a real product. He adds in-product surveys, retention tracking, session replay, and heatmaps through analytics tooling such as PostHog.
- Use synthetic users for fast usability feedback. In his example, synthetic feedback surfaced a hidden entry point, 5-20 second waits, missing example prompts, and the lack of multi-note querying.
- Then let real behavior settle design choices. Heatmaps showed users clicking the Ask AI button in the top right, not the bottom-right floating button.
- Do not confuse synthetic feedback with product-market fit. Rekhi says it is useful for usability-style feedback, not for determining product-market fit.
Why it matters: this gives teams pre-launch signal on usability and return behavior that mockups cannot provide.
Start here: add one survey question and one heatmap before you send a prototype out.
4) Make analytics self-serve - carefully
- Connect AI to your data safely. Rekhi uses MCP or a database dump with read-only access so the model can inspect schema and query data.
- Ask questions in plain English. His workflow reads tables and columns, writes SQL, groups results, and returns charts or recurring dashboards from natural-language prompts.
- Teach the system with examples. His advice is to add real question-and-SQL pairs because the model learns patterns from examples well.
- Document schema quirks explicitly. A simple instruction such as using canonical_source instead of source improved accuracy in his example.
- Audit early, then share. Rekhi says he audited a few dozen queries to build confidence, and recommends sharing the Claude Project so the whole team benefits from the learned context.
Why it matters: PMs can answer more of their own product questions without waiting on a data queue.
Start here: teach one high-value dataset first; do not trust a zero-context setup. A sketch of the example-teaching step follows.
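Here is what "teaching with examples" can look like in practice. The table and column names are invented for illustration, apart from the canonical_source quirk mentioned above:

```python
# Sketch: bundle schema notes and worked question-to-SQL pairs into the
# instructions you give the model (e.g., a Claude Project's context).

SCHEMA_NOTES = """
- Use canonical_source instead of source for attribution questions.
- All timestamps are UTC; bucket by DATE(created_at) for daily charts.
"""

EXAMPLES = [
    (
        "How many new signups did we get last week, by source?",
        "SELECT canonical_source, COUNT(*) AS signups\n"
        "FROM users\n"
        "WHERE created_at >= CURRENT_DATE - INTERVAL '7 days'\n"
        "GROUP BY canonical_source\n"
        "ORDER BY signups DESC;",
    ),
]

def build_instructions() -> str:
    # Concatenate notes and examples into one context block.
    parts = ["Schema notes:", SCHEMA_NOTES.strip(), "Worked examples:"]
    for question, sql in EXAMPLES:
        parts.append(f"Q: {question}\nSQL:\n{sql}")
    return "\n\n".join(parts)

print(build_instructions())
```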
Case Studies & Lessons
1) Tesla optimized for the moment of doubt
Tesla's Supercharger spacing is described here as a product decision, not a simple infrastructure rule. The cited explanation is that chargers were placed around where drivers typically hit 15-20% battery - when range anxiety begins and people start doing mental math - rather than at uniform intervals. The deeper move was optimizing for the user's emotional state at a key moment, even though that is harder to put in a dashboard than coverage or utilization.
- Why it worked: it reduced the moment where users begin to doubt the journey.
- How to apply: identify the point in your journey where users start calculating, hesitating, or seeking reassurance, and design around that moment first.
2) Notejoy's Ask AI concept shows layered discovery in practice
Synthetic users first surfaced usability issues in Rekhi's Ask AI prototype: the entry point was not obvious before a note was selected, response time felt too slow, example prompts were missing, and cross-note querying was absent. After that, prototype instrumentation showed that users preferred the top-right Ask AI button over the bottom-right alternative.
- Why it worked: different methods answered different questions - synthetic users for early usability friction, real usage for design preference.
- How to apply: use simulated feedback to narrow what to test, then use real telemetry to settle decisions.
3) LocalMind was interesting, but not frequent enough
Lenny Rachitsky describes LocalMind as an app on top of Foursquare that let people ask questions of users checked into places around the world, such as whether there was a long line at a location. His conclusion was that it solved a real problem, but only occasionally, which made it hard to support as a standalone business.
- Why it matters: novelty and utility do not guarantee repeated use.
- How to apply: pressure-test frequency of need early, not just whether the concept is clever or helpful.
Career Corner
1) Strong PM narratives still center on impact, judgment, and alignment
Lenny's shorthand for the role is impact, collaboration, judgment, and alignment, with coordination close behind. He also frames the PM job as delivering business impact by prioritizing and solving the most impactful business problems, while thinking the way a CEO would think about the product's success.
"My words are impact, collaboration, judgment, alignment."
How to use it: in interviews and promotion narratives, explain the business problem, the decision you drove, the alignment work, and the outcome.
2) AI hiring signals are moving from prompts to systems
Aakash Gupta says PM interviews now commonly ask how candidates use AI in their workflow, and the answer interviewers want is a system, not a generic tool mention. His examples of strong setups include custom GPTs for PRD drafts, Claude Projects loaded with company design principles, Gemini Gems for competitive analysis, and Claude Code with context; he estimates roughly two hours of setup for 5+ hours of weekly return.
How to use it: build one reusable system for writing, one for analysis, and one for recurring tasks - and be ready to explain inputs, outputs, and guardrails.
3) Protect the human skills AI can erode
Rekhi's guidance is consistent across research and analytics: use AI to speed production, but keep empathy, pattern recognition, and solution judgment with the PM. He continues to watch interviews and read research himself even after automating much of the process.
How to use it: keep a weekly habit of reviewing raw customer material, not just summaries.
4) Classic PM strengths still compound
Lenny points to organization, a high bar for quality, and succinct communication as the traits that carried over from PM work into creator work. He also describes iterating heavily rather than shipping first drafts, sometimes revising a newsletter post dozens of times.
How to use it: treat memos, specs, and stakeholder updates the way you treat product flows - refine them until the point is easy to grasp.
Tools & Resources
- Feedback rivers: Reforge Insights, Interpret, Craftful, Birdie, Miro Insights, Unwrap, and Productboard for aggregating reviews, support, and survey data into themes and trends.
- Interview synthesis: NotebookLM and Claude for transcribing recordings, summarizing interviews, and finding patterns across batches.
- AI-moderated interviews: Reforge, Listen, Outset, and Maze for asynchronous concept testing with dynamic follow-up questions.
- Synthetic user testing: Reforge and Simile for persona-based usability feedback on prototypes.
- Prototype stack: Bolt for fast functional prototypes; PostHog for surveys, retention, heatmaps, and session replay.
- Analytics stack: Claude Projects or Claude Code plus MCP for natural-language SQL and dashboards, especially when paired with example queries and shared instructions.
- Reporting output: Gamma for auto-generated presentations from recurring analysis workflows.
- Interview prompt seed: The Mom Test as a best-practices source for AI-generated customer interview guides.
Hiten Shah
Melissa Perri
Big Ideas
1) PM workflow is moving from sequential handoffs to parallel, compounding systems
Aakash Gupta argues that the fastest PMs are discarding sequential work: they plan one feature while another builds, iterate on UI while the data layer assembles, and run multiple workstreams in parallel because the tooling now supports it. Dave Killeen's Claude Code setup shows the operational version of that shift: a single /dailyplan command pulls calendar, CRM, meeting notes, LinkedIn messages, YouTube transcripts, newsletters, and quarterly goals into one page, while hooks inject priorities, preferences, and past mistakes every time a new session starts.
“Skills are what you do. MCP is how you connect. Hooks are how you compound.”
- Why it matters: less tab switching, faster context loading, and more room for judgment and experimentation.
- How to apply: start with one repeatable command, one connected system, and one persistent file structure; treat AI as infrastructure, not just a faster chat window.
2) Strong problem framing means moving across layers, not polishing one sentence
The Beautiful Mess argues there is no perfect articulation of a problem. Teams need to move between multiple layers: what question to ask, what the term means, the surrounding environment, what is happening now, why it happens, why it matters, what better futures exist, and how success should be evaluated. Product leadership, in this framing, is not just defining the problem and handing it off; it is creating conditions for people to engage it from several elevations at once.
“The trick is to dance between layers.”
- Why it matters: vague statements like “too slow” or “too heavy” have little diagnostic value.
- How to apply: force teams to articulate behavior, causes, stakes, and measurement separately before jumping to solutions.
3) The best growth teams hunt friction, then measure whether relationships deepen
Brian Hale's contrast is consistent across seven dimensions: excellent growth teams own the full user journey, remove blockages, maximize learning rate rather than experiment count, practice product growth instead of growth hacking, optimize relationships rather than raw activity, compound wins, and hire people who diagnose readiness before joining.
“Excellent growth teams relentlessly do the most important thing, even when it’s unglamorous, non-obvious, or uncomfortable.”
Robby Stein adds a measurement lens: early PMF looks like flat or J-curve retention through day 30/60/90, followed by organic week-over-week user and usage growth and rising intensity of use over time.
- Why it matters: activity can rise because power users do more, while new or hesitant users stay stuck.
- How to apply: look for where users hesitate, then track people-count metrics, cohort retention, and deeper usage instead of celebrating experiment volume alone.
4) Large product orgs are getting clearer about alignment: fund capacity, not just projects
Across Vanguard, Chase, and Affirm, the operating model is similar: align teams to business outcomes through OKRs and portfolio reviews, fund product/design/data/engineering capacity at the domain level and let teams prioritize within that capacity, rebalance selectively without constant budget swings, use weekly forums for problem validation, sequencing, and dependencies, and embed legal/compliance early when needed.
“Empowerment without alignment is chaos.”
- Why it matters: this creates room for local judgment without losing strategic coherence.
- How to apply: review business, product, and quality/engineering metrics separately, run weekly decision forums, and make capacity allocation a portfolio choice rather than a feature-by-feature fight.
5) PMs need a money language, not just a product language
Rich Mironov's “money stories” framework flips user stories for executive audiences: the question is not how a feature works, but roughly how much revenue or retained revenue a set of work could return. His advice is to use order-of-magnitude ranges, sort ideas by digit count, label roadmap swim lanes with value ranges, and avoid mythical feature-level ROI claims.
- Why it matters: executives often decide at the level of business impact, not implementation detail.
- How to apply: tell simple upsell or churn stories, pressure-test them with sales, marketing, or finance, and use team-level “earning your keep” logic instead of pretending every ticket has a precise ROI.
Tactical Playbook
1) Build a minimal PM operating system
- Create one command for your morning workflow. Dave's version checks whether digests already ran, pulls structured data through MCP, and outputs priorities, account context, and suggested Slack messages.
- Connect one tool first. The guidance here is to start with calendar, find the API/MCP/CLI docs, give Claude the docs plus an API key, and let it build the server.
- Separate skills, MCP, and hooks correctly: use skills for flexible judgment, MCP for deterministic integrations, and hooks for session-start context.
- Keep the knowledge base alive. Use stakeholder/project/company pages, a mistakes file, working preferences, and a short Claude.MD map that points to deeper files.
- For product execution, let the system move from backlog to PRD to Kanban, but keep Dave's own caveat in mind: AI PRDs are strong first drafts, yet real-world use still needs tighter commercial context and metrics.
- Why it matters: it turns scattered PM admin into a reusable operating loop.
- Apply it this week: clone DEX and run /setup, or create your own first command and one persistent project file. A tool-agnostic sketch of the session-start idea follows.
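Because hook schemas differ by tool, here is a tool-agnostic Python sketch of the underlying idea: assemble priorities, preferences, and past mistakes from persistent files into one block that a session-start hook or daily command could inject. The file layout is an assumption, not DEX's or Dave's real structure:

```python
# Sketch: build a single context block from persistent PM files so a
# session-start hook (or a /dailyplan-style command) can inject it.

from pathlib import Path

CONTEXT_FILES = [
    Path("context/priorities.md"),   # current quarter goals
    Path("context/preferences.md"),  # how you like output formatted
    Path("context/mistakes.md"),     # errors the assistant should not repeat
]

def session_context() -> str:
    blocks = []
    for path in CONTEXT_FILES:
        if path.exists():  # skip files that are not set up yet
            blocks.append(f"## {path.stem}\n{path.read_text().strip()}")
    return "\n\n".join(blocks)

if __name__ == "__main__":
    # Print to stdout so a hook can pipe this into the model's context.
    print(session_context())
```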
2) Diagnose a fuzzy problem before you prioritize it
- Start with the question you are actually trying to answer.
- Define key terms.
- Describe what users are doing today and the surrounding environment.
- State the most plausible causes.
- Explain why the problem matters and what improving it would make possible.
- Generate alternatives and define how you will know it is better.
- Why it matters: the initiative-creation example shows that “too heavy” only becomes useful once you separate ambiguity, missing information, and definition confusion from the downstream planning impact.
- Apply it: require teams to bring at least one sentence per layer before solution reviews.
3) Run prioritization with three lenses: customer, business, and build reality
- Anchor on customer evidence, even if discovery is fast.
- Make the business unlock explicit: revenue, complaints, expense, or another OKR.
- Expose the speed-versus-complexity trade-off instead of hiding it.
- Bring decisions into weekly cross-functional forums for sequencing, dependency, and partnership questions.
- In regulated or high-risk contexts, pull legal/compliance into ideation rather than near launch.
- Monitor three dashboard buckets: business, product, and quality/engineering.
- Why it matters: this keeps prioritization grounded without collapsing into either feature prescription or vague empowerment.
- Apply it: make every roadmap discussion show the customer signal, the business unlock, and the delivery trade-off on one page.
4) Translate roadmap bets into money stories
- Use a range, not false precision, because executive trade-offs are usually order-of-magnitude decisions.
- Sort ideas by digit count so you can ignore three- and four-digit requests early.
- For upsell work, multiply target customers × price delta × expected upgrade rate.
- For retention work, multiply customers at risk × annual value × expected churn reduction.
- Put the value range on the roadmap swim lane, not on every ticket, and require an equivalent revenue case before disrupting the lane.
- Sanity-check the numbers with sales, marketing, or a finance partner.
- Why it matters: it converts stakeholder fights into business trade-offs instead of backlog theater. The arithmetic is simple enough to script, as the sketch below shows.
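All inputs in this sketch are invented placeholders; the two formulas come from the list above, and the digit-count sort mirrors Mironov's advice:

```python
# Money-story arithmetic: order-of-magnitude value estimates per bet.

def upsell_story(target_customers: int, price_delta: float, upgrade_rate: float) -> float:
    return target_customers * price_delta * upgrade_rate

def retention_story(customers_at_risk: int, annual_value: float, churn_reduction: float) -> float:
    return customers_at_risk * annual_value * churn_reduction

def digits(x: float) -> int:
    # Digit count stands in for order of magnitude.
    return len(str(int(round(x))))

ideas = {
    "SSO for mid-market upsell": upsell_story(400, 3_000, 0.25),      # ~$300k
    "Fix import churn driver": retention_story(150, 8_000, 0.20),     # ~$240k
    "Dashboard color themes": upsell_story(50, 20, 0.10),             # ~$100
}

# Sort by digit count so three- and four-digit requests drop out early.
for name, value in sorted(ideas.items(), key=lambda kv: digits(kv[1]), reverse=True):
    print(f"{name}: ~${value:,.0f} ({digits(value)} digits)")
```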
5) Verify PMF before you scale
- Start with 10 trusted testers; if 10 people do not like it, scale will not fix that.
- Move to a small opt-in launch without pushing marketing hard.
- Look for J-curve or flat retention across day 0, 30, 60, and 90.
- Then ask whether users and usage are growing week over week without unnatural intervention.
- Watch engagement depth and follow-up behavior, not just shallow reach.
- Why it matters: it separates genuine product pull from novelty or forced distribution. A small retention-shape sketch follows.
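A small sketch of the retention-shape check. The classification thresholds are illustrative assumptions; only the flat-or-J-curve criterion comes from Stein's framing:

```python
# Classify a cohort retention curve measured at days 0/30/60/90.

def retention_shape(curve: dict[int, float]) -> str:
    """curve maps day -> fraction of the cohort retained (day 0 == 1.0)."""
    d30, d90 = curve[30], curve[90]
    if d90 >= 0.95 * d30:
        # Retained users stabilized (flat) or recovered (J-curve).
        return "flat or J-curve: PMF signal, consider a small opt-in launch"
    if d90 < 0.5 * d30:
        return "decaying: fix the product before scaling"
    return "ambiguous: keep watching the cohort"

print(retention_shape({0: 1.00, 30: 0.35, 60: 0.34, 90: 0.36}))  # flat/J
print(retention_shape({0: 1.00, 30: 0.30, 60: 0.18, 90: 0.09}))  # decaying
```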
Case Studies & Lessons
1) DoorDash: the problem was communication, not the offer
DoorDash had users hesitating because they feared delivery fees even when a zero-delivery-fee offer existed. The growth win came from making the existing value impossible to miss, not inventing a new incentive.
- Why it matters: some conversion problems are explanation problems.
- Apply it: audit places where users hesitate because they do not understand an existing benefit.
2) Instagram Close Friends: fix the feedback loop, then simplify everything around it
Close Friends initially failed because the experience was confusing, poorly translated in some regions, and lacked a feedback loop. It worked only after Instagram made it a Stories behavior, changed the name, added the green ring indicator, and made it easy to create a 20-30 person list so replies would actually happen.
- Why it matters: the smallest viable experience is not always the one with the fewest UI elements; it is the one whose behavior makes sense to users.
- Apply it: map the desired social or functional loop first, then strip away everything that does not reinforce it.
3) Instagram Reels: similar surfaces can hide incompatible incentives
Instagram first launched Reels as a Stories-adjacent experience in Brazil because the surface looked similar: full-screen video. That failed because creators wanted persistence and the possibility of going viral, not content that disappeared in a day.
- Why it matters: product teams often overfit to interface similarity and underweight creator or user incentives.
- Apply it: test whether the surface reinforces the creator or user payoff, not just whether the format looks familiar.
4) Google AI search: strong product fit can still create ecosystem trade-offs
Robby Stein says AI Mode worked for Google because users already came to Google with hard informational tasks and showed latent demand by explicitly adding “AI” to queries after AI Overviews launched. In a separate analysis, Hiten Shah argues Google also chose to own zero-click behavior, citing organic CTR down 61%, paid CTR down 68%, and zero-click news searches up from 56% to 69% over 18 months. He also cites search ad spend up 9% year over year versus 4% click growth.
- Why it matters: the same move can look like better user alignment from inside the product and ecosystem compression from outside it.
- Apply it: when you add AI to a core surface, evaluate both native task fit and the second-order effects on partners, monetization, and traffic flows.
5) Vanguard: outcome framing outperformed feature prescription
Vanguard describes giving teams outcome goals tied to helping investors take the next best action, rather than prescribing features. In one financial wellness experience, the opening survey reached nearly 80% completion versus a cited 10-20% benchmark for comparable retirement-industry tools.
- Why it matters: teams get smarter when they own the problem, not just the brief.
- Apply it: define the customer outcome, then let the team design the experience that gets there.
Career Corner
1) The PM job market is splitting between manual operators and system builders
Aakash Gupta frames two groups: PMs still writing PRDs and updates manually, and PMs with tuned Claude.MD files, custom skills, and PRD writers that generate most of a shipping-ready doc in minutes. The second group reinvests time into users, engineering relationships, and strategy.
- Why it matters: the compounding advantage comes from systems, not one-off prompts.
- How to apply: automate one recurring artifact per week and spend the recovered time on discovery or stakeholder work.
2) Build your promotion case continuously
Dave's career MCP listens for evidence of skills, feedback, and outcomes, maps gaps against goals, and calculates promotion readiness before review season arrives.
- Why it matters: most PMs track the backlog better than their own growth.
- How to apply: keep an evidence log by skill and outcome, review gaps weekly, and enter performance reviews with assembled proof rather than memory.
3) Financial fluency is becoming part of PM credibility
Rich Mironov argues PMs are not being trained to talk about money, even though the basics are often just P&L and cost accounting 101. His suggestion: learn how the company makes money, what one more unit contributes, and what the product team costs.
- Why it matters: you cannot defend priorities or funding well if you cannot tell a money story.
- How to apply: find the finance partner in your company, ask basic questions, and learn the economics of your product line.
4) Owner mindset still matters, but leaders should spend detail-time on the few bets that matter most
Robby Stein describes successful builders as people with a strong internal locus of control who fully own outcomes. He also says leaders should pick a small number of projects where the upside is five-to-ten-year value and where their direct intervention is uniquely useful, then co-create intensely until the work is on track. Chase describes a similar expectation: product teams should think end to end and understand the P&L with their business partners.
- Why it matters: high agency without focus becomes thrash.
- How to apply: own results end to end, but reserve deep involvement for the few bets where leadership detail changes the outcome.
Tools & Resources
- DEX — open-source PM OS; /setup scaffolds the system around your role and goals in minutes.
- This CPO Uses Claude Code to Run his Entire Work Life | Dave Killeen, Field CPO @ Pendo — practical walkthrough of daily planning, backlog-to-PRD flow, and career evidence capture.
- Claude Code setup — useful if you want a Claude.MD structure built around progressive disclosure.
- TBM 410: Dancing With Problems — compact framework for turning fuzzy problem statements into decision-quality framing.
- What Excellent Growth Teams See That Others Miss — Brian Hale's seven contrasts between okay and excellent growth teams.
- How to communicate the value of your product work — Rich Mironov on money stories, ROI ranges, and executive communication.
- Google VP of Product on The Future of Search and AI Mode — Robby Stein on PMF metrics, testing progression, and AI Mode decisions.
- Episode 264: Product at Scale Inside the World’s Largest Financial Institutions — operating model examples for outcome alignment, capacity funding, and metrics design.
Hiten Shah
Sachin Rekhi
Paul Graham
Big Ideas
1) AI moved the bottleneck upstream
AI is no longer just changing how fast teams ship. Hiten Shah argues the core constraint is shifting from engineering velocity to knowing what to build. Sachin Rekhi makes the same point for PM work: delivery has accelerated so much that discovery and design are now the limiting factors.
- Why it matters: Faster execution helps only if the team is pointed at the right problem.
- How to apply: Put more PM time into customer discovery, design decisions, and rapid prototyping. Treat making ideas tangible as part of the role, not a specialist add-on.
2) In the AI launch glut, positioning is becoming core PM work
As AI makes it easier to build and launch products, the harder problem is distribution: getting attention in an increasingly noisy market. The positioning framework April Dunford outlines is designed to solve that by helping buyers quickly understand why a product is for them.
- Why it matters: Better execution does not help if prospects cannot place your product or understand why it is different.
- How to apply: Build positioning around five components: competitive alternatives, distinct capabilities, differentiated value, best-fit accounts, and market category.
“A single shift in positioning can mean the difference between a product that flops and one that breaks through.”
3) Slow growth now needs two diagnoses: attention and churn
One set of notes points to an attention problem: distribution is getting harder as launches multiply. Paul Graham adds a different warning: churn is the worst reason to have slow growth, because it means people are trying the product and deciding they do not like it.
- Why it matters: Low awareness and poor retention can both depress growth, but they point to different problems.
- How to apply: First ask whether users are failing to notice the product or trying it and leaving. If the issue is attention, sharpen positioning; if the issue is churn, treat it as a product-value problem, not just a marketing gap.
Tactical Playbook
1) Start positioning from the prospect’s real alternatives
- Ask: if we did not exist, what would the customer use?
- Keep the answer grounded in the near term — “sell what’s on the truck”
- Include the status quo, which Dunford says accounts for about half of lost B2B opportunities and sometimes more than 80%
- Run the exercise cross-functionally with product, sales, marketing, customer success, and the founder or business leader; experienced AEs are especially useful
- Why it matters: Weak positioning often starts with internal disagreement about what you are actually competing against.
- How to apply: Use real deal behavior, not hypothetical competitor lists, as the source of truth.
2) Translate capabilities into buyer language with a “so what?” test
- List the distinct capabilities alternatives do not have
- Define the value those capabilities create
- Keep pushing until the value is stated in terms buyers understand — the “so what?” test
- Avoid five common traps: assuming prospects understand a feature, stopping short of buyer value, abstracting value until it becomes generic, piling on too many themes, and confusing value with objection handling
- Why it matters: Teams often know their features better than their differentiated value.
- How to apply: Force every major claim to answer why a buyer should pick you over the alternatives in one clear story, not a long list of partial arguments.
3) Counter product pessimism with evidence from winning deals
- Watch for symptoms: overly broad ideal customers, long hypothetical competitor lists, dismissing sales explanations for wins, and treating PM only as problem identification
- Re-center the discussion on where the product wins today, not every gap it may have tomorrow
- Bring experienced sales voices into the room
- Use a moderator who can challenge unsupported pessimism and ask for evidence
- Why it matters: If the team cannot articulate genuine strengths, it will struggle to position or sell them.
- How to apply: Separate roadmap-gap debates from positioning work; positioning should focus on current differentiated strengths.
4) Rebuild the PM workflow around faster prototyping
- Treat prototyping as expected work. Meta now uses a live vibe-coding interview where candidates build with Claude Code, Figma Make, or Lovable
- Use agentic tools where they are already strong. Rekhi says Claude Code has moved from experimental to essential for PMs
- Use the best tool for the task: analysis can now happen in Sheets, Excel, or directly against databases in Claude
- Apply the same principle to communication: Google Slides, PowerPoint, and Gamma can now materially speed presentation creation
- Why it matters: Faster tooling changes what PMs can do directly without waiting for handoffs.
- How to apply: Build rough prototypes, analyze the data yourself, and communicate decisions visually — while protecting time for discovery and design, which are now scarcer inputs.
Case Studies & Lessons
1) Meta is signaling that prototyping is no longer optional
Meta’s live interview format asks candidates to build prototypes with Claude Code, Figma Make, or Lovable.
- Lesson: PMs are increasingly expected to make ideas tangible in real time, not only describe them.
- Apply it: Add regular prototype reps to your workflow so rapid concept testing becomes normal.
2) Netflix shows what strategic focus really looks like
Aakash Gupta points to Netflix in 2009 focusing on three pillars — streaming transition, device expansion, and content licensing — while saying no to gaming until 2021 and no to sports until 2023.
- Lesson: Strategy is not just choosing priorities; it is sustaining explicit no’s over time.
- Apply it: Limit active pillars, then keep a visible list of attractive opportunities you are deliberately not pursuing.
3) Epic used a one-minute video to align 5,000 people
At Epic Games, one-minute videos for each Fortnite season helped coordinate 5,000 designers and engineers; Gupta’s point is that you cannot align 5,000 people with a Google Doc.
- Lesson: For high-stakes cross-functional work, a concrete artifact can align faster than a written spec alone.
- Apply it: Pair major documents with a short visual prototype or narrative artifact when alignment matters most.
Career Corner
1) Discovery skill is rising in value
The core challenge is increasingly knowing what to build, not just building faster. Rekhi also argues discovery is now the new constraint for PMs.
- Why it matters: PMs who only coordinate delivery will be less differentiated as execution gets faster.
- How to apply: Invest in customer discovery, design judgment, and problem selection — not just delivery mechanics.
2) Prototyping fluency is becoming a market signal
Prototyping has gone from advanced skill to job requirement in at least some hiring loops.
- Why it matters: This is no longer just a productivity hack; it is part of how PM capability is being evaluated.
- How to apply: Get comfortable building rough flows with Claude Code, Figma Make, or Lovable for concept testing.
3) Tool judgment now matters as much as tool familiarity
Rekhi’s view is that data analysis is now platform-agnostic and presentation creation has caught up across mainstream and AI-native tools .
- Why it matters: PM leverage comes from choosing the right workflow for the task, not from loyalty to one tool.
- How to apply: Build a lightweight stack by task: one tool for prototyping, one for analysis, and one for communication — then switch based on the problem .
Tools & Resources
- Advanced positioning guide: A guide to advanced B2B positioning — best for teams stuck on competitor definition, differentiated value, or status quo losses
- Video walkthrough: A guide to advanced B2B positioning — useful if you want the five-part framework and roadblocks in video form
- A quick clip on the core value test
- Strategy keynote: https://www.youtube.com/watch?v=l5ORfS9h6hs — Gupta shared the Northeastern PM conference keynote for free after noting that 400+ PMs had paid $500+ to see it live
- PM tool stack to explore: Claude Code for agentic prototyping, Figma Make and Lovable for fast mockups, Sheets/Excel/Claude for analysis, and Slides/PowerPoint/Gamma for presentation work
Teresa Torres
Scott Belsky
Big Ideas
1) Strategy has to become an operating system, not an annual document
AI is speeding up both builders and PMs. Engineers and designers can do far more with tools like Cursor and Claude Code; PMs can prototype quickly, write evals, and even push PRs into engineering review. That makes directional clarity more important, not less. Aakash Gupta argues that if 9 out of 10 engineers and designers cannot explain the strategy, while a typical 5 engineer / 1 designer / 1 PM team costs about $1.4M fully loaded, the company is burning money. Common failure modes are strategies that are too long, vague, detached from execution, or too static .
- Why it matters: Faster execution widens the downside of bad direction and narrows the time available to correct it .
- How to apply: Treat strategy as a short, regularly updated decision-making tool that helps the team choose, sequence, and say no .
"Can your engineer or designer explain the strategy in 30 seconds? Can they make decisions based on it? Does it help them say no to things?"
2) In AI products, the new design problem is capability discovery
Enterprise products have always taught users three things: the interface, the domain, and the benefit. Conversational interfaces make interface teaching almost disappear and make domain teaching easier through plain language, but they make benefit teaching harder because the full capability surface is invisible behind a text field. Users can end up having a functional interaction that uses only a narrow slice of what the product can do, while their prior mental model narrows the questions they ask. Suggested prompts help briefly, but as a small static menu they do little to expand the frame .
- Why it matters: If capability stays invisible, differentiated product value stays invisible too .
- How to apply: Design for discovery and judgment: surface the right capability at the right moment, and create feedback loops so the product gets better with use rather than acting like a one-off chat box .
"The interface was the product. The capability is the product now. And capability that stays invisible is as good as absent."
3) Agents are becoming a real user segment
For agent-facing products, Aakash Gupta argues the API, CLI, and MCP server are parallel layers rather than a maturity sequence: API for bulk operations and latency control, CLI for composability, MCP for discoverability and multi-client reach. He also argues agents need discoverability, programmatic auth, structured I/O, idempotency, and rate limits, and that the fix is to treat the agent as a first-class user with a PM who owns the experience .
- Why it matters: If one of those layers or primitives is missing, agents can route around your product to one that is easier to use .
- How to apply: Stop treating agent access as a side integration; define the agent journey, owner, and roadmap explicitly .
4) AI raises the cost of indecision
Shreyas Doshi highlights a simple tradeoff: a leader who makes a B+ decision today may beat the leader with A+ product sense who takes a week longer. Scott Belsky gives the organizational version of the same idea, calling the backlog of unmade decisions "organizational debt." His prescription is to prompt decisions or at least deadlines, run AI change through protected pilots with learning KPIs, and socialize new ways of working until they become obvious. He expects more process to be offloaded to compute, leaving humans to contribute taste and agency .
- Why it matters: As more process moves to compute, slow consensus and process buildup become a bigger drag on product velocity .
- How to apply: Prompt the decision, or at least a deadline for it; use pilots with learning-focused KPIs before hardening new process .
Tactical Playbook
1) Build an AI-era strategy that survives contact with execution
- Start with the seven elements: Objective, Users, Superpowers, Vision, Pillars, Impact, Roadmap.
- Treat them as sequential but iterative; loop back as you learn .
- Check for the four failure modes: too long, too vague, too detached from daily work, and too static .
- Pass the 30-second test: an engineer or designer should be able to explain it, make decisions from it, and use it to say no .
"If not, you have a document, not a strategy."
2) Design AI onboarding around benefit teaching, not just interface reduction
- Separate what the user must learn about the interface, the domain, and the benefit.
- Assume the blank text field hides inventory; identify the capabilities users will never discover on their own .
- Do not rely on a few static suggested prompts to solve discovery; they help briefly but quickly plateau .
- Add an investment loop so the product stores value and improves through feedback and repeated use .
- Use personalization as persuasion - helping users do what they want to do - not coercion .
3) Run AI adoption as a protected operating change
- Start with pilots and play, not blanket mandates .
- Give teams learning KPIs so they are rewarded for insight, not punished for early failure .
- Use collapsed-stack teams or dual-role operators where possible to speed tool adoption and decision flow .
- Keep destroying outdated process while new process is created; otherwise organizational debt accumulates .
- Force a decision, or at least a decision deadline, when issues stall .
4) Prepare your product for agents in one quarter
- This week: run the five-question audit and ship an AGENTS.md file .
- This month: stand up a read-only MCP server and list it on PulseMCP .
- This quarter: add approval flows, agent analytics, and agent-specific pricing .
- Build the API, CLI, and MCP layers in parallel, not one after another .
- Verify the basics: discoverability, programmatic auth, structured I/O, idempotency, and rate limits .
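A hedged sketch of what those basics can look like at an agent-facing endpoint: structured errors, idempotency keys, and explicit rate limits. Flask, the /v1/tasks route, and the field names are illustrative assumptions, not from any product named above.

```python
# Illustrative agent-facing endpoint: structured errors, idempotency,
# and explicit rate limits. Route and field names are hypothetical.
import time

from flask import Flask, jsonify, request

app = Flask(__name__)
seen_keys: dict[str, dict] = {}    # Idempotency-Key -> cached response
WINDOW_S, LIMIT = 60, 30           # naive fixed-window rate limit
call_times: list[float] = []

@app.post("/v1/tasks")
def create_task():
    # Rate limits: agents retry aggressively, so answer 429 with Retry-After.
    now = time.time()
    call_times[:] = [t for t in call_times if now - t < WINDOW_S]
    if len(call_times) >= LIMIT:
        body = {"code": "RATE_LIMITED",
                "detail": f"Max {LIMIT} requests per {WINDOW_S}s"}
        return jsonify(error=body), 429, {"Retry-After": str(WINDOW_S)}
    call_times.append(now)

    # Idempotency: replaying the same key returns the original result.
    key = request.headers.get("Idempotency-Key")
    if key and key in seen_keys:
        return jsonify(seen_keys[key]), 200

    payload = request.get_json(silent=True) or {}
    if "title" not in payload:
        # Structured, actionable error instead of a bare 400.
        return jsonify(error={"code": "MISSING_FIELD",
                              "detail": "Body must include 'title'"}), 400

    result = {"id": f"task_{len(seen_keys) + 1}", "title": payload["title"]}
    if key:
        seen_keys[key] = result
    return jsonify(result), 201
```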
Case Studies & Lessons
1) Teresa Torres chose audience fit over easy revenue
Teresa Torres describes shutting down a $19/month community membership that was growing and generating reasonable revenue because it attracted low-effort questions, cannibalized courses and books, and pulled her away from the audience she wanted: people willing to invest in learning. She removed monthly subscriptions and kept annual only, explicitly accepting slower growth for better audience alignment .
- Lesson: Revenue can be real and still be strategically expensive if it trains the wrong user behavior or weakens your better products .
2) She also cut a product worth 40% of revenue
Torres says her deep-dive courses represented 40% of revenue, but the format had weak B2B fit and unstable cohort economics on the direct-to-consumer side, leading to cancellations, refunds, and administrative overhead. She sunsetted the cohort format and replaced it with two experiments: on-demand consumer courses and a subscription for corporate leaders to coach teams .
- Lesson: Stable revenue can hide format-market mismatch. The right question is not just "is this profitable?" but "is this the best use of time and team?" .
"I got to burn the ships."
3) Sold out did not mean optimized
Petra Wille describes rethinking Product at Heart even though the event routinely sold out. The team felt the existing half-day format underused the value of putting about 60 product leaders together, so they did lightweight interviews and redesigned it into a two-day experience despite uncertainty about time commitment and pricing .
- Lesson: Strong demand is not proof that the current format is best; it may only show that the underlying need is real .
4) Portfolio governance ideas worth borrowing
Across the Teresa/Petra discussion, four operating mechanisms stand out: keep a visible sunsetting column on the taskboard, use H1/H2/H3 horizons so replacement bets are already in motion, make sunsetting decisions one level above the product team, and normalize the fact that even successful products have life cycles .
Career Corner
1) Show product sense before anyone asks for it
One AI PM candidate stood out by watching three hours of TikTok videos from coaches serving small businesses, then bringing firsthand user insights to the first interview. The point was not the medium; it was the behavior. The candidate bypassed the company's framing, did lightweight user research independently, and demonstrated product sense rather than talking about it .
- Why it matters: In competitive PM hiring, evidence of judgment beats generic preparation .
- How to apply: Before interviews, go to the end user, build a small artifact, or bring real research. Do the work before you are asked .
2) Build AI fluency on tools that will matter at work
Sachin Rekhi advises PMs to spend their learning cycles on Claude Code rather than OpenClaw if the goal is practical AI fluency in day-to-day work. His reason: Claude Code combines strong agentic capability with broad enterprise adoption, and the related skill set - Skills, CLIs, MCPs, and adjacent workflows - is both productivity-enhancing and marketable .
- Why it matters: Some enterprises are explicitly hiring more junior AI-native talent to inject this fluency into everyday meetings and challenge legacy process .
- How to apply: Prioritize tools your current or next employer is likely to sanction, then learn the surrounding workflow surface, not just the interface .
3) Management is optional; clear thinking is not
Tony Fadell argues that many people should not be pushed into management just because it looks like the default ladder, especially if they prefer hands-on work, daily wins, or are not energized by people leadership. At the same time, Shreyas Doshi argues that long-term relevance in the AI age depends on evaluating logic rather than superficial tells about whether something "looks AI generated." Scott Belsky adds that the human edge will center more on taste and agency .
- Why it matters: Career progression is becoming less about title conformity and more about judgment, fluency, and role fit .
- How to apply: Choose the ladder intentionally, then practice reviewing AI output for reasoning quality instead of style markers .
Tools & Resources
- How to Build Product Strategy in the Age of AI: Step-by-Step with Claude Code — a compact strategy template: Objective, Users, Superpowers, Vision, Pillars, Impact, Roadmap, plus the anti-pattern check and 30-second test .
- The Interface Was the Product — useful if you're designing AI-native workflows and need a sharper lens for interface teaching vs. benefit teaching .
- AGENTS.md + read-only MCP + agent analytics/pricing roadmap — a practical starter set if you expect agents to use your product, not just humans .
- AI Productivity course — the course link Sachin Rekhi shared alongside his advice on Claude Code fluency .
- The Messy Middle of AI — Scott Belsky's interview on organizational debt, collapsed-stack teams, pilots, and the role of taste and agency in AI adoption .
- From Building Habits to Breaking Limiting Beliefs with Nir Eyal #beyondbelief — a useful refresher on the Hook Model, the investment phase, and the persuasion-vs.-coercion boundary for habit-forming products .
The Product Compass
scott belsky
Big Ideas
1) AI is pulling PM and UX toward delivery unless teams protect strategy
A Reddit discussion argues that the current AI reset can pull PM and UX out of product shaping and into faster delivery work . The proposed response is to watch how much time teams spend in problem versus solution space, align UX with PM and business, and push leaders to preserve strategy instead of turning everyone into AI builders . Another commenter added that if you are not at the strategy table, your role may realistically collapse toward execution, especially under older operating models they see as uncompetitive in the AI era .
Why it matters: The risk is not just adopting AI tools poorly; it is losing influence over what gets built .
How to apply: Protect problem-space work, make the strategy-versus-delivery split explicit, and be clear about whether your role is shaping direction or executing it .
2) Platform shifts favor new builds over change-heavy retrofits
Scott Belsky argues that it is much easier to build something new than change something old . In platform shifts, less change management lets teams anchor on first principles, ignore sunk costs, and build for what they think the industry will be more than three years from now .
Why it matters: Legacy change costs can become a strategic drag when the environment is shifting quickly .
How to apply: When evaluating platform-shift bets, separate first-principles thinking from legacy constraints and be explicit about which sunk costs you are carrying forward unnecessarily .
3) Roadmaps are under more pressure to show business impact, not just product logic
In one PM community thread, a team was already using customer interviews and prioritization methods, but the board still wanted to see how the roadmap aligned with company growth . The hard part was that some necessary work addressed poor UX, high time-to-value, scalability, and churn risk rather than net-new revenue . The thread distilled the core tension into a simple question: how do you compare churn-risk reduction against new revenue?
Why it matters: Growth-only framing can underweight product-health work that protects retention and future scale .
How to apply: Translate foundational work into business terms stakeholders already use: churn exposure, time-to-value, scalability risk, and user experience costs .
Tactical Playbook
1) Keep PM work in the problem space before AI pushes everything into delivery
Step 1: Audit how much time your team spends in problem space versus solution space .
Step 2: Keep UX aligned with PM and business when framing problems, rather than defaulting to engineering-led delivery conversations .
Step 3: Push leaders to preserve strategic work instead of relabeling everyone as an AI builder .
Step 4: If you are not in a position to influence strategy, be explicit that your role is execution and optimize for that reality instead of assuming strategy ownership that is not there .
2) Use Claude Code to move from PRD to demo, then to engineer-ready artifacts
A Product Compass guide says Anthropic PMs use Claude Code to go from PRD to working demo in a single session instead of writing specs and waiting for engineering handoffs .
Step 1: Use it when you need to prototype, not just describe an idea .
Step 2: Start from the PRD and build a working demo, using Plan Mode to review before Claude changes anything .
Step 3: If the result is useful, push it to a branch and create a PR, or use it to replace a small Jira ticket by showing the change directly .
Step 4: Use its memory features when context needs to compound across sessions and you do not want to restate the project every time .
3) Make non-revenue roadmap work legible to boards and executives
Step 1: Start with customer interviews and a clear prioritization method, because stakeholders will ask how the roadmap ties back to growth .
Step 2: Challenge whether a supposedly necessary item is actually necessary .
Step 3: Reframe the work in business terms: poor usage feedback, high time-to-value, resilience or scalability gaps, and churn risk .
Step 4: Put that case directly next to the net-new revenue alternative, since that is the comparison stakeholders are already making .
Case Studies & Lessons
1) Claude Code lowers the barrier between product insight and working software
One guide claims Anthropic PMs already use Claude Code to prototype instead of writing specs and waiting for engineering . The same piece also points to an Anthropic hackathon where an attorney, a cardiologist, and a roads worker won because they understood their problems deeply and Claude Code removed friction between idea and build .
Lesson: Deep problem understanding plus lower build friction can matter more than formal engineering background for early product exploration .
2) Revenue-only roadmap debates miss real retention risk
In the roadmap thread, the example problem was a core app experience with poor usage feedback and high time-to-value. The author described it as a ticking timebomb for churn, even though it did not map neatly to new revenue .
Lesson: If prioritization only rewards visible revenue, teams can starve work that protects retention and product quality .
3) Weak AI fluency can narrow ambition inside large organizations
One commenter describing an F500 environment said business PMs, UX, and UXR teams struggled to understand AI well enough, which led to narrow, fixed genAI workflows and slow, confirmation-heavy decisions .
Lesson: AI adoption risk is not only about tooling; it is also about whether the product organization has enough fluency to pursue broader opportunities .
Career Corner
1) Senior-to-IC moves are being treated as normal, not irrational
A Sr Director at a public company described being unhappy in role, worried about being managed out, and getting stronger interest for Principal PM IC roles than for management roles . Several responses said this is a common move and that a high IC title like Principal does not create much long-term concern .
Why it matters: The PM career ladder is becoming less linear in practice .
How to apply: Evaluate the work itself and the level of the IC role, not just whether it looks like a step down on paper .
2) In this thread, compensation did not argue against the IC path
The original poster reported $315k total compensation and said the IC move would not mean much less pay . One commenter said that number looked low for a Sr Director at a public company in a high-cost market . Another pointed to Lenny's Newsletter and said the 50th percentile for M6 was $545k . A separate commenter shared a move from director at a roughly $2B public company making $380k to an IC PM role in big tech making nearly $500k .
Why it matters: In at least this community snapshot, title prestige and pay were not moving in lockstep .
How to apply: Benchmark the role you want against actual market data and peer anecdotes instead of assuming management is always the higher-paying path .
3) The real decision is whether you want the IC day-to-day again
Commenters said IC roles can mean less upward mobility, but potentially better work-life balance, less stress, and more enjoyment of the work itself . Another commenter said it may be a good time to be an IC and catch up on how the PM role is changing . One response also argued that pure senior management PM roles may shrink, while people who are still strategic and tactical could be in a better position in two years .
Why it matters: The question is not only status; it is fit with how PM work is changing .
How to apply: Decide based on whether you want the more hands-on day-to-day of a Principal PM role, not just on title optics .
Tools & Resources
- Guide to Claude Code for PMs — useful if you want to move from PRDs and documents toward working demos, branches, and PRs faster .
- Lenny's PM compensation benchmark — cited in the community discussion as a reference point for evaluating senior-management versus high-level IC compensation .
- SVPG Product Operating Model — recommended in the AI strategy thread as a better fit than older operating models in the current environment .
signüll
Victor
Big Ideas
1) AI is making PM judgment more visible—and more valuable
Across the X conversation, multiple posters make the same distinction: AI speeds up how teams build, but PM leverage still sits in deciding what to build, why it matters, how to sequence it, and how to explain it . One post adds that narrative is "load-bearing from the start" because it aligns the team internally and shapes the user's first interpretation externally .
"you still need someone who can figure out what to build and why - AI just makes the ‘how’ faster"
- Why it matters: As execution speeds up, more outcome variance shifts to judgment, sequencing, and narrative .
- How to apply: In planning and review, use four prompts: what are we building, why now, how should it be sequenced, and what story needs to be present from day one?
2) The scarce profile is a "product thinker," not a job title
The thread argues that the underrated hire is a great product person or "product thinker": someone who understands where the product is strong or soft, can sharpen it through iteration, and can hold a view of where it should be in two years and work backward . The rarest version of that person sits at the intersection of culture and deep technology, with enough technical understanding to know what is possible and enough cultural judgment to distinguish durable currents from ephemeral ones . Lenny Rachitsky agreed with the underlying point and said great PMs will thrive in the AI era, even if they are not formally titled PMs .
- Why it matters: As building becomes less of a bottleneck, the value of this role compounds .
- How to apply: Hire and coach for PM craft rather than title: product intuition, long-range thinking, technical fluency, and the ability to shape a coherent product narrative
3) Current AI-for-PM tooling still breaks on missing product context
A Reddit discussion makes a narrower execution point: teams may have AI research tools and PM software stacks, but the prototype still does not know the product well enough, so edge cases show up late . Teams approve the happy path, then engineering asks a simple failure-state question like "what happens when there's no data here," sending work back to PM, design, and review . When the prototype does not resemble the real product, review time also gets spent on wrong components and mismatched flows instead of the concept itself .
- Why it matters: Early prototyping only saves time if it surfaces real constraints before engineering, not after .
- How to apply: Treat prototype reviews as context reviews too: check unhappy paths, data-missing states, and whether the artifact is close enough to the real product that feedback stays on the decision rather than the mockup
Tactical Playbook
1) Pull edge cases forward before engineering starts
One response in the Reddit thread is direct: user stories and acceptance criteria are supposed to cover this; if edge cases are still surprises in engineering, the PM work needs fixing .
Step-by-step
- Write user stories and acceptance criteria that include edge conditions, not just the happy path .
- In review, ask explicit failure-state questions such as what happens when expected data is missing .
- Update the flow before engineering starts so the work does not loop back through PM, design, and review later .
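One hedged way to make this concrete: encode the acceptance criteria, including the failure states, as tests before engineering picks up the work. The `render_dashboard` function and its empty-state copy below are hypothetical stand-ins for the flow under review.

```python
# Acceptance criteria as tests, covering edge conditions up front.
import pytest

def render_dashboard(rows):
    # Placeholder for the real implementation being specified.
    if rows is None:
        raise ValueError("rows must be a list, not None")
    if not rows:
        return {"state": "empty",
                "message": "No data yet. Import or connect a source."}
    return {"state": "ready", "items": rows}

def test_happy_path_renders_items():
    assert render_dashboard([{"id": 1}])["state"] == "ready"

def test_no_data_state_is_designed_not_discovered():
    # The classic engineering question, answered before the build starts.
    out = render_dashboard([])
    assert out["state"] == "empty"
    assert "No data" in out["message"]

def test_missing_data_fails_loudly():
    with pytest.raises(ValueError):
        render_dashboard(None)
```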
2) Use AI to challenge the prototype, not just create it
Another commenter argues that most "AI for PM" products behave like copilots, while the missing capability is an agent that holds product context, constraints, and edge cases, then challenges the prototype "like a cranky engineer" .
Step-by-step
- Feed the agent specs so it has the product context the prototype is otherwise missing .
- Pull in analytics or other available signals about how the product behaves today .
- Simulate unhappy paths and edge cases before handoff .
- Turn the output into test cases and a gap list for PM, design, and engineering review .
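A hedged sketch of that agent loop: give the model the spec plus current-behavior signals, then ask for unhappy paths, test cases, and gaps. The Anthropic messages call is real; the model id, prompt, and file names are assumptions for illustration.

```python
# "Cranky engineer" review pass over a prototype's spec and signals.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def challenge_prototype(spec: str, analytics_notes: str) -> str:
    prompt = (
        "You are a skeptical senior engineer reviewing a prototype.\n\n"
        f"Product spec:\n{spec}\n\n"
        f"How the product behaves today:\n{analytics_notes}\n\n"
        "List (1) unhappy paths and missing-data states the spec ignores, "
        "(2) a concrete test case for each, and "
        "(3) open gaps for PM, design, and engineering review."
    )
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumption: any current model
        max_tokens=1500,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

# Example (hypothetical inputs):
# gaps = challenge_prototype(open("spec.md").read(),
#                            "~30% of accounts have no usage data yet")
```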
3) Keep review artifacts close to the real product
The thread's practical complaint is that when prototypes use the wrong components or flows, half the meeting becomes an explanation of what the prototype is not .
Step-by-step
- Use components and flows that resemble the actual product closely enough to keep discussion on the concept .
- Compare the prototype against real product constraints and missing-state behavior, not just visual plausibility .
- If review time is dominated by prototype fidelity issues, revise the artifact before using it for concept review .
Case Studies & Lessons
1) Community case: late edge cases create a predictable rework loop
In the Reddit example, PM writes the flow, design handles the happy path, everyone approves it, and then engineering uncovers missing cases such as no-data states . The result is a loop back through design, PM, and review, which undercuts the point of prototyping early .
- Lesson: A prototype is only early learning if it contains enough product context to expose the uncomfortable cases before engineering does .
2) Community case: the same failure mode produced two different fixes
The same thread surfaced two diagnoses. One says the fix is better PM hygiene through stronger user stories and acceptance criteria . The other says the gap is tooling: build an agentic workflow that ingests specs and analytics, simulates unhappy paths, and outputs test cases and gaps .
- Lesson: Teams can attack the same execution problem from two directions: better requirements discipline or better context-aware tooling .
- What to take away: If your team keeps getting surprised in engineering, first determine whether the failure is missing PM discipline, missing product context in the prototype, or both .
Career Corner
1) PM titles matter less than PM skills
The clearest career signal in this set is that you do not have to be called a PM, but you do have to be good at the "PM'y things" . Those skills include figuring out what to build and why, holding a strong point of view on where the product should go, and shaping how the work is sequenced and explained .
"Great PMs are going to thrive in the AI era."
- How to apply: Make your value visible through product decisions, prioritization, and direction-setting .
2) Build a two-year working model of the product
The "product thinker" description centers on holding a view of where the product should be in two years and working backward from there .
- How to apply: Review current decisions against an explicit forward view, then sequence nearer-term iterations toward it .
3) Become bilingual in culture and deep technology
The rarest profile described in the thread understands both technical possibility and which cultural currents are durable versus ephemeral .
- How to apply: Develop fluency on both sides of the interface: what the technology can realistically do, and which user or cultural signals are durable versus short-lived .
4) Treat narrative as core product work
One post argues that the story around a product matters as much as the thing itself because it aligns the team internally and frames the user's first experience externally .
- How to apply: Build the explanatory narrative alongside the product instead of trying to retrofit it later .
Tools & Resources
- Reddit discussion: "Prototyping has this weird problem nobody talks about" — a useful thread on prototype fidelity, edge cases, and the back-and-forth between PM, design, and engineering .
- Agentix Labs blog — one commenter pointed to this as a place they are tracking patterns for building agentic workflows that ingest specs, pull analytics, simulate unhappy paths, and output test cases plus gaps .
- Original X thread on the rise of the "product thinker" — useful for the career framing around judgment, long-range product sense, narrative, and the culture-plus-technology skill mix .
April Underwood
Josh Kale
Big Ideas
1) A new distribution channel: agents discover products programmatically
Aakash Gupta frames a shift from human-facing discovery (search/app stores/websites) to agent-facing discovery, where agents connect, authenticate, execute, and move on—discovering tools through CLIs, MCP servers, and machine-readable documentation.
“If your product cannot be parsed, authenticated, and executed by an agent, you are invisible in the fastest-growing software channel.”
Why it matters: This changes what “shipping” means for many B2B/dev tools: not only UI/UX, but also whether an agent can reliably find and use your product .
How to apply: Build an “agent-accessible stack” on top of a solid API (docs → CLI → MCP) . Treat tool naming/selection as product work: the PM’s judgment helps decide what features to expose, which to expose first, and how to describe them so agents select correctly .
2) AI didn’t make the PM-UX-Tech trio obsolete; it changed when collaboration matters
Bandan argues that AI made solo work more viable in small moments—but made collaboration more important in the moments that matter. AI also blurs lanes (PMs can generate wireframes, designers can prototype, engineers can ship UI without design review), creating the temptation that one person can do it all .
The catch: collapsing roles reduces self-challenge—“some friction was load-bearing” for catching bad assumptions before they ship .
Why it matters: AI can accelerate execution, but it doesn’t automatically create the perspective diversity needed when interpretation and tradeoffs drive outcomes (core journeys, architectural choices, decisions that are expensive to undo) .
How to apply: Use AI to arrive prepared, then collaborate where interpretation and ownership matter. One suggested AI-era workflow:
PM brings prototype → Trio reacts together → UX generates directions → Tech stress-tests → Align early → Ship
3) “AI customer simulation” is the wrong argument; the right one is: what job are you hiring AI for?
Leah Tharin calls the debate a false binary. If you hire AI to predict what customers will do next, it will fail; if you hire it to give “fresh pairs of eyes” on a homepage quickly, it can be “shockingly good” .
She distinguishes:
- What AI can’t do: simulate real behavior over time, predict churn, model willingness to pay, understand buying-committee politics, or replace talking to real customers .
- What AI can do: heuristic evaluation—spot confusing messaging, contradictions between pages, or mismatched CTAs/forms .
Why it matters: Teams risk over-trusting “plausible personas” that won’t surprise you like real interviews—and cannot tell you whether people will buy .
How to apply: Use AI as a fast heuristic pass (especially pre-traffic) to catch messaging blind spots and stress-test positioning across segments, then validate with real customer conversations .
4) Pricing and packaging in the AI era: customers want control and predictability
In an a16z interview, Atlassian’s CEO argues usage/outcome-based pricing won’t be the majority for all SaaS, partly because customers “hate it” when usage isn’t clearly tied to value and isn’t in their control . He highlights how AI credits/tokens can feel unpredictable (“casino chips”), and feature additions can unexpectedly increase customers’ usage without the customer choosing it .
He also offers two useful frames:
- Input-constrained vs output-constrained work: some processes have fixed demand (customer service, legal), where AI mainly improves efficiency; others (creative marketing, software development) can scale output as efficiency rises .
- A simplified SaaS classification: some seat-based businesses are vulnerable if AI reduces the need for seats tied to doing the work (he uses Zendesk as an example), while others (e.g., Workday as a system of record) may be more resilient .
Why it matters: Monetization discussions (credits vs seats vs outcomes) often fail when they ignore what the customer can actually control—and what will feel fair/predictable .
How to apply: When proposing AI packaging, pressure-test whether customers can manage cost drivers (and understand them), and whether added AI features change bills in ways customers didn’t “choose” .
5) Procurement can be a moat (not just product)
A post by Josh Kale claims Anthropic introduced a marketplace that lets companies route their existing Anthropic budget to third-party tools (e.g., GitLab, Snowflake, Replit) under one contract, reducing procurement friction—and potentially creating a moat independent of model quality . April Underwood called the concept “super smart,” noting she wanted to reach something similar with Slack Platform .
Why it matters: Distribution and adoption can hinge on non-technical constraints (budgeting/procurement). If true, “contract aggregation” becomes part of the product strategy surface area .
How to apply: When evaluating partnerships/marketplaces, model adoption friction explicitly: what can you bundle into existing procurement pathways, and what requires net-new approvals? (Keep this grounded in how your buyers actually buy.)
Tactical Playbook
1) Build for agents: a practical docs → CLI → MCP sequence
Gupta’s recommended build order (on top of a solid API) is:
- Documentation (AGENTS.md + OpenAPI + Agent Skills)
- CLI
- MCP server
Step-by-step (start this sprint):
Make your API machine-contractible
- If your docs are scattered, agents can’t parse them; create a single OpenAPI 3.0 spec as the “machine-readable contract” .
Add an agent-facing instruction surface
- Draft an AGENTS.md describing how agents should work with your codebase/product (executable commands early, boundaries on what agents should never do, exact framework versions) .
Wrap for composability with a CLI
- Treat the CLI as a structured wrapper around your API that supports Unix-style composability (e.g., JSON output, env-var auth, chaining) .
Expose “tools” via an MCP server
- Use MCP to expose product capabilities as tools AI clients can discover/call through a standard protocol .
Apply MCP quality guardrails (where many teams fail)
- Tool descriptions: avoid vague descriptions (“manages tasks”). Research cited by Gupta suggests agents start failing at 30+ tools when descriptions overlap, with wrong selections “virtually guarantee[d]” at 100+; reducing Playwright’s MCP server from 26 tools to 8 improved accuracy .
- Auth without a browser: use OAuth device flow (URL + code) or API keys; don’t make browser-dependent auth part of the critical path .
- Structured errors: make errors actionable (e.g., “API_TOKEN is invalid…”) .
- Idempotent endpoints: agents retry; handle duplicates gracefully .
- Clear rate limits: return 429 with Retry-After headers .
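A hedged sketch of those guardrails as a single MCP tool, using the Python MCP SDK's FastMCP helper (`pip install mcp`). The server name, the Stripe-style description, and the error text are illustrative assumptions.

```python
# One MCP tool with a specific description, structured errors, and
# retry-safe semantics. Names and error strings are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("payments-demo")

@mcp.tool(description=(
    "Process a refund for a specific charge id. Use when a customer asks "
    "for money back on a payment."  # specific verbs, not "manages payments"
))
def refund_charge(charge_id: str, amount_cents: int) -> dict:
    """Refund part or all of a charge; safe for an agent to retry."""
    if not charge_id.startswith("ch_"):
        # Structured, actionable error the agent can act on.
        return {"error": "INVALID_CHARGE_ID: expected an id like 'ch_123'"}
    return {"status": "refunded", "charge_id": charge_id,
            "amount_cents": amount_cents}

if __name__ == "__main__":
    mcp.run()  # exposes the tool over the Model Context Protocol (stdio)
```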
2) Run an AI-era “trio kickoff” that starts in the middle
Bandan’s suggested shift: instead of sequential handoffs, each function arrives with an AI-accelerated artifact so the conversation begins with shared, concrete inputs .
Step-by-step meeting recipe:
- PM pre-work: bring a rough AI-generated prototype so the problem is visible—but stop once it starts answering UX questions and hand off .
- UX pre-work: bring AI-generated user flows/rough concepts/research synthesis and multiple directions to explore .
- Engineering pre-work: bring a quick AI-assisted spike/proof-of-concept that clarifies feasibility, risk, and “hard edges” early .
- In-meeting: react together, surface disagreements faster, kill bad ideas earlier, sharpen good ones sooner .
Rule of thumb: AI-enabled solo work is fine for low-stakes/small-scope validation (internal tools, quick experiments, one flow, proof-of-concept) . Bring the trio in when interpretation and reversibility risk dominate (core journeys, architecture, shared ownership) .
3) Use AI for messaging clarity—then validate with real customers
Leah Tharin’s tool “RoastMyWebsite” simulates five ICP personas visiting a homepage for the first time and outputs a grade, a bounce rate, and specific insights quoting the site’s copy .
Step-by-step (60 minutes):
- Paste your homepage URL (it may also scrape pricing if found) .
- Review the five persona reactions (gut reaction, confusion point, objection, and action like sign-up vs close tab) .
- Classify feedback into:
- Contradictions (e.g., messaging says “simple” vs pricing complexity)
- CTA friction (e.g., CTA vs form complexity)
- Unclear positioning (what it is / who it’s for)
- Make the minimal edits that improve clarity.
- Follow with real customer conversations—because personas are “plausible, not real,” and AI can’t tell you if people will actually buy .
Case Studies & Lessons
1) Tool naming is product: why Stripe-style descriptions beat vague ones
Gupta’s example: “review payments, troubleshoot declines, process refunds” is specific enough that an agent knows what to do; “manages payment operations” is vague and can be skipped .
Takeaway: “Product judgment” increasingly includes tool taxonomy: the words you choose determine whether an agent can correctly select and execute the right capability .
2) Atlassian’s AI in existing workflows: summarize tickets without changing the workflow
In Jira/service workflows, Atlassian describes ticket summarization as a high-leverage insertion point: when a new collaborator joins a ticket with lots of attached files and conversation, summarization can reduce the time to understand context (without changing the underlying workflow) .
Takeaway: Look for “brain bootload” moments in workflows: places where context ramps are costly but the workflow itself doesn’t need to change to realize value .
3) “Create with Rovo”: a UI paradigm shift is also an adoption challenge
Atlassian describes “Create with Rovo” as a shift from blank-page document creation to starting with a prompt/template, with a document pane and a chat pane for operations across the doc (including broad commands like changing headings) . They note power users “love it,” while many regular business users struggle with the new paradigm at first .
Takeaway: AI UX isn’t only model capability—it’s teaching new mental models. Plan explicitly for onboarding users into the new creation/editing paradigm .
4) Procurement as product surface: Anthropic marketplace (as reported)
Josh Kale claims Anthropic’s marketplace could let companies allocate existing Anthropic budget across third-party tools under one contract, reducing procurement friction and creating a moat beyond model quality . April Underwood endorsed the approach as “super smart” .
Takeaway: If your GTM depends on enterprise budgets, distribution may hinge on contracting mechanics as much as feature differentiation .
Career Corner
1) The durable PM value is decisions, not deliverables
Aakash Gupta argues AI will increasingly automate/accelerate “execution layer” deliverables (PRDs, mocks, roadmaps, pulling data), compressing PM-to-engineer ratios; the PMs who struggle are those whose value was the deliverables, while those who thrive create value through decisions under ambiguity .
How to apply: Audit your week:
- List deliverables you produce that AI could accelerate.
- For each, define the decision it supports (what gets built/killed, what tradeoff gets made), and practice making that call explicitly .
2) Owning outcomes + shipping as a “super IC” matters more as teams shrink
Shreyas Doshi says owning outcomes and shipping as a super individual contributor has always mattered—and will matter even more as teams get smaller due to AI .
How to apply: Pick one outcome you own end-to-end this month, and ship at least one artifact that directly moves it (prototype, workflow change, or an agent-facing surface like docs/tooling), not just coordination.
3) Build skill in “restraint” as AI expands your reach
Bandan’s warning: AI gives every role “a longer reach,” but not a better reason to overstep; each role needs to know when to stop and hand the problem back to the right lane .
How to apply: In reviews, add one explicit question: “Where should I stop, and who should take it from here?” .
Tools & Resources
- Agent distribution deep dive (Aakash Gupta): “The PM's Guide to Agent Distribution: MCP Servers, CLIs, and AGENTS.md” https://www.news.aakashg.com/p/master-ai-agent-distribution-channel
- AGENTS.md standard: https://agents.md/
- MCP tool selection research link (as cited): https://www.speakeasy.com/mcp
- Model Context Protocol video (linked in post): https://www.youtube.com/watch?v=a9wO6GSAoGk
- RoastMyWebsite (free): https://tear-my-site-down.vercel.app/
- a16z interview (Atlassian CEO): https://www.youtube.com/watch?v=0lzo2tFBFy8
Sachin Rekhi
Teresa Torres
Casey Winters
Big Ideas
1) When building gets cheap, the bottleneck becomes judgment ("taste at speed")
Teams are increasingly using AI to prototype so fast that the core constraint shifts from "can we build it?" to "should we ship it?". Aakash Gupta highlights Anthropic’s Claude Code team as an extreme example: they build hundreds of working prototypes before shipping a single feature, with Boris Cherny reportedly shipping 20–30 PRs/day across parallel Claude instances, and building "Cowork" in about 10 days.
This shows up in the broader PM community too: one Reddit post describes moving from a weeks-long spec → align → build → measure loop to putting rough versions in front of clients the same day, shrinking feedback loops from weeks to hours.
Why it matters: As prototyping cost collapses, PM leverage moves to rapid evaluation, ruthless focus, and decision quality—especially when stakeholders can react to a demo instead of a doc .
How to apply (this week):
- Run a prototype-first cycle: build a rough demo, test it, then document decisions after validation (not before) .
- Treat the PRD as a source of truth after learning, not an authorization artifact .
2) Alignment is becoming an AI problem: “GitHub for product management”
Teresa Torres spotlights Momental’s vision of a “GitHub for product management”: ingest org documents/transcripts/recordings and use AI agents to map them into a structured context layer, then surface “merge conflicts” in strategy (e.g., one team prioritizing retention while another prioritizes conversion) for humans to resolve .
Momental frames an internal “product chain” (signals → learnings → decisions → principles) and models org context as three trees (product tree, wisdom tree, people/time tree) . They emphasize metadata (who said it, when, and in what context) as critical for preventing hallucinations .
Why it matters: Even if engineers ship faster, PMs still spend large amounts of time coordinating alignment; Momental cites the reality that you “don’t know what you don’t know” when conflicts are implicit or distributed .
How to apply:
- Treat misalignment like a first-class defect: explicitly track decisions with reasoning (not just outcomes), and make conflicts visible for resolution .
- When adopting AI for org context, prioritize provenance/metadata over “just summarization” to reduce ambiguity .
3) Accurate AI agents require domain knowledge + proprietary data, plus a hybrid architecture
In high-stakes domains (fintech, legal, healthcare), accuracy is “the product,” and out-of-the-box LLMs aren’t naturally reliable enough . Two advantages help close the gap:
- Domain knowledge: map workflows, stakeholders, and where a “90% answer” is acceptable vs. a failure .
- Proprietary data: transaction-level data, interaction history, domain corpora for personalization and insights a general model can’t produce .
On architecture, Lisa Huang recommends a hybrid system: LLMs (including multi-agent workflows) where they fit, but deterministic code where you need reliability and control .
Why it matters: Without domain constraints, data advantage, and deterministic guardrails, teams can build fast but ship unreliable behavior in the places users care most .
How to apply:
- Before building: map tasks/subtasks and define explicit accuracy thresholds by step .
- Build hybrid: identify components that must be deterministic and keep them in code .
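A hedged sketch of that hybrid split: the accuracy-critical math stays in deterministic code, and the model only phrases a result it cannot alter. `llm_summarize` is a hypothetical callable and the fee rule is illustrative.

```python
# Hybrid pattern: deterministic core, LLM only for the language task.
from decimal import Decimal

def compute_refund(amount: Decimal, fee: Decimal) -> Decimal:
    # Deterministic step: money math never goes through a model.
    if amount <= 0:
        raise ValueError("amount must be positive")
    return max(amount - fee, Decimal("0"))

def explain_refund(amount: Decimal, fee: Decimal, llm_summarize) -> str:
    refund = compute_refund(amount, fee)  # reliable answer computed first
    # Fuzzy step: natural-language explanation is a safe place for the LLM.
    return llm_summarize(
        f"Explain to a customer that a {amount} payment minus a {fee} fee "
        f"yields a {refund} refund. Do not change any numbers."
    )
```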
4) “Personalized AI” beats one-off chats: build persistent context assistants (Gems/Projects)
Lisa Huang argues the core issue with typical LLM usage is starting from scratch each chat—role, strategy, writing style, product history all reset . Gemini Gems (and analogous Claude Projects / custom GPTs) aim to retain context across work so you don’t re-brief every time .
Why it matters: Persistent context makes AI useful as a daily collaborator for writing, strategy, and synthesis—not just a “glorified search engine” .
How to apply: Start with three “foundation” assistants:
- Writing clone: upload PRDs/emails/Slack messages for drafts in your voice .
- Product strategy advisor: feed strategy docs, positioning, competitor analysis; use as a thought partner (not a replacement for judgment) .
- User research synthesizer: upload transcripts/surveys/support tickets to extract themes you can’t manually read at scale .
Tactical Playbook
1) Prototype fast without derailing the org (vibe prototyping change management)
A practical framing: call prototypes what they are—not deployable code, but a substitute for a “clickable Figma,” and pilot with one team first .
Step-by-step:
- Prototype for yourself first: expect to revise requirements 5–15 times after seeing the first version and noticing what you forgot to specify .
- Bring it to the team: use it to get engineering/design feedback without claiming it replaces their work .
- Use it for stakeholders: prototypes create shared understanding; senior stakeholders often won’t read PRDs, but they will react to a working flow .
- Then validate with users: test with customers to learn quickly .
Prompting discipline (avoid “degrees of freedom”):
- Provide enough context so the tool doesn’t guess across thousands of possibilities .
- Include an object model (high-level entities/relationships) so the prototype isn’t built on wrong assumptions .
- Use a “Goldilocks” amount of context—too little causes wrong guesses; too much can overwhelm context windows .
Build advice for speed: stay front-end as long as possible; delay auth/DB, and “fake it” with sample data (CSV/local storage) until needed . If you realize you’re on the wrong path, restart (“nuke from orbit”) because regenerating is cheap .
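Putting the prompting discipline and the build advice together, a hedged sketch of a prototype prompt that carries an object model and strips degrees of freedom; the entities and constraints are hypothetical.

```python
# Illustrative prompt payload: the object model and constraints travel
# with the request so the tool does not guess.
OBJECT_MODEL = """\
Entities:
- Workspace: has many Projects
- Project: belongs to one Workspace; has many Tasks
- Task: title, status (todo/doing/done), optional assignee
"""

PROTOTYPE_PROMPT = f"""\
Build a front-end-only prototype of a task board.

Context (do not invent entities beyond these):
{OBJECT_MODEL}
Constraints:
- No auth and no database; fake the data with a local JSON list.
- Show an explicit empty state when a Project has no Tasks.
- Match a plain component library; visual polish is not the goal.
"""
```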
2) A quick-start map for the “vibe coding” tool landscape
Dan Olsen’s “Vibe Coding Spectrum” organizes tools from less technical (browser/UI-first) to more technical (IDE/CLI/code-first), with suggested entry points by role :
- Designer-friendly: Figma Make, Magic Patterns
- PM-friendly: Lovable, Bolt, Base44
- More technical: Replit, V0
- Developer tools: Cursor, GitHub Copilot (and others)
How to apply: start on the left where you can iterate quickly, then migrate right only when you hit constraints .
3) Build a Gem (persistent copilot) with a PM-friendly workflow
Step-by-step:
- Write detailed instructions: a full page of context (role, audience, format preferences). Avoid vague prompts like “help me write better” .
- Upload your knowledge files: PRDs, emails, competitor teardowns, roadmaps; Gemini Gems rely strictly on instructions + files, so update the files as context changes .
- Iterate like a mini product: refine instructions/knowledge over time .
Lisa Huang’s suggested scale: a PM may end up with ~20 Gems/Projects across workflows .
4) Measuring AI agents: a three-layer scoreboard (in order)
- Quality: ask “is the AI doing what it’s supposed to do?” via evals, human annotators, and LLM judges (each scales differently) .
- Product metrics: adoption, usage, retention, CSAT; also track qualitative signals (social, customer conversations, support tickets) .
- Business impact: revenue attribution, retention influence, ARR contribution—tracked consistently on the business scorecard .
The sequence matters: jumping to business impact without a quality foundation is unstable measurement .
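A hedged sketch of the quality layer: score agent output against a small eval set, optionally blending in an LLM judge. The cases, scorer, and judge signature are illustrative assumptions, not a specific framework.

```python
# Minimal eval harness for the quality layer of the scoreboard.
EVAL_SET = [
    {"input": "Summarize ticket #123", "must_include": ["refund", "priority"]},
    {"input": "Draft a reply to a churn-risk account", "must_include": ["apology"]},
]

def rule_score(output: str, case: dict) -> float:
    # Cheap deterministic check before paying for an LLM judge.
    hits = sum(term in output.lower() for term in case["must_include"])
    return hits / len(case["must_include"])

def run_evals(agent, judge=None) -> float:
    """`agent` maps input text to output text; `judge` returns a 0-1 score."""
    scores = []
    for case in EVAL_SET:
        output = agent(case["input"])
        score = rule_score(output, case)
        if judge is not None:
            score = (score + judge(case["input"], output)) / 2  # blend signals
        scores.append(score)
    # Track this number first; product and business metrics come after.
    return sum(scores) / len(scores)
```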
Case Studies & Lessons
1) Building AI into hardware changes the design space (Meta Ray-Ban)
Lisa Huang describes constraints that “pure software” teams often don’t face: weight, battery life, privacy, bystander concerns, and even partner pace differences (e.g., Luxottica vs. a Silicon Valley engineering org) . She flags an important trade-off: cloud processing is the default today, but on-device is positioned as the future—especially because “privacy wins over performance” for a device worn on your face all day .
Takeaway: Don’t “fall in love with the technology.” The best AI products sit at the intersection of what users need and what the tech can reliably do today; build fast, observe behavior, and update assumptions .
2) Standing out in AI PM interviews: do the work before you’re asked
Aakash Gupta relays a hiring story from Lisa Huang: a candidate with zero AI experience stood out by watching three hours of TikTok videos from coaches working with small businesses, then bringing synthesized user needs into the interview. No other candidate did comparable pre-work .
Takeaway: The differentiator wasn’t AI credentials—it was initiative and user-centric research depth .
3) Even agents need aligned context (Momental’s pivot)
Momental’s founders described building a “product team of agents” (developer agent, PM agent doing slides/sprint planning), but discovered the agents asked endless sensible questions—mirroring the same alignment problems real teams have. The insight: they hadn’t solved alignment; they needed a context foundation first .
Takeaway: Multi-agent systems can amplify the demand for clear shared context—speed doesn’t remove coordination problems .
4) Cheap code can lead to “shipping slop” unless strategy and focus stay sharp
Casey Winters argues code is now “incredibly cheap,” which can become an excuse for a lack of strategic thinking about what’s worth building . He describes incumbents “DDoS’ing” customers with too many features and notes that running multiple agents doesn’t guarantee value—often it produces “slop” without product sense and business strategy .
Takeaway: Higher build throughput increases the penalty for weak focus: customers get overwhelmed and teams lose clear signal on what’s working .
Career Corner
1) The PM role is shifting toward hybrid builders (judgment stays core)
Aakash Gupta’s framing: AI won’t replace PMs, but it will automate or accelerate execution work (PRDs, mocks, roadmaps, data pulls). Product judgment—deciding in ambiguity what’s worth doing—remains core . Structural changes follow: PM-to-engineer ratios compress and PM expectations shift toward prototyping/design/coding enough to communicate intent .
How to act on it: choose one build-adjacent skill (rapid prototyping, lightweight coding, or system prompt + eval design) and ship artifacts regularly .
2) Breaking into AI PM: remove the “I don’t work on AI” excuse
Gupta’s roadmap includes:
- Get direct AI experience in-role if possible; otherwise build on the side .
- Invest in network and referrals (he emphasizes referrals still matter) .
- Treat interview prep as a skill: practice out loud, get mocks, drill the format (product sense, execution, behavioral, case questions) .
He also argues you don’t need permission, budget, or a team to build AI products—consumer tools provide access to the same models many companies build on, and many companies aren’t fine-tuning at all .
3) Product-manage your career (and keep empathy as the strategy anchor)
Deb Liu recommends treating your career with the same intentionality PMs apply to product roadmaps . She also anchors product strategy in empathy—“vision without customer pain is theater” .
4) Job market signal (EU): Technical Project Manager (AI & Web Infrastructure), Frankfurt
A Frankfurt-based technology startup is hiring a Technical Project Manager to coordinate product strategy and execution for the European market and translate technical capabilities into market-ready products . Responsibilities include market/competitor research, structuring product priorities, coordinating development cycles, supporting validation/evaluation, and exploring AI-based workflow tools . The post lists requirements like CS/technical background, web/cloud fundamentals, structured thinking, and interest in emerging AI tools . Apply via careers@novada.com.
Tools & Resources
Claude Code webinar recording (Sachin Rekhi): Rekhi hosted a live session with 1,500 PMs, covering why he views Claude Code as highly productive for PMs, showing 13 automation skills, and walking through setup (editors/terminals/voice tools) . Video link: https://www.youtube.com/watch?v=zsAAaY8a63Q.
Gemini Gems masterclass (Lisa Huang): Podcast episode URL: https://www.news.aakashg.com/p/lisa-huang-podcast. (Key build steps: detailed instructions, upload knowledge, iterate) .
Vibe Brief template + tool starting point: Dan Olsen recommends starting with Lovable by default and sharing a lightweight “vibe coding brief” at “bitly slash vibebrief” .
AI PM feedback loop (community writeup): the Reddit thread links to https://www.clawrapid.com/en/blog/ai-pm-feedback-loop . One described workflow: rough requirements doc for Claude → prototyping → experimenting → PRD (source of truth) → ship .
A caution on “auto-invoked” AI skills: Rekhi noted that installing an auto-invoked “frontend-design” skill made his monthly NPS trend visualizations harder to read, and he prefers skills he can invoke manually .