PM Daily Digest
by avergin · 100 sources
Curates essential product management insights including frameworks, best practices, case studies, and career advice from leading PM voices and publications
Tony Fadell
Scott Belsky
John Cutler
Big Ideas
1) Product fundamentals are holding up in the AI era
Tony Fadell says the fundamentals have not changed: start with the user, not the tech; focus by saying no; make hardware, software, and services work together; sweat the details; debate hard and then commit; build for the long term. He adds that winning companies build things people actually use and cannot imagine living without.
“Start with the user, not the tech... Focus is everything... Details are the product.”
Scott Belsky makes a similar point through WHOOP: the team lived and breathed the product and its aspirational customer, stayed centered on physical health, readiness, and performance despite pressure to broaden, and cared about design from day one.
Why it matters: New tooling changes how teams build, but these sources still point back to the same advantage: sharp focus on a real user problem, a coherent end-to-end experience, and disciplined scope.
How to apply: Define the user problem before discussing the technology, write down what you will not build, and review the full experience end to end—including the small details users will remember.
2) PM work is moving from mockups and PRDs to working, instrumented artifacts
Sachin Rekhi argues that customer discovery with functional prototypes plus PostHog is much better than asking for feedback on Figma mockups because it reveals what users actually do, not what they guess they might do. His Walkman example is blunt: people called a yellow device sporty, but when they could take one home, they chose black.
A related community write-up described an Anthropic-style loop: skip the PRD, build a working prototype in hours, ship it internally, watch usage, and iterate based on what people actually do. The same post cites claims that 90% of code is AI-written and engineers ship 2,000-3,000-line AI-generated PRs. Product School’s discussion of the new Figma/Codex integration points in the same direction: design and code can now move back and forth in a round-trip workflow, with faster iteration and cleaner handoffs.
Why it matters: Discovery, planning, and handoff get stronger when the team can react to behavior in a working product instead of opinions about a static artifact.
How to apply: Use working prototypes when you need to learn quickly, observe usage before over-polishing, and keep the loop tight between design and code. But keep the scope situational: commenters warned that this model fits controlled environments better than every organization, and that shipping more can create adoption fatigue if users cannot absorb the changes.
3) AI is increasing leverage, but also raising the bar on judgment
At Block, product leadership says a given roadmap will need fewer engineers, designers, and PMs; PMs and designers are already shipping PRs; and BuilderBot is building some features to 100% completion and many to 85-90% before a human finishes the last 10%. Workflows are also changing from linear execution to managing many agents in parallel, and the organization has shifted to smaller squads of 1-6 people with fewer layers.
John Cutler adds the caution: AI lowers unproductive cognitive load and helps more people make progress, but it can also let people get ahead of their skis, while experienced builders feel their days compress into nonstop hard decisions. Product School makes a similar point from the design side: as coding gets faster, the bottleneck can move to thoughtful design and deciding what to build in the first place.
Why it matters: The scarce skill is shifting from doing every task by hand to deciding what deserves attention, review, and slower thinking.
How to apply: Limit the number of concurrent AI threads you can realistically review, match task difficulty to the skill level of the person using the tools, and protect time for the harder product choices that speed alone will not answer.
Tactical Playbook
1) Run behavior-based discovery in five steps
- Build a functional prototype in a tool such as Bolt, Lovable, Reforge Build, Magic Patterns, or Claude Code.
- Integrate PostHog.
- Instrument the key user actions you care about (see the sketch after this list).
- Review return behavior and action data through DAUs/WAUs, retention curves, and action metrics dashboards.
- Add qualitative context with post-usage surveys, session replays, and heatmaps.
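A minimal sketch of the instrumentation step, using the posthog Python SDK (shown with the documented distinct_id/event/properties capture form); the event name and properties are hypothetical placeholders for whatever actions map to your prototype's core job:

```python
# pip install posthog
from posthog import Posthog

# Key and host come from your PostHog project settings.
posthog = Posthog(project_api_key="phc_YOUR_KEY", host="https://us.i.posthog.com")

def track_action(user_id: str, step: str, value: float) -> None:
    """Capture one key user action; 'checkout_step_completed' is a hypothetical event."""
    posthog.capture(
        user_id,                      # distinct_id ties the event to a user
        "checkout_step_completed",    # name only the actions you actually care about
        {"step": step, "cart_value": value},
    )

track_action("user_123", "payment", 49.0)
posthog.flush()  # send queued events before a short-lived script exits
```

Captured events then feed the DAU/WAU, retention, and action dashboards in the next step.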
Why it matters: This stack turns discovery from ‘what do users say?’ into ‘what did users actually do?’.
How to apply: Use surprises—drop-offs, unexpected clicks, low return rates, or mismatches between surveys and behavior—as the starting point for the next round of interviews or iteration.
2) Use a facts-vs-stories loop in stakeholder conflict
Pippa Topp describes a common PM failure mode: defensiveness in meetings, where a request triggers a private narrative and the PM reacts emotionally instead of getting curious. Her coaching move is to separate facts from the story you are telling yourself, then create space between stimulus and response.
Why it matters: This is a practical way to stop reactive communication before it creates an unproductive back-and-forth.
How to apply: When a request bothers you, pause and ask: what happened factually, what meaning am I adding, and what question would help me understand the request better? In forced decisions, acknowledge the discomfort, invite challenge, and if the decision stands, ask for ‘disagree and commit’ rather than fake agreement. The same empathy skill can be extended beyond customers to stakeholders and peers.
3) Fix decision-loss after meetings by standardizing where work lives
One product design discussion described a familiar post-meeting mess: notes, summaries, Jira or Linear tasks, Notion or Docs decisions, and Slack follow-ups all live in different places. Later, teams forget why a decision was made, who owns what, what the next step was, and even which meeting produced the outcome. A related remote-team post says the root problem is often not low communication volume, but the lack of a shared communication system—status updates in random places, decisions in private chats, unclear async rules, and escalation only after something is already late.
Why it matters: Teams often lose execution quality after the meeting, not during it.
How to apply: Give status updates one home, record every decision with rationale, owner, next step, and meeting origin, make async expectations explicit, and define escalation rules before work slips.
4) Coach soft-skill gaps like any other capability
Topp uses the conscious competence ladder to move people from unconscious incompetence to conscious awareness, then practice, and eventually automatic skill. Her methods include life-story reflection, real-time observation, and assessment tools that reveal behavioral patterns and empathy misses. At the team level, she suggests choosing a behavior such as curiosity, defining what good and bad versions of it look like, and measuring it through observation.
Why it matters: Emotional intelligence becomes coachable when it is translated into visible behaviors instead of vague feedback.
How to apply: Pick one behavior to improve this quarter—curiosity is a good candidate—write down what it looks like in meetings, and review examples in coaching or retros until the pattern becomes easier to notice in real time.
Case Studies & Lessons
1) WHOOP stayed narrow and became category-defining
Scott Belsky says WHOOP became an industry-defining product by staying close to its aspirational customer, focusing on physical health, readiness, and performance even when outside pressure pushed broader feature requests, and caring about design from day one.
Takeaway: Protecting a narrow performance focus can beat feature sprawl when the target user is clear.
2) Block changed both how products ship and how teams are shaped
At Block, PMs and designers shipping PRs is routine, BuilderBot can autonomously merge PRs and often build features to 85-100%, and the company now uses smaller, more fluid squads, fewer layers, and a functional product structure under one head of product.
Takeaway: If AI changes throughput, org structure and operating cadence need to change with it—not just individual tool use.
3) The Walkman study showed why behavior beats stated preference
In Sony’s Walkman study, users described the yellow version as sporty and the black version as boring, but when they had to choose one to take home, everyone picked black.
Takeaway: Treat user language as signal, but trust real choice behavior more when the two conflict.
4) A junior PM’s growth came from self-awareness, not more process
Topp described coaching a junior PM who wanted to retreat back to delivery management because the discomfort of product work felt like a sign she was in the wrong role. By unpacking the stories she was telling herself and connecting them to earlier life experiences, the PM gained confidence, communicated more clearly, built trust, navigated stakeholders better, and later took a more senior role.
Takeaway: What looks like role mismatch can sometimes be unprocessed discomfort in a new responsibility set.
Career Corner
1) Curiosity is becoming a practical PM skill
Product School’s advice to PMs was simple: stay curious and play with the tools. The speakers pointed to concrete PM use cases: collating customer feedback into a prioritized backlog, inspecting merged code changes to draft personal release notes, and querying code behavior to understand business implications. At Block, that shift is already concrete enough that PMs shipping PRs is described as routine.
Why it matters: Practical tool fluency now changes how independently a PM can investigate, prototype, and communicate.
How to apply: Start with real work you already own—feedback synthesis, release notes, or product logic questions—before expanding into bigger builds or agent workflows.
2) Resilience starts with self-belief and balanced empathy
Topp says the missing foundation for stressed product leaders is often self-belief, and that resilience improves when people stop tying every challenge directly to their identity. She also describes her own arc from being judgy and competitive, to becoming so empathetic that leadership suffered, and then finding a better balance between compassion and accountability.
Why it matters: PMs need empathy, but not at the cost of avoiding hard calls or accountability.
How to apply: Look for places where external validation is driving your behavior, and check whether your empathy is helping the team understand a decision—or just helping you avoid conflict.
3) Interview companies as hard as they interview you
One PM candidate described turning down a role at a company that wanted to build a SaaS-plus-AI product but had no roadmap, no firm plan, no target audience, and no real product specifics, while expecting the PM to think up the idea and implement it. The candidate’s concern was that this setup pushes too much uncertainty and politics onto one hire without even a rough product direction.
Why it matters: Undefined ownership can look like autonomy at first and impossible scope later.
How to apply: Before accepting a role, ask what problem the company already understands, who the target user is, what rough product direction exists, and what decisions the PM will actually own.
Tools & Resources
1) PostHog + functional prototype stack
This is the most concrete discovery stack in the notes: prototype in Bolt, Lovable, Reforge Build, Magic Patterns, or Claude Code, then layer in PostHog instrumentation, action metrics, retention curves, surveys, session replays, and heatmaps.
Use it for: early product discovery where behavior matters more than opinions.
2) Figma ↔ Codex round-trip workflow
OpenAI and Figma’s new integration allows builders to start in design or code and move back and forth, with faster iteration and a cleaner design-to-ship handoff. Product School’s discussion also notes that a full design can appear in the product from a single prompt, speeding team velocity and shortening handoffs.
Use it for: teams trying to compress design-to-code cycles without losing the ability to iterate in both directions.
3) OpenClaw multi-agent setup
OpenClaw lets you run multiple agents on one machine, each with its own identity, tools, crons, and workspace. The onboarding flow includes files such as soul.md, tools.md, and user.md, and new agents can be added from the terminal. Example workflows in the notes include project-management support for launches, PLG lead qualification, and converting repeated support tickets into documentation issues.
Use it for: recurring PM operations where the work is repetitive, schedulable, and easy to route to a specialized agent.
4) Collaboration options for Claude-generated docs
A community thread highlighted the pain of drafting docs in repo-based markdown, then needing richer formatting and comments for feedback. Suggested solutions included exporting to Word on OneDrive, pushing directly to Confluence, generating target formats such as tables for Google Docs or Sheets, and using scripts to sync markdown to Google Drive while collaborators work in shared docs. One PM also built a custom tool to get Coda/Notion-style collaboration without leaving the repo.
Use it for: teams that want AI-assisted drafting without forcing every reviewer into GitHub-native workflows.
Product Management
Hiten Shah
Big Ideas
1) Evaluate GenAI products beyond accuracy
Accuracy is a trap.
Accuracy describes model performance, but not whether users trust the product, find it useful, return to it, or whether it creates business value. The framework described in the Product School session evaluates GenAI products across trust, usefulness, adoption, and business impact. That matters because a product can be reliable but useless, useful but risky, or well-used but economically unsustainable.
How to apply
- Make each AI feature prove itself on all four dimensions, not just model quality
- Give each dimension concrete metrics, owners, and review cadences before launch (a config sketch follows this list)
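One hypothetical way to encode the four dimensions as an ownership map; every metric name, owner, and cadence below is a placeholder to adapt, not from the source:

```python
# Hypothetical evaluation plan for one GenAI feature; adapt metrics/owners/cadences.
EVAL_PLAN = {
    "trust":      {"metrics": ["hallucination_rate", "source_attribution_coverage"],
                   "owner": "pm",        "review_cadence": "daily"},
    "usefulness": {"metrics": ["task_time_saved", "thumbs_up_rate"],
                   "owner": "design",    "review_cadence": "weekly"},
    "adoption":   {"metrics": ["weekly_active_users", "return_rate"],
                   "owner": "growth_pm", "review_cadence": "weekly"},
    "business":   {"metrics": ["revenue_per_user", "cost_per_request"],
                   "owner": "finance",   "review_cadence": "monthly"},
}

def launch_gaps(plan: dict) -> list[str]:
    """A feature 'proves itself' only if every dimension has metrics, an owner, and a cadence."""
    return [dim for dim, spec in plan.items()
            if not (spec["metrics"] and spec["owner"] and spec["review_cadence"])]

assert launch_gaps(EVAL_PLAN) == []  # block launch if any dimension is unowned
```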
2) Use a two-question screen before automating PM work
Sachin Rekhi’s heuristic is simple: ask whether a workflow is worth building and possible to build with AI. It is worth building when AI has a clear advantage, such as synthesizing customer interviews faster and more comprehensively, or when the task is frequent and time-consuming, such as weekly status updates. It is possible to build when AI can access the right context, the work can be broken into discrete steps, and human judgment is limited enough that the workflow will not stall.
How to apply
- Start with recurring PM tasks where AI already outperforms manual effort on speed or coverage
- Reject automations that depend on hidden context or undefined judgment calls
3) QR codes are becoming a measurable offline growth channel
QR codes can connect packaging, receipts, events, and out-of-home placements to product experiences with very little friction. The more interesting shift is measurement: dynamic tools such as ME-QR let teams update links without reprinting, track sources, segment traffic, and run experiments, effectively bringing performance-style analytics into offline surfaces. The recurring failure modes are basic but important: no clear reason to scan, weak mobile UX, and no tracking.
How to apply
- Use QR only when it clearly makes a user job easier; onboarding, retention, support, referrals, and promos are the cited use cases
- Treat offline scans like any other channel: instrument source, segment traffic, and test destinations (a minimal redirect sketch follows)
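For intuition on why dynamic codes stay measurable, here is a minimal sketch of the underlying mechanism using Flask: the printed code encodes a stable short URL while the destination lives in config. The slug, destinations, and in-memory counter are hypothetical stand-ins for what a tool like ME-QR manages for you:

```python
# pip install flask
# Toy dynamic-QR redirector: the printed code never changes; only config does.
from flask import Flask, redirect

app = Flask(__name__)

# Editable without reprinting: remap a campaign slug to a new page at any time.
DESTINATIONS = {"receipt-promo": "https://example.com/spring-offer"}
scan_counts: dict[str, int] = {}

@app.route("/q/<slug>")
def scan(slug: str):
    scan_counts[slug] = scan_counts.get(slug, 0) + 1  # instrument the offline source
    return redirect(DESTINATIONS.get(slug, "https://example.com"), code=302)

if __name__ == "__main__":
    app.run(port=8000)
```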
Tactical Playbook
1) Roll out a GenAI evaluation system in five steps
- Week 1: define the top three metrics per dimension, set baselines, and choose realistic and stretch targets
- Weeks 2-4: instrument the product, set up dashboards, establish human evaluation, and build feedback collection into the experience
- Weeks 5-8: run a pilot with 50-200 users, gather quantitative and qualitative data, and iterate on the gaps
- Post-launch: monitor trust and safety daily, engagement weekly, business impact monthly, and review the product comprehensively each quarter
- Keep iterating: use A/B tests, user feedback, and updated evaluation criteria as the product changes
Why it matters: the speaker’s lesson is that pilot data should drive launch decisions, and multi-dimensional evaluation surfaces issues that accuracy alone misses.
2) Add learning, memory, and evaluation to Claude with three CLAUDE.md blocks
The Product Compass article proposes three blocks that make Claude more useful for product work: a Knowledge Architecture, a Decision Journal, and a Quality Gate.
How to apply
- Before each task, review domain rules and hypotheses; after each task, store learnings in `/knowledge/{domain}/knowledge.md`, `/hypotheses.md`, and `/rules.md`, and maintain a `/knowledge/INDEX.md`
- Promote a hypothesis to a rule only after 3+ confirmations, and demote it if new data contradicts it
- Before major choices, search prior decisions; if none exists, log the decision, context, alternatives, reasoning, trade-offs, and any superseded choice in `/decisions/YYYY-MM-DD-{topic}.md` (see the sketch after this list)
- Add explicit evaluation criteria outside the generation step, because agents tend to praise their own work even when quality is mediocre
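A minimal sketch of the Decision Journal convention, assuming the file layout above; the helper name, arguments, and section headings are hypothetical:

```python
# Appends a decision entry under /decisions/ using the article's naming convention.
from datetime import date
from pathlib import Path

def log_decision(topic: str, context: str, alternatives: list[str],
                 reasoning: str, tradeoffs: str, supersedes: str = "none") -> Path:
    path = Path("decisions") / f"{date.today():%Y-%m-%d}-{topic}.md"
    path.parent.mkdir(exist_ok=True)
    alts = "\n".join(f"- {a}" for a in alternatives)
    path.write_text(
        f"# Decision: {topic}\n\n"
        f"## Context\n{context}\n\n"
        f"## Alternatives considered\n{alts}\n\n"
        f"## Reasoning\n{reasoning}\n\n"
        f"## Trade-offs\n{tradeoffs}\n\n"
        f"## Supersedes\n{supersedes}\n"
    )
    return path

log_decision("pricing-page-copy", "Trial conversion flat for 2 months",
             ["Keep current copy", "Lead with usage limits", "Lead with outcomes"],
             "Outcome-led copy matched winning interview themes.",
             "Longer page; harder to scan.")
```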
Why it matters: after one month, the author reports Claude was automatically applying 24 project-specific rules, and the decisions with three written alternatives were right 80% of the time.
3) Turn repeated support questions into documentation work every week
A simple Friday workflow from Lenny’s Newsletter: review resolved support tickets, and if a question appeared 3+ times that week, flag it as a docs or FAQ candidate, create a Linear issue assigned to @agent, and include the standard answer as the starting point.
Why it matters: it converts recurring support questions into docs or FAQ candidates and ready-to-assign issues.
How to apply
- Set a weekly review cadence, not an ad hoc one
- Use the recurrence threshold to reduce noise and focus only on patterns
- Include the existing answer so documentation starts from something already working in support (a code sketch follows below)
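A hedged sketch of the filing step using Linear's GraphQL API (the endpoint and issueCreate mutation are real; the key, team ID, and the @agent routing detail are placeholders):

```python
# pip install requests
# Files a docs/FAQ candidate as a Linear issue, seeded with the standard answer.
import requests

LINEAR_API = "https://api.linear.app/graphql"
API_KEY = "lin_api_..."   # personal API key; placeholder
TEAM_ID = "TEAM-UUID"     # placeholder team UUID

def create_docs_issue(question: str, standard_answer: str, count: int) -> str:
    """Create one issue per question seen 3+ times this week."""
    mutation = """
    mutation IssueCreate($input: IssueCreateInput!) {
      issueCreate(input: $input) { issue { identifier url } }
    }"""
    body = {
        "query": mutation,
        "variables": {"input": {
            "teamId": TEAM_ID,
            "title": f"Docs candidate: {question} (asked {count}x this week)",
            "description": f"Standard support answer to start from:\n\n{standard_answer}",
        }},
    }
    resp = requests.post(LINEAR_API, json=body,
                         headers={"Authorization": API_KEY}, timeout=10)
    resp.raise_for_status()
    return resp.json()["data"]["issueCreate"]["issue"]["url"]
```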
4) Pressure-test your product story in 20 minutes
Open a blank document and write down your company’s story in 200 words. What job are customers hiring you to do? Why does your approach work when others fail?
Why it matters: Hiten Shah frames this as a basic clarity test, and says most founders cannot do it in the allotted time.
How to apply
- Limit yourself to 200 words and 20 minutes
- Answer only two questions: the customer job and why your approach works better than alternatives
- Use the exercise as a quick internal clarity check
Case Studies & Lessons
1) Amazon’s AI assistant companion: trust features and utility metrics moved together
In the AI assistant example, every response included source attribution, some responses included confidence scores, and human evaluation kept the hallucination rate below 2%. On usefulness and adoption, the related prompt library drove 40% faster prompt creation, about 85% thumbs-up, 3x higher engagement for manager-specific prompts, 2x retention for users with community prompt access, and 85%+ returning users within a few months.
“It’s a game changer for my workflow and results.”
Key takeaway: trust mechanisms such as source attribution are more valuable when they are paired with clear evidence that the product saves time and keeps users coming back.
2) Amazon’s B2B purchase guardrails: business impact was measurable quickly
The purchase guardrails example generated several million dollars in annualized revenue, served thousands of business customers, reduced manual budget tracking by 80%+, and reached positive ROI within 3 months.
Key takeaway: when an AI product is tied directly to completing a workflow faster and with less manual tracking, PMs can measure business impact in revenue, productivity, and ROI rather than relying on model-centric metrics alone.
3) A structured Claude workspace improved through use
The Product Compass author says that after a month, Claude had generated and was automatically applying 24 project-specific rules extracted from patterns across dozens of sessions. The same write-up says the decisions the author felt most confident about had the worst hit rate, while decisions where three alternatives were written down were right 80% of the time.
Key takeaway: persistent knowledge capture and explicit alternatives can beat confidence-based decision-making.
Career Corner
1) PM job search is shifting from volume to precision
Aakash Gupta argues that mass-applying with AI does not work. His recommendation is to apply to fewer, surgically targeted roles, stack referrals before submitting, and run the search in about 20-30 minutes a day instead of three hours.
How to apply
- Build the referral path before the application: Gupta says cold application callback rates are around 2-4%, while warm intros are 5x higher, and every candidate he coached into a top-company offer had a referral on file before the resume went in
- Send 25 personalized connection requests per week, rotate across target companies, follow up on days 3, 7, and 14, and ask for the referral only after context is established
2) Tailored resumes only help if they stay truthful
Gupta’s warning on AI resumes is blunt: many tools either invent experience or produce generic keyword swaps, and invented experience can backfire when interviewers check it. His recommended standard is a JD-specific resume built only from real experience.
How to apply
- Restructure the resume around the specific job description, but only with evidence you can defend in interview
- Treat fabrication as a risk, not a shortcut
3) Specific work products and interview prep still create separation
Gupta highlights a 90-minute work product: a one-pager analyzing the company’s product plus a working prototype of the recommendation. He also emphasizes company-specific prep, including interview formats, reported questions, and screening signals across 250 companies, plus mock interviews that identify weak areas over time. On the back end, he recommends negotiation research and counter-offer drafts because the compensation impact can be meaningful.
How to apply
- Use a work product when a standard application is not creating enough signal, but make it specific enough that it could only have been written for that company
- Build interview prep around the target company’s actual format and questions, not a generic PM script
Tools & Resources
1) The CLAUDE.md blocks from Product Compass
What it is: a reusable set of three blocks for learning across sessions, logging decisions, and evaluating output quality.
Use it for: ongoing product domains where patterns emerge slowly, teams re-debate the same choices, or AI output needs a separate quality bar.
2) Prompt patterns from Lenny’s Newsletter
What they are: ready-made automation prompts for PLG lead qualification, recurring support-to-docs conversion, and launch management.
Use them for: workflows with clear cadence and routing rules. The examples also show when specialization helps: Sage handles course operations and reminders, while Kelly checks Linear daily, starts a branch, and opens a PR for assigned dev tasks.
3) Dynamic QR tools such as ME-QR
What it is: a way to change destinations without reprinting codes, track sources, segment traffic, and run experiments from offline touchpoints.
Use it for: packaging, receipts, events, support, referrals, and promo mechanics where you want a measurable bridge from offline to product.
4) An AI prototyping checklist from r/ProductManagement
What it covers: the integration layer between LLM APIs, vector databases, and preprocessing; state and context handoffs in RAG systems; token-cost monitoring; and the practical shift toward CLIs for Claude workflows.
Use it for: early planning before a PM-led prototype or side project so the first blockers are visible before implementation starts.
Aakash Gupta
Sachin Rekhi
Teresa Torres
Big Ideas
1) Claude Code is moving from assistant to PM operating system
Aakash Gupta’s core argument: the best Claude Code users are not relying on one-off chats. They build persistent file-based operating systems with skills, sub-agents, hooks, workflows, and markdown knowledge that improve every future prompt. He positions this as the operating-system layer for people spending 8-10 hours a day in the tool, with the potential to move from roughly 80/100 to 95/100 proficiency.
That is what an operating system is. Not a folder full of files. A system where every interaction makes the next one better.
Why it matters: PM work is highly contextual. A persistent workspace lets stakeholder context, project history, goals, and prior fixes survive beyond one chat window.
How to apply:
- Start with `CLAUDE.md` and `GOALS.md`; the source says those two files deliver 80% of the value on day one.
- Keep `CLAUDE.md` current weekly so Claude inherits your role, tools, priorities, and recurring instructions in every message.
- Add persistent people files and project folders so meeting notes, stakeholder preferences, PRDs, research, and launch results compound over time.
- Use sub-agents for research and CLIs instead of MCPs to protect context: one example dropped a research task from about 10% of the main context window to 0.5%.
2) The product trio is compressing into product builders
Teresa Torres argues product management, design, and engineering are not dead, but the classic PM-design-engineering trio is collapsing toward a broader product-builder foundation with specialties layered on top. In her framing, AI now gives people a base level of programming, design, product management, and business-context capability, so 1-2 product builders can handle much of the routine 80% of feature work while specialists focus on the harder 20%.
Why it matters: This changes team design and individual expectations. Torres expects smaller, more cross-functional teams, while still arguing that human strengths in alignment, trade-off decisions, organizational context, and innovation remain important.
How to apply:
- Build horizontal AI skills alongside your core craft; Torres describes this as a modern T-shaped product-builder foundation.
- Learn to specify what you want and plan with an agent; she says that base foundation no longer requires direct exposure to code for many common web-app tasks.
- Keep investing in your specialty. Her argument is not that expertise disappears, but that expertise is increasingly paired with AI fluency inside the function itself.
- If you lead teams, start thinking about safety infrastructure now, including security, accessibility, and code-review agents, because broader participation in building raises review demands.
3) Pricing architecture is becoming core PM territory
The Product Compass makes a blunt case: as AI compresses time spent on Jira, PRDs, and standups, PMs are increasingly responsible for business outcomes, and pricing sits near the center of that shift. Its thesis is simple:
Pricing should live in config, not code.
The article contrasts companies that can change pricing in hours with teams that still need quarters. It cites Vercel shipping 5-6 pricing changes per month, while many companies make 1-2 changes per year and consume a quarter of engineering time for each.
Why it matters: If plans, entitlements, usage limits, and experiments are hardcoded, pricing becomes an engineering bottleneck rather than a product lever.
How to apply: Use the four-pillar test for monetization agility (a minimal sketch follows the list):
- Unified product catalog: one schema for plans, features, entitlements, and prices.
- Decoupled entitlements: central runtime rules instead of scattered `if (plan == ...)` checks.
- Real-time metering: usage visibility for customers, sales, and finance before the invoice surprise.
- Control plane: a dashboard where non-engineers can run pricing experiments and adjust limits without code deploys.
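A minimal sketch of the first two pillars under the article's "config, not code" thesis; the plan names, features, and limits are hypothetical:

```python
# Hypothetical unified catalog; in production this would load from a config
# service or control plane, so pricing changes need no code deploy.
CATALOG = {
    "free": {"entitlements": {"seats": 1,  "api_calls_per_mo": 1_000,   "sso": False}},
    "pro":  {"entitlements": {"seats": 10, "api_calls_per_mo": 100_000, "sso": True}},
}

def entitlement(plan: str, feature: str):
    """Single runtime entry point, replacing scattered `if plan == ...` checks."""
    return CATALOG[plan]["entitlements"].get(feature)

def allow_api_call(plan: str, used_this_month: int) -> bool:
    return used_this_month < entitlement(plan, "api_calls_per_mo")

# Changing a limit is a config edit, not a code change:
CATALOG["pro"]["entitlements"]["api_calls_per_mo"] = 250_000
assert allow_api_call("pro", 200_000)
```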
Tactical Playbook
1) Stand up a lightweight PM operating system in Claude Code
- Create `CLAUDE.md` with your role, work style, installed tools, current priorities, and references to your skills.
- Add `GOALS.md` for quarterly priorities; the source recommends starting here before building more structure.
- Set up `knowledge/people/` and update it after meetings so stakeholder preferences and recent context are reusable in future communication.
- Create one folder per active project, then archive completed projects for reuse on similar work later (a scaffold sketch follows below).
- Monitor the status line and `/context`, and push research to sub-agents instead of the main session when context starts climbing.
- Use Jupyter notebooks for CSV analysis when you need transparent, reviewable methodology, and use the ask-user-questions tool when requirements or decision criteria are still fuzzy.
Why this matters: The operating model turns scattered PM work into reusable context and lowers the cost of repeating research, analysis, meeting prep, and writing from scratch.
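A small sketch that scaffolds such a workspace; `CLAUDE.md`, `GOALS.md`, and `knowledge/people/` come from the source, while the `projects/` and `archive/` names are hypothetical choices:

```python
# One-time scaffold for a file-based PM workspace.
from pathlib import Path

LAYOUT = {
    "CLAUDE.md": "# Role, work style, tools, current priorities, skill references\n",
    "GOALS.md": "# Quarterly priorities\n",
    "knowledge/people/README.md": "# One file per stakeholder: preferences, recent context\n",
    "projects/README.md": "# One folder per active project\n",
    "archive/README.md": "# Completed projects, kept for reuse on similar work\n",
}

def scaffold(root: str = ".") -> None:
    for rel, stub in LAYOUT.items():
        path = Path(root) / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        if not path.exists():          # never clobber an existing workspace
            path.write_text(stub)

scaffold()
```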
2) Close the gap between acceptance criteria and actual testing
A Reddit post surfaced a familiar failure mode: a PM wrote the checkout flow step by step in the PRD, but QA backlog, outdated scripts after a UI change, and mutual assumptions meant the flow still shipped broken. The PM’s takeaway was that knowing the flow well was not enough because the knowledge never became an executable test.
How to apply:
- Identify flows where a broken handoff would create visible customer damage, such as checkout or onboarding.
- Convert plain-English acceptance criteria into something that runs against the actual product, not just a documentation artifact.
- Review screenshots or pass/fail evidence before sprint review, rather than assuming regression coverage exists.
- If QA ownership is fragmented, treat PM participation in testing as a temporary control, not an exception.
- Do not rely on documentation alone to solve the problem; one community response argued the handoff gap still comes back to direct communication.
3) Put pricing on a monthly operating cadence
The Product Compass suggests a two-hour monthly pricing meeting with four blocks: customer data, learnings scan, product-to-pricing roadmap sync, and decisions/actions.
How to apply:
- Review usage, billing, and approaching-limit customers to spot expansion candidates and churn risk.
- Add cross-functional input from sales, CS, finance, marketing, and growth on win/loss patterns and pricing friction.
- For every feature shipping in the next 30-90 days, decide its monetization stance up front; the rule proposed is that no feature ships without one.
- Leave with 1-3 local experiments, each with an owner, hypothesis, timeline, and expected impact.
Why this matters: It separates infrequent global pricing changes from continuous local experiments, giving PMs a repeatable way to connect product roadmap and revenue decisions.
4) When engineering relationships are political, build trust before trying to redirect the roadmap
Community advice in a discussion about resistant developers was consistent on one point: trust comes before leverage. The recommended pattern was to listen first, find the influential developer, make small suggestions once you are situated, and avoid upending a team’s plan immediately as a newcomer.
How to apply:
- Treat developers as partners, not order takers; commenters framed weak PM-engineering trust as the root problem in these scenarios.
- Build credibility by representing the existing roadmap before advocating major changes.
- If your manager reassigns you or inserts themselves into the work, ask directly what pattern they are seeing and what feedback you need to hear.
Case Studies & Lessons
1) Monetization architecture changed shipping speed at Zep, Plotly, and Automox
- Zep: modeled plans and entitlements, went from trial start to production in 4 days, and later used limit enforcement to improve free-to-paid conversion while giving sales earlier visibility into usage.
- Plotly: launched two AI products two quarters faster because catalog and entitlements were already modeled centrally.
- Automox: after years of hardcoded monetization logic across two billing systems, it cut time-to-launch for new pricing tiers by 75% and freed two full-time engineers from maintenance work.
Lesson: Pricing agility is not only a packaging problem. It is an architectural capability that determines how quickly PMs can test monetization ideas.
2) A broken checkout flow showed that a PRD is not a test plan
One PM’s postmortem described a flow that was written clearly in a Notion PRD, demoed repeatedly, and still shipped with a production bug because no one converted that knowledge into an updated test. After adopting a plain-English testing tool that ran on real devices and returned screenshots plus step-level pass/fail, the PM says they caught two production-bound issues in the first week.
Lesson: The verification loop breaks when documentation, QA scripts, and ownership drift apart. The fix is executable validation, not better prose alone.
3) Horizontal expansion can damage the core product
Teresa Torres says Zapier’s expansion into adjacent products has coincided with degradation in the core automation experience, citing repeated failures where zaps did not trigger. Her workaround has been to ask Claude to build custom webhook listeners because she finds the resulting code more reliable and easier to control for error handling. She adds that she is slowly moving off both Zapier and Airtable because of persistent quality issues.
Lesson: New surface area can hide declining reliability in the core workflow. PMs expanding horizontally need to watch quality metrics on the original product, not just adoption of the new bets.
Career Corner
1) The safest career move right now is becoming a stronger product builder
Torres’ career advice is direct: build horizontal AI skills while continuing to deepen your functional expertise. She argues that if you do not learn how to use AI inside your function, you will no longer be seen as an expert in that function, and she notes that job descriptions and interview processes are already changing.
How to apply: Practice two skills now: specifying what you want clearly and planning work with agents, then pair that with deeper expertise in your primary craft.
2) Early-career PMs should optimize for signal, not resume mythology
Advice to an APM with informal startup experience was straightforward: include the work on the resume, but focus on what you did, the problems you solved, and your responsibilities, not on ownership structure or proprietorship details. The same commenter suggested staying in the APM role for at least 1-2 years to build clearer, more relevant product experience before making the next move.
3) Domain switches are harder in an oversupplied market
A PM with about four years in data and analytics product management said they were reaching final rounds for customer-facing roles but losing out to candidates with more direct domain experience, despite feedback that their core PM skills were transferable. They also pointed to candidate oversupply as part of the problem.
Takeaway: In the current market, transferable PM skill is still valuable, but it may not beat direct domain familiarity when employers have many candidates to choose from.
Tools & Resources
1) PM OS starter repos
- Carl’s Product OS: a lighter starting point for a Claude Code workspace.
- Aakash’s PM Claude Code setup: a larger setup with 41 skills and 7 sub-agents.
Why explore them: Both are meant to reduce setup friction and give PMs a concrete file structure, skills layout, and workflow starting point.
2) Jupyter notebooks for auditable analysis
The recommendation here is to ask Claude to analyze data in a Jupyter notebook so every query, output, and chart is preserved as code cells and rendered results.
Use it when: you need analysis that a manager or data scientist can verify step by step, rather than a black-box summary.
3) The ask-user-questions tool
Claude can generate a terminal UI with checkboxes and input fields to gather requirements, fill context gaps, or support decisions instead of guessing.
Use it when: assumptions are the main failure mode in discovery or planning.
4) A prompt-optimization loop for recurring agent workflows
Aakash Gupta describes a Karpathy-style loop for prompts: pick the prompt to improve, use 2-3 realistic test inputs and 3-6 binary quality checks, run repeated evaluations, mutate one variable at a time, keep winners with version control, and revert losers. He cites a pace of about 12 experiments per hour and roughly 100 overnight.
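A hedged sketch of that loop; `call_model` is a hypothetical stand-in for your actual model client, and the test inputs, binary checks, and mutations are illustrative only:

```python
# Karpathy-style prompt optimization: fixed test inputs, binary quality checks,
# mutate one variable at a time, keep winners and revert losers.
import random

TEST_INPUTS = ["refund request, angry tone", "billing question", "feature request"]

CHECKS = [                                      # illustrative binary checks
    lambda out: len(out) < 600,                 # concise enough
    lambda out: not out.strip().endswith("?"),  # answers rather than asks back
]

def call_model(prompt: str, user_input: str) -> str:
    """Hypothetical stand-in for a real model client; returns a canned reply."""
    return f"Reply to '{user_input}' using a prompt of {len(prompt)} chars."

def score(prompt: str) -> float:
    outputs = [call_model(prompt, x) for x in TEST_INPUTS]
    return sum(check(o) for o in outputs for check in CHECKS) / (len(outputs) * len(CHECKS))

def optimize(prompt: str, mutations: list[str], rounds: int = 12) -> str:
    best, best_score = prompt, score(prompt)
    for _ in range(rounds):
        candidate = best + "\n" + random.choice(mutations)  # one change at a time
        if (s := score(candidate)) > best_score:            # keep winners, revert losers
            best, best_score = candidate, s
    return best

optimize("You are a support agent.", ["Answer in under 100 words.", "Cite the policy."])
```

In practice each winning variant would be committed to version control so losers can be reverted, as the source recommends.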
Use it when: you have a prompt or system instruction that is already good enough, but not yet reliable, in workflows like support, internal automations, extraction, or code review.
5) Reforge AI Productivity
Sachin Rekhi says the updated live sessions are focused on what has become most actionable for PMs over the last six months: automating PM workflows with Claude Code, the AI prototyping mastery ladder, AI-powered customer discovery, and AI-enhanced product strategy and execution.
Andrew Chen
Julie Zhuo
Big Ideas
1) PMs need an agent-management operating model
“2005: growth teams optimized funnels / 2015: growth teams optimized loops / 2026: growth teams optimize agents”
Claire Vo and Julie Zhuo describe the same shift from different angles. Agents work best when PMs use classic management skills: define purpose clearly, scope a role, give the right context and tools, onboard progressively, and split work across specialized agents when context gets too large. Humans still need to define what great looks like and curate the final outcome.
Why it matters: PM leverage is moving beyond prompting. The harder questions are who does what, with what permissions, in what context, and how success is judged.
How to apply:
- Define success before setup: spell out the outcome, and what an excellent vs. mediocre result looks like.
- Start with narrow roles and limited access, then expand trust over time—for example, calendar access before email drafting or sending (a config sketch follows this list).
- Use multiple specialized agents instead of one overloaded generalist when work spans different contexts.
- Treat onboarding as product design: make it simple, conversational, and oriented around helping the user feel like a winner.
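One hypothetical way to encode progressive trust is a permission ladder per agent role; the agent name, tool scopes, and tiers below are illustrative, not from the source:

```python
# Hypothetical permission ladder: an agent starts at tier 0 and earns scope
# only after its work at the current tier has been reviewed.
TRUST_LADDER = {
    "scheduler_agent": [
        {"tier": 0, "tools": ["calendar.read"]},
        {"tier": 1, "tools": ["calendar.read", "email.draft"]},
        {"tier": 2, "tools": ["calendar.read", "email.draft", "email.send"]},
    ],
}

def allowed_tools(agent: str, trust_tier: int) -> list[str]:
    tiers = TRUST_LADDER[agent]
    return tiers[min(trust_tier, len(tiers) - 1)]["tools"]

# Day one: read-only. After a few reviewed weeks, widen the scope.
assert allowed_tools("scheduler_agent", 0) == ["calendar.read"]
assert "email.send" in allowed_tools("scheduler_agent", 2)
```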
2) Better product decisions come from higher understanding, not perfect certainty
Julie Zhuo describes Meta’s product culture as hypothesis-led and experiment-based: use data to test whether a belief is true, and change course if it is not. She defines a high-quality decision by whether it drives the intended outcome over time, and by whether the process was rigorous enough to trust in the short term. Her framing at Sundial is similar: if AI can raise understanding from roughly 30% to 50-60%, teams can make better, faster decisions without waiting for perfect information.
Why it matters: PMs do not need certainty to move; they need clearer hypotheses, better validation, and enough confidence to act before the window closes.
How to apply:
- Start with the outcome you want, not the feature you want to build.
- List the assumptions behind the decision and validate them with customer conversations or data.
- Compare ideas by building and testing them, especially in small teams, instead of resolving everything in alignment meetings.
- Make the call once understanding is materially better, even if it is still incomplete.
3) Availability is a leadership framework, not a personality trait
Availability is framed here as being reachable, present, engaged, and reliable in follow-through—and as a common trait among the best leaders, even though it rarely appears in standard leadership frameworks. The operating behaviors are simple: make time, make space, make calls, and make good on commitments.
Why it matters: Chronic unavailability does more than slow execution. Teams stop escalating issues, trust erodes through unclosed loops, and people start proposing smaller, safer bets.
How to apply:
- Protect calendar space for decisions and people who actually need your input.
- In meetings, reduce context-switching and be fully present.
- Decide when the decision is needed, rather than deferring to another meeting or more data by default.
- Audit your own follow-through, and ask specific questions such as “What took too long from me?” or “When did you wish I’d been around?”
Tactical Playbook
1) Set up an agent like a new teammate
- Define the purpose: what outcome matters, and what excellent vs. mediocre performance looks like.
- Scope one role at a time, with the right tools, context, and style guidance.
- Start with progressive trust instead of full autonomy on day one.
- Learn the agent’s strengths and weaknesses through repeated interactions, just as you would with a new teammate.
- Split work into specialized agents when context gets crowded.
Why it matters: This turns vague AI experimentation into a repeatable operating model that matches how managers already create clarity and accountability.
2) Raise decision quality without slowing the team down
- Write the outcome you want in plain language.
- Note the assumptions that must be true for that outcome to happen.
- Validate those assumptions quickly with users, data, or both.
- Stress-test the downside and likely misinterpretations before shipping.
- Build competing ideas in parallel when possible, then judge them on evidence rather than debate.
- Decide when you have enough understanding to move, not when uncertainty disappears.
Why it matters: The goal is not perfect process. It is a process rigorous enough to trust, while still moving before timing or relevance disappears.
3) Run weekly execution reviews around outcomes, not activity
- Require each weekly update to include only three lines: Decision, Owner, and Definition of Done (for example: “Decision: ship usage alerts; Owner: the growth PM; Definition of Done: live behind a flag with adoption tracked”).
- Clarify who is Leader Accountable, Responsible, Consult, and Inform (LARCI), then pair that with Objectives, Key Results, and Actions (OKRA).
- At the end of the week, review whether the objective was achieved and whether the key result proved it—not whether a list of actions was completed.
- Keep ownership and decisions visible as work evolves, using systems like GitHub and Slack if that is where the work already lives.
Why it matters: This is a lightweight way to reduce the common drift between code merged and customer outcome shipped in small teams.
4) When driving change, show more than you tell
- Start by asking what you actually want: validation, change, or both.
- Assume challenged beliefs will trigger defensiveness on both sides; that is a human reaction, not proof of bad intent.
- Avoid treating workplace relationships like deep personal bonds; many are situational and will not satisfy a deeper need to feel fully understood.
- Use evidence and working examples to create movement, instead of relying on abstract comparisons or logic alone.
Why it matters: PMs often carry the burden of proof when proposing a better way of working. Showing a better path can be more effective than trying to win the argument outright.
Case Studies & Lessons
1) ChatPRD’s sales agent turned manual founder work into a daily loop
Claire Vo’s agent Sam runs a daily PLG sweep across new signups, checks for company domains, uses Exa people search to find likely decision-makers, sends soft personalized outreach, routes some accounts back to Claire, handles international accounts differently, cleans up the CRM, flags stale deals, drafts customer emails, and runs QBRs. She says it replaced a part-time salesperson who had been doing the work 10 hours per week, and she highlights that the setup is highly tunable as the workflow changes.
Why it matters: This is a concrete example of an agent creating real economic value in a PM-adjacent growth workflow.
How to apply: Pick one recurring revenue or ops workflow with clear inputs, routing rules, and review points before expanding scope.
2) Agent support made a new course viable before extra hiring
Vo says she and her co-instructor built an executive course in Claude Code, then added Sage to project-manage launch prep. Sage tracks the launch timeline, nudges them to post on LinkedIn, ingests research via API, stores it in the repo, and decides where it belongs in the syllabus. She says this let them spin up the first version without hiring an ops person, content manager, or software engineer.
Why it matters: Small teams can use agents to stand up real operating capacity before they can justify full-time hires.
How to apply: Start with coordination-heavy work—timelines, reminders, content filing, and draft generation—where the value of consistency is easy to see.
3) Meta did not ship the feature users asked for most
Julie Zhuo described the post-like-button demand for a dislike button as one of the top user requests. The team still rejected it after thinking through best- and worst-case outcomes and how the feature could be misinterpreted. They eventually pivoted to reactions, which felt more neutral and expressive.
Why it matters: Strong demand is not enough. PMs still need to test whether a feature advances the product’s intent and what negative interpretations it might introduce.
How to apply: When a request is popular, pair request volume with an explicit downside review before prioritizing it.
Career Corner
1) Management skill is becoming product leverage again
Claire Vo’s view is direct: management skills matter more than technical skills for getting agents to work well. Her argument is that leaders already know how to onboard, role-scope, and set people up for success; those same skills transfer to agent systems. Julie Zhuo’s Purpose / Process / People framework points to the same muscle: define success, define how the work gets done, then learn the strengths and weaknesses of the humans or agents doing it.
Why it matters: This is a durable skill set that applies across product leadership, AI delegation, and team design.
How to build it: Start with one internal workflow and practice writing clearer success criteria, tighter role scopes, and better review loops.
2) Senior PMs may need to get more hands-on, not less
Zhuo says she now asks whether she really needs to interrupt another human, or whether she can use the available tools and do it herself. She also expects future teams to be much smaller, with fewer alignment meetings and more execution per person.
Why it matters: Smaller teams raise the execution bar for every individual contributor and leader.
How to build it: Prototype, analyze, or instrument the first version yourself before turning it into a staffed project.
3) Durable careers are built on problems, not just solutions
“fall in love with a problem, not a solution.”
Zhuo argues that the most durable work is anchored in universal problems, while solutions should be treated as hypotheses to validate quickly with users or buyers.
Why it matters: Problems outlast individual tools, interfaces, and product shapes.
How to build it: When describing your work, lead with the underlying user or business problem you are pursuing, then show how you tested candidate solutions.
4) If you want more influence, audit your availability
When leaders repeatedly fail to close loops, teams stop bringing them issues and stop expecting useful follow-through. Over time, that also shrinks the team’s ambition because people learn to avoid bold ideas that need leader support.
Why it matters: Influence decays quickly when the organization learns to route around you.
How to build it: Review a recent week’s commitments and ask your team where your response time or follow-through slowed them down.
Tools & Resources
1) OpenClaw
Vo describes OpenClaw as a strong reference point for agent product design: easy onboarding, self-learning and self-improving behavior, and open-source visibility into how the system works.
Why explore it: It is both a tool and a live product example for PMs learning how agent experiences should be structured.
How to use it: Inspect the docs or code to study onboarding, task scheduling, and other design choices you may want to copy into your own agent workflows.
2) Sundial
Sundial’s premise is to productize analytics expertise with AI so decision-makers can move from roughly 30% understanding to 50-60% understanding and make better, faster decisions.
Why explore it: It is a concrete example of an AI product aimed at decision quality rather than just report generation.
How to use it: Consider it when your bottleneck is understanding ambiguous data fast enough to support a product or business decision.
3) The weekly Decision / Owner / Definition of Done template
A lightweight weekly update format—paired with LARCI and OKRA—forces teams to define ownership, success, and review against outcomes instead of status prose.
Why explore it: It is simple enough to adopt immediately in startup or small-team execution rhythms.
How to use it: Make it the default weekly update format, then review the output against objectives and key results at week’s end.
4) Julie Zhuo’s 3Ps framework
Purpose, Process, and People provide a compact rubric for both team management and agent management: what success is, how the work should happen, and who or what is best suited for it.
Why explore it: It gives PMs one reusable frame for people leadership, AI delegation, and execution design.
How to use it: Run every new project, workflow, or agent setup through the same three questions: what outcome matters, how should the work happen, and who or what should do it.
Product Management
Andrew Chen
Big Ideas
1) AI-native products are defined by workflow dependence, not AI window dressing
Andrew Chen's distinction is simple. Bolted-on AI products tend to revolve around an AI button or chat pane, with no memory beyond one chat, and users often try the feature once and then return to the normal way of using the product. AI-native products show different signals: the workflow is impossible without AI, usage can support $100-$1000 in token spend, the product gets substantially better as base models improve, and users change behavior after trying it.
"core workflow is impossible without AI, not just enhanced by it"
Why it matters: This is a better roadmap filter than asking whether a feature has AI in it. It forces PMs to ask whether AI changes the product's core value and usage pattern, or simply decorates an existing flow.
How to apply:
- Ask whether the user can complete the job without AI. If yes, the feature may be optional rather than core
- Check whether the experience remembers anything beyond a single session
- Watch for reversion: if users try the feature once and go back to the old flow, treat that as a product signal
- Favor concepts that should improve materially as base models improve
2) PM hiring is improving, but opportunity is concentrating around the Bay Area
PM openings are at their highest level in more than three years. But nearly one in four open PM roles are now in the Bay Area, up 50% over the last four years, and more than one in five engineering and design roles are there as well. Remote opportunities continue to decline.
Why it matters: The topline market can improve while many candidates still feel constrained. Geography is becoming a bigger part of access to opportunity.
How to apply: Treat location strategy as part of job strategy. If Bay Area roles are feasible for you, search and network accordingly. If not, assume remote-only filters are excluding a larger share of openings than before.
Tactical Playbook
Use an AI-native review before approving an AI bet
- Write down the workflow you want to change.
- Ask whether the workflow is impossible without AI, or whether AI is simply an add-on to an existing flow.
- Flag concepts that rely mainly on an AI button or a generic chat pane.
- Decide what memory or personalization should persist beyond one chat, since lack of persistence is a warning sign in bolted-on AI experiences.
- Define success as behavior change, not one-time trial. If users revert to the normal app flow, treat that as a weak signal.
- Prioritize concepts that should get substantially better as base models improve, and where usage value can justify meaningful token spend.
Why it matters: This review helps separate genuinely new product workflows from demo-friendly features that do not alter user behavior.
Case Studies & Lessons
1) A 6+ month job search turned around once the PM evidence was explicit
One PM said it took more than six months to land a role, and that quantifying impact, surfacing relevant duties, and showing experience in a small agile team helped. In the same discussion, another candidate with a sales and marketing background plus product experience said getting interviews was still difficult.
Lesson: In a tighter market, adjacent experience is not always enough on its own. The PM-shaped part of the work has to be obvious.
2) E-commerce PMs report heavy competitor imitation
A practitioner note on e-commerce product work says there is extensive copying of competitor flows and product offerings.
Lesson: When a proposal borrows from competitors, say that plainly in review materials so the team can distinguish copied patterns from original hypotheses.
Career Corner
1) Quantified impact remains the clearest interview currency
The strongest practical advice from the hiring thread was to quantify impact, highlight relevant PM duties, and explain experience in a small agile team.
How to apply:
- Rewrite resume bullets around outcomes, not responsibilities
- Make the PM parts of mixed-function roles explicit
- Be ready to describe team size and operating style, since that context was part of what helped
2) Adjacent backgrounds need stronger translation into PM signal
A candidate with a sales and marketing background plus product experience said interviews were still hard to secure.
How to apply: Do not assume recruiters will infer PM readiness from adjacent work. Make product decisions, impact, and collaboration scope explicit in resumes and interview stories.
3) Use Teamblind as a company-specific research tool, not a feed
The shared tactic was to ignore the trending page and search the companies you are interviewing with. That is where users reported finding offer details, interview questions, and work-life-balance opinions.
Why it matters: It turns the platform into a targeted prep source.
Tools & Resources
- State of the Product Job Market: Lenny's full report behind the current hiring signals on PM openings, Bay Area concentration, and remote decline
- AI-native checklist: Save Chen's four tests for roadmap reviews—token spend of $100-$1000 during use, model-driven improvement, impossible-without-AI workflow, and behavior change
- Teamblind company search: Useful for offer details, interview questions, and work-life-balance opinions when you search specific employers rather than relying on the trending page
Aakash Gupta
Tony Fadell
Elena Verna
Big Ideas
1) Prototype literacy is becoming a core PM skill
"Instead of doing some case study and presentation, you need to be ready to build a full blown app as part of the interview."
Elena Verna argues that functional prototyping is becoming standardized across roles, not just PM. She also says turning a PRD into an interactive artifact improves the PRD itself, helps sell the idea faster, and gives engineers and designers a clearer shared vision.
Why it matters: The expected PM artifact is expanding from documents to clickable experiences. In Verna's workflow, the written spec is intentionally kept to a one-pager, with more detail discovered through prototyping.
How to apply: Write the shortest useful spec, prototype it immediately, and use the gaps you find to tighten the hypothesis before engineering starts. Keep engineering in the ideation loop rather than treating the handoff as closed.
2) AI gives PMs leverage unevenly: strongest in critique, synthesis, analysis, and execution hygiene
The Exponent framework splits PM work into vision, strategy, design, and execution. In that model, AI is already useful for customer insight synthesis, AI-moderated interviews, natural-language data analysis, prototyping, meeting agendas and summaries, and critiquing an existing strategy. Verna frames the same pattern another way: AI can do the first 30-50% of baseline work across PRDs, prototypes, and marketing plans, so PMs react and refine instead of starting from a blank page.
Why it matters: The fastest gains are in compressing time-to-insight and time-to-artifact, not outsourcing the hardest judgment calls.
How to apply: Use AI to research, critique, summarize, query data, and draft prototypes. Do not outsource direct customer contact, product vision, or the final strategic bet.
3) North star metrics need a pressure test before they turn into local optimizers
Run the Business offers four meta-questions to pressure-test north star metrics: Would you be proud of the behavior in 18 months? Can the team explain in one sentence how the metric makes a customer's life better? If every team's north star sat on one page, would any of them compete? What happens if you 10x the metric—would that be incredible or terrible?
Why it matters: The framework is explicitly designed to catch anti-patterns before a metric starts driving the wrong behavior or creating cross-team conflict .
How to apply: Run these four questions in your next review. If the customer-value sentence is weak or a 10x outcome sounds bad, treat that as a metric-design problem, not just a reporting issue .
Tactical Playbook
1) Use AI as a strategy critic, not a strategy author
- Create a project with explicit devil's-advocate instructions and tell the model not to be nice .
- Load opinionated strategy best practices into project knowledge, such as course material or a strategy book summary .
- Paste in your strategy and ask for critique .
- Look for concrete issues like an audience that is too broad or a claimed moat that is not actually defensible .
- Rewrite the strategy yourself; AI can critique the bet, but it is not the owner of the bet .
Why it matters: The demo shows critique quality improves when the model is grounded in a specific standard of good work, not a generic prompt .
How to apply: Save this as a reusable review step before leadership reviews. If the output feels generic, add exemplars rather than more vague instructions . A minimal sketch of the setup follows below.
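To make the pattern concrete, here is a minimal sketch of a reusable critic step, assuming an OpenAI-style chat API; the model name, the prompt wording, and the critique_strategy() helper are illustrative assumptions, not details from the source.

```python
# A minimal sketch of a reusable "strategy critic" step, assuming an
# OpenAI-style chat API (the `openai` package). The model name, the
# prompt wording, and critique_strategy() are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CRITIC_PROMPT = """You are a devil's-advocate strategy reviewer. Do not be nice.
Judge the strategy against the best practices below and name concrete issues,
such as an audience that is too broad or a claimed moat that is not defensible.

Best practices:
{standards}"""

def critique_strategy(strategy: str, standards: str) -> str:
    """Return a critique grounded in an explicit standard of good work."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whatever model your team runs
        messages=[
            {"role": "system", "content": CRITIC_PROMPT.format(standards=standards)},
            {"role": "user", "content": f"Critique this strategy:\n\n{strategy}"},
        ],
    )
    return response.choices[0].message.content
```

The `standards` argument plays the same role as loading course material into project knowledge: the critic is only as good as the definition of good work it is given.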
2) Build a weekly insight loop from surveys, NPS, and customer data
- Upload raw survey or NPS data to Claude and ask for the basics first: score, promoter/passive/detractor mix, trends, and segment cuts .
- Add segmentation fields you already have, such as email type, plan type, or usage intensity .
- Check significance where the analysis provides it .
- Turn the output into an executive readout if needed, including an AI-generated deck in Gamma .
- Once you trust the workflow, move from occasional reporting to a recurring cadence .
Why it matters: In the example, work that previously took about a week became fast enough to support weekly reporting instead of quarterly review .
How to apply: Start with one recurring customer metric and three segments. Expand only after you can verify the numbers and logic . The sketch below shows what the first pass can look like.
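As a reference point, here is a minimal sketch of the first-pass NPS computation, assuming a CSV export with a 0-10 score column plus month and plan_type columns; the file and column names are illustrative.

```python
# A minimal first pass on raw NPS data, assuming a CSV with a 0-10 'score'
# column plus 'month' and 'plan_type' columns. File and column names are
# illustrative; swap in your own export.
import pandas as pd

df = pd.read_csv("nps_responses.csv")

def nps(scores: pd.Series) -> float:
    """NPS = % promoters (9-10) minus % detractors (0-6), on a -100..100 scale."""
    return round(100 * ((scores >= 9).mean() - (scores <= 6).mean()), 1)

print("Overall NPS:", nps(df["score"]))
print("Trend by month:")
print(df.groupby("month")["score"].apply(nps))
print("By segment:")
print(df.groupby("plan_type")["score"].apply(nps))
```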
3) Improve AI data analysis with example pairs
- Let PMs ask data questions in plain English and inspect the generated SQL when needed .
- Expect early errors when schemas are messy or outdated .
- Give the model past natural-language question and SQL pairs as project knowledge .
- Reuse the same exemplar pattern for other PM work, including interview guides and strategy critique .
Why it matters: Rekhi's example is not just faster query writing; it expands the number of questions a PM can realistically ask from a few to all of them .
How to apply: Build a small internal library of approved examples, then graduate from ad hoc queries to scheduled dashboards . The sketch below shows the exemplar-pair pattern.
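Here is a minimal sketch of the exemplar-pair idea: approved question-and-SQL pairs are injected into the prompt so generated SQL follows your schema's conventions. The table names, columns, and build_prompt() helper are illustrative assumptions.

```python
# A minimal sketch of the exemplar-pair pattern: approved question/SQL pairs
# are injected into the prompt so generated SQL follows your schema's
# conventions. Table names, columns, and build_prompt() are illustrative.
EXAMPLE_PAIRS = [
    ("How many weekly active users did we have last week?",
     "SELECT COUNT(DISTINCT user_id) FROM events "
     "WHERE event_time >= CURRENT_DATE - INTERVAL '7 days';"),
    ("What is average revenue per account by plan type?",
     "SELECT plan_type, AVG(mrr) FROM accounts GROUP BY plan_type;"),
]

def build_prompt(question: str) -> str:
    """Assemble a few-shot prompt from the approved exemplar library."""
    examples = "\n\n".join(f"Question: {q}\nSQL: {sql}" for q, sql in EXAMPLE_PAIRS)
    return (
        "Translate the question into SQL for our warehouse, following the "
        "style of these approved examples:\n\n"
        f"{examples}\n\nQuestion: {question}\nSQL:"
    )
```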
4) Make post-mortems blameless but operational
"Work the problem"
- Name the single core reason the launch failed .
- Add three supporting reasons or evidence showing how that problem appeared in data or feedback .
- Use screenshots or other specifics to show what happened .
- Model accountability by taking blame yourself so others can do the same .
- End with process changes for next time, without overreacting to one incident .
Why it matters: This was shared as a practical structure for leadership-facing post-mortems, where the goal is reuse and learning rather than blame .
How to apply: Use the structure as the spine, then connect any survey data or stakeholder feedback back to the core reason and its supporting evidence .
5) When something ships broken, own the conversation without creating a product-vs-engineering culture
PMs are often the first stop for bugs, delays, and quality issues from stakeholders and customers . The discussion emphasizes owning the conversation rather than deflecting, avoiding a product-vs-engineering culture, giving the team credit when things go well, and taking blame when they do not .
Why it matters: Public ownership builds trust and keeps momentum, while quality issues can also signal deeper team problems .
How to apply: Acknowledge the issue, state the recovery plan, protect the team externally, and then use the failure to inspect process or coordination problems internally .
Case Studies & Lessons
1) Onboarding by doing beat onboarding by explaining
One PM spent about 40% of initial development time on a polished onboarding with animations, progress indicators, and tooltips, yet day-1 retention was 21% because users skipped through the flow, reached the main app confused, and left . Rebuilding the first-use experience so users performed the core action lifted day-1 retention to 44% .
Lesson: Users can complete onboarding without learning anything. What mattered, according to the follow-up comment, was core-task success on day one, not tutorial completion .
How to apply: Watch the first session, find the first meaningful action, and redesign onboarding around doing that action instead of explaining it first .
2) Faster analysis changed the team's learning cadence
In the NPS demo, a CSV upload produced score summaries, monthly trends, segment comparisons, and statistical significance checks . Because the analysis became much faster, the team could move from quarterly NPS review to continuous weekly reporting and get more fresh insights .
Lesson: AI changes operating cadence when the bottleneck is analysis time .
How to apply: Look for recurring insight work that is currently too slow or too rare, and automate the end-to-end workflow rather than just one step .
3) Spec, prototype, and GTM draft can now happen in parallel
Verna's nonprofit feature demo starts with a ChatGPT tech spec intended to get 30-50% of the way to a usable draft . She then turns it into a one-pager, generates a prototype in Lovable and spends the next couple of hours editing structure, content, and visuals, while ChatGPT works in parallel on ICP, distribution partners like TechSoup, and examples to tear down such as Google and Slack nonprofit programs . She says this gets her to spec, prototype, and GTM thinking in roughly three hours, with engineering pulled into ideation rather than a final handoff .
Lesson: The power is parallel drafting plus human reaction, not expecting a one-shot answer .
How to apply: Run product, UX, and GTM thinking in parallel, then use human taste to edit hard and involve engineering before priorities are locked .
Career Corner
1) The job market is improving, but not evenly
PM openings are at the highest levels seen in more than three years . At the same time, the Bay Area's importance is rising, remote opportunities are declining, and recruiter demand is surging, a leading indicator that hiring demand will be sustained .
Why it matters: Better hiring volume does not mean an easier search if your location or remote requirements are narrow.
How to apply: Broaden your search to include the Bay Area if possible, reset expectations on remote-only roles, and watch recruiter activity as a sign that demand is holding .
2) The skills that still compound are not the ones AI can average away
Verna's non-automated list is direct customer interaction, setting the vision and destination, understanding marketing and distribution as software commoditizes, and building functional prototypes .
Why it matters: Her warning is explicit: if everyone uses AI to choose direction, products converge .
How to apply: Protect time for customer calls and social listening, build a stronger point of view on where the product should go, and learn enough GTM and prototyping to turn that point of view into something concrete .
3) AI-native experimentation is becoming a hiring signal
Verna recommends bringing AI-native employees into teams, often including new grads, because they already treat AI as a normal part of work . She also argues that teams need bottom-up tool adoption and repeated experimentation because model performance changes quickly .
Why it matters: The advantage is not one favorite tool; it is an operating habit of trying, judging, and retrying workflows as the tools improve .
How to apply: Show concrete AI workflows in your portfolio or resume, push for lightweight experimentation on your team, and revisit tasks that did not work a month ago .
4) Senior leadership means more delegation and more accountability
"When the team succeeds, it's their fault. When we fail, it's my fault."
Tony Fadell's management advice is to let go of doing the work yourself, trust the team, and give people room to be creative . The Reddit thread adds the complementary leadership behavior: be the circuit breaker first when something goes wrong .
Why it matters: Advancement is not just better judgment. It is creating space for the team to do great work while absorbing external pressure yourself .
How to apply: Delegate real ownership, resist the urge to re-do the work, and take the first uncomfortable conversation when outcomes disappoint .
Tools & Resources
1) Perplexity Computer for deliverables, not just answers
Aakash Gupta argues that Perplexity's Computer produces finished outputs: research reports with citations, deployed dashboards, cleaned datasets with charts, and launch kits with positioning docs and email drafts . He highlights cloud execution, parallel agents, and persistent memory as the main differences . His example: a 28-page Notion messaging audit across five criteria, benchmarked against Coda and Slite, with per-page recommendations in about 20 minutes .
Why it matters: This is positioned as a tool for bounded PM work where the output itself matters more than chat .
How to apply: Start with a constrained audit or launch-prep task, and use the full guide for six PM use cases, exact prompts, and the prompt spec that Gupta says cuts cost by 60%+ .
2) Reusable Claude projects beat blank prompts
The same set of examples shows three high-value project templates for PMs: a strategy critic with devil's-advocate instructions, a customer-insight workflow that turns CSVs into reports, and a natural-language data analyst that answers plain-English questions with SQL, charts, and tables . The shared prompting rule is to give the model exemplars and a clear definition of good work .
Why it matters: PM leverage increases when the model has project-specific context instead of starting from zero every time .
How to apply: Save one reusable project per recurring workflow and feed each one examples from your own team rather than generic prompting advice .
3) Gamma can turn analysis into an executive-ready readout
In the NPS example, Gamma generated an executive summary deck, selected visuals, and structured the presentation automatically from the analysis .
Why it matters: It shortens the path from raw insight to stakeholder-ready communication .
How to apply: Pair it with a verification step on the underlying analysis so presentation speed does not outrun analysis quality .
4) Keep a north star metric review checklist handy
The four-question pressure test from Run the Business is simple enough to reuse as a standing template in roadmap, OKR, or quarterly business reviews .
Why it matters: It forces teams to connect metrics to customer value and cross-team alignment, not just target movement .
How to apply: Add the four questions as a required review section before approving a new north star .
Strategyzer
One Knight in Product
Product Management
Big Ideas
1) Evidence should be treated as a ladder, not a binary
Strategyzer frames evidence on a 0-5 scale: level 1 is what customers say in interviews or surveys, while stronger levels come from behavior such as clicks, co-creation, purchases, or real-world use. The operating rule is to raise the evidence bar as investment rises .
Why it matters: Honeywell found that some projects that looked mature were still grounded mostly in voice-of-customer inputs. Moving toward deeper behavioral evidence helped teams stop risky projects, reduce R&D waste, and give leaders a better basis for investment conversations .
How to apply: Score evidence by hypothesis, not by enthusiasm. A large number of interviews or surveys is still light evidence if all you have is what people said .
2) The right PM playbook depends on who owns the company
PMs need strong commercial acumen because PE and VC backing create different product environments. In PE-backed companies, the owner is a financial institution with a 3-5 year exit horizon and a value creation plan, which pushes teams toward delivery speed and certainty. In VC-backed companies, founder control and a longer horizon make discovery and experimentation more acceptable .
Why it matters: Process arguments are often context arguments. A discovery-heavy motion that feels normal in one company can feel misaligned in another .
How to apply: Before introducing a framework, clarify the ownership model, time horizon, and tolerance for uncertainty. Then adapt or combine methods rather than importing them whole .
3) AI product strategy is constrained by both economics and trust
"ARPU > Average Inference Cost Per User."
Andrew Chen argues that AI-native consumer apps are still more than 10x away from broad viability in many cases, with monthly ARPU around $2-5 versus $20-50 in token costs for AI-heavy apps. He also points to global consumer economics, rising user expectations, and the need for small models or new mobile hardware as additional constraints . In parallel, Julie Zhuo says AI analysis agents are still not trustworthy enough for wide business use because the hardest 15-30% is selecting reliable metrics, adding business context, framing the problem well, and learning from prior outcomes .
Why it matters: These two notes point to a narrower near-term opportunity set: higher-ARPU use cases and workflows where humans still close the trust gap. That helps explain why many teams focus on prosumers and productivity products that can support $100s to $1000s of ARPU .
How to apply: Model inference economics early, and keep human review in any workflow where metric choice, context, or scoping determines decision quality .
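The arithmetic behind the 10x claim is easy to run yourself; the sketch below is illustrative, using only the ranges cited above, and viability_gap() is not a formula from the source.

```python
# An illustrative check of the "ARPU > average inference cost per user" rule,
# using the ranges cited above. viability_gap() is not from the source.
def viability_gap(monthly_arpu: float, monthly_inference_cost: float) -> float:
    """How many times costs exceed revenue per user; viable at or below 1.0."""
    return monthly_inference_cost / monthly_arpu

print(viability_gap(5, 20))    # best end of the cited ranges: 4x away
print(viability_gap(2, 50))    # worst end: 25x away
print(viability_gap(3.5, 35))  # midpoints: the roughly 10x gap Chen describes
```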
4) Competitive intelligence is a differentiation input, not a copying habit
Competitive intelligence is described here as an undervalued part of the product stack. The goal is not to copy competitors but to understand what you must differentiate from, while also borrowing inspiration from adjacent categories such as using Revolut's UX patterns as reference points for a darts app .
Why it matters: Teams cannot articulate differentiated value if they only study themselves .
How to apply: Review both direct competitors and adjacent-category exemplars on a regular cadence, and log what each one teaches you about positioning, UX, and unmet gaps .
Tactical Playbook
1) Run an evidence ladder before you fund the next bet
- Write down the key unknowns across customer, value proposition, business model, and execution .
- Treat interviews and surveys as early, light evidence about what customers say.
- Move next to behavioral tests such as brochures or landing pages with CTAs, co-creation workshops, Wizard of Oz tests, and pre-sales .
- Raise the required evidence level as spending rises .
- End each review by naming what is still unknown before authorizing more build or GTM investment .
Why it matters: This keeps teams from confusing sample size with evidence quality. Even a large number of interviews stays in the same evidence category if nobody has done anything yet .
How to apply: Make evidence level and next experiment part of every opportunity review, not an optional appendix .
2) Replace slide theatre with artifact-based leadership reviews
- Give teams structured pre-work: a customer ecosystem map, customer profile, value scenes, a simple 3-year business model sketch, and a list of known unknowns .
- Put the work in a shared platform and comment asynchronously as teams go, rather than waiting for the end to dump feedback .
- Ban custom slides for the final review. Honeywell teams had 2.5 minutes to present the big idea, customer and evidence, value proposition and evidence, business model and evidence, and remaining unknowns .
- Have leaders question the evidence, not just the technology .
Why it matters: Honeywell said this broke work into digestible steps, reduced shadow work, sped up feedback, and created a shared language between teams and leaders .
How to apply: If portfolio reviews still revolve around polished decks, test one cycle with shared artifacts plus a time-boxed evidence review and compare the quality of discussion .
3) Build an early-warning loop for negative feedback
- Centralize feedback logs and meeting notes; manual pattern-finding gets slower as volume grows .
- Use automation to surface sentiment, recurring themes, and clearly negative alerts instead of waiting for a human to notice them .
- Treat those signals as proactive inputs, especially in fast-moving projects where delays affect schedules or stakeholder alignment .
Why it matters: One commenter summarized the current state bluntly: most teams catch negative feedback late unless they actively look for it .
How to apply: Even a simple workflow that flags recurring complaints and obviously negative language is better than relying only on ad hoc manual review .
Case Studies & Lessons
1) Honeywell turned growth reviews into evidence reviews
Honeywell used a playbook to prepare growth-project teams over three weeks, with async feedback on customer maps, value scenes, business model sketches, and known unknowns before a one-day symposium . At the event, teams pitched without slides and leaders pushed on customer, value proposition, business model, and supporting evidence . The reported results were stronger evidence, killed risky projects, reduced R&D waste, faster discovery, and a shared language across teams and leaders .
Key takeaway: If leaders will challenge projects anyway, give both sides a common evidence framework so the conversation does not collapse into technical detail by default .
2) Three experiments show how to match the test to the risk
American Family Insurance used a fake brochure with a CTA at a trade show to see which segment responded, then adjusted the value proposition and marketing accordingly . Fireflies used a Wizard of Oz approach (manual note-taking behind an AI facade) and, after 100 meetings and enough revenue to cover rent, decided there was enough evidence to automate . Tesla moved through competitor research, mashups, landing pages, and pre-sales; the Model 3 reached 325,000 reservations with a $1,000 refundable deposit in its first week, showing a much stronger demand signal than early low-commitment tests .
Key takeaway: Choose the cheapest experiment that answers the next important uncertainty. Do not jump straight from interviews to full build when a CTA, manual service, or pre-sale can answer the question first .
3) The digital darts team chose acceleration over purity
One interim product leader chose to buy rather than build from scratch, acquiring a small company that had already solved part of the problem and speeding up the path forward . The team then grew to 12 people and stayed as one focused squad because the immediate priority was shipping a new software experience on the same timeline as hardware with long manufacturing lead times and a hard deadline . Only after that does the plan shift toward organizing around user-journey stages .
Key takeaway: Team topology should follow stage and constraints. When hardware timelines dominate, a single delivery-focused team can be more useful than a multi-squad model designed for a later stage .
Career Corner
1) The fastest career leverage may come from commercial fluency
The clearest career advice in the set is to stay close to the money: understand the P&L, balance sheet, and how your area affects the broader business . That starts with knowing who really owns the company and what they expect, which is why the PE-versus-VC distinction matters so much for PMs .
Why it matters: The further you are from the commercial conversation, the harder it is to make informed product decisions or influence major tradeoffs .
How to apply: Ask to sit in on planning or finance conversations tied to your area, and map your roadmap to the business model, not just user needs .
2) Hiring signals: framework fluency, startup scars, and domain pull
One product leader looks for book smarts and street smarts: formal exposure to good product practices plus experience figuring things out without much support . He also prefers PMs, designers, and engineers who genuinely care about the domain, arguing that passion makes it easier to feel user pain and go the extra mile .
Why it matters: Adaptability comes from being able to use frameworks without becoming trapped by them, and empathy is stronger when the team actually cares about the product space .
How to apply: If you are early in your career, build both sides deliberately: get formal training, then test yourself in messier startup or scale-up environments .
3) PM tech rounds are screening for systems thinking
A PM interview candidate reported repeatedly failing technical rounds on system design and API deep dives . The practical advice from the thread was straightforward: study the System Design Primer, read gRPC and REST docs, and practice writing fake APIs in a document; the commenter added that they had bombed five interviews before improving, and that hiring remains rough .
Why it matters: In a tougher market, PM interview prep has to cover technical fluency as well as product judgment .
How to apply: Practice explaining API behavior and system design clearly on paper before you try to do it live in an interview .
Tools & Resources
- Strategyzer artifact stack: customer ecosystem map, customer profile, value scenes, a simple 3-year business model sketch, and a known-unknowns list. Use these as a lightweight template pack for opportunity reviews or discovery sprints .
- How Honeywell prioritizes growth projects: a concrete walkthrough of playbooks, evidence levels, and no-slide review mechanics .
- What it actually takes to trust AI: Julie Zhuo's linked essay on why the last stretch of trustworthy AI analysis is difficult .
- PM tech-round study stack: System Design Primer, gRPC docs, REST docs, plus the habit of drafting fake APIs in a doc before interviews .
- Competitor-intelligence dashboards: the standard to aim for is ongoing tracking of competitors and adjacent-category references, not one-off teardown decks; one leader cited building Outfox for this purpose .
Melissa Perri
Big Ideas
1) Real validation still beats simulated certainty
The strongest discovery theme this cycle: PMs should not confuse simulated insight with customer validation. The Mom Test recommends avoiding leading questions, asking about past behavior, and listening more than pitching. In the 'virtual customer' discussion, commenters argued that real validation still comes from customers' willingness to spend time or money, not from synthetic personas or LLM stand-ins.
'Who is this for?'
Why it matters: This is the shortest path to lower product waste and better prioritization. PMs using the Mom Test well can reduce wasted development time and build products people actually want.
How to apply: Start each new idea with a target-user hypothesis, interview on past behavior, then move quickly to prototypes or A/B tests with real users rather than persona debates.
2) Strategy quality is showing up as sharper constraints
Across Viator, Xero, and eBay, the pattern is the same: fewer bets, tighter segments, and clearer differentiation. Viator cut annual 'big bets' from roughly 30 to 3 after years of OKR tightening and reported better progress by doing fewer things. Xero's CPTO argues that serving too many customer segments creates a hodgepodge product, while eBay's turnaround required accepting that it was not Amazon and focusing on its own value proposition.
Why it matters: Many roadmap problems are really strategy-definition problems. Teams slow down when they try to satisfy every stakeholder, every segment, and every competitor at once.
How to apply: Limit top-level company problems, define the segment you serve best, and explicitly explain what you are not trying to be. Then connect each priority to that story so teams can optimize in the same direction.
3) In marketplaces, the flywheel should come before the roadmap
Yelp's product leaders frame two-sided product work around a clear conflict-resolution model and a single marketplace metric: connections between consumers and local businesses. They define the flywheel as the self-sustaining growth mechanism, warn that teams can optimize the wrong thing if they start with revenue instead, and ground demand around concrete needs: consumers care about quality, price, and timing; businesses care about high-intent leads in their service area.
Why it matters: Without a flywheel model, PMs can ship features that move local metrics while weakening the network.
How to apply: Write down how conflicts between the two sides get resolved, pick the metric that best represents a meaningful marketplace match, and use that as the filter for roadmap choices.
4) AI product design is moving toward workflow fit, trust, and visible value
Several notes point to the same AI pattern. One startup founder found that leading with 'AI' hurt conversions, while outcome-led positioning worked better. The same founder also saw better retention from simple agents with obvious weekly value, and even experienced churn when value became invisible. Another founder concluded that standalone AI products can miss product-market fit when customers still want WhatsApp access or in-person reassurance, so founders need clarity on whether they are replacing humans or augmenting them. Xero's CPTO argues that SaaS is shifting toward conversational, insight-oriented interfaces, but pairs that with guardrails and human review for high-accuracy workflows.
'People only care what it does for them.'
Why it matters: AI novelty is not enough. Adoption depends on whether the experience fits existing behavior, preserves trust, and keeps value legible over time.
How to apply: Decide first whether AI is augmenting or replacing a human workflow, design around channels users already trust, and add review loops or guardrails anywhere accuracy matters.
Tactical Playbook
1) A five-step discovery loop before you fund a feature
- Start with a sharp user hypothesis: who is this for?
- Run interviews that avoid leading questions, focus on past behavior, and force you to listen more than you pitch.
- Put one real user in front of the team and let them try to solve real tasks. This is the fastest 'show, don't tell' mechanism in the set of notes.
- Prototype and ask customers directly; if you can, simulate the experience with A/B testing tools before full rollout.
- Treat LLMs and synthetic personas as aids, not proof. The notes are explicit that they do not replace real customer commitment.
Why it matters: It compresses discovery while keeping it grounded in behavior instead of opinion.
How to apply: Use this loop in roadmap planning whenever confidence is being inferred from internal debate instead of external evidence.
2) A prioritization model for large teams: big bets + door types + clear owners
- Narrow annual focus to a small set of company problems. Viator's progression from 30 bets to 3 is the clearest data point here.
- Assign cross-functional pods to those problem spaces, and use lightweight charters for smaller bottom-up work.
- Separate one-way-door decisions from two-way-door decisions. Move fast on reversible experiments; slow down on hard-to-reverse calls like pricing.
- Name a decision owner for each initiative and measure them on adoption, utilization, and whether customers see value.
Why it matters: This keeps teams fast without pretending every decision deserves the same process.
How to apply: In planning docs, add two explicit fields: reversibility and decision owner.
3) How to build influence when you do not control the org chart
- Run a listening tour across engineering, marketing, sales, and other key partners before pushing process changes.
- Earn permission to influence; the Mind the Product interview makes the point that PM influence is not a birthright.
- Explain strategy in a way even the lowest-level PM can connect to priorities, sequence, and tradeoffs; unclear strategy communication destroys trust.
- Teach through evidence: a customer observation session or a company-wide talk can show product's value faster than abstract process language.
- Improve brick by brick, not through big-bang transformation plans.
Why it matters: The notes repeatedly show that PM operating models fail when they are announced before they are understood.
How to apply: If you are introducing discovery, start by creating one visible win that another function can feel immediately.
Case Studies & Lessons
1) Wish: fix marketplace health before chasing growth
When Wish began its turnaround, the problems were basic marketplace failures: poor quality, roughly 30-day shipping, and unvetted merchants. The team closed the marketplace to new merchants, made onboarding invite-only, introduced seller standards and penalties, and pushed shipping toward a 24-hour ship window. Delivery times dropped from 28-30 days to 10 days in some places and 15 days in most markets. NPS moved from -4 to 36, refund rates fell below industry standards, and retention plus average transaction value improved. Only after that did the company shift to phase two: differentiating around discovery shopping and hobbies rather than just low price.
Takeaway: Turnarounds often require sequencing. Fix the trust and operations floor first; differentiate second.
2) A German industrial firm saved six months by doing two weeks of discovery
A traditional Mittelstand company had a polished request-to-delivery conveyor belt, but little visibility into whether shipped work created impact. Its engineering teams behaved more like a service center than problem owners. A two-week discovery sprint on a budgeted feature revealed that the client already had a workable workaround, saving the team about six months of effort. The company then rolled out the new approach gradually; about two years later, more than half the teams were working this way.
Takeaway: Discovery is not a delay to delivery. Sometimes it is the highest-ROI delivery work you can do.
3) Deep tech startup Decentric found traction only after narrowing the use case
Decentric had strong confidential-computing IP but no PM discipline and no clear application focus. Product discovery reframed the problem around finding a use case customers would pay for, and the company narrowed away from many possible industries toward edtech. The outcome, according to the interview, was a successful edtech business working with major European publishers.
Takeaway: Strong technology does not rescue weak problem selection. Discovery is often the mechanism that converts invention into a market.
Career Corner
1) Communication clarity improves when you shorten the first answer, not when you eliminate every pause
One PM described losing flow in behavioral interviews by pausing mid-story to think. The replies added useful nuance: pausing can be positive unless it is constant, mid-sentence, or overly long, so the first step is to get feedback from multiple senior people without priming them. Another suggestion was to explain things as if speaking to a tired 5-year-old: keep the first answer short, then add detail as questions come. A commenter also pointed out that overly terse answers force the audience to fill in gaps themselves.
How to apply: Practice concise first-pass answers, then expand only when asked. ChatGPT can help refine stories to a point, but mock interviews without thoughtful pushback may not surface real problems.
2) If a promotion comes with turnaround expectations, quantify the upside before discussing pay
One PM facing a possible director promotion was being asked to turn around a stagnant product line in a competitive market, with an estimated $20-30M in added annual profit if successful. The proposed negotiation structure was: take the standard 8-10% bump, but ask for an additional proportional reward if a defined 3-year profit goal is met, with no payout if it is missed. A commenter added a more basic step first: benchmark director-level PM roles at similar companies in your area.
How to apply: Before negotiating, write down the expected business impact, the time horizon, and the comparable market rate for the role you are stepping into.
3) AI is changing the boundary between PM and adjacent roles, but expertise still compounds
Sachin Rekhi notes that PMs are starting to use AI for work historically done by researchers, designers, analysts, and marketers. His advice to those disciplines is not to defend the old boundary, but to become the team expert in applying AI well and in defining where human involvement is still needed. His examples are practical: designers using AI prototyping tools produce better outputs than PMs because they bring design expertise, and research teams using AI-moderated interviews let PMs test far more concepts than before.
How to apply: Build AI fluency inside your functional specialty, not apart from it. Rekhi's bottom line is that the AI-fluent are most likely to endure.
Tools & Resources
- momtest.io — a practice resource for learning the Mom Test approach to unbiased customer interviews. Use it when your team needs a shared discovery language before solutioning.
- Optimizely and Split.io — cited as ways to simulate new experiences with real users before full rollout. They are not 'virtual customers,' but they are closer to real validation than persona-only debate.
- Small-team feedback stack check — one founder researching Canny alternatives argued that products like Frill, Featurebase, Hellonext, and Productboard often expand into AI roadmap synthesis and stakeholder dashboards that may be irrelevant for small teams. The useful template here is the question set: do you mainly need to collect and retain feedback, should customers see each other's requests, do you need a public roadmap, and would you pay $9-19/month for that versus using Notion or informal methods?
- Customer empathy kit — Xero's CPTO described a simple but strong operating stack for B2B learning: advisory boards, day-in-the-life shadowing, support exposure, and demo orgs for regular product use. Treat these as ongoing instruments, not one-off research events.
Product Management
Lenny's Reads
Sachin Rekhi
Big Ideas
1) South Star Metrics gives teams a better way to debug green-dashboard/bad-product situations
The framework breaks north-star failure into seven recurring types: detrimental, out-of-reach, incomplete, pressure, inconsequential, nonsensical, and incongruent metrics .
Why it matters: It gives PMs language for cases where a metric improves while customer value, team control, or strategic coherence gets worse. The article’s test for a healthy metric is straightforward: it should have a long enough time horizon, connect customer and business value, account for the full journey, and survive a stress test .
How to apply: Add a monthly scorecard review that asks which south-star pattern, if any, is showing up before you celebrate a win.
2) The new PM bar is AI fluency plus high standards
Strong PMs are expected to know how to use AI across strategy, customer research, data analysis, prototyping, validation, and daily execution, while also knowing each tool’s strengths, limits, and best use cases . But AI fluency alone is not enough: PMs also need product taste and the discipline to refine or abandon AI-assisted work when the output is weak .
Why it matters: The failure modes are already visible. Some PMs use AI for everything and generate low-quality output; others keep their standards but fail to meet the new pace and burn out .
"Learning both becomes critical in this next era of product management."
How to apply: For each part of your workflow, define two things explicitly: where AI should accelerate the work, and what good still has to look like before it ships.
3) In AI markets, shipping velocity is becoming strategy
One analysis of Claude releases counted 74 launches in 52 days across four parallel surfaces: developer tools, desktop automation, API/infrastructure, and models/platform .
Why it matters: The takeaway is not just that one company shipped a lot. It is that PMs can miss the real competitive signal if they compare point features instead of the rate of improvement across teams. The post argues that this creates a compounding gap for users who start building systems on top of fast-moving surfaces early .
How to apply: Track competitors with a shipping calendar, not a quarterly memory. Include dates, features, and which team shipped what so you can see operating model changes early .
Tactical Playbook
1) Run a 5-step metric stress test before locking a north star
- Pair the main metric with a customer-experience guardrail and a single view of qualitative signals, not just the quantitative dashboard .
- Build a metrics ladder so the team owns a lever it can directly move, then prove the causal link to the business outcome .
- Map the end-to-end journey and use the rule "own one, watch all," ideally through one shared dashboard and one funnel-health owner .
- Protect future capacity with an explicit split such as 70/20/10 so urgent quarterly work does not crowd out discovery, technical investment, or exploration .
- Add sanity constraints and surface cross-team tensions on one page so optimization does not create absurd outcomes or metric conflict .
Why it matters: This sequence directly addresses the most common failure modes in the South Star taxonomy, from customer harm to team misalignment . One especially useful warning: silence is not always a good sign; customers may have simply given up and moved on .
2) If the org feels chaotic, document the strategy that is currently living in people’s heads
A community response to a post-merger B2B SaaS situation with no clear strategy, roadmap, or role boundaries recommends a simple recovery pattern :
- Draft a 1-2 page product and GTM strategy from what you have observed and researched .
- Review it with stakeholders using a calibration question: Where does this line up, and where am I off?
- Write a one-page role definition for PMM responsibilities and priorities to reduce PM/PMM overlap .
- Narrow the ICP and tie work to real outcomes instead of shipping noise for its own sake .
- Research competitors and customer pain points, then build small proof points that create traction even in a noisy environment .
Why it matters: The alternative described in the thread is familiar: fragmented ICP, low retention, antiquated software, and a feature cadence with little customer impact .
3) Use builder feeds for competitive intelligence
Instead of waiting for polished changelogs, one PM tactic is to map a competitor’s releases by following the feeds of the people actually shipping, then logging dates, features, and team attribution in a calendar .
Why it matters: It helps you spot whether a competitor is shipping in parallel across multiple surfaces and whether teams are blocked by interdependencies .
How to apply: Make the calendar a standing artifact in quarterly planning. Look for repeated themes, acceleration by surface, and which capabilities appear to be compounding.
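If a spreadsheet feels too loose, the calendar can live as a tiny script; the sketch below is illustrative, and the sample rows are invented for the example, not real release data.

```python
# An illustrative shipping calendar as data: one row per release with date,
# surface, feature, and team, then a count of releases per surface per month.
# The sample rows are invented for illustration, not real release data.
from collections import Counter
from datetime import date

releases = [
    (date(2026, 1, 6),  "developer tools", "CLI update",     "tools team"),
    (date(2026, 1, 9),  "models/platform", "new model tier", "platform team"),
    (date(2026, 1, 14), "developer tools", "SDK release",    "tools team"),
]

pace = Counter((d.strftime("%Y-%m"), surface) for d, surface, _, _ in releases)
for (month, surface), count in sorted(pace.items()):
    print(month, surface, count)  # acceleration by surface shows up here
```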
Case Studies & Lessons
1) Anthropic’s Claude release cadence is a lesson in parallel execution
The 74 releases in 52 days were split across developer tools (28), desktop automation (15), API/infrastructure (18), and models/platform (13), with the explicit observation that teams were shipping in parallel rather than waiting on one another .
Lesson: Competitive advantage can come from the operating system behind the roadmap, not just the roadmap itself.
Apply it: When you benchmark competitors, assess how many product surfaces are improving at once and whether your own org design is creating unnecessary dependencies.
2) Fintech users are shifting from theoretical upside to realized value
A discussion in r/ProductManagement describes a move away from flashy reward percentages with complex hurdles toward products that deliver accessible value, seamless cash-out, and reward structures aligned with how users already spend .
Lesson: If the effort to unlock value exceeds the value actually realized, the product is creating UX friction, not delight .
Apply it: In pricing, rewards, and activation design, test the realized user-value path—not just the headline number shown in marketing.
3) A metric win can still be a product loss
The South Star framework uses examples like forced Windows Update reboots or rising ad load: adoption or impressions go up, but the customer experience gets worse .
Lesson: A green metric is not enough if support complaints rise, renewals get harder, or time-to-value deteriorates .
Apply it: Treat qualitative signals and renewal health as first-class evidence before scaling a metric-winning change .
Career Corner
1) The PM market is improving on openings, but the search is still hard
TrueUp-based analysis tracking more than 9,000 tech companies says there are over 7,300 open PM roles at tech companies globally, up 75% from the early 2023 low and nearly 20% year to date—the highest level since 2022 . The same analysis argues the broader data is telling a growth story despite 184 tech layoff rounds affecting 57,606 people so far in 2026, with big-tech headcount flat or slightly up and hiring outpacing layoffs overall .
The composition of demand is shifting. PM demand is now 1.27x design roles after flipping in mid-2023 , AI PM openings stand at 1,135 and are up 465% from the low , more than 23% of open PM roles are in the Bay Area , and the top PM job locations called out were the SF Bay Area (1,442), Remote US (864), and NYC (673) . At the same time, remote-optional PM roles have fallen to 25% from a 35% peak .
There is still a real disconnect between opening counts and lived experience. Lenny notes that more openings do not automatically mean faster hiring , while Reddit commenters point to applicant oversupply, ghost jobs, internal transfers, and roles that look open but are not truly available to external candidates .
How to apply: Treat the market as better than 2023 on demand signals, but tighten your search around location flexibility, AI-adjacent roles, and whether a posting is actually budgeted and open to external hiring.
2) The skill stack to invest in is clear: AI fluency plus standards
Rekhi’s framing is useful for career planning: knowing AI tools and where they fit in the workflow is valuable, but it only compounds when paired with strong product taste and a willingness to reject weak output .
How to apply: Build evidence of both. Show how you use AI to speed up research, analysis, or prototyping, and show where you improved, rewrote, or discarded low-quality AI output rather than shipping it.
3) For early-career PMs, structured entry points still matter
One hiring insight from the Reddit discussion is that hiring college grads straight into PM roles is generally a bad idea unless the setup is closer to an APM or rotational program with oversight .
Why it matters: The missing piece is usually not raw intelligence. It is the informal influence, coaching, and exposure needed to operate effectively as a PM .
How to apply: If you are early in your career, optimize for mentorship and role structure, not just the PM title.
Tools & Resources
- South Star Metrics, Revisited — a diagnostic toolkit for seven metric anti-patterns, with spotting signals and fixes
- State of the product job market in early 2026 — the full PM market breakdown, plus job-search resources at the end
- TrueUp PM roles and TrueUp jobs — browsable PM and tech-role listings across the dataset used in Lenny’s analysis
- AI Productivity and Mastering Product Management — Sachin Rekhi’s recommended courses for building AI fluency and product standards
- Claude Dispatch Guide for PMs, Claude Cowork Guide for PMs, Claude Code: The Complete PM Guide, and The Self-Improving Claude AI System — practical guides for PMs experimenting with Claude workflows while the platform is shipping quickly
Product Growth
andrew chen
Sachin Rekhi
Big Ideas
1) AI PM is splitting into clearer lanes
AI PM roles now break across two axes: traditional PMs adding AI features versus AI-native PMs building products where AI is the product, and application / platform / infra layers in the stack .
- What the market looks like: Traditional PM with AI features is 80% of roles, while AI-native PM is 20%. The traditional category has 4x more open roles.
- Where the technical bar rises: Application PMs account for 60% of roles, platform PMs 30%, and infra PMs 10%; the deeper the layer, the harder the technical bar .
Why it matters: Resume positioning, interview prep, portfolio choices, and target companies change depending on which lane you choose .
How to apply: Pick one role type and one stack layer before you start building projects or rewriting your resume. If you are transitioning from a traditional PM background, application roles are the clearest entry point .
2) Good AI product strategy starts with saying no
Aakash Gupta’s decision rule is simple: use AI for pattern recognition in complex data, prediction from historical data, and personalization at scale. Prefer heuristics or rules when explainability is non-negotiable, clear domain rules exist, data is limited, or speed matters more than sophistication.
"The best AI PMs know when to say no to AI. That judgment is more valuable than knowing how to build a RAG system."
Why it matters: Teams often over-apply LLMs to problems that would be faster, cheaper, and more reliable with rules or simpler ML approaches .
How to apply: Treat whether a problem should use AI at all as the first product decision, not the last. If the answer is yes, match the technique to the job: traditional ML for structured prediction and explainability, deep learning for image/video/audio tasks, and GenAI for conversational, generative, or synthesis-heavy work .
3) Non-AI-native startups are now making portfolio-level strategy calls
Andrew Chen notes that many non-AI-native startups funded in the 2020-2025 window are deciding whether to reinvent the product to be AI-native, pivot toward AI, or use AI in the back office and ride it out. His warning: opportunity cost is the hardest thing to calculate, and the most dangerous startups may be the ones with just enough revenue to keep going .
Why it matters: This is no longer just a feature-roadmap question. It is a company-level product strategy question .
How to apply: In annual planning, force an explicit comparison between the cost of reinvention, the cost of a pivot, and the cost of standing still .
Tactical Playbook
1) A practical sequence for building AI features
- Choose workflow or agent first. Use a workflow for predetermined, deterministic sequences. Use an agent when the system needs to make decisions, reason, act, and learn across steps .
- Start with prompts and examples. System prompts set behavior; few-shot examples show the model what good and bad outputs look like. The source notes that teams can double response quality by adding 3-5 strong examples instead of more instruction text .
- Engineer context deliberately. Separate immediate, session, and knowledge context, and load only what the task actually needs .
- Use RAG before fine-tuning. For enterprise or domain-grounded answers, chunk documents, convert them into vectors, store them in a vector database, retrieve the nearest matches, and pass those chunks into the LLM .
- Escalate in the right order. Optimize prompts, then context engineering, then RAG, and only then consider fine-tuning. Gupta’s claim is that 80% of use cases are solved by RAG .
Why it matters: It gives PMs a build order that avoids premature complexity and keeps the team focused on the highest-leverage fixes first .
How to apply: Turn these five steps into your default review checklist for new AI features. The sketch below shows the retrieval steps in miniature.
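For orientation, here is a minimal end-to-end RAG sketch following the sequence above, assuming an OpenAI-style embeddings and chat API, with an in-memory numpy index standing in for a real vector database; the chunk size, model names, and top_k are illustrative.

```python
# A minimal RAG sketch following the sequence above, assuming an OpenAI-style
# embeddings + chat API. An in-memory numpy index stands in for a real vector
# database; chunk size, model names, and top_k are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()
documents = ["<your product or domain documents here>"]  # placeholder

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# 1) Chunk documents (naive fixed-size chunks) and 2) convert them to vectors.
chunks = [doc[i:i + 800] for doc in documents for i in range(0, len(doc), 800)]
index = embed(chunks)  # 3) in production this would live in a vector database

def answer(question: str, top_k: int = 4) -> str:
    # 4) Retrieve the nearest chunks by cosine similarity.
    q = embed([question])[0]
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    context = "\n---\n".join(chunks[i] for i in np.argsort(sims)[-top_k:])
    # 5) Pass the retrieved chunks into the LLM alongside the question.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": f"Answer using only this context:\n{context}\n\nQ: {question}"}],
    )
    return resp.choices[0].message.content
```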
2) How to set up a parallel AI workbench with Claude Dispatch
- Configure desktop first. Set up Cowork on desktop with the connectors you actually use, such as Gmail, Notion, and Slack, and keep the desktop awake .
- Start work from mobile. Open the Claude mobile app, use the Dispatch tab, and ask it to run a Cowork task .
- Give file access in a usable way. Grant folder access by describing folders naturally or by using shortcuts; start with the workspace that contains your CLAUDE.md and knowledge files .
- Load your rules before delegating. Ask Dispatch to read your CLAUDE.md before it creates subtasks so the instructions it writes are sharper .
- Solve file transfer once. Sync the Cowork workspace folder with Google Drive so files move automatically between desktop and phone .
- Run tasks in parallel. From one mobile thread, start multiple independent task sessions, check progress, redirect each one, and bridge context only when needed .
Why it matters: The setup matches how PMs actually work across multiple parallel workstreams rather than forcing one-task-at-a-time behavior .
How to apply: Use it for work that benefits from breadth and iteration while you are away from your desk: competitor tracking, research synthesis, stakeholder drafts, and visual iteration .
Case Studies & Lessons
1) A 48-hour Dispatch test suggests AI can change day design, not just task speed
In one 48-hour experiment, the author directed 60+ task sessions from a phone while producing competitor summaries, comparison tables, sponsor pages, gap analyses, and multiple infographic iterations . The reported split was roughly 25 minutes of human direction versus 3+ hours of parallel Claude execution . The author’s summary of the work split: 90% human thinking, 100% human takes and opinions, and 90% Claude research and formatting.
"Use AI to amplify your thinking, not to replace it."
Why it matters: The lesson is not just faster output. It is that async direction from a phone can reshape how a PM structures the day .
How to apply: Keep judgment, prioritization, and opinion with the PM; let AI take the first pass on research, drafting, and formatting .
2) Fast AI prototypes still miss the work that makes a product usable
Sachin Rekhi argues that AI prototyping is easy to start and hard to master. His critique of many one-prompt prototypes is specific: they may look impressive at first, but often do not match the design of the existing product, lack meaningful differentiation, and fail to master the core workflows. His response is an AI Prototyping Mastery Ladder with 15 essential skills.
Why it matters: Speed to a functional demo can hide whether the prototype is actually good product work .
How to apply: Review prototypes against three gates before you get excited: design fit, differentiated value, and quality on the core workflow .
Career Corner
1) The best AI PM entry path is narrower than it looks
For PMs trying to break into AI, the highest-volume lane is still traditional PM with AI features, which represents 80% of roles and roughly 4x the openings of AI-native roles . Within the stack, application PM roles are 60% of the market and are described as the easiest entry point for someone moving from a traditional PM background .
Why it matters: You do not need to target the hardest, deepest roles first to get into AI PM .
How to apply: If you are transitioning, aim first at traditional-plus-application roles, then deepen toward platform or infra once you have shipped AI work .
2) Hiring managers want shipped products and a portfolio that proves range
Gupta’s advice is to build products, not projects: launch, get real users, and learn from what breaks . He recommends three portfolio artifacts with real users:
- a product solving a real problem you have
- an agent that demonstrates goal-oriented reasoning
- a RAG system grounded in a domain you know well
Why it matters: This portfolio shows both general product execution and AI-specific judgment .
How to apply: Replace tutorial clones with artifacts that show users, failure modes, fixes, and product decisions .
3) Evals and company environment are becoming career signals
Gupta frames AI evals in a simple structure: inputs, a task that generates outputs, and a scoring function from 0 to 1. He also says the AWS AI Practitioner certificate can complement hands-on work, but certification alone is not enough . And he highlights that different company cultures train different PM muscles: Amazon emphasizes writing and customer-backwards docs, Meta emphasizes experimentation, and Netflix emphasizes autonomy .
Why it matters: PM candidates increasingly need to show production thinking and to choose environments that develop the skill they want most .
How to apply: Add eval design to your portfolio, pair any certification with shipped work, and be intentional about the PM culture you want to learn in .
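Gupta's inputs/task/scorer structure fits in a few lines; the harness below is an illustrative sketch, with exact_match() standing in for whatever scoring function your eval actually needs.

```python
# An illustrative harness for the eval structure described above: inputs, a
# task that generates outputs, and a scoring function returning 0-1.
# exact_match() is a stand-in for your real scorer.
from statistics import mean
from typing import Callable

def exact_match(output: str, expected: str) -> float:
    """Simplest possible scorer: 1.0 on an exact (case-insensitive) match."""
    return 1.0 if output.strip().lower() == expected.strip().lower() else 0.0

def run_eval(golden_set: list[tuple[str, str]],
             task: Callable[[str], str],
             score: Callable[[str, str], float] = exact_match) -> float:
    """Run the task over (input, expected) pairs and return the mean 0-1 score."""
    return mean(score(task(x), expected) for x, expected in golden_set)

# Usage: run_eval(golden_set, task=my_summarizer)
```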
Tools & Resources
- AI PM at Netflix, Amazon and Meta - Here’s How to Become an AI PM (Fundamentals + Job Search) — a useful role taxonomy, AI decision framework, and job-search roadmap for PMs moving into AI
- The Claude Dispatch Guide: 48 Hours Running AI Agents From My Phone — practical setup, workflow examples, and lessons from running PM tasks in parallel across phone and desktop
- Cowork on your desktop — the prerequisite setup guide before using Dispatch
- The AI Prototyping Mastery Ladder — a deeper resource on the 15 skills Rekhi says matter for moving from flashy prototypes to product-quality outputs
- RAG vs fine tuning guide — helpful if your team is comparing prompt optimization, context engineering, RAG, and fine-tuning
- Claude surface selection: use Dispatch for mobile orchestration of desktop tasks, Channels for bidirectional and scheduled work inside active sessions, and Web Sessions for remote coding or prototyping
- Knowledge layer pattern: store CLAUDE.md plus templates, workflows, and knowledge files in a GitHub repo so the system compounds across surfaces; the claim is that PMs who build this layer can ship at 5x the pace of ad-hoc users