Big Ideas
1) In an AI-commoditized world, Product Sense becomes the career moat
Shreyas Doshi argues that as AI becomes embedded across product work (discovery, design, prototyping, coding, testing, deployment, analytics, feedback, competitive analysis, GTM, etc.), the specific tools you use will matter less over time—tools “commoditize,” and tool choice won’t be a durable personal advantage.
The differentiator shifts to the human judgment applied on top of AI outputs—what he labels Product Sense. He breaks Product Sense into five component skills:
- Strong empathy (needs beyond what AI has already analyzed)
- Excellent simulation skills (future possibilities based on domain/tech/competition/customers/users)
- Stellar strategic thinking (segments + differentiators)
- Great taste (choose what’s optimal and explain why)
- Creative execution (conceive unique solutions competitors won’t)
He frames this as a high bar that many product people may struggle to meet.
Why it matters: If AI equalizes execution throughput, advantage concentrates in judgment: picking the right problems, seeing tradeoffs, and improving AI-generated inputs/outputs.
How to apply (weekly loop):
- Pick one recurring decision type (e.g., prioritization, positioning, UX tradeoffs).
- Use AI to generate options (not decisions), then explicitly practice the five skills: empathize, simulate, strategize, choose (taste), and propose a distinctive execution path.
- Write down what you improved beyond the AI output (your judgment delta).
2) AI is compressing delivery cycles—PM work risks becoming the bottleneck
Björn Schotte highlights a “paradox”: engineering has become “10x faster” from 2019 to 2025 while product management has sped up only “1.2x,” making PMs the bottleneck. He also describes a landscape split: 70–75% of teams are traditional, 20–25% hybrid, and 4–5% AI-native.
He argues AI-native teams connect discovery, validation, and delivery into a continuous loop, with AI generating tests, deploying, measuring, and reporting.
Why it matters: If building gets radically faster, the failure mode becomes “shipping the wrong stuff faster” (see Torres below) rather than being blocked by implementation.
How to apply (start small):
- Pick one workflow where signals already exist (errors, user signals, customer emails, competitor monitoring).
- Create a daily or weekly AI-generated briefing that aggregates these signals into a short ranked list for human review (a minimal sketch follows this list).
- Make the human step explicit: review, reject, label, and sequence work (don’t auto-ship decisions).
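A minimal sketch of the briefing step, assuming signals have already been exported into a common shape; the `Signal` type, the scoring heuristic, and `buildBriefing` are all illustrative names, and a model-generated summary would slot in where the ranked list is formatted:

```typescript
// Illustrative sketch: aggregate signals, rank them transparently, and format
// a short briefing for human review. Nothing here ships decisions automatically.

type Signal = {
  source: "errors" | "user-signals" | "customer-email" | "competitor";
  title: string;
  frequency: number;   // how often the signal appeared in the review window
  severity: 1 | 2 | 3; // hand-labeled or heuristic impact
};

// A transparent score so reviewers can see why an item ranked where it did.
const score = (s: Signal): number => s.frequency * s.severity;

function buildBriefing(signals: Signal[], topN = 10): string {
  const ranked = [...signals].sort((a, b) => score(b) - score(a)).slice(0, topN);
  const lines = ranked.map(
    (s, i) => `${i + 1}. [${s.source}] ${s.title} (score ${score(s)})`
  );
  // The human step stays explicit: review, reject, label, and sequence.
  return ["Signal briefing — for human review only:", ...lines].join("\n");
}

console.log(
  buildBriefing([
    { source: "errors", title: "Checkout 500s spiking", frequency: 42, severity: 3 },
    { source: "customer-email", title: "CSV export requested", frequency: 7, severity: 2 },
  ])
);
```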
3) Operating models: aim for minimally viable consistency, not blanket standardization
John Cutler frames operating models as doing “8 jobs” regardless of context (value architecture, discover/prioritize, align capacity, route escalations, support execution, assess impact, circulate insights, provide financial/operational oversight, shape capacity).
In parallel, his Substack post argues for Minimally Viable Consistency (MVC): the fewest consistent concepts/terms needed to operate, while preserving beneficial local variation. He warns that widely known frameworks (e.g., OKRs) often hide wildly different implementations—and that variation isn’t inherently bad.
Why it matters: AI adoption can tempt orgs into adding more process (or “consistency mechanisms”) to manage speed and change—but embedded rules rarely disappear.
How to apply (design MVC like a scaffold):
- Identify the specific risk you’re trying to reduce if something isn’t consistent.
- Prefer lighter nudges (templates, defaults, shared artifacts) before mandates.
- Add an explicit reassessment date; plan how you’d remove the rule later (a minimal sketch follows this list).
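A sketch of what a consistency rule could look like when treated as scaffolding with a planned removal; all field names here are mine, not Cutler’s:

```typescript
// Illustrative shape for a "temporary consistency" rule, plus a helper that
// surfaces rules whose reassessment date has passed so they don't become permanent.

type ConsistencyRule = {
  name: string;
  riskIfInconsistent: string; // the specific risk this rule reduces
  mechanism: "template" | "default" | "shared-artifact" | "mandate";
  reviewBy: Date;             // explicit reassessment date
  removalPlan: string;        // how you'd retire the rule later
};

function dueForReview(rules: ConsistencyRule[], today = new Date()): ConsistencyRule[] {
  return rules.filter((r) => r.reviewBy <= today);
}
```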
4) AI can push teams back into “feature factory” mode—counter with discovery and alignment
Teresa Torres warns that “AI features dominating roadmaps” can lead teams back to feature factory behavior:
“All we are doing is shipping the wrong stuff faster.”
She argues you can’t win opinion battles with stakeholders; you can bring information they don’t have (customer interview insights, assumption-test data, patterns in the opportunity space).
Hiten Shah offers a drift diagnostic: if you ask five leaders what the company does and get five different answers, the company is drifting—and roadmap debates turn into arguments.
Why it matters: Faster delivery increases the cost of misalignment and weak discovery.
How to apply:
- Start roadmap discussions with shared outcomes (not solutions).
- Continuously “show your work” so decisions are less about opinions and more about evidence and reasoning.
- Use drift checks: periodically ask leaders to explain what the company does; treat divergence as an upstream problem to fix before prioritization fights.
5) Accessibility is both a product quality discipline and a go-to-market requirement
Konstantin Tieber frames disability as a mismatch between individual capacities and environmental demands, and highlights categories of impairments (visual, auditory, motor, cognitive), including situational and temporary constraints. He points to WCAG’s four principles (Perceivable, Operable, Understandable, Robust) as a practical compliance checklist.
He also connects accessibility to sales: enterprise buyers may require a VPAT (Voluntary Product Accessibility Template) or ACR (Accessibility Conformance Report) documenting WCAG conformance.
Why it matters: Accessibility expands reachable users and reduces exclusion by default; it’s also increasingly tied to procurement expectations and compliance workflows.
How to apply:
- “Shift left”: challenge UI concepts early (e.g., drag-and-drop) with “How do I operate this with a keyboard?”
- Build with semantic HTML—avoid divs-as-buttons (a minimal sketch follows this list).
- Test with keyboard + screen readers (e.g., VoiceOver) as part of release validation.
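To make the semantic-HTML point concrete, here is an illustrative TypeScript/DOM comparison; the `save` handler is a stand-in, and the point is that a native `button` provides for free everything the div version must reimplement by hand:

```typescript
// Native element: focusable, announced as a button, activates on Enter/Space.
const good = document.createElement("button");
good.textContent = "Save";
good.addEventListener("click", save);

// Div masquerading as a button: invisible to keyboard and screen-reader users
// unless all of this is added by hand — and teams routinely forget some of it.
const bad = document.createElement("div");
bad.textContent = "Save";
bad.setAttribute("role", "button"); // announce as a button
bad.tabIndex = 0;                   // make it focusable at all
bad.addEventListener("click", save);
bad.addEventListener("keydown", (e) => {
  if (e.key === "Enter" || e.key === " ") {
    e.preventDefault(); // stop Space from scrolling the page
    save();
  }
});

function save(): void {
  // stand-in for the real action
}
```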
Tactical Playbook
1) A stakeholder-management workflow that replaces opinion battles with evidence
Torres’ tactics are structured and repeatable:
- Start with shared outcomes (not solutions).
- Use an opportunity solution tree as a stakeholder-management tool to visualize options and assumptions (a minimal data sketch follows below).
- Invite contribution with: “Did we miss anything?”
- Share assumption tests and results, not only conclusions.
- Show your work continuously—avoid “big reveals.”
Why it works: It turns stakeholder conversations into joint sense-making, anchored in information stakeholders typically don’t have direct access to.
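One way the tree could be represented as a shared artifact—a representation choice of mine, not Torres’ canonical format:

```typescript
// Illustrative shape: outcome → opportunities → solutions → assumption tests.

type AssumptionTest = { assumption: string; result?: string }; // share results, not just conclusions

type Solution = { idea: string; tests: AssumptionTest[] };

type Opportunity = {
  need: string;            // a customer need, pain point, or desire
  children: Opportunity[]; // sub-opportunities
  solutions: Solution[];
};

type OpportunitySolutionTree = {
  outcome: string; // the shared outcome the discussion starts from
  opportunities: Opportunity[];
};
```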
2) Use AI where it reduces collaboration overhead—protect high-context collaboration
Cutler’s heuristic: some work is “transactional” but forced into collaboration (meetings that should have been a doc review), and AI can help by sharing context and reducing friction. But there’s also work that should be collaborative and becomes transactional due to busyness; freeing time via AI should make room for deliberate collaboration.
He also warns that AI is weaker for certain research question types: it can be strong for definitional questions but tends to produce explanations too eagerly for explanatory questions (“it wants to please you”).
Step-by-step:
- List your team’s recurring collaborative moments.
- Tag each as either (a) transactional work forced into collaboration or (b) truly high-context collaboration.
- Automate (a) first (e.g., segment-specific release-note reframes) so time returns to (b).
3) Speed without sloppiness: apply rigor to wins, not just losses
Cutler flags a common management trap: people over-index on “good news,” stop applying rigor to wins, and start relying on luck.
Step-by-step:
- After a “win,” run the same review you’d run after a miss: what worked, what was luck, what to repeat.
- Capture learnings into a lightweight shared artifact (so you don’t lose the insight in celebration mode).
4) If you’re overwhelmed, design “lanes” (vectors for meaningful hard work)
Cutler’s “lanes” concept: teams need viable lanes with the right challenge/progress balance; when passionate people have “no vectors for hard work,” they invent work.
Step-by-step:
- Define 1–3 lanes per team (not per person) with clear boundaries and intended outcomes.
- Audit current work: remove or downgrade initiatives that don’t fit a lane.
- Re-check lane viability monthly—adjust challenge level and clarity.
Case Studies & Lessons
1) When the environment drives the outcome more than the product: an Airbnb analogy
A Reddit post describes two similar Airbnb listings (photos, reviews, price) with different booking outcomes; the winner was surrounded by 15–20 nearby restaurants/cafes/bars, while the other was in a quiet residential area. The host can optimize the listing, but not the surrounding ecosystem, even if the interface looks identical.
Takeaway: Sometimes your “product” competes on the broader experience system—not just on-screen features.
2) Retention dropped because value and pricing didn’t match (mobile gaming)
Laura Teclemariam describes launching a “Modifications” feature (microtransactions ~$1–$5) and seeing retention drop after v2 because the feature’s pricing didn’t match the value it delivered. She adjusted pricing structures to better align value and price.
Takeaway: Retention problems can be value-to-price mismatches, not just UX issues.
3) “High-quality MVPs” and pixel-level rigor in animation production
Teclemariam compares animation development to product development: storyboards as prototypes, animatics as MVPs, with a higher quality bar at the MVP stage (less tolerance for “ugly baby” shipping). She also highlights editorial rigor over details (every moment/pixel) as analogous to PM obsession with craft.
Takeaway: Speed isn’t the only lever—some domains require higher minimum quality to learn effectively.
4) Accessibility failure after heavy investment: Bild Zeitung’s readout feature
A cautionary example: Bild Zeitung launched a read-aloud feature after significant engineering investment, then asked an accessibility influencer to test it; the trigger button wasn’t accessible via screen readers.
Takeaway: “Shift accessibility left”—validate operability (keyboard/screen reader) before launch.
5) Translating dry WCAG reports into stories (with a warning about false confidence)
A ProductTank Cologne talk describes using synthetic personas (data-driven archetypes that can “act and speak”) to translate technical WCAG accessibility reports into experiential narratives via retrieval-augmented generation (RAG), combining the accessibility report, site metadata, and persona data. They found AI-generated stories can significantly foster empathy and urgency for accessibility measures.
However, they caution that synthetic personas can create false confidence and should complement, not replace, real user research (“there are no stereotypes”).
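A hedged sketch of how such a RAG prompt could be assembled; the talk did not publish its pipeline at this level of detail, so `buildPersonaStoryPrompt` and every input shape below are hypothetical:

```typescript
// Hypothetical assembly step: combine the three retrieved sources into one prompt.

type Persona = { name: string; impairments: string[]; goals: string[] };

function buildPersonaStoryPrompt(
  wcagReport: string,   // retrieved: technical audit findings
  siteMetadata: string, // retrieved: page titles, flows, key tasks
  persona: Persona      // retrieved: data-driven archetype
): string {
  return [
    `You are ${persona.name}, who lives with: ${persona.impairments.join(", ")}.`,
    `Your goals on this site: ${persona.goals.join(", ")}.`,
    `Site context:\n${siteMetadata}`,
    `Accessibility findings:\n${wcagReport}`,
    "Narrate, in the first person, what attempting your goals on this site is like,",
    "grounded strictly in the findings above. Do not invent barriers.",
  ].join("\n\n");
}
```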
Career Corner
1) A practical AI-era career hedge: build Product Sense (and treat it as upstream)
Doshi’s framing is that the durable advantage isn’t tool mastery; it’s your ability to improve AI outputs through empathy, simulation, strategy, taste, and creative execution.
Career action: pick one of the five skills and deliberately practice it with real artifacts (PRDs, prototypes, research plans), not just prompts.
2) GitHub as proof-of-skill for PMs (especially AI PM roles)
Aakash Gupta reports that when he interviewed 10+ AI PM hiring managers, they said they will check a linked GitHub—and only 24% of PM candidates have one. He adds that inbound recruiter outreach converts to offers at 37% vs. 22% for outbound applicants; a strong GitHub can shift you toward inbound.
He recommends treating pinned repos as a portfolio (“two good ones is the MVP”) with clear READMEs and meaningful contribution activity. He also warns against copy-pasted AI code without tradeoffs sections and against empty commit “farms.”
“Your resume says you can do the job. Your GitHub proves it.”
3) Staying effective amid chaos: focus via operating model + lanes
A mid-level PM asks how senior Staff/Principal folks maintain focus as the role gets more chaotic. One concrete response across sources is to make focus structural: define lanes and a lightweight operating model rather than relying on personal heroics.
Tools & Resources
- Claude Code for Product Managers (video): Sachin Rekhi shared a recording link: https://www.youtube.com/watch?v=zsAAaY8a63Q
- Claude Code workflows (agentic capabilities): Rekhi describes autonomous workflows, local markdown artifacts, custom tool calls (e.g., transcription), and code-writing to accomplish tasks.
- Product Sense course reference: Doshi links to a mindmap he created for a Product Sense course (link as provided): https://preview.kit-mail3.com/click/dpheh0hzhm/aHR0cHM6Ly9tYXZlbi5jb20vc2hyZXlhcy1kb3NoaS9wcm9kdWN0LXNlbnNl
- Accessibility testing basics: keyboard + screen readers (including VoiceOver) and automated tooling like axe DevTools are listed as practical testing approaches (a minimal programmatic sketch follows this list).
- Operating model prompts for “temporary consistency”: use expiration dates and plan removals for new rules added during strategic shifts.
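For teams that want the automated check in tests or CI rather than in the axe DevTools browser extension, the underlying axe-core library exposes `axe.run`; a minimal sketch, assuming a browser or browser-like test context:

```typescript
import axe from "axe-core";

// Run axe against the current document and log any violations.
async function checkPageAccessibility(): Promise<void> {
  const results = await axe.run(document);
  for (const v of results.violations) {
    console.warn(`${v.id}: ${v.help} (impact: ${v.impact})`);
  }
}
```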