Product Sense moats, AI-native operating loops, and the return of disciplined discovery
Mar 5
9 min read
51 docs
This edition synthesizes new frameworks and real-world examples on what differentiates PMs in the AI age (Product Sense), how AI-native loops are reshaping delivery and operating models, and how to avoid feature-factory failure modes through stakeholder evidence and minimally viable consistency. It also includes practical playbooks for focus, accessibility, and career signaling (GitHub), plus concrete case studies across B2B SaaS, gaming, and consumer marketplaces.

Big Ideas

1) In an AI-commoditized world, Product Sense becomes the career moat

Shreyas Doshi argues that as AI becomes embedded across product work (discovery, design, prototyping, coding, testing, deployment, analytics, feedback, competitive analysis, GTM, etc.), the specific tools you use will matter less over time—tools “commoditize,” and tool choice won’t be a durable personal advantage.

The differentiator shifts to the human judgment applied on top of AI outputs—what he labels Product Sense. He breaks Product Sense into five component skills:

  • Strong empathy (needs beyond what AI has already analyzed)
  • Excellent simulation skills (future possibilities based on domain/tech/competition/customers/users)
  • Stellar strategic thinking (segments + differentiators)
  • Great taste (choose what’s optimal and explain why)
  • Creative execution (conceive unique solutions competitors won’t)

He frames this as a high bar that many product people may struggle to meet.

Why it matters: If AI equalizes execution throughput, advantage concentrates in judgment: picking the right problems, seeing tradeoffs, and improving AI-generated inputs/outputs.

How to apply (weekly loop):

  1. Pick one recurring decision type (e.g., prioritization, positioning, UX tradeoffs).
  2. Use AI to generate options (not decisions), then explicitly practice the five skills: empathize, simulate, strategize, choose (taste), and propose a distinctive execution path.
  3. Write down what you improved beyond the AI output (your judgment delta).

2) AI is compressing delivery cycles—PM work risks becoming the bottleneck

Björn Schotte highlights a “paradox”: engineering has become “10x faster” (2019–2025) while product management has sped up only “1.2x,” making PMs the bottleneck. He also describes a landscape split: 70–75% traditional, 20–25% hybrid, and 4–5% AI-native teams.

He argues AI-native teams connect discovery, validation, and delivery into a continuous loop (AI generating tests, deploying, measuring, reporting).

Why it matters: If building gets radically faster, the failure mode becomes “shipping the wrong stuff faster” (see Torres below) rather than being blocked by implementation.

How to apply (start small):

  1. Pick one workflow where signals already exist (errors, user signals, customer emails, competitor monitoring).
  2. Create a daily or weekly AI-generated briefing that aggregates these signals into a short ranked list for human review.
  3. Make the human step explicit: review, reject, label, and sequence work (don’t auto-ship decisions).
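The briefing step above can be sketched as a small aggregation script; everything in it (the `Signal` shape, the source names, the weights) is a hypothetical illustration, not a prescribed implementation, and an LLM summarization step would slot in where the summaries are rendered.

```python
from dataclasses import dataclass

# Hypothetical signal record; real sources might be an error tracker,
# a support inbox, or a competitor-monitoring feed.
@dataclass
class Signal:
    source: str   # e.g. "errors", "emails", "competitors"
    summary: str
    count: int    # occurrences in the reporting window

# Illustrative weights for how much the team currently trusts each source.
SOURCE_WEIGHTS = {"errors": 3.0, "emails": 2.0, "competitors": 1.0}

def rank_signals(signals, top_n=5):
    """Aggregate raw signals into a short ranked list for human review."""
    return sorted(
        signals,
        key=lambda s: SOURCE_WEIGHTS.get(s.source, 1.0) * s.count,
        reverse=True,
    )[:top_n]

def briefing(signals):
    """Render the ranked list; a human still reviews, rejects, labels,
    and sequences the work (nothing auto-ships)."""
    return "\n".join(
        f"{i}. [{s.source}] {s.summary} (x{s.count})"
        for i, s in enumerate(rank_signals(signals), start=1)
    )
```

The point of the explicit weights is that the human step stays visible: changing what the team trusts means editing one dictionary, not retraining anything.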

3) Operating models: aim for minimally viable consistency, not blanket standardization

John Cutler frames operating models as doing “8 jobs” regardless of context (value architecture, discover/prioritize, align capacity, route escalations, support execution, assess impact, circulate insights, provide financial/operational oversight, shape capacity).

In parallel, his Substack post argues for Minimally Viable Consistency (MVC): the fewest consistent concepts/terms needed to operate, while preserving beneficial local variation. He warns that widely known frameworks (e.g., OKRs) often hide wildly different implementations—and that variation isn’t inherently bad.

Why it matters: AI adoption can tempt orgs into adding more process (or “consistency mechanisms”) to manage speed and change—but embedded rules rarely disappear.

How to apply (design MVC like a scaffold):

  1. Identify the specific risk you’re trying to reduce if something isn’t consistent.
  2. Prefer lighter nudges (templates, defaults, shared artifacts) before mandates.
  3. Add an explicit reassessment date; plan how you’d remove the rule later.
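As a concrete sketch of these three steps, a consistency rule can be recorded alongside its named risk, its mechanism, and its reassessment date. The field names and the example rule below are illustrative assumptions, not a structure Cutler prescribes.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record for one consistency rule; the fields mirror the
# three steps above (named risk, light mechanism, reassessment date).
@dataclass
class ConsistencyRule:
    name: str
    risk_reduced: str   # the specific risk if things aren't consistent
    mechanism: str      # prefer "template"/"default" before "mandate"
    reassess_on: date   # explicit date to revisit (and maybe remove) it

    def due_for_review(self, today: date) -> bool:
        return today >= self.reassess_on

# Illustrative example: a nudge, not a mandate.
prd_template = ConsistencyRule(
    name="shared PRD template",
    risk_reduced="reviewers can't find success metrics",
    mechanism="template",
    reassess_on=date(2026, 1, 1),
)
```

Keeping the removal date in the record itself is the scaffold idea: the rule carries its own exit plan instead of quietly becoming permanent.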

4) AI can push teams back into “feature factory” mode—counter with discovery and alignment

Teresa Torres warns that “AI features dominating roadmaps” can lead teams back to feature factory behavior:

“All we are doing is shipping the wrong stuff faster.”

She argues you can’t win opinion battles with stakeholders; you can bring information they don’t have (customer interview insights, assumption-test data, patterns in the opportunity space).

Hiten Shah offers a drift diagnostic: if you ask five leaders what the company does and get five different answers, the company is drifting—and roadmap debates turn into arguments.

Why it matters: Faster delivery increases the cost of misalignment and weak discovery.

How to apply:

  1. Start roadmap discussions with shared outcomes (not solutions).
  2. Continuously “show your work” so decisions are less about opinions and more about evidence and reasoning.
  3. Use drift checks: periodically ask leaders to explain what the company does; treat divergence as an upstream problem to fix before prioritization fights.

5) Accessibility is both a product quality discipline and a go-to-market requirement

Konstantin Tieber frames disability as a mismatch between individual capacities and environmental demands, and highlights categories of impairments (visual, auditory, motor, cognitive), including situational and temporary constraints. He points to WCAG’s four principles (Perceivable, Operable, Understandable, Robust) as a practical compliance checklist.

He also connects accessibility to sales: enterprise buyers may require a VPAT/ACR (Accessibility Conformance Report) documenting WCAG conformance.

Why it matters: Accessibility expands reachable users and reduces exclusion by default; it’s also increasingly tied to procurement expectations and compliance workflows.

How to apply:

  1. “Shift left”: challenge UI concepts early (e.g., drag-and-drop) with “How do I operate this with a keyboard?”
  2. Build with semantic HTML (avoid divs-as-buttons).
  3. Test with keyboard and screen readers (e.g., VoiceOver) as part of release validation.
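Point 2 can even be spot-checked mechanically. The snippet below is a deliberately crude, assumed heuristic (a regex for inline click handlers on non-semantic tags); it is no substitute for tools like axe DevTools or for manual keyboard and screen-reader testing.

```python
import re

# Crude heuristic for the "divs-as-buttons" anti-pattern: <div>/<span>
# elements carrying inline click handlers that should be semantic <button>s.
CLICKABLE_NON_SEMANTIC = re.compile(
    r"<(div|span)\b[^>]*\bonclick\s*=", re.IGNORECASE
)

def find_fake_buttons(html: str) -> list[str]:
    """Return the non-semantic tag names that carry inline click handlers."""
    return [m.group(1).lower() for m in CLICKABLE_NON_SEMANTIC.finditer(html)]
```

A semantic `<button>` passes this check because it already gives keyboard focus and activation for free; the div version needs extra ARIA and key handling to match it.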

Tactical Playbook

1) A stakeholder-management workflow that replaces opinion battles with evidence

Torres’ tactics are structured and repeatable:

  1. Start with shared outcomes (not solutions).
  2. Use an opportunity solution tree as a stakeholder-management tool (to visualize options and assumptions).
  3. Invite contribution with: “Did we miss anything?”
  4. Share assumption tests and results, not only conclusions.
  5. Show your work continuously—avoid “big reveals.”

Why it works: It turns stakeholder conversations into joint sense-making, anchored in information stakeholders typically don’t have direct access to.


2) Use AI where it reduces collaboration overhead—protect high-context collaboration

Cutler’s heuristic: some work is “transactional” but forced into collaboration (meetings that should have been a doc review), and AI can help by sharing context and reducing friction. There is also work that should be collaborative but becomes transactional due to busyness; freeing time via AI should make room for deliberate collaboration.

He also warns that AI is weaker for certain research question types: it can be strong for definitional questions but tends to produce explanations too eagerly for explanatory questions (“it wants to please you”).

Step-by-step:

  1. List your team’s recurring collaborative moments.
  2. Tag each as either (a) transactional work forced into collaboration or (b) truly high-context collaboration.
  3. Automate (a) first (e.g., segment-specific release-note reframes) so time returns to (b).

3) Speed without sloppiness: apply rigor to wins, not just losses

Cutler flags a common management trap: people over-index on “good news,” stop applying rigor to wins, and start relying on luck.

Step-by-step:

  1. After a “win,” run the same review you’d run after a miss: what worked, what was luck, what to repeat.
  2. Capture learnings in a lightweight shared artifact (so you don’t lose the insight in celebration mode).

4) If you’re overwhelmed, design “lanes” (vectors for meaningful hard work)

Cutler’s “lanes” concept: teams need viable lanes with the right challenge/progress balance; when passionate people have “no vectors for hard work,” they invent work.

Step-by-step:

  1. Define 1–3 lanes per team (not per person) with clear boundaries and intended outcomes.
  2. Audit current work: remove or downgrade initiatives that don’t fit a lane.
  3. Re-check lane viability monthly—adjust challenge level and clarity.

Case Studies & Lessons

1) When the environment drives the outcome more than the product: an Airbnb analogy

A Reddit post describes two similar Airbnb listings (photos, reviews, price) with different booking outcomes; the winner was surrounded by 15–20 nearby restaurants, cafes, and bars, while the other was in a quiet residential area. The host can optimize the listing, but not the surrounding ecosystem, even though the interfaces look identical.

Takeaway: Sometimes your “product” competes on the broader experience system—not just on-screen features.


2) Retention dropped because value and pricing didn’t match (mobile gaming)

Laura Teclemariam describes launching a “Modifications” feature (microtransactions of roughly $1–$5) and seeing retention drop after v2 because the feature’s pricing didn’t match the value it delivered. She adjusted pricing structures to better align value and price.

Takeaway: Retention problems can be value-to-price mismatches, not just UX issues.


3) “High-quality MVPs” and pixel-level rigor in animation production

Teclemariam compares animation development to product development: storyboards act as prototypes and animatics as MVPs, with a higher quality bar at the MVP stage (less tolerance for “ugly baby” shipping). She also highlights editorial rigor over details (every moment, every pixel) as analogous to PM obsession with craft.

Takeaway: Speed isn’t the only lever—some domains require higher minimum quality to learn effectively.


4) Accessibility failure after heavy investment: Bild Zeitung’s readout feature

A cautionary example: Bild Zeitung launched a readout feature after significant engineering investment, then asked an accessibility influencer to test it; the trigger button wasn’t accessible via screen readers.

Takeaway: “Shift accessibility left”—validate operability (keyboard/screen reader) before launch.


5) Translating dry WCAG reports into stories (with a warning about false confidence)

A ProductTank Cologne talk describes using synthetic personas (data-driven archetypes that can “act and speak”) to translate technical WCAG accessibility reports into experiential narratives via RAG (accessibility report + site metadata + persona data). They found AI-generated stories can significantly foster empathy and urgency for accessibility measures.

However, they caution that synthetic personas can create false confidence and should complement, not replace, real user research (“there are no stereotypes”).
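A minimal sketch of the prompt-assembly step in such a RAG setup might look like the following; the field names and the template are assumptions for illustration, not the talk's actual pipeline, and the assembled prompt would then be sent to an LLM.

```python
# Assemble retrieved context (accessibility report findings + site
# metadata + persona data) into one narrative prompt for an LLM.
# All field names here are hypothetical.
def build_persona_prompt(report_findings, site_meta, persona):
    findings = "\n".join(f"- {f}" for f in report_findings)
    return (
        f"You are {persona['name']}, {persona['description']}.\n"
        f"Site: {site_meta['name']} ({site_meta['url']})\n"
        f"WCAG audit findings:\n{findings}\n"
        "Describe, in first person, how these issues affect your visit."
    )
```

Per the talk's own caution, the resulting stories are empathy aids that complement real user research; they are not evidence from real users.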


Career Corner

1) A practical AI-era career hedge: build Product Sense (and treat it as upstream)

Doshi’s framing is that the durable advantage isn’t tool mastery; it’s your ability to improve AI outputs through empathy, simulation, strategy, taste, and creative execution.

Career action: pick one of the five skills and deliberately practice it with real artifacts (PRDs, prototypes, research plans), not just prompts.


2) GitHub as proof-of-skill for PMs (especially AI PM roles)

Aakash Gupta reports that when he interviewed 10+ AI PM hiring managers, they said they will check a linked GitHub—and only 24% of PM candidates have one. He adds that inbound recruiter outreach converts to offers at 37% vs. 22% for outbound applicants; a strong GitHub can shift you toward inbound.

He recommends treating pinned repos as a portfolio (“two good ones is the MVP”) with clear READMEs and meaningful contribution activity. He also warns against copy-pasted AI code without tradeoffs sections and against empty commit “farms.”

“Your resume says you can do the job. Your GitHub proves it.”


3) Staying effective amid chaos: focus via operating model + lanes

A mid-level PM asks how senior Staff/Principal folks maintain focus as the role gets more chaotic. One concrete answer across sources is to make focus structural: define lanes and a lightweight operating model rather than relying on personal heroics.


Tools & Resources

  • Claude Code for Product Managers (video): Sachin Rekhi shared a recording link: https://www.youtube.com/watch?v=zsAAaY8a63Q
  • Claude Code workflows (agentic capabilities): Rekhi describes autonomous workflows, local markdown artifacts, custom tool calls (e.g., transcription), and code-writing to accomplish tasks.
  • Product Sense course reference: Doshi links to a mindmap he created for a Product Sense course (link as provided): https://preview.kit-mail3.com/click/dpheh0hzhm/aHR0cHM6Ly9tYXZlbi5jb20vc2hyZXlhcy1kb3NoaS9wcm9kdWN0LXNlbnNl
  • Accessibility testing basics: keyboard + screen readers (including VoiceOver) and automated tooling like axe DevTools are listed as practical testing approaches.
  • Operating model prompts for “temporary consistency”: use expiration dates and plan removals for new rules added during strategic shifts.
Summary

  • Coverage: Mar 4 at 7:00 AM – Mar 5 at 7:00 AM
  • Frequency: Daily
  • Published: Mar 5 at 8:04 AM
  • Reading time: 9 min
  • Research time: 5 hrs 45 min
  • Documents scanned: 51
  • Documents used: 14
  • Citations: 64
  • Sources monitored: 99 / 100
Source details
Source Docs Insights
rahulvohra 0 0
Paul Graham 1 0
Tony Fadell 0 0
Patrick Collison 0 0
Daniel Ek 0 0
Gustaf Alströmer 0 0
Stewart Butterfield 0 0
PM Diego Granados 0 0
👨🏻‍💻☕️ 0 0
scott belsky 2 0
Ryan Hoover 0 0
Janna Bastow simplybastow.bsky.social 0 0
Jackie Bavaro 0 0
Sachin Rekhi 2 1
Dan Olsen 0 0
The community for ventures designed to scale rapidly | Read our rules before posting ❤️ 2 0
Will Lawrence 0 0
Product Marketing 4 0
Ami Vora 0 0
PM Interview: Practice Group for Product Manager Case Interviews 0 0
One Knight in Product 0 0
Aakash Gupta 2 1
Shreyas Doshi's Product Almanac | Substack 1 1
Lenny Rachitsky 0 0
Acquired 0 0
a16z 0 0
Exponent 0 0
Product Alliance 0 0
Product Management Exercises 0 0
rocketblocks 1 0
Product Design 0 0
ProductManagementJobs 1 0
Product Management 6 2
Product Management - The place for all things product 0 0
Product Management 1 1
Aspiring and current tech PM's 0 0
Masters of Scale 0 0
Product Science Group 0 0
How I built This 0 0
SaaStr AI 1 0
productized io 0 0
Lenny's Reads 0 0
The Product Folks 0 0
Strategyzer 0 0
Lenny's Podcast 0 0
AJ&Smart 0 0
Y Combinator 0 0
Product School 1 1
Mind the Product 5 5
@andrewchen 0 0
The Looking Glass 0 0
Kyle Poyar’s Growth Unhinged 0 0
Leah’s ProducTea 0 0
Run the Business 0 0
Product Managers at Work 0 0
The Product Compass 0 0
Ravi on Product 0 0
Productify by Bandan 0 0
Product Thinking with Melissa Perri 0 0
Product Talk Daily 0 0
The Beautiful Mess 1 1
Gibson Biddle's "Ask Gib" Product Newsletter 0 0
Casey Accidental 0 0
Hiten Shah 4 1
Product Growth 1 0
Perspectives 0 0
Lenny's Newsletter 0 0
andrew chen 2 0
Brian Balfour 0 0
Casey Winters 0 0
elena verna 0 0
Kevin Weil 🇺🇸 4 0
April Underwood 1 0
Julie Zhuo 0 0
Marty Cagan 0 0
Lenny Rachitsky 4 0
Christian Idiodi 0 0
John Cutler 0 0
Teresa Torres 1 1
Gibson Biddle 0 0
Shreyas Doshi 1 0
Adam Nash 0 0
Merci Grace 0 0
Jackie Bavaro 0 0
Hunter Walk 0 0
Brian Balfour 0 0
Scott Belsky 0 0
Nir Eyal 1 0
Teresa Torres 0 0
Julie Zhuo 0 0
Andrew Chen 0 0
John Cutler 1 1
Ken Norton 0 0
Gibson Biddle 0 0
Elena Verna 0 0
Casey Winters 0 0
Shreyas Doshi 0 0
Lenny Rachitsky 0 0
Melissa Perri 0 0
Marty Cagan 0 0