ZeroNoise
Product Sense moats, AI-native operating loops, and the return of disciplined discovery
Mar 5
9 min read
51 docs
This edition synthesizes new frameworks and real-world examples on what differentiates PMs in the AI age (Product Sense), how AI-native loops are reshaping delivery and operating models, and how to avoid feature-factory failure modes through stakeholder evidence and minimally viable consistency. It also includes practical playbooks for focus, accessibility, and career signaling (GitHub), plus concrete case studies across B2B SaaS, gaming, and consumer marketplaces.

Big Ideas

1) In an AI-commoditized world, Product Sense becomes the career moat

Shreyas Doshi argues that as AI becomes embedded across product work (discovery, design, prototyping, coding, testing, deployment, analytics, feedback, competitive analysis, GTM, etc.), the specific tools you use will matter less over time—tools “commoditize,” and tool choice won’t be a durable personal advantage.

The differentiator shifts to the human judgment applied on top of AI outputs—what he labels Product Sense. He breaks Product Sense into five component skills:

  • Strong empathy (needs beyond what AI has already analyzed)
  • Excellent simulation skills (future possibilities based on domain/tech/competition/customers/users)
  • Stellar strategic thinking (segments + differentiators)
  • Great taste (choose what’s optimal and explain why)
  • Creative execution (conceive unique solutions competitors won’t)

He frames this as a high bar that many product people may struggle to meet.

Why it matters: If AI equalizes execution throughput, advantage concentrates in judgment: picking the right problems, seeing tradeoffs, and improving AI-generated inputs/outputs.

How to apply (weekly loop):

  1. Pick one recurring decision type (e.g., prioritization, positioning, UX tradeoffs).
  2. Use AI to generate options (not decisions), then explicitly practice the five skills: empathize, simulate, strategize, choose (taste), and propose a distinctive execution path.
  3. Write down what you improved beyond the AI output (your judgment delta).

2) AI is compressing delivery cycles—PM work risks becoming the bottleneck

Björn Schotte highlights a “paradox”: engineering has become “10x faster” (2019–2025) while product management is only “1.2x” faster, making PMs the bottleneck. He also describes a landscape split: 70–75% traditional, 20–25% hybrid, and 4–5% AI-native teams.

He argues AI-native teams connect discovery, validation, and delivery into a continuous loop (AI generating tests, deploying, measuring, reporting).

Why it matters: If building gets radically faster, the failure mode becomes “shipping the wrong stuff faster” (see Torres below) rather than being blocked by implementation.

How to apply (start small):

  1. Pick one workflow where signals already exist (errors, user signals, customer emails, competitor monitoring).
  2. Create a daily or weekly AI-generated briefing that aggregates these signals into a short ranked list for human review.
  3. Make the human step explicit: review, reject, label, and sequence work (don’t auto-ship decisions).
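The briefing step above can be sketched in a few lines. This is a minimal illustration, not part of any cited workflow: the `Signal` fields, source names, and weight-based ranking are all assumptions standing in for whatever scoring a team would actually use.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str   # e.g. "errors", "customer_email", "competitor" (illustrative)
    summary: str
    weight: float # how much this source should influence the ranking (assumed)

def build_briefing(signals: list[Signal], top_n: int = 5) -> str:
    """Aggregate raw signals into a short ranked list for human review."""
    ranked = sorted(signals, key=lambda s: s.weight, reverse=True)[:top_n]
    lines = [f"{i + 1}. [{s.source}] {s.summary}" for i, s in enumerate(ranked)]
    # The human step stays explicit: this text is reviewed, not auto-shipped.
    return "Daily briefing (for review, not auto-ship):\n" + "\n".join(lines)

signals = [
    Signal("errors", "Spike in checkout 500s since Tuesday", 0.9),
    Signal("customer_email", "Three enterprise asks for SSO", 0.7),
    Signal("competitor", "Rival launched usage-based pricing", 0.4),
]
print(build_briefing(signals, top_n=2))
```

The point of the sketch is the shape of the loop: signals in, a short ranked digest out, and a human deciding what (if anything) becomes work.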

3) Operating models: aim for minimally viable consistency, not blanket standardization

John Cutler frames operating models as doing “8 jobs” regardless of context: maintain the value architecture, discover/prioritize, align capacity, route escalations, support execution, assess impact, circulate insights, and provide financial/operational oversight while shaping capacity.

In parallel, his Substack post argues for Minimally Viable Consistency (MVC): the fewest consistent concepts/terms needed to operate, while preserving beneficial local variation. He warns that widely known frameworks (e.g., OKRs) often hide wildly different implementations—and that variation isn’t inherently bad.

Why it matters: AI adoption can tempt orgs into adding more process (or “consistency mechanisms”) to manage speed and change—but embedded rules rarely disappear.

How to apply (design MVC like a scaffold):

  1. Identify what risk you’re trying to reduce if something isn’t consistent (be specific).
  2. Prefer lighter nudges (templates, defaults, shared artifacts) before mandates.
  3. Add an explicit reassessment date; plan how you’d remove the rule later.
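The three steps above fit naturally into a small record you can keep alongside the rule itself. This is a hypothetical sketch; the field names and the `ConsistencyRule` structure are my own, not Cutler's terminology.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ConsistencyRule:
    name: str
    risk_reduced: str   # step 1: the specific risk this rule addresses
    mechanism: str      # step 2: "template" / "default" / "mandate"
    reassess_on: date   # step 3: explicit reassessment date
    removal_plan: str   # step 3: how you'd remove the rule later

    def due_for_reassessment(self, today: date) -> bool:
        return today >= self.reassess_on

rule = ConsistencyRule(
    name="Shared bet one-pager",
    risk_reduced="Leadership cannot compare bets across teams",
    mechanism="template",  # a nudge, preferred before a mandate
    reassess_on=date(2026, 1, 15),
    removal_plan="Drop the template if teams converge on their own formats",
)
print(rule.due_for_reassessment(date(2026, 2, 1)))  # → True
```

Forcing every new rule through this shape makes the expiration date and removal plan mandatory fields rather than afterthoughts.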

4) AI can push teams back into “feature factory” mode—counter with discovery and alignment

Teresa Torres warns that “AI features dominating roadmaps” can lead teams back to feature factory behavior:

“All we are doing is shipping the wrong stuff faster.”

She argues you can’t win opinion battles with stakeholders; you can bring information they don’t have (customer interview insights, assumption-test data, patterns in the opportunity space).

Hiten Shah offers a drift diagnostic: if you ask five leaders what the company does and get five different answers, the company is drifting—and roadmap debates turn into arguments.

Why it matters: Faster delivery increases the cost of misalignment and weak discovery.

How to apply:

  1. Start roadmap discussions with shared outcomes (not solutions).
  2. Continuously “show your work” so decisions are less about opinions and more about evidence and reasoning.
  3. Use drift checks: periodically ask leaders to explain what the company does; treat divergence as an upstream problem to fix before prioritization fights.
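A crude way to make the drift check concrete: collect the leaders' one-line answers and measure how much they disagree. The metric below is my own illustration using stdlib string similarity; it is a rough proxy only, and real drift checks need a human actually reading the answers.

```python
from difflib import SequenceMatcher
from itertools import combinations

def drift_score(answers: list[str]) -> float:
    """Mean pairwise dissimilarity of leaders' answers to
    'what does the company do?' (0 = identical, higher = more drift).
    SequenceMatcher is a crude textual proxy, not a semantic measure."""
    pairs = list(combinations(answers, 2))
    sims = [SequenceMatcher(None, a.lower(), b.lower()).ratio() for a, b in pairs]
    return 1 - sum(sims) / len(sims)

aligned = ["We sell workflow automation to finance teams"] * 5
drifting = [
    "We sell workflow automation to finance teams",
    "We are an AI company",
    "We build developer tools",
    "We do enterprise integrations",
    "We are a data platform",
]
print(round(drift_score(aligned), 2), round(drift_score(drifting), 2))
```

Identical answers score 0; the more the stories diverge, the higher the number, which is exactly the signal Shah's diagnostic is after.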

5) Accessibility is both a product quality discipline and a go-to-market requirement

Konstantin Tieber frames disability as a mismatch between individual capacities and environmental demands, and highlights categories of impairment (visual, auditory, motor, cognitive), including situational/temporary constraints. He points to WCAG’s four principles (Perceivable, Operable, Understandable, Robust) as a practical compliance checklist.

He also connects accessibility to sales: enterprise buyers may require a VPAT/ACR (Accessibility Conformance Report) documenting WCAG conformance.

Why it matters: Accessibility expands reachable users and reduces exclusion by default; it’s also increasingly tied to procurement expectations and compliance workflows.

How to apply:

  1. “Shift left”: challenge UI concepts early (e.g., drag-and-drop) with “How do I operate this with a keyboard?”
  2. Build with semantic HTML (avoid divs-as-buttons).
  3. Test with keyboard + screen readers (e.g., VoiceOver) as part of release validation.

Tactical Playbook

1) A stakeholder-management workflow that replaces opinion battles with evidence

Torres’ tactics are structured and repeatable:

  1. Start with shared outcomes (not solutions).
  2. Use an opportunity solution tree as a stakeholder-management tool (to visualize options and assumptions).
  3. Invite contribution with: “Did we miss anything?”
  4. Share assumption tests and results, not only conclusions.
  5. Show your work continuously—avoid “big reveals.”

Why it works: It turns stakeholder conversations into joint sense-making, anchored in information stakeholders typically don’t have direct access to.


2) Use AI where it reduces collaboration overhead—protect high-context collaboration

Cutler’s heuristic: some work is “transactional” but forced into collaboration (meetings that should have been a doc review), and AI can help by sharing context and reducing friction. But there’s also work that should be collaborative and becomes transactional due to busyness; freeing time via AI should make room for deliberate collaboration.

He also warns that AI is weaker for certain research question types: it can be strong for definitional questions but tends to produce explanations too eagerly for explanatory questions (“it wants to please you”).

Step-by-step:

  1. List your team’s recurring collaborative moments.
  2. Tag each as either (a) transactional-but-forced-into-collaboration or (b) truly high-context collaboration.
  3. Automate (a) first (e.g., segment-specific release note reframes) so time returns to (b).
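The "segment-specific release note reframes" example can be sketched as a small prompt fan-out. Everything here is illustrative: the segment names, the tone guidance, and the idea of generating one prompt per segment are my assumptions about how a team might wire this up; a human still reviews every output.

```python
# Hypothetical segment tones; a real team would define its own.
SEGMENT_TONES = {
    "admin": "Emphasize controls, permissions, and audit impact.",
    "end_user": "Emphasize what got easier in daily work.",
    "executive": "Emphasize outcomes, risk, and cost.",
}

def reframe_prompts(release_note: str) -> dict[str, str]:
    """Produce one LLM prompt per segment from a single release note.
    The outputs of these prompts are drafts for human review, not
    auto-published copy."""
    return {
        seg: (
            f"Rewrite this release note for the {seg} segment. {tone}\n\n"
            f"{release_note}"
        )
        for seg, tone in SEGMENT_TONES.items()
    }

prompts = reframe_prompts("Added SSO via SAML and per-workspace roles.")
print(len(prompts))  # → 3
```

This is the transactional half: the mechanical rewriting is automated, while deciding what shipped and why stays with the team.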

3) Speed without sloppiness: apply rigor to wins, not just losses

Cutler flags a common management trap: people over-index on “good news,” stop applying rigor to wins, and start relying on luck.

Step-by-step:

  1. After a “win,” run the same review you’d run after a miss: what worked, what was luck, what to repeat.
  2. Capture learnings in a lightweight shared artifact (so you don’t lose the insight in celebration mode).

4) If you’re overwhelmed, design “lanes” (vectors for meaningful hard work)

Cutler’s “lanes” concept: teams need viable lanes with the right challenge/progress balance; when passionate people have “no vectors for hard work,” they invent work.

Step-by-step:

  1. Define 1–3 lanes per team (not per person) with clear boundaries and intended outcomes.
  2. Audit current work: remove or downgrade initiatives that don’t fit a lane.
  3. Re-check lane viability monthly—adjust challenge level and clarity.

Case Studies & Lessons

1) When the environment drives the outcome more than the product: an Airbnb analogy

A Reddit post describes two similar Airbnb listings (photos, reviews, price) with different booking outcomes; the winner was surrounded by 15–20 nearby restaurants/cafes/bars, while the other sat in a quiet residential area. The host can optimize the listing, but not the surrounding ecosystem—even though the app interface looks identical for both.

Takeaway: Sometimes your “product” competes on the broader experience system—not just on-screen features.


2) Retention dropped because value and pricing didn’t match (mobile gaming)

Laura Teclemariam describes launching a “Modifications” feature (microtransactions of ~$1–$5) and seeing retention drop after v2 because the feature’s pricing didn’t match the value it delivered. She adjusted pricing structures to better align value and price.

Takeaway: Retention problems can be value-to-price mismatches, not just UX issues.


3) “High-quality MVPs” and pixel-level rigor in animation production

Teclemariam compares animation development to product development: storyboards as prototypes, animatics as MVPs, with a higher quality bar at the MVP stage (less tolerance for “ugly baby” shipping). She also highlights editorial rigor over details (every moment, every pixel) as analogous to PM obsession with craft.

Takeaway: Speed isn’t the only lever—some domains require higher minimum quality to learn effectively.


4) Accessibility failure after heavy investment: Bild Zeitung’s readout feature

A cautionary example: Bild Zeitung launched a readout feature after significant engineering investment, then asked an accessibility influencer to test it; the trigger button wasn’t accessible via screen readers.

Takeaway: “Shift accessibility left”—validate operability (keyboard/screen reader) before launch.


5) Translating dry WCAG reports into stories (with a warning about false confidence)

A ProductTank Cologne talk describes using synthetic personas (data-driven archetypes that can “act and speak”) to translate technical WCAG accessibility reports into experiential narratives via RAG (accessibility report + site metadata + persona data). They found the AI-generated stories can significantly foster empathy and urgency around accessibility measures.

However, they caution that synthetic personas can create false confidence and should complement, not replace, real user research (“there are no stereotypes”).


Career Corner

1) A practical AI-era career hedge: build Product Sense (and treat it as upstream)

Doshi’s framing is that the durable advantage isn’t tool mastery; it’s your ability to improve AI outputs through empathy, simulation, strategy, taste, and creative execution.

Career action: pick one of the five skills and deliberately practice it with real artifacts (PRDs, prototypes, research plans), not just prompts.


2) GitHub as proof-of-skill for PMs (especially AI PM roles)

Aakash Gupta reports that when he interviewed 10+ AI PM hiring managers, they said they will check a linked GitHub—and only 24% of PM candidates have one. He adds that inbound recruiter outreach converts to offers at 37% vs. 22% for outbound applicants; a strong GitHub can shift you toward inbound.

He recommends treating pinned repos as a portfolio (“two good ones is the MVP”) with clear READMEs and meaningful contribution activity. He also warns against copy-pasted AI code without tradeoffs sections and empty commit “farms.”

“Your resume says you can do the job. Your GitHub proves it.”


3) Staying effective amid chaos: focus via operating model + lanes

A mid-level PM asks how senior Staff/Principal folks maintain focus as the role gets more chaotic. One concrete response across sources: make focus structural—define lanes and a lightweight operating model rather than relying on personal heroics.


Tools & Resources

  • Claude Code for Product Managers (video): Sachin Rekhi shared a recording link: https://www.youtube.com/watch?v=zsAAaY8a63Q
  • Claude Code workflows (agentic capabilities): Rekhi describes autonomous workflows, local markdown artifacts, custom tool calls (e.g., transcription), and code-writing to accomplish tasks.
  • Product Sense course reference: Doshi links to a mindmap he created for a Product Sense course (link as provided): https://preview.kit-mail3.com/click/dpheh0hzhm/aHR0cHM6Ly9tYXZlbi5jb20vc2hyZXlhcy1kb3NoaS9wcm9kdWN0LXNlbnNl
  • Accessibility testing basics: keyboard + screen readers (including VoiceOver) and automated tooling like axe DevTools are listed as practical testing approaches.
  • Operating model prompts for “temporary consistency”: use expiration dates and plan removals for new rules added during strategic shifts.
John Cutler
Profile 1 doc

Product Operating Models

John Cutler frames operating models as performing 8 core jobs regardless of context: maintain value architecture; discover/prioritize what and how; align capacity to strategy; surface and route escalations; support execution; assess impact; circulate insights; provide financial/operational oversight and shape capacity. Implementation varies: startups use lightweight heuristics (e.g., limit to 3 initiatives, EBITDA estimates with confidence scores, an opportunity-solution tree); enterprises use annual plans and allocations.

2x2 Framework: Strategic vs. Transactional × Maximalist (many irons in the fire, “why not more?”) vs. Architect (minimum viable to reach the goal). Adapt to founder style (e.g., maximalist founders are common).

Toolkit View of PM: Stitch together design, data, and technology for differentiated growth; applicable beyond tech (e.g., non-digital firms).

AI Impact on PM Workflows

  • Nature of Work: Automate collaborative-but-transactional tasks (e.g., AI rewriting release notes for 10 segments); enable more deliberate collaboration by freeing time.
  • Cynefin + Context: AI excels at context-free simple/complicated work (e.g., meeting notes) or synthesis (a v1 customer journey); avoid it for high-context, complex human interactions (e.g., processing group feedback).

Discovery & Research

AI usefulness varies by question type (exploratory, evaluative, explanatory, etc.); AI is strong on definitional questions, weak on explanatory ones (it hallucinates).

Career Insights

  • Team Motivation: Design “lanes”—viable vectors for hard work with the right challenge/progress balance.
  • Writing Workflow: Output over outcomes; write/start/finish in one go using the Flow State app.
  • Stakeholder Management: Play the long game; co-design shared mental models; seek “stakeholder PMF” (pull signals).
  • Habit to Unlearn: Over-indexing on good news—apply equal rigor to wins and losses for consistent introspection.

Product Ops Benefits: Reduce friction (“painkilling”) to free up high-leverage time; improve decision quality via access to data, strategy, and customers.

Teresa Torres
x 1 doc

Product teams risk feature-factory mode with AI-dominated roadmaps—“shipping the wrong stuff faster.”

Break the cycle by bringing stakeholders on the discovery journey instead of battling opinions.

Strategies to transform stakeholders into partners:

  • Start with shared outcomes, not solutions 🎯
  • Use opportunity solution tree as stakeholder tool 🗺️
  • Invite input: “did we miss anything?” 💬
  • Share assumption tests and results, not conclusions 📊
  • Show work continuously, not big reveals 🔄
  • Tailor communication to stakeholder needs ⚖️

Key insight: “You can’t win opinion battles. But you can bring information your stakeholders don’t have—insights from customer interviews, data from assumption tests, patterns in the opportunity space.”

Aakash Gupta
substack 2 docs

GitHub as a career tool for PMs, especially AI PMs: Shubham Saboo transitioned from DevRel to Senior AI PM at Google Cloud by using his GitHub to market himself.

Hiring managers at top companies always check linked GitHubs; only 24% of PM candidates have one. PMs placed at OpenAI, Anthropic, and Meta had one strong project with clear documentation and consistent activity—not massive star counts.

Profile anatomy:

  • Welcome message explaining role and GitHub value
  • 3-4 line headline (workplace, builds, value)
  • Pinned repos as portfolio (2 is MVP; interesting names, clear READMEs)
  • Contribution chart: meaningful > frequent
  • Tags/taxonomy (FAQ + Article + Breadcrumb)
  • USP section on top repo

Inbound recruiters convert PMs to offers at 37% vs. 22% for outbound; a strong GitHub shifts you inbound.

Avoid: copy-pasted AI code without tradeoffs, joke projects, empty commit farms.

"Your resume says you can do the job. Your GitHub proves it."

Mind the Product
youtube 1 doc

AI Transformation of Product Management

PM Bottleneck: Engineers are 10x faster with AI, but PMs only 1.2x, making PMs the bottleneck. AI destroys Agile/Scrum as developers use agent teams.

Time Savings Survey (Lenny Rachitsky, 1,750 respondents): 55% say AI exceeded expectations; >50% save ≥½ day/week; founders save ~50% of their time (>6 hours).

AI-Native PM Practices:

  • Prototype First: PMs use Claude/Codex for UI prototypes directly in the Git repo; pair with engineers; iterate live with stakeholders. Ideas become credible as working software, not Figma/PowerPoint.
  • Parallel Experimentation: Spin up Git worktrees; run 3–4 coding agents in parallel on feature variants; auto-test/A-B; merge the winner.
  • Merge Discovery/Delivery: Continuous loop—AI generates tests, deploys, measures, and reports; browser automation simulates user personas (e.g., “test as Lisa from accounting”).

Case Studies:

  • OpenAI's Sora Android app: 18 days, 2–3 engineers.
  • Claude Cowork v1: 1 week.
  • Lovable: $200M ARR in under a year with 100 employees; refines PMF every 3 months.
  • A Meta PM (no tech background) built a study app live using Claude (planning), Gemini (UI), and AI peer review.

Practical Tactics for PMs:

  • Work close to the code; learn principles (markdown, bash, skills over MCPs, agents.md rules, prompting/guardrails, the RALPH loop of repeated re-prompting with self-critique).
  • Experiment 4–8 hrs/week; pair with devs; automate the SDLC.
  • Morning briefings: AI aggregates signals (errors, competitors, customer wishes) and creates tickets; pair-implement top features via agents.

Trends: 70–75% traditional (Jira/roadmaps); 20–25% hybrid; 4–5% AI-native (prototype/delivery first). Germany lags the US/UK—a competitive advantage for those who adapt.

Mind the Product
youtube 1 doc

Career Progression Insight: Embrace a “jungle gym” nonlinear path across industries (engineering, consulting, founding, PM in gaming, ads, TV/film, LinkedIn), sharpening problem-solving tools like systems understanding and consumer tolerances, rather than ladder-climbing in one area.

Gaming Case Study (EA Star Wars: Galaxy of Heroes): Developed a “Modifications” feature for RPG battlers, balancing user experience with $1–5 microtransactions. The v2 launch caused a retention drop due to mispriced value; fixed via pricing adjustments matching feature value, restoring engagement. Lessons: Retention is critical post-PMF; hone repeat engagement for shipping clarity and career success. Gaming prioritizes honest feedback on whether the experience is worth 10–100 hours of playtime, driving the DAU focus now adopted by tech.

Entertainment vs. Tech PM: In tech, data ends debates; in entertainment, taste and delight do—she recommends tech prioritize delight for better products. Animation analogies: storyboards = prototypes, animatics = MVPs; a higher MVP quality bar (no “ugly baby”) due to the stakes; editorial pixel-level rigor mirrors PM obsession. At Netflix animation, consolidated 400+ tools down to 130 for efficiency; build-vs-borrow decisions weigh projected watch hours against production costs, closing the two-sided marketplace loop.

Interviewing Tactics: Use STAR behavioral questions to assess stakeholder influence beyond data, simulating role scenarios—key for entertainment, where feel trumps metrics.

LinkedIn PM Practices: Led profile/messaging/groups (core value prop: identity, communication, community). The culture prioritizes trust and risk mitigation amid massive feedback volume and emotional stakes (jobs/business); it fosters patience, empathy, and user clarity.

Future PM Trends (Berkeley course): Hypothesis: PM converges with the eng/design triad via AI (use AI for everything); AI lowers prototyping barriers but leaves gaps in code generation; focus shifts to “should we build this?” (bias, consequences) and more upfront planning, like animation MVPs. Teams with diverse backgrounds excel at right-product questioning. Students reinforce curiosity, consumer obsession, and safe experimentation for innovation.

Product School
youtube 1 doc

PM Mindset Shift with AI: Product managers should transition from designing rigid UI flows to designing for probabilistic AI outcomes.

Career Insight: As Zapier’s first PM, built Zapier for Teams after listening to customers and the team, earning founder trust by matching their level of customer care. Product strategy is inseparable from company strategy.

Product Evolution Case Study:

  • From trigger-action (pre-2016) to workflows as users demanded complexity.
  • Added Tables and Interfaces to enable full software solutions (e.g., Sheets as a DB).
  • Code Red on GPT-3: Reinvented the product for the AI/no-code paradigm.

Frameworks:

  • Agent vs. Workflow: Agents access knowledge, act, and reason/loop adaptively; workflows are deterministic.
  • Orchestration: Hire AI “employees”—onboard them (context engineering), assign jobs, and connect them to data/tools/triggers for async work.
  • MCP vs. API: MCP standardizes tool access and descriptions for agents, beyond raw API calls.
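The agent-vs-workflow distinction above can be made concrete with a toy sketch. This is purely illustrative (the functions, goal check, and "tools" are made up for the demo): a workflow runs a fixed list of steps, while an agent loops, checks a goal, and picks its next action from its current state.

```python
from typing import Callable

# Deterministic workflow: fixed steps, same path every run.
def run_workflow(steps: list[Callable[[str], str]], data: str) -> str:
    for step in steps:
        data = step(data)
    return data

# Agent: reasons in a loop, chooses a tool from state, stops at goal or budget.
def run_agent(goal_met: Callable[[str], bool],
              choose_tool: Callable[[str], Callable[[str], str]],
              state: str, max_iters: int = 10) -> str:
    for _ in range(max_iters):
        if goal_met(state):
            break
        tool = choose_tool(state)   # adaptive: the tool depends on the state
        state = tool(state)
    return state

# Toy demo: the goal is an upper-cased string ending in "!".
upper = lambda s: s.upper()
exclaim = lambda s: s + "!"
result = run_agent(
    goal_met=lambda s: s.isupper() and s.endswith("!"),
    choose_tool=lambda s: upper if not s.isupper() else exclaim,
    state="ship it",
)
print(result)  # → SHIP IT!
```

The practical difference: a workflow's behavior is fully known in advance, while an agent's path through the tools emerges at run time, which is why governance and observability matter more for agents.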

AI in PM Work (Zapier: 800 agents):

  • Shared context engineering across eng/prod/design.
  • Examples: Daily agenda (calendar + research/docs); weekly launch summaries from demos.

Enterprise PM Strategy:

  • Moat: The most integrations + ease of use + use-case data to guide automation.
  • PLG to sales: Engineer org-wide value; address cultural/leadership shifts. Empower problem-owners over handoffs.

Adoption vs Transformation:

  • Adoption: Efficiently automate existing processes.
  • Transformation: Enable previously impossible work.

AI Governance: Observability (who/what/where of data), enforced policies. Leadership: Chief AI Officer, exec show-and-tell, hands-on use.

Mind the Product
youtube 1 doc

Synthetic personas powered by AI translate dry WCAG accessibility audit reports into empathetic, narrative experiences to highlight usability issues in digital platforms.

Core Concept: Artificially created, data-driven user archetypes that “act and speak” from impaired users’ perspectives, using retrieval-augmented generation (RAG) with accessibility reports, site metadata, and persona data.

Implementation Steps:

  • Input website URL.
  • The axe open-source engine analyzes the source code.
  • AI generates persona-specific stories of customer-journey impacts, switchable to the underlying technical data.
  • Augment with internal data (e.g., support tickets) for depth.
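The prompt-assembly step of this pipeline might look like the sketch below. Everything here is an assumption: the persona fields, the findings format (loosely modeled on axe-style rule IDs), and the prompt wording are mine, not the talk's implementation; the generated prompt would then go to an LLM with the retrieved context.

```python
def build_persona_prompt(persona: dict, findings: list[dict],
                         site_meta: dict) -> str:
    """Combine audit findings, site metadata, and persona data into one
    prompt asking the model to narrate the journey in first person."""
    issues = "\n".join(
        f"- {f['rule']}: {f['description']} (impact: {f['impact']})"
        for f in findings
    )
    return (
        f"You are {persona['name']}, who {persona['impairment']}.\n"
        f"Site: {site_meta['url']} ({site_meta['purpose']}).\n"
        f"Audit findings:\n{issues}\n"
        "Narrate, in first person, how these issues affect your journey."
    )

prompt = build_persona_prompt(
    persona={"name": "Lena", "impairment": "navigates only by keyboard"},
    findings=[{"rule": "button-name",
               "description": "unlabeled icon button",
               "impact": "critical"}],
    site_meta={"url": "https://example.org", "purpose": "online banking"},
)
print(prompt)
```

The value of the technique lies in this translation: the same `button-name` violation that reads as a dry rule ID becomes a first-person story about being unable to complete a banking task.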

Case Study Outcomes: Thesis research showed the AI stories significantly boost empathy and urgency for accessibility fixes.

Lessons Learned: They create initial motivation but risk “false confidence”; complement them with real user research—there are no stereotypes in impairments. Recommended for product optimization via AI agents like Manus.

Context: Addresses the 16% of the global population with impairments; German sites show low keyboard accessibility (20/65).

Product Management
reddit 2 docs

Upcoming panel features Staff PM and Principal Engineer discussing:

  • Career paths to senior levels
  • Operating effectively at that level
  • Evolution amid tech disruptions

Mid-level PM experience: The role is increasingly chaotic with a growing workload, yet thriving PMs operate smoothly despite this. Interested in senior strategies for maintaining focus.

Hiten Shah
x 2 docs

Diagnostic for company drift affecting product roadmaps:

Ask five leaders to explain what the company does; differing stories indicate drift.

This misalignment turns every roadmap debate into an argument.

Shreyas Doshi's Product Almanac | Substack

In the AI age, AI will handle most product activities (discovery, design, prototyping, coding, testing, deployment, analytics, etc.), commoditizing tools, which therefore provide no long-term advantage. The key differentiator for product success and PM careers is Product Sense: the human ability to improve AI outputs in customer/market insights, strategy, prioritization, and more.

Product Sense decomposes into 5 skills enabling superior product decisions:

  • Strong empathy (needs beyond AI analysis)
  • Excellent simulation skills (future possibilities from domain/tech/competition/customers)
  • Stellar strategic thinking (target segments, differentiators)
  • Great taste (optimal recommendations explained clearly)
  • Creative execution (unique features/solutions)

This framework, developed pre-AI hype, sets a high bar for PMs competing with AI-savvy peers. A Claude analysis validates it: the decomposition is robust, and Product Sense sits upstream of AI-era powers like data/insights/UX/distribution.

The Beautiful Mess

Minimally Viable Consistency (MVC): Design a company's operating system with the fewest consistent concepts, terms, and phrases possible while enabling needed operations. Sparks discussions on common interfaces, cognitive load, and the benefits of local variation.

Model Market Fit (MMF): Unexpected ideas propagate company-wide (e.g., journey maps over the North Star Framework); adapt existing tools rather than being dogmatic.

Strategic Shifts: Increased cross-boundary work demands consistency; prioritize global heuristics over dependency objects to avoid premature convergence. Action: Use temporary consistency to stabilize, but avoid permanence.

OKRs Myth: OKRs appear standardized, but implementations vary widely (strict hierarchy, shared metrics, driver trees, financial ties). Flexibility enables contextual adaptation.

Avoid the Pyramid: Skip ambiguous high-level terms (pillars, vision); focus on actionable consistencies like fractal rituals, bet framing, kickoffs, quirky one-pagers.

Viability & Scaffold: Ask “viable for whom?” Balance leadership/reporting needs against team friction. Use consistency as a temporary teaching aid for upskilling, then remove it.

Evaluation Questions:

  • Worst outcome without consistency?
  • Easier or harder for people? Justified benefit?
  • Graceful to local variability? Simplifying?
  • Cheaper nudges (examples, templates)?
  • Expiration date? Removal feasibility?
Mind the Product
youtube 1 doc

The speaker, formerly at LeanIX (SaaS) and now at Trade Republic, shares strategies for driving web accessibility in product organizations.

Core concept: Accessibility enables all users to access web services despite disabilities (a mismatch between individual capacities and environmental demands). Categories: visual, auditory, motor, cognitive; includes temporary/situational constraints.

Why it matters for PMs: Larger audience, better UX, societal benefits, legal compliance (e.g., the Barrierefreiheitsstärkungsgesetz for B2C such as banking/ecommerce; B2B is generally exempt unless white-label).

Standards: WCAG (POUR: Perceivable, Operable, Understandable, Robust) as a compliance checklist; ARIA for screen readers.

Implementation methodology (shift accessibility left):

  • Challenge designers early (e.g., keyboard support for drag-and-drop).
  • Use semantic HTML (avoid divs as buttons).
  • Every mouse action must also work with the keyboard.
  • Use checklists (the A11y Project, a personal list covering WCAG essentials).

Testing: Keyboard, screen readers (NVDA, JAWS, VoiceOver), axe DevTools, user testing with impaired individuals.

Case study: Bild Zeitung’s article readout feature failed accessibility (an inaccessible trigger button despite the engineering effort).

Sales strategy: Publish a VPAT/ACR (Accessibility Conformance Report) documenting WCAG compliance to win enterprise deals (SAP example).

Mind the Product
youtube 1 doc

AI's Impact on Product Management Role

Product managers must learn AI topics like the Model Context Protocol (MCP) and A2A to stay relevant in 2026.

LeanIX Case: Unlocking AI Use Cases via MCP

LeanIX hosts an MCP server (like a USB plug for agents), granting AI agents access to enterprise data without hosting LLMs and enabling non-UI interactions like lifecycle management via agents. This unlocks customer use cases, e.g., agent-driven decisions made in Claude Code landing in the enterprise UI. Implications include security/privacy challenges and pricing shifts (e.g., seat licenses disrupted by agent access).

Key Learnings for PMs

  • Building products remains a team sport requiring a shared language with eng/UX.
  • Customer focus is essential: co-learn by shipping iteratively; guide customers on use cases.
  • Invest in scalable architecture, tool selection, and authentication for production agents.
  • Leverage open standards/tools for rapid building, but PM knowledge (business, customers, tech) stays crucial amid AI disruption.
Product Management
reddit 1 doc

Airbnb listings succeed more on their surroundings than on features alone. Two similar apartments (photos, reviews, price) differed in bookings: one was near 15–20 restaurants/cafes/bars (a 2-minute walk), the other in a quiet residential area with nothing nearby.

From a product perspective, hosts optimize listing details (furniture, decor, amenities, photos) but can’t control the 5-minute-radius ecosystem, which creates the superior experience despite identical app interfaces. The neighborhood builds an advantage the host can’t.

Raises PM question: when does product environment drive outcomes more than the product?

Sachin Rekhi
x 2 docs

Sachin Rekhi migrated nearly all of his product work to Claude Code, boosting productivity by an additional 3x beyond prior AI gains.

Custom skills enable:

  • End-to-end customer interview synthesis
  • Autonomous NPS programs
  • Exploratory data analysis without SQL
  • Critiquing product strategy drafts

These leverage agentic capabilities:

  • Autonomous workflows
  • Local markdown artifacts
  • Custom tool calls (e.g., interview transcription)
  • Code writing for tasks

Video of Claude Code for Product Managers session: watch here.

Product Management
reddit 1 doc

Product teams frequently jump into solutions before clearly defining the problem space, producing ephemeral artifacts like decks, docs, Miro boards, and research reports. This results in repeated rediscovery of context, underscoring the need for persistent representations of the problem space—mapping business outcomes, customer outcomes, behaviors, and pain points—with greater rigor.

The post questions whether teams maintain such persistent models or rely on ad-hoc artifacts.