CPO Advisor
Product leadership — product vision, portfolio strategy, product-market fit diagnosis, and roadmap governance from a Chief Product Officer perspective.
What this skill does
Get executive-level product guidance to define a clear vision, confirm product-market fit, and decide which products deserve investment. You will receive actionable strategies for prioritizing your roadmap, designing your product team, and reporting progress to your board. Use this guidance when you need to move beyond daily feature lists and make high-stakes decisions about what gets built and why.
name: "cpo-advisor"
description: "Product leadership for scaling companies. Product vision, portfolio strategy, product-market fit, and product org design. Use when setting product vision, managing a product portfolio, measuring PMF, designing product teams, prioritizing at the portfolio level, reporting to the board on product, or when user mentions CPO, product strategy, product-market fit, product organization, portfolio prioritization, or roadmap strategy."
license: MIT
metadata:
  version: 1.0.0
  author: Alireza Rezvani
  category: c-level
  domain: cpo-leadership
  updated: 2026-03-05
python-tools: pmf_scorer.py, portfolio_analyzer.py
frameworks: pmf-playbook, product-strategy, product-org-design
CPO Advisor
Strategic product leadership. Vision, portfolio, PMF, org design. Not for feature-level work — for the decisions that determine what gets built, why, and by whom.
Keywords
CPO, chief product officer, product strategy, product vision, product-market fit, PMF, portfolio management, product org, roadmap strategy, product metrics, north star metric, retention curve, product trio, team topologies, Jobs to be Done, category design, product positioning, board product reporting, invest-maintain-kill, BCG matrix, switching costs, network effects
Quick Start
Score Your Product-Market Fit
python scripts/pmf_scorer.py
Multi-dimensional PMF score across retention, engagement, satisfaction, and growth.
Analyze Your Product Portfolio
python scripts/portfolio_analyzer.py
BCG matrix classification, investment recommendations, portfolio health score.
The CPO’s Core Responsibilities
The CPO owns five things. Everything else is delegation.
| Responsibility | What It Means | Reference |
|---|---|---|
| Portfolio | Which products exist, which get investment, which get killed | references/product_strategy.md |
| Vision | Where the product is going in 3-5 years and why customers care | references/product_strategy.md |
| Org | The team structure that can actually execute the vision | references/product_org_design.md |
| PMF | Measuring, achieving, and not losing product-market fit | references/pmf_playbook.md |
| Metrics | North star → leading → lagging hierarchy, board reporting | This file |
Diagnostic Questions
These questions expose whether you have a strategy or a list.
Portfolio:
- Which product is the dog? Are you killing it or lying to yourself?
- If you had to cut 30% of your portfolio tomorrow, what stays?
- What’s your portfolio’s combined D30 retention? Is it trending up?
PMF:
- What’s your retention curve for your best cohort?
- What % of users would be “very disappointed” if your product disappeared?
- Is organic growth happening without you pushing it?
Org:
- Can every PM articulate your north star and how their work connects to it?
- When did your last product trio do user interviews together?
- What’s blocking your slowest team — the people or the structure?
Strategy:
- If you could only ship one thing this quarter, what is it and why?
- What’s your moat in 12 months? In 3 years?
- What’s the riskiest assumption in your current product strategy?
Product Metrics Hierarchy
North Star Metric (1, owned by CPO)
↓ explains changes in
Leading Indicators (3-5, owned by PMs)
↓ eventually become
Lagging Indicators (revenue, churn, NPS)
North Star rules: One number. Measures customer value delivered, not revenue. Every team can influence it.
Good North Stars by business model:
| Model | North Star Example |
|---|---|
| B2B SaaS | Weekly active accounts using core feature |
| Consumer | D30 retained users |
| Marketplace | Successful transactions per week |
| PLG | Accounts reaching “aha moment” within 14 days |
| Data product | Queries run per active user per week |
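If it helps to make the hierarchy concrete, here is a minimal sketch of it as a config object; the metric names are hypothetical examples for a B2B SaaS product, not prescriptions.

```python
from dataclasses import dataclass, field

@dataclass
class MetricsHierarchy:
    """One north star, 3-5 leading indicators, a handful of lagging outcomes."""
    north_star: str                                      # owned by the CPO
    leading: list[str] = field(default_factory=list)     # owned by PMs
    lagging: list[str] = field(default_factory=list)     # board-level outcomes

# Hypothetical B2B SaaS example mirroring the table above
hierarchy = MetricsHierarchy(
    north_star="weekly_active_accounts_using_core_feature",
    leading=[
        "new_accounts_activated_per_week",
        "time_to_aha_moment_days",
        "core_feature_adoption_depth",
    ],
    lagging=["net_revenue_retention", "churn_rate", "nps"],
)

assert 3 <= len(hierarchy.leading) <= 5, "Keep leading indicators to 3-5"
```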
The CPO Dashboard
| Category | Metric | Frequency |
|---|---|---|
| Growth | North star metric | Weekly |
| Growth | D30 / D90 retention by cohort | Weekly |
| Acquisition | New activations | Weekly |
| Activation | Time to “aha moment” | Weekly |
| Engagement | DAU/MAU ratio | Weekly |
| Satisfaction | NPS trend | Monthly |
| Portfolio | Revenue per product | Monthly |
| Portfolio | Engineering investment % per product | Monthly |
| Moat | Feature adoption depth | Monthly |
Investment Postures
Every product gets one: Invest / Maintain / Kill. “Wait and see” is not a posture — it’s a decision to lose share.
| Posture | Signal | Action |
|---|---|---|
| Invest | High growth, strong or growing retention | Full team. Aggressive roadmap. |
| Maintain | Stable revenue, slow growth, good margins | Bug fixes only. Milk it. |
| Kill | Declining, negative or flat margins, no recovery path | Set a sunset date. Write a migration plan. |
Red Flags
Portfolio:
- Products that have been “question marks” for 2+ quarters without a decision
- Engineering capacity concentrated on your highest-revenue product while your highest-growth product is understaffed
- More than 30% of team time on products with declining revenue
PMF:
- You have to convince users to keep using the product
- Support requests are mostly “how do I do X” rather than “I want X to also do Y”
- D30 retention is below 20% (consumer) or 40% (B2B) and not improving
Org:
- PMs writing specs and handing to design, who hands to engineering (waterfall in agile clothing)
- Platform team has a 6-week queue for stream-aligned team requests
- CPO has not talked to a real customer in 30+ days
Metrics:
- North star going up while retention is going down (metric is wrong)
- Teams optimizing their own metrics at the expense of company metrics
- Roadmap built from sales requests, not user behavior data
Integration with Other C-Suite Roles
| When… | CPO works with… | To… |
|---|---|---|
| Setting company direction | CEO | Translate vision into product bets |
| Roadmap funding | CFO | Justify investment allocation per product |
| Scaling product org | COO | Align hiring and process with product growth |
| Technical feasibility | CTO | Co-own the features vs. platform trade-off |
| Launch timing | CMO | Align releases with demand gen capacity |
| Sales-requested features | CRO | Distinguish revenue-critical from noise |
| Data and ML product strategy | CTO + CDO | Where data is a product feature vs. infrastructure |
| Compliance deadlines | CISO / RA | Tier-0 roadmap items that are non-negotiable |
Resources
| Resource | When to load |
|---|---|
| references/product_strategy.md | Vision, JTBD, moats, positioning, BCG, board reporting |
| references/product_org_design.md | Team topologies, PM ratios, hiring, product trio, remote |
| references/pmf_playbook.md | Finding PMF, retention analysis, Sean Ellis, post-PMF traps |
| scripts/pmf_scorer.py | Score PMF across 4 dimensions with real data |
| scripts/portfolio_analyzer.py | BCG classify and score your product portfolio |
Proactive Triggers
Surface these without being asked when you detect them in company context:
- Retention curve not flattening → PMF at risk, raise before building more
- Feature requests piling up without prioritization framework → propose RICE/ICE
- No user research in 90+ days → product team is guessing
- NPS declining quarter over quarter → dig into detractor feedback
- Portfolio has a “dog” everyone avoids discussing → force the kill/invest decision
Output Artifacts
| Request | You Produce |
|---|---|
| “Do we have PMF?” | PMF scorecard (retention, engagement, satisfaction, growth) |
| “Prioritize our roadmap” | Prioritized backlog with scoring framework |
| “Evaluate our product portfolio” | Portfolio map with invest/maintain/kill recommendations |
| “Design our product org” | Org proposal with team topology and PM ratios |
| “Prep product for the board” | Product board section with metrics + roadmap + risks |
Reasoning Technique: First Principles
Decompose to fundamental user needs. Question every assumption about what customers want. Rebuild from validated evidence, not inherited roadmaps.
Communication
All output passes the Internal Quality Loop before reaching the founder (see agent-protocol/SKILL.md).
- Self-verify: source attribution, assumption audit, confidence scoring
- Peer-verify: cross-functional claims validated by the owning role
- Critic pre-screen: high-stakes decisions reviewed by Executive Mentor
- Output format: Bottom Line → What (with confidence) → Why → How to Act → Your Decision
- Results only. Every finding tagged: 🟢 verified, 🟡 medium, 🔴 assumed.
Context Integration
- Always read company-context.md before responding (if it exists)
- During board meetings: Use only your own analysis in Phase 2 (no cross-pollination)
- Invocation: You can request input from other roles:
[INVOKE:role|question]
PMF Playbook
How to find product-market fit, measure it, and not lose it. Steps, not theory.
What PMF Actually Is
PMF is when a product pulls users in rather than pushing them. Signals:
- Users find the product without you telling them about it
- They're upset when it doesn't work
- They bring their colleagues, their friends, their boss
- They build workarounds when a feature is missing
PMF is not:
- Users saying they like it
- A good NPS score with flat growth
- Enterprise customers who are locked in but churning at contract end
Step 1: Find Your Best Customers First
Before measuring PMF across everyone, find the segment where PMF is strongest.
How:
- Export a list of all churned users and all retained users (D90+)
- Identify 5-10 attributes to compare: company size, industry, job title, signup source, first action taken, time to first value
- Find the attributes that are over-represented in retained vs. churned
- That's your highest-PMF segment
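A minimal sketch of that retained-vs-churned comparison, assuming you can export both user lists with their attributes into pandas; the column names and values below are hypothetical.

```python
import pandas as pd

# Hypothetical export: one row per user, a retained_d90 flag, plus the
# attributes you want to compare across retained and churned users.
users = pd.DataFrame({
    "retained_d90":  [True, True, False, False, True, False],
    "company_size":  ["11-50", "11-50", "1-10", "200+", "11-50", "1-10"],
    "signup_source": ["referral", "organic", "paid", "paid", "organic", "paid"],
})

for attr in ["company_size", "signup_source"]:
    # Share of each attribute value within the retained and churned groups
    share = (users.groupby("retained_d90")[attr]
                  .value_counts(normalize=True)
                  .unstack(fill_value=0.0))
    # Over-representation among retained users; values well above 1 mark the segment
    lift = (share.loc[True] / share.loc[False].replace(0, 1e-9)).sort_values(ascending=False)
    print(f"\n{attr}: over-representation among retained users")
    print(lift.round(2))
```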
This is not an analytics project. Call 10 retained power users. Ask:
- "What were you doing before you found us?"
- "What would you use if we shut down tomorrow?"
- "Who else in your life has this problem?"
The segment where this conversation is easy and the answers are specific — that's where your PMF is.
Step 2: Measure the Three PMF Signals
Run all three. They measure different things. One signal without the others is misleading.
Signal 1: Retention Curves
Method:
- Cohort users by week or month of first use
- Calculate % still active at D1, D7, D14, D30, D60, D90
- Plot the curve for each cohort
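A minimal sketch of that calculation, assuming an event log with one row per user per active day; the data is hypothetical, and "retained at D-n" is approximated as "active on or after day n."

```python
import pandas as pd

# Hypothetical event log: one row per user per active day
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3],
    "date": pd.to_datetime([
        "2024-01-03", "2024-01-20", "2024-02-15",
        "2024-01-10", "2024-01-11",
        "2024-02-02", "2024-03-05", "2024-04-20",
    ]),
})

first_use = events.groupby("user_id")["date"].min().rename("first_date")
events = events.join(first_use, on="user_id")
events["cohort"] = events["first_date"].dt.to_period("M")       # cohort by month of first use
events["day_offset"] = (events["date"] - events["first_date"]).dt.days

cohort_sizes = events.groupby("cohort")["user_id"].nunique()
curve = {}
for day in [1, 7, 14, 30, 60, 90]:
    # Simple proxy: a user counts as retained at D{day} if active on or after that day
    active = (events[events["day_offset"] >= day]
              .groupby("cohort")["user_id"].nunique())
    curve[f"D{day}"] = (active / cohort_sizes).fillna(0.0)

print(pd.DataFrame(curve).round(2))  # one row per cohort, one column per horizon
```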
Interpretation:
| Curve Shape | What It Means |
|---|---|
| Drops to zero | No PMF. Product doesn't solve a recurring problem. |
| Drops and keeps dropping | Weak PMF. Some people find value, but not enough to keep coming back. |
| Drops then flattens above 0 | PMF signal. A core group finds ongoing value. |
| Flattens higher with each newer cohort | PMF improving. You're learning. |
Benchmarks:
| Segment | D30 Retention (PMF threshold) | D90 Retention (strong PMF) |
|---|---|---|
| Consumer | > 20% | > 10% |
| SMB SaaS | > 40% | > 25% |
| Enterprise SaaS | > 60% | > 45% |
| Marketplace (buyers) | > 30% | > 20% |
| PLG (free-to-paid) | > 25% free D30, > 50% paid D30 | > 15% free D90 |
If retention is below threshold:
- Don't run more acquisition. You'll just churn faster.
- Find the users who ARE retained. Understand why. Build for them.
Signal 2: Sean Ellis Test
Survey users with one question: "How would you feel if you could no longer use [Product]?"
Answers:
- Very disappointed
- Somewhat disappointed
- Not disappointed (it really isn't that useful)
- N/A — I no longer use [Product]
Scoring:
- Count only "very disappointed" responses
- Divide by total non-churned respondents
- PMF threshold: > 40% "very disappointed"
Sample size requirement: Minimum 40 responses. Under 40, the signal is noisy.
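A minimal sketch of the scoring, assuming you have the raw answers from non-churned respondents as strings.

```python
from collections import Counter

# Hypothetical survey export: one answer per non-churned respondent
responses = [
    "very disappointed", "somewhat disappointed", "very disappointed",
    "not disappointed", "somewhat disappointed", "very disappointed",
    # ... a real survey needs 40+ responses for a stable read
]

counts = Counter(r.strip().lower() for r in responses)
n = sum(counts.values())
score = counts["very disappointed"] / n if n else 0.0

print(f"Sean Ellis score: {score:.0%} 'very disappointed' (n={n})")
if n < 40:
    print("Sample too small — treat the score as directional only.")
print("PMF threshold met" if score > 0.40 else "Below the 40% PMF threshold")
```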
When to run it:
- When you have 100-500 active users
- Quarterly for ongoing tracking
- After major product changes
What to do with "somewhat disappointed": Don't lump them with "very disappointed." The delta between "somewhat" and "very" is where your retention problem lives. Interview people in the "somewhat" group. What's missing? Why only somewhat?
When score is 20-35%: You have a segment with PMF. Find them. Ask what they love. Run a separate survey for just that segment.
When score is < 20%: Your core value proposition isn't working. This is not a retention tactics problem. Revisit the fundamental problem you're solving.
Signal 3: Organic Growth and Referral
Metric: % of new signups that came from existing user referral, word of mouth, or organic search — without a paid incentive.
Threshold: > 20% of new users are coming organically without incentive programs.
How to measure:
- Tag signup source: paid, organic search, referral (with referral code), direct/dark social
- Track monthly. Is the organic % trending up or stable?
- Interview organic signups: "How did you hear about us?" (don't trust the dropdown)
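A minimal sketch of the monthly tracking, assuming each signup row carries a tagged source; the channel names are hypothetical.

```python
import pandas as pd

signups = pd.DataFrame({
    "month":  ["2024-01", "2024-01", "2024-01", "2024-02", "2024-02", "2024-02"],
    "source": ["paid", "organic_search", "referral", "paid", "referral", "dark_social"],
})

# Unpaid, unincentivized channels count as organic for this check
ORGANIC = {"organic_search", "referral", "dark_social"}
signups["organic"] = signups["source"].isin(ORGANIC)

monthly = signups.groupby("month")["organic"].mean()  # share of signups that were organic
print((monthly * 100).round(1).astype(str) + "%")
print("Healthy" if monthly.iloc[-1] > 0.20 else "Below the 20% organic threshold")
```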
Why this matters: Paid growth can mask the absence of PMF. You can buy users who churn. You can't buy users who tell their friends.
Step 3: Run PMF Experiments (Pre-PMF)
If you're below thresholds, don't optimize — experiment. The goal is to find the version of the product where at least a small segment has PMF.
The PMF Experiment Loop
1. Pick one customer segment + one hypothesis about their job to be done
2. Remove everything from the product that doesn't serve that job
3. Run a 4-week cohort with only that segment
4. Measure retention + Sean Ellis for that cohort
5. If PMF signal: this is your beachhead. Double down.
If no signal: new hypothesis. Repeat.
Time box: Each experiment 4-8 weeks. If you're running experiments for 18+ months with no signal, revisit the problem space, not just the solution.
What to Change
| Lever | Change | Expected Impact |
|---|---|---|
| Target segment | Narrow ICP from "all companies" to "Series A SaaS" | Faster learning, higher retention |
| Core job | Reframe from feature-benefit to outcome-benefit | Better product decisions |
| Onboarding | Remove steps to shorten time-to-value | D1 retention up |
| Pricing | Move from per-seat to per-outcome | Align incentives with value |
| Channel | Switch from outbound to PLG | Different segment discovers product |
Step 4: Validate PMF (Post-Signal, Pre-Scale)
Congratulations, you have a retention curve that flattens. Before you scale:
Validate that it's real:
- Can you acquire more of the same customers? (Test CAC at 2x current volume)
- Do the retained users expand? (Are they buying more seats, upgrading?)
- Is the NPS from retained users > 40?
- Are they forgiving of bugs and slowness? (Love, not tolerance)
Validate the unit economics:
- LTV / CAC > 3x (for SaaS)
- Payback period < 18 months
- Gross margin > 60% (SaaS), > 40% (marketplace)
The danger zone: Convincing yourself you have PMF before economics are viable. High retention with terrible unit economics is not a business — it's a hobby that grows.
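A minimal sketch of those three checks, with made-up inputs; adjust the margin threshold to 40% for marketplaces.

```python
def unit_economics(ltv, cac, monthly_gross_profit_per_customer, gross_margin):
    """Return the viability checks described above."""
    payback_months = cac / monthly_gross_profit_per_customer
    return {
        "ltv_cac_ratio_ok": ltv / cac > 3.0,      # SaaS rule of thumb
        "payback_ok": payback_months < 18,         # months to recover CAC
        "margin_ok": gross_margin > 0.60,          # use 0.40 for marketplaces
        "payback_months": round(payback_months, 1),
    }

# Hypothetical example: $9k LTV, $2.5k CAC, $250/month gross profit, 72% margin
print(unit_economics(ltv=9_000, cac=2_500,
                     monthly_gross_profit_per_customer=250, gross_margin=0.72))
```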
PMF by Business Model
B2B SaaS
Primary signal: D90 retention > 45% in target segment.
Secondary signals:
- NPS from retained users > 50
- Expansion revenue from retained accounts (NRR > 110%)
- Sales cycle shortening as word-of-mouth increases
PMF finding strategy:
- Start with one vertical, not the whole market
- Get 3-5 reference customers who use it daily and refer others
- Don't expand segment until you can replicate the reference case
Common false signals:
- Retained users who are locked in by contract, not value
- Expansion revenue from upselling, not from organic growth
- High satisfaction survey scores with flat usage data
B2C / Consumer
Primary signal: D30 retention > 20%, with a flat or rising tail at D90.
Secondary signals:
- DAU/MAU ratio > 20% (daily habit product: > 40%)
- Session depth (users exploring multiple features, not one-and-done)
- Organic referral rate > 20% of new installs
PMF finding strategy:
- Consumer PMF is about habit formation — which behavior do you own in a user's day?
- Find the "aha moment" (the action that predicts retention). Build everything to get users there faster.
- Segment ruthlessly — consumer PMF is often strong in one demographic, weak in others.
Common false signals:
- High D1 retention from email campaigns that re-engage dormant users
- Good NPS from vocal users who are power users, not typical users
- Media buzz driving installs from wrong audience
Marketplace
Primary signal: Successful transaction rate and repeat buyer rate.
Secondary signals:
- Supply-side retention (sellers/providers coming back)
- Liquidity score: % of demand requests matched within acceptable time
- Referral: both sides sending others
PMF challenge: You have two customers (supply and demand). PMF can exist on one side and not the other.
PMF finding strategy:
- Start with constrained geography or category — don't try to be national before local works
- Measure GMV per cohort, not just transaction count
- Find the "magic moment" for both buyer and seller. Optimize for both.
PLG (Product-Led Growth)
Primary signal: Free-to-paid conversion rate + paid retention.
Secondary signals:
- Time to activation (reaching the "aha moment" in free tier)
- PQL (product-qualified lead) conversion to paid
- Team invites from individual users (virality coefficient)
PMF finding strategy:
- The free tier must have genuine value — not a crippled trial
- Track activation milestone (the action that predicts conversion)
- Optimize activation before conversion — conversion optimizations don't work if nobody activates
After PMF: The Scaling Trap
Most companies that fail after PMF weren't ready to scale. They scaled the wrong thing.
The Scaling Trap
You have PMF with segment A. You hire sales and start selling to segment B. Segment B doesn't retain. NPS drops. Engineers chase segment B feature requests. Segment A users feel abandoned.
This is the most common way early-stage companies die after PMF.
What to Do After PMF
First 90 days after confirming PMF:
- Document your best customer profile in extreme detail
- Build the playbook to replicate the reference customer, not to expand the ICP
- Hire sales to replicate, not to expand
- Instrument everything — you need to know what's driving retention for every new cohort
- Don't launch new features. Remove friction from the path that's already working.
The expansion question: Only expand ICP when:
- You can replicate the reference customer at 3x volume with same retention
- CAC is declining (word of mouth in the reference segment)
- You've exhausted density in the reference segment
Don't expand ICP to save the business. Expanding ICP when retention is declining is panic, not strategy.
How to Know When PMF Is Slipping
PMF is not a binary state. It can degrade. Watch for:
| Signal | What's Happening | Response |
|---|---|---|
| D30 retention declining across cohorts | Product changes or market change are eroding value | Run Sean Ellis test immediately. Interview churned users. |
| Sean Ellis score dropping | Users less passionate about the product | Feature gap opening. Competitive pressure. |
| NPS dropping for retained users | Power users seeing degraded experience | Product quality or performance issues. |
| Organic referral rate declining | Satisfied users less enthusiastic | Product becoming commoditized. Moat eroding. |
| Support tickets shifting from feature requests to bug reports | Technical debt catching up | Engineering quality investment needed. |
| Sales cycles lengthening | ICP no longer self-evident. Positioning drift. | Re-run positioning exercise. Sharpen ICP. |
The PMF quarterly check: Run Sean Ellis test every quarter. Track D30 retention by cohort every month. Put both on the CPO dashboard. These are your vital signs.
Quick Reference
| Test | Threshold | Frequency |
|---|---|---|
| Sean Ellis | > 40% very disappointed | Quarterly |
| D30 retention (B2B SaaS) | > 40% | Monthly (by cohort) |
| D30 retention (consumer) | > 20% | Monthly (by cohort) |
| D90 retention (B2B SaaS) | > 45% | Monthly (by cohort) |
| Organic signup % | > 20% | Monthly |
| NPS (retained users) | > 40 | Quarterly |
| DAU/MAU (if daily product) | > 20% | Weekly |
Use scripts/pmf_scorer.py to run all dimensions together with weighted scoring.
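If your metrics live in a warehouse export, you can feed them to the scorer as JSON; the sketch below writes a file whose fields mirror the sample_data() structure documented in scripts/pmf_scorer.py, with placeholder values.

```python
import json

# Hypothetical metrics pulled from your analytics warehouse; missing fields
# are optional and simply score 0 for that sub-metric.
payload = {
    "product_name": "Acme SaaS",
    "business_model": "b2b_saas",
    "retention": {"d30_cohorts": [0.41, 0.39, 0.37], "d90_cohorts": [0.27, 0.25], "curve_flattening": True},
    "engagement": {"dau_mau_ratio": 0.22, "key_action_rate": 0.48},
    "satisfaction": {"sean_ellis_very_disappointed": 0.36, "sean_ellis_sample_size": 72,
                     "nps_score": 31, "nps_sample_size": 180},
    "growth": {"organic_signup_pct": 0.24, "referral_rate": 0.15},
}

with open("data.json", "w") as f:
    json.dump(payload, f, indent=2)

# Then: python scripts/pmf_scorer.py --input data.json
```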
Product Org Design Reference
How to structure, hire, and run product organizations at different stages. No generic advice — stage-specific, role-specific, and honest about what breaks.
1. Team Topologies for Product Orgs
Matthew Skelton and Manuel Pais defined four team types. Here's how they map to product organizations.
Four Team Types
Stream-Aligned Teams
Own a continuous flow of customer-facing work. They take problems all the way from discovery to delivery to measurement.
Product org equivalent: Feature teams, growth teams, customer journey teams.
Characteristics:
- Long-lived (not project teams)
- Full-stack: PM + Designer + 3-7 Engineers + QA
- Can deploy independently without asking another team
- Own their backlog, their metrics, their outcomes
Health signals:
- Ships without waiting on other teams at least 80% of the time
- Can define their own north star and trace it to company metric
- PMs spend > 50% of time in discovery, not coordination
Warning signs:
- Every sprint has "dependencies" blocking progress
- Team has PMs but engineers don't know the customer problems
- Roadmap is handed to them, not co-created
Platform Teams
Build and maintain shared capabilities so stream-aligned teams don't reinvent them.
Product org equivalent: Platform product team, internal tools, shared infrastructure.
Characteristics:
- Serve internal customers (other teams), not end users directly
- Measure success by stream-aligned team velocity, not feature count
- Self-service is the goal — stream teams should be unblocked without filing tickets
Health signals:
- Stream-aligned teams can do 80% of their work without filing a ticket to platform
- Platform has a public API and documentation, not just engineers who know how it works
- Platform team metrics include "number of teams using X without assistance"
Warning signs:
- Platform team has a 6-week SLA for new features
- Stream teams fork the platform to avoid waiting
- Platform team's backlog is driven by platform's own ideas, not stream team pain
The platform product manager role: Platform PMs are not feature PMs. They manage internal customers. Key skills:
- Developer experience empathy (they're building for engineers)
- API and infrastructure intuition (you can't PM what you don't understand)
- Saying "no" gracefully when requests are misuses of the platform
Enabling Teams
Temporarily help other teams upskill in a domain. Not permanent.
Product org equivalent: UX research team, data literacy evangelism, accessibility experts.
Duration: Time-boxed. 3-6 months. Then they leave and the skill stays.
Failure mode: Enabling teams that never leave become coordination bottlenecks.
Complicated Subsystem Teams
Deep expertise required. Minimal interaction.
Product org equivalent: ML/AI product team, compliance product, payments, internationalization engine.
Characteristics:
- Specialists who can't be split across stream-aligned teams
- Interact via well-defined interface, not collaboration
- Have their own PM who understands the domain deeply
2. Org Models at Each Stage
Pre-Seed / Seed (1-20 engineers)
Structure: Founder/CEO or founder/CTO is the PM. Maybe one hired PM at 15+ engineers.
Don't build: Process, specialization, hierarchy.
Do build: Direct customer access, fast iteration loops, written learning from every experiment.
PM role at this stage:
- Not shipping features. Talking to customers.
- Not writing specs. Running experiments.
- Not managing engineers. Being managed alongside them.
Hiring mistake: Hiring a "process PM" who builds Jira templates before you have PMF.
Series A (20-60 engineers)
Structure: 2-4 PMs, organized by product area or customer journey.
CPO / Head of Product
├── PM — Core Product (the thing customers pay for)
├── PM — Growth / Acquisition (how more customers get there)
└── PM — Platform (as soon as engineering says they need it)
What you add: One embedded designer. Analytics shared.
First PM hire criteria:
- Has shipped something users use, not just wrote a spec
- Comfortable with ambiguity and no process
- Will talk to customers without being asked
- Understands the technical constraints intuitively
What breaks at Series A:
- Verbal communication stops working. First thing to document: the roadmap, the north star, who decided what.
- Engineers start asking "why are we building this?" — good. Answer it.
- Customer requests multiply faster than capacity. You need a prioritization framework.
Series B (60-150 engineers)
Structure: 4-8 PMs, head of product, first design hire, embedded or dedicated analytics.
CPO
├── Head of Product
│ ├── PM — [Team 1] (stream-aligned)
│ ├── PM — [Team 2] (stream-aligned)
│ ├── PM — [Team 3] (stream-aligned)
│ └── PM — Platform (if engineering > 40)
├── Head of Design (or Senior Designer × 2-3)
└── Analytics (shared, or 1 embedded per team)
What you add at Series B:
- Head of Product (frees CPO from backlog, runs PM team)
- First Head of Design hire (if not already)
- Dedicated growth team (PLG or acquisition)
What breaks at Series B:
- PMs start optimizing their own team's metrics instead of company metrics
- Design and engineering don't talk until sprint planning
- Data team is a ticket queue — PMs can't self-serve
Fix: OKR alignment across teams. Design in discovery, not in handoff. Analytics tool self-serve access for every PM.
Series C (150-400 engineers)
Structure: 8-15 PMs, multiple PM leads / directors, specialized functions.
CPO
├── VP / Director of Product
│ ├── PM Lead — [Product Line 1]
│ │ ├── PM
│ │ └── PM
│ ├── PM Lead — [Product Line 2]
│ │ ├── PM
│ │ └── PM
│ └── PM Lead — Platform
├── Head of Design
│ ├── UX Design
│ ├── Product Design
│ └── UX Research
├── Head of Data / Analytics
│ ├── Product Analytics
│ └── Data Science
└── Head of Product Operations
What you add at Series C:
- PM leads / directors (PMs managing PMs)
- Dedicated UX research
- Head of Product Operations (roadmap tooling, PM hiring, analytics standards, product community)
- Possible Chief of Staff (Product)
What breaks at Series C:
- Coordination overhead becomes the primary job
- PMs become project managers managing handoffs instead of product decisions
- Consistency across teams: 5 different ways to write a spec, 5 different analytics setups
- CPO loses touch with customers
Fix: Product principles (written, opinionated, used in reviews). Embedded researchers. Regular CPO customer calls (monthly minimum). Product ops to solve consistency without bureaucracy.
3. PM:Engineer Ratios
By Stage
| Stage | Engineers | PMs | Ratio | Notes |
|---|---|---|---|---|
| Seed | 5 | 0-1 | 1:5 | Founder PM common |
| Series A | 20-40 | 2-4 | 1:8 | First real PMs |
| Series B | 60-100 | 5-8 | 1:10 | Platform PM emerges |
| Series C | 150-250 | 12-18 | 1:12 | PM leads required |
| Growth | 300+ | 20+ | 1:12-15 | Specialization high |
By Team Type
| Team Type | Ratio | Rationale |
|---|---|---|
| Stream-aligned (feature) | 1:6-8 | High discovery work, many stakeholders |
| Growth / PLG | 1:8-10 | High experimentation, more autonomy per engineer |
| Platform | 1:10-15 | Lower ambiguity, more self-directed engineers |
| Complicated subsystem (ML, payments) | 1:12-20 | Technical direction from engineers, PM is translator |
The ratio trap: These are guidelines, not targets. A great PM in a bad org with 12 engineers accomplishes less than a great PM with 8 in a healthy org. Fix the org before optimizing the ratio.
4. When to Hire Key Roles
Head of Design
Not yet signal:
- Fewer than 2 full-time designers
- Product is primarily technical (API-first, developer tool with no GUI)
- Design is consistently described as "not a blocker"
Hire now signal:
- Design has become a coordination problem (who reviews what? which system? what's the standard?)
- You have 3+ designers and they're inconsistent
- CPO is spending significant time on design decisions
- Customers cite UX as a blocker to adoption
What this person does:
- Builds and maintains the design system
- Runs UX research as a function, not one-off projects
- Hires and grows the design team
- Keeps designers from becoming pixel-pushers and keeps them in discovery
Wrong hire: A senior IC who can't build process and isn't excited about it.
Head of Data / Analytics
Not yet signal:
- < 5 PMs, data team shared with engineering
- You don't have product analytics instrumentation yet (worry about that first)
- Product metrics are reviewed monthly and nobody acts on them
Hire now signal:
- PMs are filing tickets for basic metric questions (sign that data team is a bottleneck)
- Multiple products with different tracking setups — no common definitions
- You want to run experiments but don't have infrastructure
- Leadership is making product decisions without data (not from choice — from access)
What this person does:
- Defines the event taxonomy and enforces it
- Builds self-serve analytics capability for PMs
- Runs A/B testing infrastructure
- Partners with PMs on experiment design (before launch, not after)
Wrong hire: A pure data scientist who can't build product analytics infrastructure and doesn't want to.
Head of Product Operations
Hire when you have:
- 8+ PMs with inconsistent processes
- CPO spending > 30% of time on internal coordination
- No standard for roadmap tools, prioritization, or PM onboarding
- Product team can't answer "what are all teams working on this quarter?" without a 2-hour meeting
What this person does:
- PM onboarding and development program
- Roadmap and tooling standards (Jira, Linear, Notion — pick one and enforce it)
- Data pipelines from product to leadership (weekly metrics, OKR tracking)
- PM hiring and interview process
- Voice of product org in cross-functional coordination
What this person does NOT do:
- Drive product strategy (that's the CPO)
- Manage PMs (that's the Head of Product or PM leads)
- Own analytics (that's Head of Data)
5. The Product Trio
Every product team should have three roles working together from day one of discovery:
Product Manager → What to build and why
Product Designer → How users experience it
Tech Lead / Engineer → How to build it sustainably
How the Trio Actually Works
Discovery (weeks 1-2 of any new initiative):
- All three in user interviews together
- All three reviewing competitive products
- All three in problem framing sessions
- Output: Opportunity, not solution
Ideation (days):
- All three generating solutions
- Designer prototypes 2-3 options
- Engineer provides feasibility gut check on each
- PM synthesizes against strategy
- Output: Prototype for testing
Testing (days):
- Designer and PM run tests (engineer optional but encouraged)
- Tests with 5-8 real customers
- All three review findings together
- Output: Decision: build, iterate, or kill
Delivery (sprints):
- PM writes acceptance criteria (what done looks like from user perspective)
- Engineer owns implementation
- Designer owns QA for experience quality
- All three do final review before release
Trio Anti-Patterns
| Anti-Pattern | What It Looks Like | Why It Fails |
|---|---|---|
| PM → Designer → Engineer | Waterfall disguised as agile | Late discovery of infeasibility and poor UX |
| Engineer-led | Engineers propose solutions, PM and designer polish | Builds technically correct thing nobody wants |
| PM-led dictation | PM writes detailed spec, team executes | Team has no context, can't make good trade-offs |
| Designer detached | Designers design in isolation, present to engineers | Beautiful mockup that's 8x harder to build than alternative |
| No research | Trio invents problems and solutions in a conference room | Building for themselves |
6. Remote vs. Co-located Product Teams
The debate is mostly settled. Here's what actually matters:
What Changes with Remote
| Activity | Co-located | Remote | Fix |
|---|---|---|---|
| Discovery sync | Organic, hallway | Requires scheduling | Daily async standups + weekly sync |
| Whiteboarding | Easy | Friction | Figma, Miro — async-first artifacts |
| Design review | Walk over | Calendar invite | Record reviews; written decisions |
| Relationship building | Osmotic | Deliberate | Regular 1:1s, team rituals, offsites |
| Onboarding | Shadow in person | Document-heavy | Written playbooks + buddy system |
| Difficult conversations | Easier in person | Harder | Default to video, not Slack |
The Async-First Product Team
Works well remote IF:
- Decisions are written (Notion, Confluence, not Slack threads)
- Roadmaps are accessible to everyone without a meeting
- Product reviews are recorded and linked
- Discovery artifacts are shared before the meeting, discussed in the meeting
- 1:1s are weekly and actual (not "let's skip this week")
What doesn't survive async:
- Ambiguous ownership
- Verbal agreements (write it down or it didn't happen)
- Teams where "PM wrote the spec" is the only documentation
Remote Product Org Practices
Weekly Cadence:
Monday: Async kickoff — each team posts week's focus + blockers
Tuesday: Product trio sync (30 min, per team)
Wednesday: CPO / Head of Product 1:1s
Thursday: Cross-team PM sync (30 min, rotating topics)
Friday: Async retrospective notes + week summary
Monthly:
- Full product org sync (all PMs, designers, heads)
- CPO product review (each team presents one initiative)
- Metrics review (company + team level)
Quarterly:
- In-person or virtual offsite
- Strategy and OKR setting
- Individual growth conversations
Quick Reference
| Stage | Structure | First Hire Priority |
|---|---|---|
| Seed | Founder PM | Generalist PM with customer instincts |
| Series A | 2-3 PMs, flat | First real PM, owns a product area |
| Series B | Head of Product, 4-8 PMs | Head of Design |
| Series C | Org layers, PM leads | Head of Data + Product Ops |
| Growth | Full specialization | Chief of Staff (Product) |
PM:Engineer ratio target by stage: Seed 1:5 → Series A 1:8 → Series B 1:10 → Series C 1:12 → Growth 1:15
Three things that fix most product org problems:
- Stream-aligned teams with full-stack ownership (PM + Design + Eng)
- OKRs that cascade from company to team to individual
- Product trio in discovery, not just delivery
Product Strategy Reference
Frameworks for product vision, competitive positioning, portfolio management, and board reporting. No theory — only what CPOs actually use.
1. Vision Frameworks
Jobs to Be Done (JTBD)
JTBD is not a feature framework. It's a way to understand why customers hire your product and under what circumstances.
The core insight: People don't want your product. They want to make progress in their lives, and they hire your product to help. When you understand the job, you understand competition differently.
Conducting JTBD Interviews
Who to interview: Recent buyers and recent churners. Not power users — they're already converted.
The interview script (condensed):
1. "Walk me through the last time you [started using / stopped using] this product."
2. "What were you doing the day before you decided?"
3. "What else did you consider?"
4. "What almost stopped you from doing it?"
5. "Now that you're using it, what does your day look like differently?"What you're extracting:
- Functional job: What task are they accomplishing?
- Emotional job: How do they feel during and after?
- Social job: How are they perceived?
- Timeline: What triggered the switch? (the "push" from old solution + "pull" toward new one)
- Anxieties: What almost prevented adoption?
- Competing solutions: What are they comparing you to, including "do nothing"?
JTBD Output: The Job Story
A format better suited than the classic "user story" for strategic decisions:
When [situation],
I want to [motivation/job],
So I can [expected outcome].
Example (healthcare scheduling):
When I'm trying to coordinate my parent's care from another city,
I want to see their upcoming appointments and have someone confirm changes,
So I can feel confident they won't miss critical treatments.
This is a different product than "schedule management software." The strategic implications — care coordination, family access, confirmation workflows — flow from the job.
JTBD → Product Strategy
| Job Insight | Strategic Implication |
|---|---|
| Job is episodic (quarterly) | Engagement model must reach them before they need it |
| Job is habitual (daily) | DAU/MAU matters; build for habit formation |
| Job has high stakes | Trust and reliability > features; invest in onboarding + support |
| Job is social | Network effects possible; virality is structural, not a campaign |
| Job is delegated (done for someone else) | Two users: the buyer and the beneficiary. Design for both. |
Category Design
If you're fighting for share in an existing category, you're playing defense on someone else's field.
Category design premise: Companies that define the category typically capture 76% of the market cap of that category. Name the category, own it.
The Category Design Process
Step 1: Name the problem, not the solution.
Wrong: "We make AI-powered customer support software."
Right: "The support team doesn't need more tickets. They need fewer problems."Step 2: Define the enemy. The enemy is the old way of solving the problem, not a competitor.
- Salesforce's enemy: spreadsheets and disconnected tools (not Siebel)
- Slack's enemy: email overload (not HipChat)
- Your enemy: ___________
Step 3: Create the category name. It should be obvious in hindsight, not predictable in advance. Test it:
- Does it describe the problem, not the solution?
- Is it 2-3 words?
- Could a journalist use it without quoting you?
Step 4: Missionary selling, not mercenary selling. Category kings educate the market before they sell to it. Content, thought leadership, community, and free tools all matter here — not as marketing tactics but as category creation.
Step 5: Be the reference customer. Get the logos that define the category. The companies others look to. When others adopt, they don't want "a tool" — they want "what [Reference Customer] uses."
2. Competitive Moats
A moat is a structural advantage that compounds over time. Features are not moats. Pricing is not a moat. A moat is why, even if a competitor perfectly copies your product today, you still win.
Moat Type 1: Network Effects
The product becomes more valuable as more users join. Three subtypes:
Direct network effects: Each user makes the product better for all other users (WhatsApp, Slack).
Indirect network effects: Each user on one side makes the product better for the other side (Uber drivers + riders, App Store developers + users).
Data network effects: More users → more data → better product → more users.
Network Effect Diagnostic
Question 1: Does adding user N make the product better for user N-1?
No → You don't have direct network effects
Yes → Map exactly how and how much
Question 2: Does adding user N make the product better for users on the OTHER side?
No → You don't have indirect network effects
Yes → Identify which side is the constraint (supply or demand)
Question 3: Does using the product generate data that improves the product?
No → You don't have data network effects
Yes → What is the data flywheel? Where does it compound?
Building network effects intentionally:
- Most products accidentally have weak network effects
- Design for network effects from Day 1: sharing, notifications, collaboration, integrations
- Measure network effect strength: "What % of new users were referred by existing users?"
Moat Type 2: Switching Costs
The cost — time, money, risk — of leaving your product. The highest switching costs are:
| Switching Cost Type | Example | CPO Action |
|---|---|---|
| Data lock-in | Years of history, reports, trained models | Make data the experience, not just the storage |
| Workflow integration | 23 integrations, custom automations | Every integration is a switching cost. Build them. |
| Team adoption | Entire team trained on your tool | Multi-seat training investments pay switching cost dividends |
| Contractual | Annual contracts, SLAs | Long contracts are not a moat — customers resent them |
| Process embedding | Your product IS their process | Aim here. This is the deepest moat. |
Warning: Switching costs from data lock-in without value lock-in breed resentment, not loyalty. Customers who stay because they're trapped will leave the moment a migration tool appears.
Moat Type 3: Data Advantages
Having data others can't easily get. Three subtypes:
Proprietary data: Data only you have access to (exclusive partnerships, sensor networks, unique user behavior at scale).
Data scale: Same type of data but at 10x the volume of competitors. Scale compounds model accuracy.
Data variety: Unique combination of data types. Not just usage data — usage + outcome data + external context.
Testing your data moat:
1. What data do we have that competitors don't?
2. At what volume does our data create a meaningfully better product?
3. Are we at that volume? If not, when?
4. Could a competitor buy or partner their way to equivalent data?
5. Is our data improving the product automatically, or only when we analyze it manually?
Moat Type 4: Economies of Scale
Unit economics improve as you scale. Infrastructure costs drop per unit. Brand recognition lowers CAC. Negotiating power increases.
This is a real moat but the weakest one for product strategy — it doesn't keep faster-moving competitors from attacking while you're small.
Moat Scorecard
Score each moat type 0-3 for your current product:
0 = Not present
1 = Weak / easily replicated
2 = Meaningful / takes 12-18 months to replicate
3 = Strong / structural advantage
Network effects (direct): __/3
Network effects (indirect): __/3
Network effects (data): __/3
Switching costs (data): __/3
Switching costs (workflow): __/3
Switching costs (team): __/3
Data advantages (exclusive): __/3
Data advantages (scale): __/3
Economies of scale: __/3
Total: __/27
< 9: No meaningful moat. Compete on execution speed.
9-15: Early moat. Identify and reinforce 1-2 strongest types.
16-21: Real moat. Invest to compound it.
> 21: Strong moat. Defend and expand.
3. Product Positioning
Positioning is not messaging. Positioning is the choice of: Who is this for, what does it replace, and on what dimension do we win?
The Positioning Canvas (after April Dunford)
1. Competitive Alternatives
What would customers do if your product didn't exist?
(This is your real competition, not just your vendor category)
2. Unique Attributes
What capabilities do you have that alternatives lack?
(Features, but described neutrally, not as marketing)
3. Value (Outcomes)
What does each unique attribute enable for customers?
(Bridge from feature → outcome, not feature → feature)
4. Customer Who Cares
Who values those outcomes enough to pay for them?
(The customer segment for whom this value is highest)
5. Market Category
Where does the customer put you when comparing options?
(Frame the category to win, not to be fair)
6. Relevant Trends
What's changing in the world that makes this more valuable now?
(Why this moment? Urgency enabler.)
Positioning Against Three Competitors
Positioning vs. direct competitor: Identify one dimension where you structurally win. "Better" is not a position.
- Win on depth: more powerful in one scenario
- Win on simplicity: fewer decisions, fewer steps
- Win on integration: works with what they already use
- Win on price/value: same outcome, lower cost or risk
Positioning vs. indirect alternative: The customer's current solution (spreadsheet, manual process, point solution).
- Make switching cost obvious (what are they giving up per week?)
- Make the switch simple (migration, onboarding, no data loss)
- Find the "aha moment" fast (value before they revert)
Positioning vs. doing nothing: The hardest competitor. Status quo has zero switching cost.
- Quantify the cost of inaction (time, risk, revenue, competitive risk)
- Find the trigger event that makes inaction intolerable
- Show the risk is higher than the switch cost
Positioning Failure Modes
| Failure | Description | Fix |
|---|---|---|
| For everyone | No segment. "Any company that needs X." | Name the best-fit customer. |
| Feature positioning | "The only tool with [feature X]" | Features are table stakes. Lead with outcome. |
| Vague differentiation | "Easier, faster, better" | Measurable, specific, or don't say it. |
| Category misfit | In a category where you can't win | Either own the category or name a new one |
| Lagging positioning | Positioned for who you were, not who you are | Reposition every 18-24 months or after major product change |
4. Portfolio Management
Applying BCG Matrix to Product Lines
BCG matrix was designed for business units. Applied to product lines:
Inputs:
- Market growth rate (industry growth, not your growth)
- Relative market share (your share vs. largest competitor)
- Revenue contribution (absolute)
- Investment level (engineering + sales + marketing per product)
Calculation:
Market share ratio = Your market share / Largest competitor's market share
Growth rate = Market CAGR (next 3 years estimate)
Stars: share ratio > 1.0, growth > 10%
Cash Cows: share ratio > 1.0, growth < 10%
Question Marks: share ratio < 1.0, growth > 10%
Dogs: share ratio < 1.0, growth < 10%
Portfolio Allocation Rules
Star products:
- Invest at or above market growth rate
- Goal: maintain share leadership as market grows
- Don't extract cash — reinvest
- Metrics: market share trend, NPS, retention, feature velocity
Cash Cow products:
- Minimum investment to maintain market position
- Goal: maximize free cash flow
- Resist the urge to innovate — incremental improvements only
- Metrics: gross margin, churn rate, support cost per customer
Question Mark products:
- Binary decision: invest to win or exit
- "Maintain" is not a strategy for question marks — you lose share every quarter you're neutral
- Set a deadline (2 quarters) and a threshold for investment decision
- Metrics: share gain rate, customer acquisition efficiency
Dog products:
- Decision: sell, sunset, or bundle
- Never "fix" a dog with more investment
- Timeline to sunset: 6-12 months, migration plan for existing customers
- Metrics: customer migration rate, revenue retained
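A minimal sketch tying the quadrant thresholds above to the default postures in this section; it is illustrative only (scripts/portfolio_analyzer.py does the full version), and the product data is made up.

```python
def bcg_quadrant(share_ratio: float, market_growth: float) -> str:
    """share_ratio = your share / largest competitor's share; market_growth = market CAGR."""
    if share_ratio > 1.0:
        return "Star" if market_growth > 0.10 else "Cash Cow"
    return "Question Mark" if market_growth > 0.10 else "Dog"

DEFAULT_POSTURE = {
    "Star": "Invest",
    "Cash Cow": "Maintain",
    "Question Mark": "Decide: invest to win or exit (deadline: 2 quarters)",
    "Dog": "Kill: sell, sunset, or bundle",
}

portfolio = [  # hypothetical products
    {"name": "Core Platform", "share_ratio": 1.4, "market_growth": 0.18},
    {"name": "Legacy Reporting", "share_ratio": 0.6, "market_growth": 0.02},
    {"name": "New Analytics", "share_ratio": 0.4, "market_growth": 0.25},
]

for p in portfolio:
    q = bcg_quadrant(p["share_ratio"], p["market_growth"])
    print(f"{p['name']:<18} {q:<14} → {DEFAULT_POSTURE[q]}")
```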
Portfolio Review Template
Run quarterly. One slide per product.
Product: [Name]
Current Quadrant: [Star/Cash Cow/Question Mark/Dog]
Revenue this quarter: $___
Revenue growth QoQ: ___%
Market share estimate: ___%
Investment level (% of eng capacity): ___%
Investment posture: [Invest / Maintain / Kill]
Key metric: [Name] → [Current value] → [QoQ trend]
Top risk: [One thing that could change this assessment]
Decision required: [Yes/No] | [What decision?]
The Honest Portfolio Conversation
Questions CPOs avoid but boards ask:
- "Which product would we kill if we had to? What's stopping us?"
- "Are we funding dogs because the team is attached or because there's a real plan?"
- "What would our margins look like if we stopped investing in the bottom 2 products?"
- "What's the dependency between our products? Are we a platform or a bundle of unrelated tools?"
5. Board-Level Product Reporting
What Good Looks Like
Board product updates fail in three ways:
- Too much roadmap detail (feature list masquerading as strategy)
- No trend context (showing a number without showing if it's getting better or worse)
- No risks (all good news = no credibility)
The 5-Slide Board Product Update
Slide 1: North Star Metric
Title: Product Health — [Quarter]
[Chart: North star metric over last 12 months, quarterly cohorts]
This quarter: [Value] | Prior quarter: [Value] | YoY: [Value]
Target: [Value] | Status: On track / At risk / Behind
Drivers (2-3 bullets):
• What's driving improvement: ___
• What's dragging: ___
• What we're doing about the drag: ___Slide 2: Retention and PMF
Title: Product-Market Fit Evidence
[Chart: D30 retention by cohort, last 6 cohorts]
[Callout: Sean Ellis score = XX% (target: > 40%)]
PMF status: Achieved / Approaching / Not yet
Best segment: [Describe — where retention is strongest]
Weakest segment: [Describe — and what we're doing about it]
Slide 3: Portfolio Status
Title: Portfolio — Invest / Maintain / Kill
| Product | Quadrant | Revenue | Growth | Posture | Risk |
|---------|---------|---------|--------|---------|------|
| [A] | Star | $___ | +XX% | Invest | ___ |
| [B] | Cash Cow| $___ | +X% | Maintain| ___ |
| [C] | Dog | $___ | -X% | Kill Q3 | ___ |
Changes since last quarter: ___
Decisions needed from board: ___
Slide 4: Strategic Bets
Title: Bets This Half — [H1/H2]
Bet 1: [Name]
Hypothesis: If we [do X], [segment Y] will [do Z]
Evidence so far: [Data]
Confidence: [Low / Medium / High]
Decision point: [When do we know?] [What will we measure?]
Bet 2: [Name]
[Same structure]
Slide 5: Top Risks
Title: Product Risks — [Quarter]
Risk 1: [Name]
What it is: ___
Probability: [Low/Med/High]
Impact if realized: ___
Mitigation: ___
Risk 2: [Name]
[Same structure]
Risk 3: [Name]
[Same structure]
Delivering in the Board Meeting
- Never read the slide
- Lead with the conclusion, not the data
- Prepare for "what if that assumption is wrong?" for every bet
- When something underperformed: say it, own it, explain what changed
- Never present a number you can't explain 3 levels deep
Example of bad delivery: "Our north star is up 15% QoQ, which is great. We're tracking well."
Example of good delivery: "North star is up 15% — ahead of plan. The majority of that is from the enterprise cohort activated in October, driven by the workflow automation feature we shipped in September. The consumer segment is flat, which is a concern. We're running three experiments this quarter to diagnose whether that's an acquisition problem or an activation problem — I'll have an answer for next quarter."
Quick Reference: Framework Summary
| Need | Framework |
|---|---|
| Why do customers use us? | Jobs to Be Done |
| How do we define our market? | Category Design |
| What's our structural advantage? | Moat Scorecard |
| How do we position? | April Dunford Positioning Canvas |
| Which products to fund? | BCG Matrix + Invest/Maintain/Kill |
| How to report to the board? | 5-Slide Board Update |
#!/usr/bin/env python3
"""
PMF Scorer — Multi-dimensional Product-Market Fit analysis.
Scores PMF across four dimensions:
- Retention (40%): D30 and D90 cohort retention
- Engagement (25%): DAU/MAU, session depth, key action rate
- Satisfaction (20%): Sean Ellis score, NPS
- Growth (15%): Organic signup rate, referral rate
Usage:
python pmf_scorer.py # Run with built-in sample data
python pmf_scorer.py --input data.json # Run with your data
JSON input format: see sample_data() function below.
"""
import json
import sys
import argparse
import math
from typing import Optional
# ---------------------------------------------------------------------------
# Data structures
# ---------------------------------------------------------------------------
def sample_data() -> dict:
"""
Sample input data. Replace with your own values.
All fields are optional — missing fields score 0 for that sub-metric
and a note is added to recommendations.
"""
return {
"product_name": "Acme SaaS",
"business_model": "b2b_saas", # b2b_saas | consumer | marketplace | plg
# Retention: D30 and D90 as decimals (e.g. 0.42 = 42%)
# Provide multiple cohorts if available. Most recent first.
"retention": {
"d30_cohorts": [0.38, 0.41, 0.44, 0.43], # newest → oldest
"d90_cohorts": [0.28, 0.30, 0.31],
"curve_flattening": True, # Does the curve flatten (vs. continuing to drop)?
},
# Engagement
"engagement": {
"dau_mau_ratio": 0.24, # Daily active / Monthly active (decimal)
"avg_sessions_per_week": 3.2, # Per active user
"key_action_rate": 0.55, # % of users who performed core value action in last 30d
"session_depth_score": 0.6, # 0-1: 0 = one page, 1 = full feature exploration
},
# Satisfaction
"satisfaction": {
"sean_ellis_very_disappointed": 0.38, # Fraction (e.g. 0.38 = 38%)
"sean_ellis_sample_size": 87, # Raw response count
"nps_score": 34, # -100 to 100
"nps_sample_size": 210,
},
# Growth
"growth": {
"organic_signup_pct": 0.27, # % of new signups from organic/referral/WOM
"referral_rate": 0.18, # % of active users who referred someone last 90d
"mom_growth_rate": 0.08, # Month-over-month new user growth (decimal)
},
}
# ---------------------------------------------------------------------------
# Thresholds by business model
# ---------------------------------------------------------------------------
THRESHOLDS = {
"b2b_saas": {
"d30_pmf": 0.40, "d30_strong": 0.60,
"d90_pmf": 0.25, "d90_strong": 0.45,
"dau_mau_pmf": 0.15, "dau_mau_strong": 0.35,
"sean_ellis_pmf": 0.40, "sean_ellis_strong": 0.55,
"nps_pmf": 30, "nps_strong": 50,
},
"consumer": {
"d30_pmf": 0.20, "d30_strong": 0.35,
"d90_pmf": 0.10, "d90_strong": 0.20,
"dau_mau_pmf": 0.20, "dau_mau_strong": 0.40,
"sean_ellis_pmf": 0.40, "sean_ellis_strong": 0.55,
"nps_pmf": 20, "nps_strong": 45,
},
"marketplace": {
"d30_pmf": 0.30, "d30_strong": 0.50,
"d90_pmf": 0.20, "d90_strong": 0.35,
"dau_mau_pmf": 0.15, "dau_mau_strong": 0.30,
"sean_ellis_pmf": 0.40, "sean_ellis_strong": 0.55,
"nps_pmf": 25, "nps_strong": 45,
},
"plg": {
"d30_pmf": 0.25, "d30_strong": 0.45,
"d90_pmf": 0.15, "d90_strong": 0.30,
"dau_mau_pmf": 0.20, "dau_mau_strong": 0.40,
"sean_ellis_pmf": 0.40, "sean_ellis_strong": 0.55,
"nps_pmf": 30, "nps_strong": 50,
},
}
# Weights for the four dimensions (must sum to 1.0)
DIMENSION_WEIGHTS = {
"retention": 0.40,
"engagement": 0.25,
"satisfaction": 0.20,
"growth": 0.15,
}
# ---------------------------------------------------------------------------
# Scoring helpers
# ---------------------------------------------------------------------------
def clamp(value: float, lo: float = 0.0, hi: float = 1.0) -> float:
return max(lo, min(hi, value))
def score_between(value: Optional[float], lo: float, hi: float) -> float:
"""Linear interpolation: lo → 0.0, hi → 1.0, beyond hi → 1.0."""
if value is None:
return 0.0
if value <= lo:
return 0.0
if value >= hi:
return 1.0
return (value - lo) / (hi - lo)
def cohort_trend(cohorts: list) -> float:
"""
Given cohorts newest-first, return a trend score -1 to +1.
Positive = improving. Negative = degrading.
"""
if len(cohorts) < 2:
return 0.0
# Simple: compare most recent half average vs. older half average
mid = len(cohorts) // 2
recent_avg = sum(cohorts[:mid]) / mid if mid else cohorts[0]
older_avg = sum(cohorts[mid:]) / (len(cohorts) - mid)
if older_avg == 0:
return 0.0
delta = (recent_avg - older_avg) / older_avg
return clamp(delta * 5, -1.0, 1.0) # scale: 20% improvement = score of 1.0
# ---------------------------------------------------------------------------
# Dimension scorers
# ---------------------------------------------------------------------------
def score_retention(data: dict, thresholds: dict) -> tuple[float, list]:
"""Returns (score 0-1, list of findings)."""
r = data.get("retention", {})
findings = []
scores = []
d30 = r.get("d30_cohorts", [])
d90 = r.get("d90_cohorts", [])
if not d30:
findings.append("⚠ No D30 retention data — this is the most important PMF signal. Instrument it immediately.")
return 0.0, findings
latest_d30 = d30[0]
d30_score = score_between(latest_d30, 0, thresholds["d30_strong"])
scores.append(d30_score)
if latest_d30 >= thresholds["d30_strong"]:
findings.append(f"✓ D30 retention {latest_d30:.0%} — strong PMF signal")
elif latest_d30 >= thresholds["d30_pmf"]:
findings.append(f"◑ D30 retention {latest_d30:.0%} — approaching PMF threshold ({thresholds['d30_pmf']:.0%})")
else:
findings.append(f"✗ D30 retention {latest_d30:.0%} — below PMF threshold ({thresholds['d30_pmf']:.0%}). Focus here before anything else.")
# Trend bonus
if len(d30) >= 2:
trend = cohort_trend(d30)
trend_score = (trend + 1) / 2 # normalize to 0-1
scores.append(trend_score * 0.5) # trend is bonus, not primary
if trend > 0.1:
findings.append(f"✓ D30 retention improving across cohorts — strong learning signal")
elif trend < -0.1:
findings.append(f"✗ D30 retention declining across cohorts — product changes may be hurting core users")
if d90:
latest_d90 = d90[0]
d90_score = score_between(latest_d90, 0, thresholds["d90_strong"])
scores.append(d90_score)
if latest_d90 >= thresholds["d90_strong"]:
findings.append(f"✓ D90 retention {latest_d90:.0%} — excellent long-term retention")
elif latest_d90 >= thresholds["d90_pmf"]:
findings.append(f"◑ D90 retention {latest_d90:.0%} — some long-term value demonstrated")
else:
findings.append(f"✗ D90 retention {latest_d90:.0%} — users not finding long-term value")
else:
findings.append("⚠ No D90 data. Add 90-day cohort tracking.")
flattening = r.get("curve_flattening", False)
if flattening:
scores.append(0.8)
findings.append("✓ Retention curve flattening — core retained segment exists")
else:
scores.append(0.2)
findings.append("✗ Retention curve not flattening — no stable retained segment yet")
return clamp(sum(scores) / len(scores)), findings
def score_engagement(data: dict, thresholds: dict) -> tuple[float, list]:
e = data.get("engagement", {})
findings = []
scores = []
dau_mau = e.get("dau_mau_ratio")
if dau_mau is not None:
s = score_between(dau_mau, 0, thresholds["dau_mau_strong"])
scores.append(s)
if dau_mau >= thresholds["dau_mau_strong"]:
findings.append(f"✓ DAU/MAU {dau_mau:.0%} — strong daily habit")
elif dau_mau >= thresholds["dau_mau_pmf"]:
findings.append(f"◑ DAU/MAU {dau_mau:.0%} — moderate engagement")
else:
findings.append(f"✗ DAU/MAU {dau_mau:.0%} — users not building a habit. Find the daily job or accept weekly use pattern.")
else:
findings.append("⚠ No DAU/MAU data.")
sessions = e.get("avg_sessions_per_week")
if sessions is not None:
# 5+ sessions/week = strong, 2 = threshold
s = score_between(sessions, 1, 5)
scores.append(s)
if sessions >= 5:
findings.append(f"✓ {sessions:.1f} sessions/week — high engagement")
elif sessions >= 2:
findings.append(f"◑ {sessions:.1f} sessions/week — moderate")
else:
findings.append(f"✗ {sessions:.1f} sessions/week — very low. Users not returning within week.")
else:
findings.append("⚠ No session frequency data.")
kar = e.get("key_action_rate")
if kar is not None:
s = score_between(kar, 0.10, 0.70)
scores.append(s)
if kar >= 0.60:
findings.append(f"✓ Key action rate {kar:.0%} — core value well-adopted")
elif kar >= 0.30:
findings.append(f"◑ Key action rate {kar:.0%} — improve onboarding to drive this up")
else:
findings.append(f"✗ Key action rate {kar:.0%} — most users not reaching core value. This is an activation problem.")
else:
findings.append("⚠ No key action rate. Define your 'aha moment' action and track it.")
depth = e.get("session_depth_score")
if depth is not None:
scores.append(depth)
if depth >= 0.6:
findings.append(f"✓ Session depth {depth:.1f} — users exploring the product")
else:
findings.append(f"◑ Session depth {depth:.1f} — users sticking to narrow feature set")
if not scores:
return 0.0, findings
return clamp(sum(scores) / len(scores)), findings
def score_satisfaction(data: dict, thresholds: dict) -> tuple[float, list]:
s_data = data.get("satisfaction", {})
findings = []
scores = []
se_score = s_data.get("sean_ellis_very_disappointed")
se_n = s_data.get("sean_ellis_sample_size", 0)
if se_score is not None:
if se_n < 40:
findings.append(f"⚠ Sean Ellis n={se_n} — too small to be reliable. Need 40+ responses.")
scores.append(score_between(se_score, 0, thresholds["sean_ellis_strong"]) * 0.5) # half weight
else:
s = score_between(se_score, 0, thresholds["sean_ellis_strong"])
scores.append(s)
if se_score >= thresholds["sean_ellis_strong"]:
findings.append(f"✓ Sean Ellis {se_score:.0%} 'very disappointed' — strong PMF signal (n={se_n})")
elif se_score >= thresholds["sean_ellis_pmf"]:
findings.append(f"◑ Sean Ellis {se_score:.0%} — at PMF threshold. Push to > {thresholds['sean_ellis_strong']:.0%}.")
else:
findings.append(f"✗ Sean Ellis {se_score:.0%} — below {thresholds['sean_ellis_pmf']:.0%} threshold. Interview 'somewhat disappointed' group.")
else:
findings.append("⚠ No Sean Ellis data. Run a one-question survey to your active users now.")
nps = s_data.get("nps_score")
nps_n = s_data.get("nps_sample_size", 0)
if nps is not None:
if nps_n < 50:
findings.append(f"⚠ NPS n={nps_n} — sample too small. Need 50+ for reliability.")
# NPS ranges from -100 to 100; normalize to 0-1 against threshold
s = score_between(nps, -20, thresholds["nps_strong"])
scores.append(s)
if nps >= thresholds["nps_strong"]:
findings.append(f"✓ NPS {nps} — excellent. Promoters will drive organic growth.")
elif nps >= thresholds["nps_pmf"]:
findings.append(f"◑ NPS {nps} — acceptable. Focus on converting passives to promoters.")
elif nps >= 0:
findings.append(f"✗ NPS {nps} — low. More detractors than promoters is a warning sign.")
else:
findings.append(f"✗ NPS {nps} — negative. Active detractors outnumber promoters.")
else:
findings.append("⚠ No NPS data.")
if not scores:
return 0.0, findings
return clamp(sum(scores) / len(scores)), findings
def score_growth(data: dict, _thresholds: dict) -> tuple[float, list]:
g = data.get("growth", {})
findings = []
scores = []
organic_pct = g.get("organic_signup_pct")
if organic_pct is not None:
s = score_between(organic_pct, 0.05, 0.50)
scores.append(s)
if organic_pct >= 0.30:
findings.append(f"✓ {organic_pct:.0%} organic signups — word of mouth is working")
elif organic_pct >= 0.20:
findings.append(f"◑ {organic_pct:.0%} organic — moderate. Build referral loop deliberately.")
else:
findings.append(f"✗ {organic_pct:.0%} organic — almost all paid. PMF may not be strong enough to generate word of mouth.")
else:
findings.append("⚠ No organic signup tracking. Tag all signup sources now.")
referral = g.get("referral_rate")
if referral is not None:
s = score_between(referral, 0.05, 0.35)
scores.append(s)
if referral >= 0.25:
findings.append(f"✓ {referral:.0%} of active users referring — strong viral signal")
elif referral >= 0.15:
findings.append(f"◑ {referral:.0%} referral rate — building. Add referral incentive or friction removal.")
else:
findings.append(f"✗ {referral:.0%} referral rate — users not recommending. Satisfaction or network effects missing.")
else:
findings.append("⚠ No referral rate data.")
mom = g.get("mom_growth_rate")
if mom is not None:
s = score_between(mom, 0, 0.20)
scores.append(s)
if mom >= 0.15:
findings.append(f"✓ {mom:.0%} MoM growth — strong momentum")
elif mom >= 0.08:
findings.append(f"◑ {mom:.0%} MoM growth — moderate. Identify top acquisition channel and double it.")
else:
findings.append(f"✗ {mom:.0%} MoM growth — slow. Acquisition is a bottleneck.")
if not scores:
return 0.0, findings
return clamp(sum(scores) / len(scores)), findings
# ---------------------------------------------------------------------------
# Overall scoring and recommendations
# ---------------------------------------------------------------------------
def pmf_status(overall: float) -> tuple[str, str]:
"""Returns (status label, description)."""
if overall >= 0.80:
return "STRONG PMF", "Clear product-market fit. Shift focus to scaling acquisition and defending moat."
elif overall >= 0.60:
return "PMF APPROACHING", "Meaningful signals present. Identify and remove the 1-2 friction points blocking retention."
elif overall >= 0.40:
return "EARLY SIGNALS", "Weak PMF. Some users find value. Narrow your ICP and double down on what's working."
elif overall >= 0.20:
return "PRE-PMF", "No clear PMF yet. Don't scale acquisition. Focus entirely on retention experiments."
else:
return "NO SIGNAL", "No PMF signals detected. Revisit the problem hypothesis before investing further in the solution."
def top_recommendations(dim_scores: dict, data: dict) -> list[str]:
"""Prioritized recommendations based on weakest dimensions."""
recs = []
model = data.get("business_model", "b2b_saas")
ranked = sorted(dim_scores.items(), key=lambda x: x[1])
for dim, score in ranked:
if score < 0.40:
if dim == "retention":
recs.append(
"CRITICAL — Retention: Run cohort analysis by segment. Find the cohort with highest D30. "
"Interview 10 of those users. Build for them exclusively until retention flattens."
)
elif dim == "engagement":
recs.append(
"Engagement: Define your 'aha moment' — the one action that predicts long-term retention. "
"Measure time-to-aha. Remove every friction point on that path."
)
elif dim == "satisfaction":
recs.append(
"Satisfaction: Run Sean Ellis survey immediately (need n ≥ 40). "
"Interview every 'somewhat disappointed' user — the gap between 'somewhat' and 'very' is your product gap."
)
elif dim == "growth":
recs.append(
"Growth: Track signup source for every new user. If organic < 20%, "
"you may be papering over weak PMF with paid acquisition. Fix retention first."
)
if not recs:
recs.append(
"All dimensions scoring above threshold. Focus: "
"(1) Defend moat, (2) Expand ICP carefully, (3) Build referral flywheel."
)
if model == "b2b_saas":
recs.append("B2B tip: Track NRR (Net Revenue Retention). PMF in B2B requires expansion, not just retention.")
elif model == "consumer":
recs.append("Consumer tip: Find your D7 'magic moment'. The habit window is small — optimize for it.")
elif model == "plg":
recs.append("PLG tip: Define your PQL (product-qualified lead). The activation event that predicts paid conversion.")
elif model == "marketplace":
recs.append("Marketplace tip: Measure both sides separately. PMF on demand side ≠ PMF on supply side.")
return recs
# ---------------------------------------------------------------------------
# Report renderer
# ---------------------------------------------------------------------------
def render_report(data: dict, dim_scores: dict, dim_findings: dict, overall: float) -> str:
status, description = pmf_status(overall)
recs = top_recommendations(dim_scores, data)
lines = []
lines.append("=" * 60)
lines.append(f" PMF SCORER — {data.get('product_name', 'Product')}")
lines.append(f" Model: {data.get('business_model', 'unknown').upper()}")
lines.append("=" * 60)
lines.append("")
# Overall
bar_len = 40
filled = round(overall * bar_len)
bar = "█" * filled + "░" * (bar_len - filled)
lines.append(f" Overall PMF Score: {overall:.0%}")
lines.append(f" [{bar}]")
lines.append(f" Status: {status}")
lines.append(f" {description}")
lines.append("")
# Dimension breakdown
lines.append(" DIMENSION SCORES")
lines.append(" " + "-" * 50)
for dim, weight in DIMENSION_WEIGHTS.items():
score = dim_scores.get(dim, 0.0)
dim_bar_len = 20
dim_filled = round(score * dim_bar_len)
dim_bar = "█" * dim_filled + "░" * (dim_bar_len - dim_filled)
label = dim.capitalize().ljust(12)
lines.append(f" {label} [{dim_bar}] {score:.0%} (weight: {weight:.0%})")
lines.append("")
# Findings per dimension
for dim in ["retention", "engagement", "satisfaction", "growth"]:
findings = dim_findings.get(dim, [])
if findings:
lines.append(f" {dim.upper()} FINDINGS")
for f in findings:
lines.append(f" {f}")
lines.append("")
# Recommendations
lines.append(" PRIORITIZED RECOMMENDATIONS")
lines.append(" " + "-" * 50)
for i, rec in enumerate(recs, 1):
        # Wrap recommendation text at roughly 72 chars
words = rec.split()
line = f" {i}. "
for word in words:
if len(line) + len(word) + 1 > 72:
lines.append(line)
line = " " + word + " "
else:
line += word + " "
lines.append(line.rstrip())
lines.append("")
lines.append("=" * 60)
return "\n".join(lines)
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
def run(data: dict) -> dict:
"""
Score PMF from input data dict.
Returns dict with overall score, dimension scores, and findings.
"""
model = data.get("business_model", "b2b_saas")
thresholds = THRESHOLDS.get(model, THRESHOLDS["b2b_saas"])
dim_scores = {}
dim_findings = {}
ret_score, ret_findings = score_retention(data, thresholds)
dim_scores["retention"] = ret_score
dim_findings["retention"] = ret_findings
eng_score, eng_findings = score_engagement(data, thresholds)
dim_scores["engagement"] = eng_score
dim_findings["engagement"] = eng_findings
sat_score, sat_findings = score_satisfaction(data, thresholds)
dim_scores["satisfaction"] = sat_score
dim_findings["satisfaction"] = sat_findings
grow_score, grow_findings = score_growth(data, thresholds)
dim_scores["growth"] = grow_score
dim_findings["growth"] = grow_findings
overall = sum(
dim_scores[dim] * weight
for dim, weight in DIMENSION_WEIGHTS.items()
)
return {
"overall": overall,
"dim_scores": dim_scores,
"dim_findings": dim_findings,
"status": pmf_status(overall)[0],
}
def main():
parser = argparse.ArgumentParser(
description="PMF Scorer — Multi-dimensional Product-Market Fit analysis",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=__doc__,
)
parser.add_argument(
"--input", "-i",
metavar="FILE",
help="JSON file with your product data (default: built-in sample data)",
)
parser.add_argument(
"--json",
action="store_true",
help="Output raw JSON instead of formatted report",
)
args = parser.parse_args()
if args.input:
try:
with open(args.input) as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: file not found: {args.input}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: invalid JSON: {e}", file=sys.stderr)
sys.exit(1)
else:
print("No input file provided — running with sample data.\n")
data = sample_data()
result = run(data)
if args.json:
output = {
"product_name": data.get("product_name"),
"business_model": data.get("business_model"),
"overall_score": round(result["overall"], 4),
"overall_pct": f"{result['overall']:.0%}",
"status": result["status"],
"dimensions": {
dim: {
"score": round(result["dim_scores"][dim], 4),
"pct": f"{result['dim_scores'][dim]:.0%}",
"weight": f"{DIMENSION_WEIGHTS[dim]:.0%}",
"findings": result["dim_findings"][dim],
}
for dim in DIMENSION_WEIGHTS
},
}
print(json.dumps(output, indent=2))
else:
print(render_report(data, result["dim_scores"], result["dim_findings"], result["overall"]))
if __name__ == "__main__":
main()
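The scorer reads a plain dict, so the easiest way to try it on real numbers is to write a small JSON file and pass it with `--input`. Below is a minimal sketch of that file, generated from Python so the schema stays in one place. Every field name mirrors what `score_retention`, `score_engagement`, `score_satisfaction`, and `score_growth` read above; every value is illustrative, and the filename `my_product.json` and the helper itself are assumptions rather than part of the shipped tooling.

```python
# build_pmf_input.py: hypothetical helper that writes a minimal input file
# for pmf_scorer.py. Field names mirror what the dimension scorers read;
# all values below are illustrative.
import json

sample_input = {
    "product_name": "ExampleApp",            # any label for the report header
    "business_model": "plg",                 # e.g. b2b_saas, consumer, plg, marketplace
    "retention": {
        "d30_cohorts": [0.34, 0.31, 0.28],   # newest cohort first
        "d90_cohorts": [0.19, 0.17],
        "curve_flattening": True,
    },
    "engagement": {
        "dau_mau_ratio": 0.22,
        "avg_sessions_per_week": 3.5,
        "key_action_rate": 0.41,
        "session_depth_score": 0.55,         # 0-1, however you define depth
    },
    "satisfaction": {
        "sean_ellis_very_disappointed": 0.38,
        "sean_ellis_sample_size": 62,
        "nps_score": 31,
        "nps_sample_size": 80,
    },
    "growth": {
        "organic_signup_pct": 0.27,
        "referral_rate": 0.12,
        "mom_growth_rate": 0.11,
    },
}

with open("my_product.json", "w") as f:
    json.dump(sample_input, f, indent=2)

# Then: python scripts/pmf_scorer.py --input my_product.json --json
```

The second bundled tool, `scripts/portfolio_analyzer.py`, follows the same CLI conventions.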
#!/usr/bin/env python3
"""
Portfolio Analyzer — Product portfolio BCG matrix classification and investment analysis.
For each product, classifies into BCG quadrant (Star, Cash Cow, Question Mark, Dog)
and generates investment recommendations (Invest / Maintain / Kill).
Usage:
python portfolio_analyzer.py # Run with built-in sample data
python portfolio_analyzer.py --input data.json # Run with your data
python portfolio_analyzer.py --json # Output raw JSON
JSON input format: see sample_data() function below.
"""
import json
import sys
import argparse
from typing import Optional
# ---------------------------------------------------------------------------
# Sample data
# ---------------------------------------------------------------------------
def sample_data() -> dict:
"""
Sample portfolio. Replace with real product data.
Fields:
name Product name
revenue_quarterly Current quarter revenue (any consistent currency)
revenue_prev_q Revenue last quarter (for QoQ calculation)
market_growth_pct Annual market growth rate (percent, e.g. 12.5 for 12.5%)
your_market_share Your estimated market share (percent, e.g. 8.0 for 8%)
largest_competitor_share Largest competitor's share (percent)
eng_capacity_pct % of total engineering capacity allocated (0-100)
d30_retention Optional D30 retention rate (decimal, e.g. 0.45)
nps Optional NPS score (-100 to 100)
notes Optional free text notes for the report
"""
return {
"company": "Acme Corp",
"total_engineering_headcount": 45,
"products": [
{
"name": "CorePlatform",
"revenue_quarterly": 480000,
"revenue_prev_q": 430000,
"market_growth_pct": 22.0,
"your_market_share": 18.0,
"largest_competitor_share": 12.0,
"eng_capacity_pct": 35,
"d30_retention": 0.61,
"nps": 52,
"notes": "Our flagship. Leading market share in fast-growing segment.",
},
{
"name": "ReportingModule",
"revenue_quarterly": 290000,
"revenue_prev_q": 285000,
"market_growth_pct": 5.0,
"your_market_share": 22.0,
"largest_competitor_share": 18.0,
"eng_capacity_pct": 25,
"d30_retention": 0.58,
"nps": 38,
"notes": "Mature product, strong margins, slow market.",
},
{
"name": "MobileApp",
"revenue_quarterly": 95000,
"revenue_prev_q": 78000,
"market_growth_pct": 35.0,
"your_market_share": 3.5,
"largest_competitor_share": 24.0,
"eng_capacity_pct": 28,
"d30_retention": 0.31,
"nps": 22,
"notes": "High growth market. We're far behind on share. Bet or exit.",
},
{
"name": "LegacyConnector",
"revenue_quarterly": 62000,
"revenue_prev_q": 68000,
"market_growth_pct": -3.0,
"your_market_share": 8.0,
"largest_competitor_share": 35.0,
"eng_capacity_pct": 12,
"d30_retention": 0.42,
"nps": 14,
"notes": "Declining market. Customers are on long-term contracts.",
},
],
}
# ---------------------------------------------------------------------------
# BCG Classification
# ---------------------------------------------------------------------------
# Growth rate threshold: markets growing faster than this are "high growth"
GROWTH_THRESHOLD_PCT = 10.0
# Market share ratio threshold: ratio > 1.0 means you lead the market
SHARE_RATIO_THRESHOLD = 1.0
def bcg_quadrant(market_growth_pct: float, share_ratio: float) -> str:
high_growth = market_growth_pct >= GROWTH_THRESHOLD_PCT
leading_share = share_ratio >= SHARE_RATIO_THRESHOLD
if high_growth and leading_share:
return "Star"
elif not high_growth and leading_share:
return "Cash Cow"
elif high_growth and not leading_share:
return "Question Mark"
else:
return "Dog"
def quadrant_emoji(quadrant: str) -> str:
return {
"Star": "⭐",
"Cash Cow": "🐄",
"Question Mark": "❓",
"Dog": "🐕",
}.get(quadrant, "?")
def investment_posture(quadrant: str, qoq_growth: float, retention: Optional[float]) -> str:
"""
Invest / Maintain / Kill recommendation with nuance.
"""
if quadrant == "Star":
return "Invest"
elif quadrant == "Cash Cow":
# If cash cow is declining fast or retention is poor, consider killing
if qoq_growth < -0.10 or (retention is not None and retention < 0.30):
return "Kill"
return "Maintain"
elif quadrant == "Question Mark":
# Fast QoQ growth signals the bet might pay off → Invest
# Flat or slow QoQ with weak retention → Kill
if qoq_growth >= 0.15 and (retention is None or retention >= 0.25):
return "Invest"
elif qoq_growth < 0.05 or (retention is not None and retention < 0.20):
return "Kill"
return "Evaluate" # Needs explicit strategic decision
else: # Dog
if qoq_growth > 0.10 and (retention is None or retention >= 0.35):
return "Evaluate" # Surprising momentum — verify before killing
return "Kill"
def posture_color(posture: str) -> str:
return {
"Invest": "✓",
"Maintain": "◑",
"Kill": "✗",
"Evaluate": "⚠",
}.get(posture, "?")
# ---------------------------------------------------------------------------
# Product analysis
# ---------------------------------------------------------------------------
def analyze_product(p: dict) -> dict:
revenue_q = p.get("revenue_quarterly", 0)
revenue_prev = p.get("revenue_prev_q", revenue_q)
qoq_growth = (revenue_q - revenue_prev) / revenue_prev if revenue_prev else 0.0
your_share = p.get("your_market_share", 0)
competitor_share = p.get("largest_competitor_share", 1)
share_ratio = your_share / competitor_share if competitor_share else 0.0
market_growth = p.get("market_growth_pct", 0)
retention = p.get("d30_retention")
nps = p.get("nps")
eng_pct = p.get("eng_capacity_pct", 0)
quadrant = bcg_quadrant(market_growth, share_ratio)
posture = investment_posture(quadrant, qoq_growth, retention)
# Alignment score: how well does engineering investment match the recommended posture?
# Invest products should have high eng allocation; Kill products should have low.
alignment_score = _compute_alignment(posture, eng_pct)
return {
"name": p.get("name", "Unknown"),
"revenue_quarterly": revenue_q,
"revenue_prev_q": revenue_prev,
"qoq_growth": qoq_growth,
"market_growth_pct": market_growth,
"your_market_share": your_share,
"largest_competitor_share": competitor_share,
"share_ratio": share_ratio,
"eng_capacity_pct": eng_pct,
"d30_retention": retention,
"nps": nps,
"quadrant": quadrant,
"posture": posture,
"alignment_score": alignment_score,
"notes": p.get("notes", ""),
"findings": _product_findings(quadrant, posture, qoq_growth, share_ratio,
market_growth, retention, nps, eng_pct),
}
def _compute_alignment(posture: str, eng_pct: float) -> float:
"""
Returns 0.0-1.0 score. High = engineering allocation matches strategic posture.
"""
targets = {"Invest": 0.35, "Maintain": 0.15, "Kill": 0.05, "Evaluate": 0.20}
target = targets.get(posture, 0.20)
deviation = abs(eng_pct / 100 - target)
return max(0.0, 1.0 - (deviation / 0.35))
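# Example: a "Kill" product using 12% of engineering deviates |0.12 - 0.05| = 0.07
# from its target → 1 - 0.07/0.35 = 0.80. An "Invest" product at 35% scores 1.0.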
def _product_findings(
quadrant: str, posture: str,
qoq_growth: float, share_ratio: float, market_growth: float,
retention: Optional[float], nps: Optional[int], eng_pct: float
) -> list:
findings = []
if quadrant == "Star":
if eng_pct < 30:
findings.append(f"⚠ Star product getting only {eng_pct}% of eng capacity — likely underinvested. Stars need fuel.")
else:
findings.append(f"✓ Star product with {eng_pct}% eng allocation — appropriate investment.")
if share_ratio < 1.5:
findings.append(f"◑ Share ratio {share_ratio:.1f}x — leading but not dominant. Accelerate to widen the gap.")
else:
findings.append(f"✓ Share ratio {share_ratio:.1f}x — strong lead. Defend aggressively.")
elif quadrant == "Cash Cow":
if eng_pct > 25:
findings.append(f"⚠ Cash Cow getting {eng_pct}% of eng — overinvested. Reduce to 10-15% max. Redeploy to Stars.")
else:
findings.append(f"✓ Cash Cow with {eng_pct}% eng — appropriate. Don't innovate, just maintain.")
if qoq_growth < -0.05:
findings.append(f"⚠ Revenue declining {abs(qoq_growth):.0%} QoQ — monitor for transition to Dog.")
else:
findings.append(f"✓ Revenue stable (QoQ: {qoq_growth:+.0%}) — milk this.")
elif quadrant == "Question Mark":
findings.append(f"⚠ Fast market ({market_growth:.0f}% growth) but only {share_ratio:.1f}x relative share.")
findings.append(f" Decision required: Invest to capture share or exit. 'Maintain' loses share every quarter.")
if qoq_growth >= 0.15:
findings.append(f"✓ QoQ growth {qoq_growth:+.0%} — momentum building. Investment may be justified.")
elif qoq_growth < 0.05:
findings.append(f"✗ QoQ growth {qoq_growth:+.0%} — stalled despite hot market. Strong exit signal.")
elif quadrant == "Dog":
findings.append(f"✗ Low share ({share_ratio:.1f}x) in slow/declining market ({market_growth:.0f}% growth).")
if eng_pct > 10:
findings.append(f"✗ Dog consuming {eng_pct}% of eng capacity. Set a sunset date. Migrate customers.")
if qoq_growth > 0:
findings.append(f"◑ Slight QoQ growth ({qoq_growth:+.0%}) — verify whether this is genuine or contract timing.")
if retention is not None:
if retention < 0.30:
findings.append(f"✗ D30 retention {retention:.0%} — users not finding value. Weak unit economics for any posture.")
elif retention >= 0.50:
findings.append(f"✓ D30 retention {retention:.0%} — users find value. Supports investment or stable maintenance.")
if nps is not None:
if nps < 0:
findings.append(f"✗ NPS {nps} — net detractors. Word of mouth is negative. Fix before scaling.")
elif nps >= 40:
findings.append(f"✓ NPS {nps} — strong promoter base. Harness for referrals.")
return findings
# ---------------------------------------------------------------------------
# Portfolio-level analysis
# ---------------------------------------------------------------------------
def analyze_portfolio(data: dict) -> dict:
products = [analyze_product(p) for p in data.get("products", [])]
total_revenue = sum(p["revenue_quarterly"] for p in products)
total_eng = sum(p["eng_capacity_pct"] for p in products)
# Revenue by quadrant
quadrant_revenue = {}
quadrant_eng = {}
for p in products:
q = p["quadrant"]
quadrant_revenue[q] = quadrant_revenue.get(q, 0) + p["revenue_quarterly"]
quadrant_eng[q] = quadrant_eng.get(q, 0) + p["eng_capacity_pct"]
# Portfolio health score
health = _portfolio_health(products, total_revenue, total_eng)
# Portfolio-level findings
portfolio_findings = _portfolio_findings(products, total_revenue, quadrant_revenue, quadrant_eng)
return {
"company": data.get("company", "Unknown"),
"total_engineering_headcount": data.get("total_engineering_headcount"),
"products": products,
"total_revenue_quarterly": total_revenue,
"quadrant_summary": {
q: {
"count": sum(1 for p in products if p["quadrant"] == q),
"revenue": quadrant_revenue.get(q, 0),
"revenue_pct": quadrant_revenue.get(q, 0) / total_revenue if total_revenue else 0,
"eng_pct": quadrant_eng.get(q, 0),
}
for q in ["Star", "Cash Cow", "Question Mark", "Dog"]
},
"portfolio_health_score": health,
"portfolio_findings": portfolio_findings,
}
def _portfolio_health(products: list, total_revenue: float, total_eng: float) -> float:
"""
Portfolio health 0-1. Penalizes:
- No Stars (no growth engine)
- Dogs consuming > 20% of eng
- Poor alignment scores
- Revenue concentrated in Dogs/Question Marks
"""
score = 1.0
quadrants = [p["quadrant"] for p in products]
has_star = "Star" in quadrants
has_cash_cow = "Cash Cow" in quadrants
if not has_star:
score -= 0.25 # No growth engine is a serious problem
if not has_cash_cow:
score -= 0.10 # No cash generator means funding stars from burn
# Dog eng allocation penalty
dog_eng = sum(p["eng_capacity_pct"] for p in products if p["quadrant"] == "Dog")
if dog_eng > 20:
score -= 0.20
elif dog_eng > 10:
score -= 0.10
# Revenue in dogs penalty
if total_revenue > 0:
dog_rev_pct = sum(p["revenue_quarterly"] for p in products if p["quadrant"] == "Dog") / total_revenue
if dog_rev_pct > 0.30:
score -= 0.15
# Average alignment score
avg_alignment = sum(p["alignment_score"] for p in products) / len(products) if products else 0
score -= (1 - avg_alignment) * 0.20
return max(0.0, min(1.0, score))
def _portfolio_findings(
products: list, total_revenue: float,
quadrant_revenue: dict, quadrant_eng: dict
) -> list:
findings = []
stars = [p for p in products if p["quadrant"] == "Star"]
cows = [p for p in products if p["quadrant"] == "Cash Cow"]
questions = [p for p in products if p["quadrant"] == "Question Mark"]
dogs = [p for p in products if p["quadrant"] == "Dog"]
if not stars:
findings.append("✗ CRITICAL: No Star products. You have no growth engine. Identify a Question Mark to invest in or revisit your market positioning.")
elif len(stars) == 1:
findings.append(f"◑ Single Star ({stars[0]['name']}). Portfolio is fragile — one product drives all growth. Diversify.")
else:
findings.append(f"✓ {len(stars)} Star products — healthy growth engine.")
if not cows:
findings.append("⚠ No Cash Cow products. Stars are consuming capital without a self-funding mechanism. Watch burn rate.")
else:
cow_rev = quadrant_revenue.get("Cash Cow", 0)
cow_pct = cow_rev / total_revenue if total_revenue else 0
findings.append(f"✓ Cash Cow revenue: {cow_pct:.0%} of total — funds Star investment.")
if questions:
findings.append(f"⚠ {len(questions)} Question Mark(s): {', '.join(p['name'] for p in questions)}.")
findings.append(" Each needs a binary decision: invest to win share, or exit. Set a 2-quarter deadline.")
if dogs:
dog_eng_total = sum(p["eng_capacity_pct"] for p in dogs)
findings.append(f"✗ {len(dogs)} Dog product(s): {', '.join(p['name'] for p in dogs)} consuming {dog_eng_total}% of eng capacity.")
findings.append(f" That's {dog_eng_total}% of your engineers on declining products. Set sunset dates.")
# Alignment check
misaligned = [p for p in products if p["alignment_score"] < 0.50]
if misaligned:
findings.append(f"⚠ Engineering allocation misaligned on: {', '.join(p['name'] for p in misaligned)}.")
findings.append(" Rebalance: move capacity from Dogs/Cows to Stars.")
return findings
# ---------------------------------------------------------------------------
# Report rendering
# ---------------------------------------------------------------------------
def fmt_currency(n: float) -> str:
if n >= 1_000_000:
return f"${n/1_000_000:.1f}M"
elif n >= 1_000:
return f"${n/1_000:.0f}K"
return f"${n:.0f}"
def render_report(result: dict) -> str:
lines = []
lines.append("=" * 65)
lines.append(f" PORTFOLIO ANALYZER — {result['company']}")
lines.append(f" Total Quarterly Revenue: {fmt_currency(result['total_revenue_quarterly'])}")
if result.get("total_engineering_headcount"):
lines.append(f" Engineering Headcount: {result['total_engineering_headcount']}")
lines.append("=" * 65)
lines.append("")
# Portfolio health
health = result["portfolio_health_score"]
bar_len = 40
filled = round(health * bar_len)
bar = "█" * filled + "░" * (bar_len - filled)
lines.append(f" Portfolio Health: {health:.0%}")
lines.append(f" [{bar}]")
lines.append("")
# Quadrant summary
lines.append(" QUADRANT SUMMARY")
lines.append(" " + "-" * 55)
header = f" {'Quadrant':<15} {'Count':>5} {'Revenue':>10} {'Rev%':>6} {'Eng%':>6}"
lines.append(header)
lines.append(" " + "-" * 55)
total_rev = result["total_revenue_quarterly"]
for q in ["Star", "Cash Cow", "Question Mark", "Dog"]:
qs = result["quadrant_summary"][q]
emoji = quadrant_emoji(q)
label = f"{emoji} {q}"
rev_pct = f"{qs['revenue_pct']:.0%}" if qs["count"] else "-"
eng = f"{qs['eng_pct']}%" if qs["count"] else "-"
rev = fmt_currency(qs["revenue"]) if qs["count"] else "-"
lines.append(f" {label:<15} {qs['count']:>5} {rev:>10} {rev_pct:>6} {eng:>6}")
lines.append("")
# Per-product breakdown
lines.append(" PRODUCT BREAKDOWN")
lines.append(" " + "-" * 65)
for p in result["products"]:
emoji = quadrant_emoji(p["quadrant"])
pc = posture_color(p["posture"])
lines.append(
f" {emoji} {p['name']} — {p['quadrant']} → {pc} {p['posture']}"
)
lines.append(
f" Revenue: {fmt_currency(p['revenue_quarterly'])}/qtr "
f"QoQ: {p['qoq_growth']:+.0%} "
f"Mkt growth: {p['market_growth_pct']:+.0f}%"
)
lines.append(
f" Share ratio: {p['share_ratio']:.1f}x "
f"Eng: {p['eng_capacity_pct']}% "
f"Alignment: {p['alignment_score']:.0%}"
)
if p.get("d30_retention") is not None:
lines.append(
f" D30 retention: {p['d30_retention']:.0%} "
f"NPS: {p['nps'] if p['nps'] is not None else 'N/A'}"
)
if p.get("notes"):
lines.append(f" Note: {p['notes']}")
for f in p.get("findings", []):
lines.append(f" {f}")
lines.append("")
# Portfolio-level findings
lines.append(" PORTFOLIO FINDINGS")
lines.append(" " + "-" * 65)
for f in result.get("portfolio_findings", []):
lines.append(f" {f}")
lines.append("")
lines.append("=" * 65)
return "\n".join(lines)
# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------
def main():
parser = argparse.ArgumentParser(
description="Portfolio Analyzer — BCG matrix classification and investment recommendations",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=__doc__,
)
parser.add_argument(
"--input", "-i",
metavar="FILE",
help="JSON file with portfolio data (default: built-in sample data)",
)
parser.add_argument(
"--json",
action="store_true",
help="Output raw JSON result",
)
args = parser.parse_args()
if args.input:
try:
with open(args.input) as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: file not found: {args.input}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: invalid JSON: {e}", file=sys.stderr)
sys.exit(1)
else:
print("No input file provided — running with sample data.\n")
data = sample_data()
result = analyze_portfolio(data)
if args.json:
# Make result JSON-serializable
def clean(obj):
if isinstance(obj, dict):
return {k: clean(v) for k, v in obj.items()}
elif isinstance(obj, list):
return [clean(v) for v in obj]
elif isinstance(obj, float):
return round(obj, 4)
return obj
print(json.dumps(clean(result), indent=2))
else:
print(render_report(result))
if __name__ == "__main__":
main()
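If you want to post-process the analyzer's output, for example to flag every product whose recommended posture is Kill or Evaluate, the `--json` mode is easier to consume than the rendered report. A minimal sketch follows, assuming you run it from the repo root and have already written a `my_portfolio.json` that follows the `sample_data()` schema above; the helper name and the filtering logic are illustrative, not part of the skill.

```python
# consume_portfolio.py: hypothetical example that runs the analyzer on an
# input file and lists products whose recommended posture is Kill or Evaluate.
import json
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "scripts/portfolio_analyzer.py",
     "--input", "my_portfolio.json", "--json"],   # my_portfolio.json is assumed to exist
    capture_output=True, text=True, check=True,
)
portfolio = json.loads(result.stdout)

print(f"Portfolio health: {portfolio['portfolio_health_score']:.0%}")
for product in portfolio["products"]:
    if product["posture"] in ("Kill", "Evaluate"):
        print(f"  {product['name']}: {product['quadrant']} → {product['posture']}")
```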
Install this Skill
Skills give your AI agent a consistent, structured approach to this task — better output than a one-off prompt.
npx skills add alirezarezvani/claude-skills --skill c-level-advisor/cpo-advisor
Community skill by @alirezarezvani.