CRO Advisor
Revenue leadership — revenue forecasting, sales model design, pricing strategy, and net revenue retention optimization from a Chief Revenue Officer perspective.
What this skill does
Work with a virtual Chief Revenue Officer to design scalable growth plans and optimize pricing for sustainable growth. You will produce accurate revenue forecasts, structure effective sales teams, and uncover strategies to reduce churn and expand revenue within existing accounts. Reach for this guidance when preparing board reports, setting sales quotas, or deciding how to price your product for maximum impact.
---
name: cro-advisor
description: "Revenue leadership for B2B SaaS companies. Revenue forecasting, sales model design, pricing strategy, net revenue retention, and sales team scaling. Use when designing the revenue engine, setting quotas, modeling NRR, evaluating pricing, building board forecasts, or when user mentions CRO, chief revenue officer, revenue strategy, sales model, ARR growth, NRR, expansion revenue, churn, pricing strategy, or sales capacity."
license: MIT
metadata:
  version: 1.0.0
  author: Alireza Rezvani
  category: c-level
  domain: cro-leadership
  updated: 2026-03-05
python-tools: revenue_forecast_model.py, churn_analyzer.py
frameworks: sales-playbook, pricing-strategy, nrr-playbook
---
CRO Advisor
Revenue frameworks for building predictable, scalable revenue engines — from $1M ARR to $100M and beyond.
Keywords
CRO, chief revenue officer, revenue strategy, ARR, MRR, sales model, pipeline, revenue forecasting, pricing strategy, net revenue retention, NRR, gross revenue retention, GRR, expansion revenue, upsell, cross-sell, churn, customer success, sales capacity, quota, ramp, territory design, MEDDPICC, PLG, product-led growth, sales-led growth, enterprise sales, SMB, self-serve, value-based pricing, usage-based pricing, ICP, ideal customer profile, revenue board reporting, sales cycle, CAC payback, magic number
Quick Start
Revenue Forecasting
python scripts/revenue_forecast_model.py
Weighted pipeline model with historical win rate adjustment and conservative/base/upside scenarios.
Churn & Retention Analysis
python scripts/churn_analyzer.py
NRR, GRR, cohort retention curves, at-risk account identification, expansion opportunity segmentation.
Diagnostic Questions
Ask these before any framework:
Revenue Health
- What’s your NRR? If below 100%, everything else is a leaky bucket.
- What percentage of ARR comes from expansion vs. new logo?
- What’s your GRR (retention floor without expansion)?
Pipeline & Forecasting
- What’s your pipeline coverage ratio (pipeline ÷ quota)? Under 3x is a problem.
- Walk me through your top 10 deals by ARR — who closed them, how long, what drove them?
- What’s your stage-by-stage conversion rate? Where do deals die?
Sales Team
- What % of your sales team hit quota last quarter?
- What’s average ramp time before a new AE is quota-attaining?
- What’s the sales cycle variance by segment? High variance = unpredictable forecasts.
Pricing
- How do customers articulate the value they get? What outcome do you deliver?
- When did you last raise prices? What happened to win rate?
- If fewer than 20% of prospects push back on price, you’re underpriced.
Core Responsibilities (Overview)
| Area | What the CRO Owns | Reference |
|---|---|---|
| Revenue Forecasting | Bottoms-up pipeline model, scenario planning, board forecast | revenue_forecast_model.py |
| Sales Model | PLG vs. sales-led vs. hybrid, team structure, stage definitions | references/sales_playbook.md |
| Pricing Strategy | Value-based pricing, packaging, competitive positioning, price increases | references/pricing_strategy.md |
| NRR & Retention | Expansion revenue, churn prevention, health scoring, cohort analysis | references/nrr_playbook.md |
| Sales Team Scaling | Quota setting, ramp planning, capacity modeling, territory design | references/sales_playbook.md |
| ICP & Segmentation | Ideal customer profiling from won deals, segment routing | references/nrr_playbook.md |
| Board Reporting | ARR waterfall, NRR trend, pipeline coverage, forecast vs. actual | revenue_forecast_model.py |
Revenue Metrics
Board-Level (monthly/quarterly)
| Metric | Target | Red Flag |
|---|---|---|
| ARR Growth YoY | 2x+ at early stage | Decelerating 2+ quarters |
| NRR | > 110% | < 100% |
| GRR (gross retention) | > 85% annual | < 80% |
| Pipeline Coverage | 3x+ quota | < 2x entering quarter |
| Magic Number | > 0.75 | < 0.5 (fix unit economics before spending more) |
| CAC Payback | < 18 months | > 24 months |
| Quota Attainment % | 60-70% of reps | < 50% (calibration problem) |
Magic Number: (Net New ARR in quarter × 4) ÷ Prior Quarter S&M Spend
CAC Payback (months): S&M Spend ÷ (New Logo ARR × Gross Margin %) × 12
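As a sketch in Python (function names are mine, not taken from the bundled scripts), the two formulas above work out to:

```python
def magic_number(net_new_arr_quarter: float, prior_quarter_sm_spend: float) -> float:
    """Annualized net-new ARR divided by the prior quarter's S&M spend."""
    return net_new_arr_quarter * 4 / prior_quarter_sm_spend


def cac_payback_months(sm_spend: float, new_logo_arr: float, gross_margin: float) -> float:
    """Months of new-logo gross profit needed to repay the S&M that acquired it."""
    return sm_spend / (new_logo_arr * gross_margin) * 12
```

For example, $1M of net-new ARR in a quarter against $2M of prior-quarter S&M spend gives a Magic Number of 2.0, well above the 0.75 target.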
Revenue Waterfall
Opening ARR
+ New Logo ARR
+ Expansion ARR (upsell, cross-sell, seat adds)
- Contraction ARR (downgrades)
- Churned ARR
= Closing ARR
NRR = (Opening + Expansion - Contraction - Churn) / Opening
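The waterfall and the NRR formula above can be sketched together (a minimal illustration; the helper name is mine):

```python
def arr_waterfall(opening, new_logo, expansion, contraction, churned):
    """Return closing ARR plus NRR and GRR for the period, both as ratios."""
    closing = opening + new_logo + expansion - contraction - churned
    # NRR deliberately excludes new-logo ARR: it measures the existing base only
    nrr = (opening + expansion - contraction - churned) / opening
    grr = (opening - contraction - churned) / opening
    return closing, nrr, grr
```

Running $1M opening ARR with $150K expansion, $30K contraction, and $80K churn yields $1.04M closing ARR and 104% NRR, matching the worked example later in this document.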
NRR Benchmarks
| NRR | Signal |
|---|---|
| > 120% | World-class. Grow even with zero new logos. |
| 100-120% | Healthy. Existing base is growing. |
| 90-100% | Concerning. Churn eating growth. |
| < 90% | Crisis. Fix before scaling sales. |
Red Flags
- NRR declining two quarters in a row — customer value story is broken
- Pipeline coverage below 3x entering the quarter — already forecasting a miss
- Win rate dropping while sales cycle extends — competitive pressure or ICP drift
- < 50% of sales team quota-attaining — comp plan, ramp, or quota calibration issue
- Average deal size declining — moving downmarket under pressure (dangerous)
- Magic Number below 0.5 — sales spend not converting to revenue
- Forecast accuracy below 80% — reps sandbagging or pipeline quality is poor
- Single customer > 15% of ARR — concentration risk, board will flag this
- “Too expensive” appearing in > 40% of loss notes — value demonstration broken, not pricing
- Expansion ARR < 20% of total ARR — upsell motion isn’t working
Integration with Other C-Suite Roles
| When… | CRO works with… | To… |
|---|---|---|
| Pricing changes | CPO + CFO | Align value positioning, model margin impact |
| Product roadmap | CPO | Ensure features support ICP and close pipeline |
| Headcount plan | CFO + CHRO | Justify sales hiring with capacity model and ROI |
| NRR declining | CPO + COO | Root cause: product gaps or CS process failures |
| Enterprise expansion | CEO | Executive sponsorship, board-level relationships |
| Revenue targets | CFO | Bottoms-up model to validate top-down board targets |
| Pipeline SLA | CMO | MQL → SQL conversion, CAC by channel, attribution |
| Security reviews | CISO | Unblock enterprise deals with security artifacts |
| Sales ops scaling | COO | RevOps staffing, commission infrastructure, tooling |
Resources
- Sales process, MEDDPICC, comp plans, hiring: references/sales_playbook.md
- Pricing models, value-based pricing, packaging: references/pricing_strategy.md
- NRR deep dive, churn anatomy, health scoring, expansion: references/nrr_playbook.md
- Revenue forecast model (CLI): scripts/revenue_forecast_model.py
- Churn & retention analyzer (CLI): scripts/churn_analyzer.py
Proactive Triggers
Surface these without being asked when you detect them in company context:
- NRR < 100% → leaky bucket, retention must be fixed before pouring more in
- Pipeline coverage < 3x → forecast at risk, flag to CEO immediately
- Win rate declining → sales process or product-market alignment issue
- Top customer concentration > 20% ARR → single-point-of-failure revenue risk
- No pricing review in 12+ months → leaving money on the table or losing deals
Output Artifacts
| Request | You Produce |
|---|---|
| "Forecast next quarter" | Pipeline-based forecast with confidence intervals |
| "Analyze our churn" | Cohort churn analysis with at-risk accounts and intervention plan |
| "Review our pricing" | Pricing analysis with competitive benchmarks and recommendations |
| "Scale the sales team" | Capacity model with quota, ramp, territories, comp plan |
| "Revenue board section" | ARR waterfall, NRR, pipeline, forecast, risks |
Reasoning Technique: Chain of Thought
Pipeline math must be explicit: leads → MQLs → SQLs → opportunities → closed. Show conversion rates at each stage. Question any assumption above historical averages.
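A minimal sketch of that explicit stage-by-stage math (stage names and rates are illustrative, not benchmarks):

```python
def pipeline_forecast(leads: int, stage_conversion: list[float], avg_deal_size: float) -> float:
    """Walk a lead volume through each stage conversion rate and return forecast bookings."""
    volume = float(leads)
    for rate in stage_conversion:
        volume *= rate  # e.g. lead→MQL, MQL→SQL, SQL→opportunity, opportunity→closed-won
    return volume * avg_deal_size


# 1,000 leads through 30% → 50% → 40% → 25% leaves 15 closed-won deals
forecast = pipeline_forecast(1_000, [0.30, 0.50, 0.40, 0.25], avg_deal_size=20_000)
```

Showing each rate explicitly is the point: any single stage assumed above its historical average should be challenged before it reaches the board forecast.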
Communication
All output passes the Internal Quality Loop before reaching the founder (see agent-protocol/SKILL.md).
- Self-verify: source attribution, assumption audit, confidence scoring
- Peer-verify: cross-functional claims validated by the owning role
- Critic pre-screen: high-stakes decisions reviewed by Executive Mentor
- Output format: Bottom Line → What (with confidence) → Why → How to Act → Your Decision
- Results only. Every finding tagged: 🟢 verified, 🟡 medium, 🔴 assumed.
Context Integration
- Always read company-context.md before responding (if it exists)
- During board meetings: Use only your own analysis in Phase 2 (no cross-pollination)
- Invocation: You can request input from other roles:
[INVOKE:role|question]
NRR Playbook
Net Revenue Retention is the single most important metric for a SaaS company's health and valuation. A company with 120% NRR grows even if it closes zero new deals. A company with 80% NRR is filling a bucket with a hole in it.
NRR Deep Dive
The Fundamental Formula
NRR = (Opening MRR + Expansion MRR - Contraction MRR - Churned MRR) / Opening MRR
Example:
Opening MRR: $1,000,000
Expansion: +$150,000
Contraction: -$30,000
Churn: -$80,000
Closing MRR: $1,040,000
NRR = $1,040,000 / $1,000,000 = 104%
NRR vs. GRR
| Metric | Formula | What It Tells You |
|---|---|---|
| GRR | (Opening - Contraction - Churn) / Opening | Retention floor — how much you keep without any expansion |
| NRR | (Opening + Expansion - Contraction - Churn) / Opening | Net health — expansion offsetting churn |
| Logo Retention | (Customers start - Customers churned) / Customers start | Volume retention, ignores revenue weight |
GRR is the floor. NRR is the ceiling.
If GRR is 80% and NRR is 105%, then 25 points of expansion are offsetting 20 points of churn and contraction. That's fragile: any expansion slowdown drops NRR below 100%. The fix is GRR, not more upsell.
Benchmarks by Segment
| Segment | Good GRR | Good NRR | Exceptional NRR |
|---|---|---|---|
| SMB-focused | 80-85% | 95-105% | > 110% |
| Mid-Market | 85-90% | 105-115% | > 120% |
| Enterprise | 90-95% | 115-130% | > 140% |
Enterprise NRR can exceed 140% because large accounts expand substantially and rarely churn entirely — they may downgrade but full logo churn is rare if the product is embedded.
NRR by Cohort
Don't just measure NRR across the full base — measure it by customer cohort (month of acquisition).
Jan 2024 Cohort:
Opening MRR (Jan 2024): $50,000
MRR at Jan 2025: $62,000
12-month NRR: 124%
Feb 2024 Cohort:
Opening MRR (Feb 2024): $45,000
MRR at Feb 2025: $38,000
12-month NRR: 84% ← problem cohort
Cohort analysis reveals:
- Whether a specific acquisition channel brings lower-quality customers
- Whether a product change or pricing shift affected retention
- Whether specific sales reps or time periods created bad-fit deals
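Reproducing the cohort example above as a sketch (an assumed helper, not the bundled churn_analyzer.py):

```python
def cohort_nrr_12m(opening_mrr: float, mrr_12_months_later: float) -> float:
    """12-month NRR for a single acquisition cohort."""
    return mrr_12_months_later / opening_mrr


cohorts = {"Jan 2024": (50_000, 62_000), "Feb 2024": (45_000, 38_000)}
for month, (opening, later) in cohorts.items():
    nrr = cohort_nrr_12m(opening, later)
    status = "problem cohort" if nrr < 1.0 else "healthy"
    print(f"{month}: {nrr:.0%} ({status})")
```

The Jan 2024 cohort computes to 124% and Feb 2024 to 84%, which is the pattern aggregate NRR would have hidden.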
Churn Anatomy
Not all churn is equal. Know the breakdown before prescribing solutions.
Churn Types
| Type | Definition | Primary Cause | Fix |
|---|---|---|---|
| Logo churn | Customer cancels entirely | No value, poor fit, champion left, competitor | Root cause analysis, ICP tightening |
| Revenue churn | ARR lost (cancels + downgrades combined) | Same as logo + downgrade triggers | Address both volume and revenue |
| Involuntary churn | Failed payment, expired card | Billing friction | Dunning improvement (quick win: 20-30% recovery) |
| Voluntary churn | Active cancellation decision | Explicit dissatisfaction, competitor win | Exit interview + intervention program |
| Contraction | Downgrade, seat reduction | Overpurchased, budget cut, team reduction | Right-sizing program, annual contracts |
Churn Root Cause Framework
Run this analysis quarterly on all churned accounts:
Step 1: Categorize by reason
- No value realized (never activated or adopted)
- Value realized but budget cut (external, not product)
- Switched to competitor (why? what did they offer?)
- Champion left company (relationship loss, not product failure)
- Company shutdown / acquisition (unavoidable)
Step 2: Look for patterns
- Which ICP signals predict churn? (company size, vertical, acquisition channel)
- Which product behaviors predict churn? (no login in 30 days, never completed onboarding)
- Which time periods have highest churn? (months 3, 6, 12 are typical cliff points)
Step 3: Act on the patterns
- ICP pattern → tighten qualification criteria
- Behavior pattern → build early warning health score
- Time cliff → build intervention playbooks for months 2, 5, 11
Exit Interview Protocol
Talk to every churned customer if ACV > $10K. For smaller, do quarterly batch surveys.
Questions:
- "What was the primary reason for your decision to cancel?"
- "What would have needed to be true for you to stay?"
- "What did you switch to, and what drove that decision?"
- "Was there a specific moment when you decided to leave?"
Rules:
- CSM who owned the account should NOT conduct the exit interview (too much relationship bias)
- Use a neutral party or the VP CS
- Document verbatim, not paraphrased
- Feed patterns back to Product and Sales monthly
Customer Health Scoring
A health score predicts churn 60-90 days before it happens. Without one, you're reactive.
Health Score Components
Score each account 0-100 across weighted signals:
| Signal | Weight | Red (0-33) | Yellow (34-66) | Green (67-100) |
|---|---|---|---|---|
| Product usage (DAU/WAU, feature adoption depth) | 35% | < 20% seats active | 20-60% seats active | > 60% seats active |
| Engagement (QBR attendance, champion responsiveness) | 20% | No response 60+ days | 30-60 days | Active, < 30 days |
| NPS / CSAT | 20% | Score < 6 | Score 6-7 | Score 8-10 |
| Support volume (negative signal: high volume = friction) | 15% | > 10 tickets/month | 3-10/month | < 3/month |
| Contract signals (time to renewal, expansion in motion) | 10% | < 60 days to renewal, no expansion discussion | 60-90 days, passive | > 90 days, expansion active |
Composite score:
- 70-100: Healthy. Renewal confident. Identify expansion opportunity.
- 50-69: At-risk. CSM check-in required. Executive sponsor loop-in if < 60 days to renewal.
- 0-49: Red alert. Immediate intervention. VP CS or CEO call if strategic account.
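A sketch of the weighted composite, using the weights from the table above (the per-signal sub-scores are assumed inputs on a 0-100 scale):

```python
WEIGHTS = {
    "usage": 0.35,       # DAU/WAU, feature adoption depth
    "engagement": 0.20,  # QBR attendance, champion responsiveness
    "nps": 0.20,
    "support": 0.15,     # inverted signal: high ticket volume scores low
    "contract": 0.10,    # time to renewal, expansion in motion
}


def health_score(sub_scores: dict[str, float]) -> float:
    """Weighted 0-100 composite from per-signal sub-scores."""
    return sum(WEIGHTS[s] * sub_scores[s] for s in WEIGHTS)


def health_tier(score: float) -> str:
    if score >= 70:
        return "healthy"
    if score >= 50:
        return "at-risk"
    return "red alert"
```

How each raw signal maps to its 0-100 sub-score (seat activity bands, NPS cutoffs, ticket volume) follows the red/yellow/green thresholds in the table.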
Health Score Automation
Trigger alerts automatically:
Score drops > 20 points in 30 days → CSM immediate outreach (same day)
No product login in 14 days → Automated email + CSM flag (within 24 hours)
Champion leaves company → Executive outreach (within 24 hours)
Support escalation → CSM loop-in (within 2 hours)
Renewal < 90 days + score < 60 → VP CS review (weekly)
Seat utilization < 30% → Adoption intervention playbook triggered
Leading Indicators vs. Lagging Indicators
| Leading (predict future churn) | Lagging (confirm past churn) |
|---|---|
| Login frequency declining | Cancellation submitted |
| Feature adoption stalling at basic level | Non-renewal at contract end |
| NPS score trend (not just snapshot) | Downgrade executed |
| No QBR scheduled in 90+ days | Champion departure |
| Support escalations increasing | Competitor mentioned in support |
Build your health score from leading indicators. Lagging indicators tell you what already happened.
Expansion Revenue Strategies
Expansion is cheaper than acquisition. CAC for expansion is typically 20-30% of new logo CAC.
Expansion Motion 1: Seat Expansion
Trigger signals:
- Usage by unlicensed users (shared logins, "can you add my colleague?")
- Team growth visible on LinkedIn (company hiring in target department)
- Champion promotes to a new role with bigger team
- Power users at license limit consistently
Playbook:
- Pull monthly usage report showing which features unlicensed users are using
- Frame as: "Your team is getting value from X — you could be capturing that for the full team"
- Offer a team expansion proposal at renewal + 10% volume discount for seat adds
- Never penalize users for sharing logins before the conversation — that's a data asset
Expansion Motion 2: Upsell (Tier Upgrade)
Trigger signals:
- Customer consistently hitting usage/feature limits
- Security or compliance requirement that requires higher tier
- New stakeholder joining who needs admin controls
- API usage growing rapidly (engineering team engagement)
Playbook:
- Build a "value realized" report before the upsell conversation (ROI proof)
- Use QBR as the venue: "You've achieved X. Here's what's possible at the next level."
- Frame the upgrade as unlocking more of what's already working
- Time to renewal: start upsell conversation 90-120 days before renewal
Expansion Motion 3: Cross-sell
Trigger signals:
- Strategic account with adjacent problem your product can solve
- New product launch that complements existing usage
- Customer explicitly asks about a capability in your roadmap or adjacent product
Playbook:
- Land with core product; build relationship and prove value
- Cross-sell only after health score is green and NPS > 7
- Introduce the new product through a champion, not a cold pitch
- Pilot pricing: bundle into renewal at modest uplift vs. separate sale
- Cross-sell owner: CSM or AE (define explicitly — joint ownership = no ownership)
Expansion Sequencing
Don't try all three simultaneously. Sequence matters:
Month 0-3: Activation focus — ensure core value delivered
Month 3-6: Seat expansion — grow usage within existing team
Month 6-9: Upsell conversation — unlock advanced features
Month 9-12: Cross-sell OR renewal + multi-year lock-in
NRR Modeling
Target breakdown for 115% NRR:
GRR: 88% (12% lost to churn/contraction)
Expansion rate: 27% (upsell + cross-sell + seat expansion)
NRR: 88% + 27% = 115%
To reach 120% NRR:
Option A: Improve GRR to 93% (reduce churn), keep expansion at 27%
Option B: Keep GRR at 88%, improve expansion to 32%
Option C: Both, incrementally
Option A is usually easier and more durable. Fix the hole first.
Customer Success Integration
CS and Revenue are not separate functions. NRR lives at their intersection.
CS Team Structure (aligned to NRR)
| CS Model | When to Use | NRR Focus |
|---|---|---|
| High-touch CSM | ACV > $25K | Named accounts, QBRs, executive relationships |
| Tech-touch / pooled | ACV $5K-25K | Automated health scoring, office hours, community |
| Self-serve | ACV < $5K | In-app guidance, knowledge base, email sequences |
CSM coverage ratios:
- High-touch: 1 CSM per $2M-4M ARR managed
- Tech-touch: 1 CSM per $5M-10M ARR managed
- Self-serve: Product and automation (no dedicated CSM)
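A rough capacity check built on those coverage ratios (the midpoint capacities are my assumption; tune them to your own data):

```python
import math

# ARR one CSM can manage, taking midpoints of the ranges above (assumption)
CSM_CAPACITY = {"high_touch": 3_000_000, "tech_touch": 7_500_000}


def csms_needed(arr_under_management: dict[str, float]) -> dict[str, int]:
    """Round up to whole CSM heads for each coverage model."""
    return {
        model: math.ceil(arr / CSM_CAPACITY[model])
        for model, arr in arr_under_management.items()
    }
```

For instance, $9M of high-touch ARR and $20M of tech-touch ARR each work out to three CSMs under these midpoints.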
CS Compensation (aligned to NRR)
Don't pay CSMs a flat salary — align incentive to retention and expansion:
CS compensation structure:
Base: 70% of OTE
Variable: 30% of OTE
Variable tied to:
GRR / NRR vs. target (50% of variable)
Health score improvement (25% of variable)
Expansion ARR facilitated (25% of variable)
Do NOT pay CS commission on expansion ARR the same way AEs earn it.
This creates conflict: CS will push expansion before the customer is ready.
Instead, bonus for expansion milestones — it's a different incentive structure.
QBR (Quarterly Business Review) Framework
QBRs are the primary vehicle for expansion and churn prevention in enterprise accounts.
QBR agenda (60-90 minutes):
- Their goals, our progress — review what they said success looked like at kickoff (10 min)
- Usage and adoption data — product metrics presented in business language, not feature language (15 min)
- Value delivered — ROI proof: time saved, revenue influenced, risk reduced (10 min)
- Challenges and blockers — what's preventing more adoption? (10 min)
- Roadmap preview — upcoming features relevant to their use case (10 min)
- Next 90 days — joint success plan with owner and due dates (10 min)
- Expansion opportunity — if health score is green and timing is right (10 min)
QBR anti-patterns:
- Leading with your product roadmap (they don't care; start with their results)
- Bringing too many people from your side without matching seniority
- Presenting at a VP without bringing the economic buyer
- Skipping QBRs for "healthy" accounts (health can change fast)
- No confirmed next step at the end
Cohort-Based Retention Analysis
Aggregate NRR hides the signal. Cohort analysis reveals it.
Retention Curve Analysis
Plot retention by months since acquisition for each quarterly cohort:
Month 0: 100% (starting revenue)
Month 3: First cliff — early adopters who didn't activate churn here
Month 6: Second cliff — customers who never expanded, running out of runway
Month 12: Renewal cliff — annual contract renewal decision
Month 18: Mature customers — churn rate stabilizes significantly
Healthy curve: Drops sharply in months 1-3, flattens after month 6
Problem curve: Continues declining linearly through month 12+ (no value anchor)
Reading Cohort Data
| Pattern | Interpretation | Action |
|---|---|---|
| Early churn (months 1-3) | Onboarding / activation failure | Fix time-to-value, improve onboarding |
| Mid-cycle churn (months 4-8) | Value not deepening | Adoption program, check product fit |
| Annual renewal churn (month 12) | Buying committee didn't renew | Executive engagement, earlier renewal process |
| Flat after month 6 | Sticky product, low expansion | Increase upsell motion |
| Growing after month 6 | Expansion working | Scale the upsell playbook |
Cohort Segmentation Variables
Slice retention cohorts by:
- Acquisition channel (inbound vs. outbound vs. PLG vs. partner)
- Sales rep (which reps close durable deals vs. churny deals)
- Deal size (SMB churn rate typically 2-3x enterprise)
- Industry vertical (some verticals have structurally higher churn)
- Product tier at signup (self-serve → converted vs. directly contracted)
- Geographic market (international markets often have different retention profiles)
The most actionable finding is usually by acquisition channel or sales rep — both are directly controllable.
Churn Prevention Intervention Playbooks
Playbook 1: Low Activation (no login in first 14 days)
Day 7: Automated email: "Getting started" + specific next step
Day 14: CSM outreach: "I noticed you haven't logged in — can I help?"
Day 21: Escalate to CSM manager if no response
Day 30: Executive outreach for ACV > $25K; flag as at-risk
Playbook 2: Usage Cliff (DAU drops > 50% in 30 days)
Trigger: Automated health score alert
Day 1: CSM reviews usage report, identifies likely cause
Day 2: CSM outreach: "We noticed your team's usage changed — is everything okay?"
Day 7: If no response: schedule 30-min call with champion
Day 14: If unresponsive: VP CS loop-in + executive reach out
Playbook 3: Champion Departure
Trigger: LinkedIn alert or internal report of champion leaving
Day 1: Email to departed champion (warm handoff ask)
Day 1: Email to new stakeholder (introduction from AE or VP CS)
Day 3: Schedule onboarding call for new stakeholder
Day 14: QBR with new stakeholder to establish relationship
Day 30: Health score review — flag if engagement hasn't recovered
Playbook 4: Pre-Renewal (90 days out, health score < 70)
Day -90: CSM completes account health review, escalates if < 70
Day -75: Executive sponsor from vendor side joins renewal call
Day -60: Value delivered report prepared (ROI proof)
Day -45: Renewal proposal sent with expansion option
Day -30: Follow-up on any open objections or requirements
Day -14: Final confirm or escalate to VP Sales
Pricing Strategy
Pricing is not a one-time decision. It's an ongoing hypothesis about value and willingness to pay. Most SaaS companies are underpriced by 20-40%.
Pricing Models
Per Seat / User
How it works: Customer pays a fixed amount per user, per month or year.
Best for:
- Collaboration tools (everyone who uses it needs a license)
- Productivity software where value scales with users
- Products where you want viral / network growth within accounts
Pricing structure:
Starter: $15/user/month (1-10 users)
Professional: $30/user/month (11-100 users)
Enterprise: Custom (100+ users, negotiated)
Pros:
- Simple to understand and sell
- Revenue scales naturally with customer growth
- Predictable for customers (fixed monthly cost)
Cons:
- Customers negotiate volume discounts aggressively
- Discourages broad adoption if price is high (seat hoarding)
- Doesn't capture value for power users vs. light users
- Enterprises can negotiate $5/seat on a $25 product
Watch for: Customers sharing logins to avoid per-seat cost. Enforce with IP restrictions or SSO audit logs.
Usage-Based Pricing (UBP)
How it works: Customer pays for what they consume — API calls, data processed, messages sent, compute hours, etc.
Best for:
- API companies, infrastructure, data platforms
- AI products (per-token, per-query pricing)
- Products where value scales non-linearly with usage
- Land-and-expand: low entry cost, grows with customer success
Pricing structure:
Free tier: First 10K API calls/month
Pay-as-you-go: $0.002 per API call
Committed use: $500/month for 500K calls (better rate)
Enterprise: Custom contract, committed volume discount
Pros:
- Customer pays in proportion to value received
- Low barrier to entry (customers start small, scale up)
- Natural expansion: customer success = revenue growth
- No "unused licenses" problem
Cons:
- Revenue is unpredictable for both you and the customer
- Hard to forecast; hard to budget for customer
- Customers may optimize to reduce usage (and your revenue)
- Complex billing; requires robust usage tracking infrastructure
Usage-based pricing math:
Unit cost (your COGS per unit): $0.0002 per API call
Target gross margin: 80%
Price = COGS / (1 - margin) = $0.0002 / 0.20 = $0.001 minimum
Add markup for value delivered above cost: $0.002 per call (10x markup at scale)
Hybrid usage + seat approach:
- Platform fee: $500/month (access, support, base features)
- Usage fee: $0.001 per API call above included 100K
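The margin-floor math and the hybrid bill above, sketched (numbers come from this section; the function names are mine):

```python
def price_floor(cogs_per_unit: float, target_gross_margin: float) -> float:
    """Minimum unit price that preserves the target gross margin."""
    return cogs_per_unit / (1 - target_gross_margin)


def hybrid_monthly_bill(api_calls: int, platform_fee: float = 500.0,
                        included_calls: int = 100_000, per_call: float = 0.001) -> float:
    """Platform fee plus metered overage above the included volume."""
    overage = max(0, api_calls - included_calls)
    return platform_fee + overage * per_call
```

At $0.0002 COGS per call and an 80% margin target, the floor is $0.001 per call; a customer making 150K calls on the hybrid plan would be billed $550 that month.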
Flat Rate / Subscription
How it works: One price for full access, regardless of usage or users.
Best for:
- Simple products with limited feature differentiation
- Products where usage is predictable and bounded
- Customers who want budget certainty
- Early stage before you've figured out value segmentation
Pros:
- Simplest to sell and explain
- Easiest billing implementation
- Customers love budget predictability
Cons:
- Leaves money on the table for heavy users
- No natural expansion revenue mechanism
- Light users pay the same as power users (retention risk)
When to move away from flat rate:
- 20% of customers are using 80% of the product capacity
- Power users would clearly pay more; light users churn or underutilize
- You have a clear expansion story waiting to happen
Tiered / Feature-Based
How it works: Multiple packages (Starter, Pro, Enterprise) with different feature sets and/or usage limits.
Best for:
- Multi-use-case products
- Different buyer types (individual vs. team vs. enterprise)
- Products with a natural upgrade path based on sophistication
Structure (Good / Better / Best):
Starter ($49/mo): Core features, 3 users, 10GB storage
Professional ($149/mo): Advanced features, 25 users, 100GB, API access
Business ($499/mo): All features, 100 users, 1TB, SSO, priority support
Enterprise (custom): Unlimited, custom integrations, SLA, dedicated CSM
Tier design principles:
- Starter tier: removes friction, proves value, not the revenue center
- Professional: the primary revenue tier; 60-70% of customers land here
- Enterprise: custom pricing allows you to capture maximum value
- Each tier upgrade should have an obvious "must-have" feature for the target buyer
What to gate on each tier:
| Feature Type | Where to Put It |
|---|---|
| Core product functionality | Starter (must be useful) |
| Collaboration features | Pro (drives team usage) |
| Admin, security, SSO | Business/Enterprise |
| API / integrations | Pro and above |
| SLAs, dedicated support | Enterprise only |
| Advanced analytics | Business/Enterprise |
Hybrid Pricing
How it works: Combination of models (e.g., platform fee + per seat + usage).
Example:
Platform fee: $2,000/month (access, core features, admin console)
Per seat: $50/user/month (up to 200 users)
Usage overage: $0.10/action above 100K included actions
When to use hybrid:
- Enterprise customers want budget certainty (platform fee) but your value scales with usage
- You have different cost structures for different features
- Customers have very different usage patterns across the base
Pros: Captures value at multiple dimensions. Hybrid is most common in enterprise SaaS. Cons: More complex to explain and bill. Sales training burden increases.
Value-Based Pricing Methodology
Cost-plus pricing is a race to the bottom. Price on value, not cost.
Step 1: Define the Economic Outcome
What business result does your product deliver? Be specific.
Weak: "We help companies save time"
Strong: "We reduce onboarding time for new enterprise software by 40%, saving 8 hours per employee"
Map to one of:
- Revenue increase — "Our customers close 25% more deals using our CRM intelligence"
- Cost reduction — "We eliminate 60% of manual data entry for finance teams"
- Risk reduction — "We reduce compliance violations by 90%, avoiding $500K+ in potential fines"
- Time savings — "CSMs spend 5 fewer hours per week on manual reporting"
Step 2: Quantify Per Customer
Calculate the dollar value of the outcome for your average customer.
Example: Data entry automation product
Target customer: 50-person finance team
Manual data entry: 4 hours/person/week
Hours saved with product: 2.4 hours/person/week (60% reduction)
Fully loaded cost of finance analyst: $75/hour
Weekly savings: 50 employees × 2.4 hours × $75 = $9,000
Annual savings: $9,000 × 52 weeks = $468,000
Step 3: Determine Willingness to Pay
Customers will typically pay 10-20% of the value delivered for software.
Annual value delivered: $468,000
Willingness to pay range: $46,800 - $93,600/year
Current market pricing: ~$60,000/year
Your pricing: $72,000/year (between median and upper WTP)
Test your hypothesis:
- Interview 5-10 customers: "If we charged $X/year, is that reasonable?"
- Van Westendorp Price Sensitivity Meter:
- "At what price is this too cheap to trust?"
- "At what price is this a good deal?"
- "At what price is this getting expensive but still worth it?"
- "At what price is this too expensive?"
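The Step 2-3 arithmetic above, sketched with hypothetical helpers that mirror the worked example:

```python
def annual_value(team_size: int, hours_saved_per_person_week: float,
                 loaded_hourly_rate: float) -> float:
    """Dollar value of the time savings delivered per year."""
    return team_size * hours_saved_per_person_week * loaded_hourly_rate * 52


def wtp_range(value_delivered: float, low: float = 0.10, high: float = 0.20) -> tuple[float, float]:
    """Willingness-to-pay band at 10-20% of value delivered."""
    return value_delivered * low, value_delivered * high
```

Plugging in the 50-person finance team (2.4 hours saved per person per week at $75/hour) reproduces the $468,000 annual value and the $46,800-$93,600 WTP band.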
Step 4: Validate with Win Rate Analysis
Run this analysis quarterly:
Track win rate by price point (segmented if possible)
Win rate 30-40%: pricing is likely right
Win rate < 20%: price is too high OR value demonstration is broken
Win rate > 50%: you're underpriced
Note: Distinguish between "lost on price" and "lost on fit."
Lost on price + good ROI proof: test lower price or improve value story
Lost on fit: ICP problem, not pricing problem
Packaging (Good / Better / Best)
The Three-Package Framework
Packaging is not just about features. It's about serving different buyer personas with different budgets and needs.
Buyer personas by tier:
Starter → The individual contributor or small team trying to solve an immediate problem
- Low budget authority
- Low-friction purchase (credit card, self-serve)
- Needs quick time to value
Professional → The team manager or department head
- $10K-100K budget authority
- Works with inside sales
- Needs collaboration features and reporting
Enterprise → The VP or C-suite buyer
- Unlimited budget (but requires justification)
- Needs compliance, security, SLAs, dedicated support
- Long buying process, multiple stakeholders
Packaging Design Rules
- Each tier must be useful on its own. Starter can't be crippled—customers need to succeed.
- Upgrade triggers must be obvious. When a customer hits a limit, the next tier should solve it clearly.
- Don't gate features that drive adoption. Collaboration features gated in a low tier kill viral growth.
- Enterprise pricing is custom. Show "Contact Sales" or a starting price. Don't publish a firm enterprise price—you'll anchor too low.
- Annual vs. monthly pricing: Charge 15-25% more for monthly vs. annual. Incentivize annual prepay.
Pricing Page Design
- Lead with the most popular tier (visually prominent)
- Show annual pricing by default (with toggle to monthly)
- Highlight one or two "recommended" plans
- Feature comparison table: minimize the number of rows (overwhelm = no decision)
- Show logos of customers on each tier (social proof by segment)
- Use live chat as the enterprise CTA, not a "Contact Sales" form
Pricing Experiments and Rollout
Before You Change Pricing
Internal checklist:
- Validate new pricing with 5-10 current customers (interviews)
- Run a willingness-to-pay survey with 50+ prospects
- Model revenue impact: how many customers at new pricing are equivalent to current ARR?
- Get CFO sign-off on cash flow impact
- Prepare messaging for customers, website, sales team
- Set a rollout date 60-90 days out
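The "model revenue impact" step is simple arithmetic. The sketch below compares new-business ARR under two prices; the conversion-rate hit is an assumption you must validate in the cohort test, not a forecast.

```python
import math

def breakeven_customers(current_arr, new_acv):
    """Customers needed at the new price to match current ARR."""
    return math.ceil(current_arr / new_acv)

def new_business_arr(prospects, conversion, acv):
    """Expected new-business ARR for a cohort of prospects."""
    return prospects * conversion * acv

# Hypothetical scenario: raising ACV from $12K to $16K, assuming
# conversion slips from 4.0% to 3.2% (a guess to be tested).
old = new_business_arr(1_000, 0.040, 12_000)
new = new_business_arr(1_000, 0.032, 16_000)
```

In this made-up case the higher price wins (~$512K vs. ~$480K) even with a fifth of the conversion gone; your cohort test tells you whether the conversion assumption holds.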
Testing Approaches
Cohort testing (safest):
- New signups see new pricing; existing customers are grandfathered
- Monitor: conversion rate, ACV, win rate, time-to-close
- Run for 90 days before full rollout
A/B pricing test (higher stakes):
- Half of new signups see price A, half see price B
- Risk: word gets out that prices differ (customer frustration)
- Use only on self-serve, where purchase is not sales-assisted
Segment-specific rollout:
- Change pricing in one segment (e.g., SMB) while holding enterprise steady
- Lower risk than full rollout; validate before expanding
Pricing Rollout Plan
Day -90: Decision made, pricing document approved
Day -60: Internal communication to sales, CS, support
Day -45: Customer communication drafted and reviewed
Day -30: Sales team trained, FAQ document ready
Day 0: New pricing live on website for new customers
Day 0: Existing customer email sent (90-day grandfather period begins)
Day +45: Second reminder to existing customers
Day +90: Existing customers transition to new pricing
Day +120: Win rate analysis, NRR impact review
Grandfathering Policy
- Standard: Grandfather existing customers at old price for 12 months
- Aggressive: 90 days grandfather, then new pricing applies (use if you're raising significantly)
- Never: Retroactive pricing changes with no notice. This is a churn trigger and brand damage.
Grandfathering message framing:
"We're investing significantly in [feature areas]. As a valued customer, your pricing remains unchanged through [date]. After that, your new rate will be $X — still X% less than new customer pricing as a thank-you for your partnership."
Competitive Pricing Analysis
Mapping the Competitive Landscape
Step 1: List all direct competitors
Step 2: Find their public pricing (website, G2, Capterra)
Step 3: Secret shop their sales process for unpublished pricing
Step 4: Talk to customers who considered them ("What did they quote you?")
Step 5: Map to your packaging (apples-to-apples comparison)
Output: Competitive pricing matrix
You: $X/month per seat at Pro tier
Competitor A: $Y/month per seat at equivalent tier
Competitor B: Custom (enterprise only)
Competitive Positioning by Price
| Your Position | Situation | Response |
|---|---|---|
| Significantly cheaper | Unclear why | Raise prices or clarify differentiation |
| Slightly cheaper | Winning on price | Test raising price, monitor win rate |
| At market | Competing on features | Make sure differentiation is clear in sales |
| Slightly more expensive | Win rate healthy | Price is justified by value |
| Significantly more expensive | Win rate low | Improve value proof or re-examine ICP |
When "They're Cheaper" Appears in Deals
Coach your reps:
- "What makes [Competitor] worth choosing over the $X difference?" (reframe value, not price)
- "If price were equal, which would you choose and why?" (understand true preference)
- "What's the cost of not solving this problem in Q3?" (urgency + value)
- "What's their implementation cost and time?" (TCO, not ACV)
If price is truly the barrier:
- Offer a pilot at reduced scope (not price) to prove value
- Multi-year deal with year-one discount
- Defer payment to match their budget cycle (start in Q4, bill in Q1)
- Confirm it's price and not a champion issue or lack of urgency
When to Raise Prices
Green Lights for a Price Increase
Product signals:
- Customer usage growing QoQ (product delivers real value)
- NPS consistently > 40
- Feature requests indicate you're solving critical workflows
- Customers measuring and can articulate ROI
Market signals:
- Win rate > 35% (strong signal of underpricing)
- Waitlist or high inbound conversion without price objections
- Competitors raising prices (market is moving up)
- You've added significant value (new features, integrations, uptime improvements)
Business signals:
- Gross margin below 70% (cost inflation requires pricing response)
- CAC payback > 24 months (need higher ACV to fix unit economics)
- Haven't raised prices in 2+ years (inflation alone justifies adjustment)
How Much to Raise
Conservative: 10-15% increase. Low risk, low disruption.
Standard: 15-30% increase. Acceptable if value story is strong.
Aggressive: 30-50% increase. Only with major product investment or clear underpricing.
Repositioning: 2-5x increase. Rare; requires moving to a new buyer persona.
Rule: If fewer than 20% of prospects mention price as a concern, you're underpriced. Test.
Price Increase Execution
- Raise new business pricing immediately on the website
- Communicate to existing customers with 90 days notice
- Grandfather for 12 months OR give a 10-15% loyalty discount on new price
- Track: conversion rate (new business), churn rate (existing), expansion ARR impact
- Monitor win rate for 60 days post-increase; adjust if win rate drops > 5 points
What not to do:
- Don't apologize for raising prices
- Don't over-explain the justification (confident framing wins)
- Don't let sales reps negotiate discounts back to old pricing "just this once"
- Don't raise prices and remove features simultaneously
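The monitoring rules in the execution checklist can be wired into a simple post-increase review. The 5-point threshold is the one stated above; the inputs are hypothetical metric snapshots from before and after the change.

```python
def post_increase_flags(win_before, win_after, churn_before, churn_after):
    """Return guardrail breaches ~60 days after a price increase.
    Rates are fractions, e.g. 0.35 for a 35% win rate."""
    flags = []
    if (win_before - win_after) > 0.05:   # win rate dropped > 5 points
        flags.append("win rate down > 5 points: adjust price or value story")
    if churn_after > churn_before:        # existing-base reaction
        flags.append("churn up post-increase: review grandfathering and comms")
    return flags
```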
Sales Playbook
Frameworks for building, running, and scaling a B2B SaaS sales organization.
Sales Process Design
A sales process is a repeatable series of steps that takes a prospect from first contact to closed revenue. Without it, you have individual heroics, not a scalable machine.
The Core Funnel
Lead Generation → Qualification → Discovery → Demo → Trial / POC → Proposal → Negotiation → Close → Handoff
Each stage has a clear entry criterion, exit criterion, and owner.
Stage Definitions
Stage 0: Lead / Suspect
- Entry: Contact exists in CRM with basic firmographic data
- Owner: Marketing or SDR
- Exit criterion: Meets ICP criteria (company size, industry, tech stack)
- Action: Research, prioritize, add to outbound sequence
Stage 1: Prospecting / Outreach
- Entry: ICP-qualified account, no contact yet
- Owner: SDR or AE (depending on model)
- Exit criterion: Meeting booked with a qualified contact
- Action: Multi-channel outreach (email + call + LinkedIn), 8-12 touch sequence
- Key metric: Meeting booked rate (benchmark: 2-5% of outbound contacts)
Stage 2: Discovery
- Entry: First meeting confirmed
- Owner: AE (SDR hands off or joins)
- Exit criterion: Confirmed: pain, budget range, decision process, timeline
- Action: Ask questions. Listen. Map the org. Don't pitch yet.
- Key metric: Discovery-to-demo rate (benchmark: 60-80% proceed)
Discovery question framework:
Situation: "How do you currently handle [problem area]?"
Problem: "What's the impact when [pain point] happens?"
Implication: "If this continues, what does that mean for [business goal]?"
Need-payoff: "If we solved this, what would that be worth to you?"
Stage 3: Demo / Solution Presentation
- Entry: Confirmed pain and fit from discovery
- Owner: AE (+ SE for complex products)
- Exit criterion: Prospect agrees to evaluate / trial; next step defined
- Action: Show the workflow that solves their specific pain (not a feature tour)
- Key metric: Demo-to-trial/proposal rate (benchmark: 40-60%)
Demo structure:
- Recap their pain (show you listened) — 5 min
- Show the "aha moment" (fastest path to value) — 10 min
- Walk the specific workflow they described — 15 min
- Handle objections, confirm fit — 5 min
- Define clear next step (date, owners, criteria) — 5 min
Never show features they didn't ask for. Every additional feature is noise until they have a reason to care.
Stage 4: Trial / POC
- Entry: Prospect commits to evaluate with real data/use case
- Owner: AE + CSM or SE
- Exit criterion: Success criteria met, POC success confirmed
- Action: Define success criteria upfront (in writing). Set a tight timeframe (2-4 weeks max).
- Key metric: POC-to-proposal rate (benchmark: 50-70%)
POC setup requirements:
Before any POC:
□ Signed NDA
□ Written success criteria ("We'll move forward if X happens")
□ Named champion who owns the evaluation
□ Executive sponsor identified
□ Defined timeline with end date
□ Agreed next step if criteria are met
If you can't get written success criteria, you don't have a real opportunity. You have a "we'll see."
Stage 5: Proposal / Pricing
- Entry: POC success OR strong discovery fit for simple products
- Owner: AE
- Exit criterion: Proposal received, timeline to decision confirmed
- Action: Present in a live call, never email a proposal cold
- Key metric: Proposal-to-negotiation rate (benchmark: 50-75%)
Proposal structure:
- Problem statement (their words, not yours)
- Proposed solution (mapped to their workflow)
- ROI summary (value delivered vs. investment)
- Pricing options (give 2-3 options; anchors the decision)
- Next steps with dates
Stage 6: Negotiation
- Entry: Verbal intent to proceed, price/terms discussion begins
- Owner: AE (+ VP Sales for large deals)
- Exit criterion: Mutual agreement on terms; contract sent
- Action: Never discount before they ask. Discount on scope, not on margin.
- Key metric: Negotiation win rate (benchmark: 70-85%)
Negotiation principles:
- Get something for everything you give. Discount → multi-year. Fast close → early pay discount.
- Don't negotiate against yourself. Silence after an offer is not rejection.
- Know your walk-away before you enter. If you don't have a BATNA, you have no leverage.
- Legal/procurement delay ≠ deal death. Keep the champion engaged.
Stage 7: Close
- Entry: Signed contract or PO received
- Owner: AE
- Exit criterion: Contract countersigned, kickoff date set
- Action: Celebrate with the customer. Immediately introduce CSM.
- Key metric: Close rate (closed-won ÷ all closed deals, i.e., won + lost)
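A quick sanity check on that close rate: chaining the midpoint of each stage benchmark above gives the discovery-to-close yield, and therefore how many discovery calls a quota implies. The midpoints are my assumption; substitute your own stage data.

```python
import math

# Midpoints of the stage benchmarks above (assumptions, not your data).
stage_rates = {
    "discovery_to_demo": 0.70,
    "demo_to_trial_or_proposal": 0.50,
    "poc_to_proposal": 0.60,
    "proposal_to_negotiation": 0.60,
    "negotiation_win": 0.78,
}

def discovery_to_close(rates):
    """Multiply stage conversion rates into an end-to-end yield."""
    yield_rate = 1.0
    for r in rates.values():
        yield_rate *= r
    return yield_rate

def discoveries_needed(target_deals, rates):
    """Discovery meetings required to close target_deals."""
    return math.ceil(target_deals / discovery_to_close(rates))
```

With these midpoints, roughly 10% of discovery calls close, so about 102 discovery meetings are needed per 10 closed deals.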
Stage 8: Handoff to Customer Success
- Entry: Deal closed
- Owner: AE + CSM
- Exit criterion: Customer has met their assigned CSM, kickoff scheduled
- Action: Internal handoff call with AE + CSM. AE shares: deal context, key stakeholders, use case, success criteria, any promises made during the sale.
Handoff document (AE fills before first CS meeting):
Account: [name]
ACV: $X
Close date: [date]
Primary contact: [name, title, email]
Economic buyer: [name, title]
Use case: [specific workflow]
Success criteria: [what they said good looks like in 90 days]
Promises made: [anything specific committed during sale]
Risk flags: [competitive, budget, champion strength]
MEDDPICC Qualification Framework
MEDDPICC is the enterprise qualification standard. If you can't answer every letter, you don't have a qualified opportunity — you have a conversation.
M — Metrics
What is the quantified business impact? What does winning look like in numbers?
- "What's the current cost of [the problem]?"
- "How do you measure success in this area today?"
- "If we achieve X outcome, what does that save or earn you?"
Red flag: No metrics = no business case = hard to get budget.
E — Economic Buyer
Who has final authority to approve the budget?
- "Who else will be involved in the final decision?"
- "Have you purchased solutions in this range before? Who approved that?"
- "When we get to final terms, who needs to sign?"
Red flag: You only know the user buyer. Economic buyer hasn't engaged.
D — Decision Criteria
What factors will they use to evaluate and select a solution?
- "What's most important in your evaluation?"
- "How will you compare options?"
- "What does the ideal solution look like to you?"
Why it matters: If you don't know their criteria, you're guessing what to prove. Define the criteria before you compete on them.
D — Decision Process
What are the steps from evaluation to signed contract?
- "Walk me through your process from here to signed agreement."
- "Does procurement get involved? Legal? InfoSec?"
- "Have you purchased software at this price before? How long did that take?"
Red flag: No defined process = unlimited sales cycle.
P — Paper Process
What's the contract and legal process?
- "Who manages vendor contracts on your side?"
- "What's your standard MSA, or do you use ours?"
- "How long does legal review typically take?"
Why it matters: Legal and procurement have killed many "done" deals. Start early. Route to your legal team simultaneously.
I — Identify Pain
What is the specific, felt pain driving this evaluation?
- "What triggered this initiative now vs. six months ago?"
- "What happens if you don't solve this in Q3?"
- "On a scale of 1-10, how urgent is this for your team?"
Red flag: Pain isn't felt by the economic buyer. User pain ≠ budget authority.
C — Champion
Who will actively sell your solution internally when you're not in the room?
- "Who else have you brought into this evaluation?"
- "Can you help us get access to [economic buyer / IT / security]?"
- "If the decision went the wrong way, who would be disappointed?"
Red flag: Your champion is enthusiastic but has no internal influence.
C — Competition
Who else are they evaluating? What's your position?
- "Are you looking at alternatives?"
- "What made you start with us?"
- "Have you used [Competitor X] before?"
Why it matters: Knowing the competitive field tells you what you need to prove and what to neutralize.
MEDDPICC Scorecard
| Letter | Score 1 | Score 2 | Score 3 |
|---|---|---|---|
| Metrics | No numbers | Approximate value | Specific ROI model |
| Economic Buyer | Unknown | Named, not engaged | Engaged directly |
| Decision Criteria | Vague | Partially defined | Written, weighted |
| Decision Process | Unknown | Verbal description | Steps confirmed, timeline known |
| Paper Process | Unknown | Basic awareness | Legal contacts, standard process known |
| Identify Pain | No urgency | User-level pain | Executive-level pain with consequences |
| Champion | No advocate | Friendly contact | Actively selling internally |
| Competition | Unknown | Identified | Position mapped, differentiation clear |
Score each 1-3. Total 16+/24 = qualified opportunity. Under 12 = unqualified, do not forecast.
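The scorecard above reduces to a small scoring function. The letter keys and the opportunity dict are hypothetical, and I've made the middle band (totals of 12-15) explicit as "developing", which the text leaves implicit.

```python
LETTERS = ["metrics", "economic_buyer", "decision_criteria", "decision_process",
           "paper_process", "identify_pain", "champion", "competition"]

def meddpicc_status(scores):
    """scores: dict mapping each letter to 1-3 per the scorecard above.
    Returns (total, status): 16+/24 qualified, under 12 unqualified."""
    missing = [l for l in LETTERS if l not in scores]
    if missing:
        raise ValueError(f"unscored letters: {missing}")
    total = sum(scores.values())
    if total >= 16:
        status = "qualified: forecastable"
    elif total >= 12:
        status = "developing: work the gaps before forecasting"
    else:
        status = "unqualified: do not forecast"
    return total, status
```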
Sales Compensation Plans
Comp drives behavior. Design it precisely.
Base / Variable Split
| Role | Base % | Variable % | Rationale |
|---|---|---|---|
| SDR | 60-70% | 30-40% | Activity-based, not purely revenue |
| AE (Inside Sales) | 50% | 50% | Balanced risk/reward |
| AE (Enterprise) | 55-60% | 40-45% | Longer cycle, higher base for stability |
| VP Sales | 50% | 50% | Accountable for team results |
| CSM (retention focus) | 70% | 30% | Less variable, stable relationship role |
| CSM (expansion focus) | 60% | 40% | Expansion quota adds variable |
Commission Structure
Standard AE plan:
Base: $80K
Variable: $80K (at 100% quota attainment)
OTE: $160K
Commission rate: OTE variable ÷ Quota
If quota = $800K ARR: commission = $80K ÷ $800K = 10% of ARR closed
Accelerators (performance above quota):
101-125% quota: 1.25x commission rate (12.5% of ARR)
126-150% quota: 1.5x commission rate (15% of ARR)
> 150% quota: 2.0x commission rate (20% of ARR)
Why accelerators matter:
- They keep top performers motivated past quota
- They make it possible for top reps to earn $200K+ (attracting talent)
- They create the "make it rain" culture
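The example plan computes mechanically. Here's a sketch of the tiered payout, using the example's numbers ($800K quota, $80K variable, hence a 10% base rate); each band of attainment pays at its own multiplier.

```python
def commission(arr_closed, quota, ote_variable=80_000):
    """Tiered payout per the example plan above: base rate to 100% of
    quota, 1.25x to 125%, 1.5x to 150%, 2.0x beyond."""
    base_rate = ote_variable / quota          # 10% in the example
    bands = [(1.00, 1.00), (1.25, 1.25), (1.50, 1.50), (float("inf"), 2.00)]
    payout, floor = 0.0, 0.0
    for cap_mult, rate_mult in bands:
        ceiling = cap_mult * quota
        if arr_closed > floor:
            band_arr = min(arr_closed, ceiling) - floor
            payout += band_arr * base_rate * rate_mult
        floor = ceiling
    return payout
```

At 100% attainment the rep earns the $80K variable; at 150% attainment ($1.2M closed) the payout is $135K, which is how accelerators push top-performer earnings past $200K OTE.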
SDR Compensation
SDRs are measured on output (meetings booked, pipeline created), not closed revenue.
Quota: 20 qualified meetings booked per month (or $X pipeline created)
Commission: $150-300 per qualified meeting held
Accelerators:
If a meeting converts to closed won: Bonus $250-500
If monthly meetings > 125% of quota: 1.5x rate on upside meetings
Clawbacks
A clawback recovers commission paid on deals that churn or are fraudulently closed.
Common clawback rules:
- Full clawback if customer cancels within 90 days of close
- 50% clawback if customer cancels within 91-180 days
- No clawback after 180 days (AE shouldn't be penalized for future CS failures)
- Clawbacks vest: pay commission immediately but apply against next quarter's payout if triggered
Why clawbacks matter:
- Without them, reps are incentivized to close any deal, regardless of fit
- With them, reps self-qualify more carefully
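The clawback schedule above as a function; in practice you'd offset the result against the rep's next payout rather than invoicing them, per the vesting note.

```python
def clawback_amount(commission_paid, days_to_cancel):
    """Commission recovered when a closed deal cancels, per the
    schedule above."""
    if days_to_cancel <= 90:
        return commission_paid          # full clawback within 90 days
    if days_to_cancel <= 180:
        return commission_paid * 0.5    # half clawback, days 91-180
    return 0.0                          # after 180 days, CS owns retention
```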
SPIFFs (Sales Performance Incentive Funds)
Short-term tactical incentives for specific behaviors:
- $5K bonus for closing a new vertical deal this quarter
- 1.5x commission on annual prepay deals in Q4
- $1K for closing a deal in a new geographic territory
Use SPIFFs sparingly. Overuse trains reps to wait for the SPIFF before engaging.
Multi-Year and Prepay Incentives
Align rep behavior with company cash flow:
- Multi-year deals: Credit full TCV against quota, pay commission upfront on TCV
- Annual prepay: 10-20% uplift on commission rate
- Monthly billing: Standard commission rate
Enterprise vs. SMB vs. Self-Serve Models
Self-Serve / PLG
Characteristics:
- Product is the primary acquisition channel
- Credit card required (no invoicing)
- No human touch in the initial purchase
- Sales engages only at enterprise signals (high usage, team expansion, compliance needs)
Funnel:
Website → Free trial / Freemium → Activation → PQL → Expansion → Enterprise
Key metrics:
- Free-to-paid conversion rate (benchmark: 2-5% of signups)
- Time to activation (first core action)
- PQL → expansion conversion rate
- NRR from self-serve base
Sales involvement triggers (PQL signals):
- Team size > 10 seats
- Usage spikes (power user patterns)
- Feature limit hits on core features
- Job title change (new economic buyer appears in account)
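Those triggers map cleanly to a scoring function. The account field names and the "usage spike" heuristic (weekly sessions at least 5× seats) are assumptions; wire in your own product-analytics signals.

```python
def pql_signals(account):
    """Return the sales-assist triggers an account has hit.
    Field names are hypothetical CRM / product-analytics keys."""
    seats = account.get("seats", 0)
    signals = []
    if seats > 10:
        signals.append("team size > 10 seats")
    if account.get("weekly_sessions", 0) >= 5 * max(seats, 1):
        signals.append("usage spike (power-user pattern)")  # heuristic, tune it
    if account.get("feature_limit_hits", 0) > 0:
        signals.append("hitting limits on core features")
    if account.get("new_exec_title"):
        signals.append("new economic buyer appeared in account")
    return signals
```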
SMB Inside Sales
Characteristics:
- ACV $5K-25K
- 30-60 day sales cycle
- Inbound-heavy or light outbound
- SDR → AE → CS model
- Phone + email + video; no in-person
Funnel:
Inbound/MQL → SDR qualifies → AE discovery → Demo → Proposal → Close
Key metrics:
- MQL-to-SQL rate (benchmark: 15-25%)
- SQL-to-close rate (benchmark: 20-30%)
- Average sales cycle (30-60 days)
- AE productivity: $600K-$1M quota per rep
Team ratios:
- 1 SDR supports 3-4 AEs
- 1 CSM manages $1M-2M ARR
Enterprise Sales
Characteristics:
- ACV $50K+
- 90-365 day sales cycle
- Outbound prospecting + inbound from brand
- AE + SE + executive sponsor model
- Multi-stakeholder: champion, economic buyer, IT, legal, procurement
Funnel:
Account targeting → Executive outreach → Discovery → POC → Security review → Legal → Procurement → Close
Key metrics:
- Deals in pipeline (volume matters less, quality more)
- POC win rate (benchmark: 60-75%)
- Average sales cycle (3-12 months)
- AE productivity: $1.5M-$3M quota per rep
Team ratios:
- 1 SE supports 3-4 AEs
- 1 CSM manages $2M-5M ARR (named accounts, high-touch)
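The SMB benchmarks above (quota $600K-$1M, 1 SDR per 3-4 AEs, 1 CSM per $1M-$2M ARR) imply a headcount model. This sketch uses midpoints plus an 80% average-attainment haircut, which is my assumption, not a benchmark from the text.

```python
import math

def smb_team_size(new_arr_target, ae_quota=800_000, attainment=0.8,
                  aes_per_sdr=3.5, arr_per_csm=1_500_000):
    """Headcount sketch from the SMB benchmarks above; defaults are
    midpoints and an assumed 80% average attainment."""
    aes = math.ceil(new_arr_target / (ae_quota * attainment))
    sdrs = math.ceil(aes / aes_per_sdr)
    csms = math.ceil(new_arr_target / arr_per_csm)  # covers the ARR being added
    return {"AEs": aes, "SDRs": sdrs, "CSMs": csms}
```

For example, adding $5M of new SMB ARR pencils out to roughly 8 AEs, 3 SDRs, and 4 CSMs under these assumptions.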
Sales Hiring and Ramp
What "Good" Looks Like by Role
SDR (entry level):
- 1-2 years of outbound experience OR strong track record in customer-facing role
- Resilient: rejection is the job
- Coachable: SDR is a proving ground, not a final destination
- Can write clear, concise prospecting emails without templates
AE (inside sales):
- 2-4 years sales experience, preferably SaaS
- Can articulate their process for a discovery call
- Knows their numbers: quota, attainment, average deal size, sales cycle
- Shows how they build pipeline (AEs who only work inbound are a risk)
AE (enterprise):
- 4-8 years B2B sales, at least 2 in enterprise
- Has closed deals > $100K ACV
- Can name the stakeholders in a complex deal they navigated
- Understands procurement, security review, multi-year contracts
VP Sales:
- Has scaled a team from where you are to 2x your size
- Can build a comp plan from scratch
- Has hiring and firing experience
- Revenue from a repeatable process, not personal relationships
Interview Process
3-stage process:
- Recruiter screen (30 min): Motivation, experience, logistics
- Manager interview (60 min): Structured questions on process, examples, numbers
- Panel / role play (90 min): Mock discovery call + debrief; team fit
Role play rubric:
- Did they prepare (knew your product, your ICP)?
- Did they ask before pitching?
- Did they handle pushback without capitulating immediately?
- Did they confirm a next step with a date?
Onboarding Structure (6-Week Ramp)
| Week | Focus | Activities |
|---|---|---|
| 1 | Company, product, ICP | Onboarding sessions, product sandbox, shadow AE calls |
| 2 | Sales process, tools, messaging | CRM training, call review, write first prospecting emails |
| 3 | First outreach | Send first sequences, book first meetings, shadow closes |
| 4 | Independent discovery | Lead own discovery calls with manager reviewing |
| 5 | Full cycle | Handle pipeline independently, weekly coaching |
| 6 | Quota-bearing | 25% of quota expectation; full accountability begins |
Performance Management
Clear standards, no surprises:
Month 3: 25% of quota expected. Miss by > 50% → performance conversation.
Month 4: 50% of quota expected. Miss by > 40% → PIP warning.
Month 5: 75% of quota. Miss by > 30% → formal PIP.
Month 6+: 100% of quota. Consistent miss → exit.
PIP (Performance Improvement Plan) — not for show:
- Should include specific, measurable targets (not "improve attitude")
- 30-60 day timeline
- Weekly check-ins with manager
- If targets aren't met: exit, no extensions
- A PIP that doesn't lead to improvement or exit is a management failure
Rule: Low performers who stay cost you your top performers. They watch what you tolerate.
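The escalation schedule above reads naturally as a function. I interpret "miss by > X%" as relative to that month's quota expectation (an assumption worth confirming with your comp team), and months 1-2 as pre-quota ramp.

```python
def ramp_status(month, attainment):
    """attainment = fraction of full quota achieved in `month`.
    Escalation thresholds follow the schedule above."""
    schedule = {  # month: (expected fraction, allowed relative miss, action)
        3: (0.25, 0.50, "performance conversation"),
        4: (0.50, 0.40, "PIP warning"),
        5: (0.75, 0.30, "formal PIP"),
    }
    if month < 3:
        return "ramping (no quota expectation yet)"
    if month >= 6:
        return "on track" if attainment >= 1.0 else "below quota: watch for consistent miss"
    expected, allowed_miss, action = schedule[month]
    miss = max(0.0, (expected - attainment) / expected)
    return action if miss > allowed_miss else "on track"
```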
#!/usr/bin/env python3
"""
Churn & Retention Analyzer
===========================
Customer-level churn and Net Revenue Retention (NRR) analysis for B2B SaaS.
Calculates:
- Gross Revenue Retention (GRR) and Net Revenue Retention (NRR)
- Monthly and annual churn rates (logo + revenue)
- Cohort-based retention curves
- At-risk account identification
- Expansion revenue segmentation
- ARR waterfall (new / expansion / contraction / churn)
Usage:
python churn_analyzer.py
python churn_analyzer.py --csv customers.csv
python churn_analyzer.py --period 2026-Q1 --output summary
Input format (CSV):
customer_id, name, segment, arr, start_date, [churn_date], [expansion_arr], [contraction_arr]
Stdlib only. No dependencies.
"""
import csv
import sys
import json
import argparse
import statistics
from datetime import date, datetime, timedelta
from collections import defaultdict
from io import StringIO
from itertools import groupby
# ---------------------------------------------------------------------------
# Data model
# ---------------------------------------------------------------------------
class Customer:
def __init__(self, customer_id, name, segment, arr, start_date,
churn_date=None, expansion_arr=0.0, contraction_arr=0.0,
health_score=None):
self.customer_id = customer_id
self.name = name
self.segment = segment
self.arr = float(arr)
self.start_date = self._parse_date(start_date)
self.churn_date = self._parse_date(churn_date) if churn_date else None
self.expansion_arr = float(expansion_arr or 0)
self.contraction_arr = float(contraction_arr or 0)
self.health_score = float(health_score) if health_score else None
@staticmethod
def _parse_date(value):
if not value or str(value).strip() in ("", "None", "null"):
return None
for fmt in ("%Y-%m-%d", "%m/%d/%Y", "%d/%m/%Y", "%Y/%m/%d"):
try:
return datetime.strptime(str(value).strip(), fmt).date()
except ValueError:
continue
raise ValueError(f"Cannot parse date: {value!r}")
def is_churned(self):
return self.churn_date is not None
def is_active(self, as_of=None):
as_of = as_of or date.today()
if self.churn_date and self.churn_date <= as_of:
return False
return self.start_date <= as_of
def tenure_days(self, as_of=None):
as_of = as_of or date.today()
end = self.churn_date if self.churn_date else as_of
return (end - self.start_date).days
def tenure_months(self, as_of=None):
return self.tenure_days(as_of) / 30.44
def cohort_month(self):
"""Acquisition cohort: YYYY-MM of start_date."""
return self.start_date.strftime("%Y-%m")
def cohort_quarter(self):
q = (self.start_date.month - 1) // 3 + 1
return f"Q{q} {self.start_date.year}"
def net_arr(self):
"""Current ARR + expansion - contraction."""
return self.arr + self.expansion_arr - self.contraction_arr
def days_since_acquisition(self, as_of=None):
as_of = as_of or date.today()
return (as_of - self.start_date).days
# ---------------------------------------------------------------------------
# Core metrics
# ---------------------------------------------------------------------------
class RetentionAnalyzer:
def __init__(self, customers, as_of=None):
self.customers = customers
self.as_of = as_of or date.today()
def active_customers(self, as_of=None):
as_of = as_of or self.as_of
return [c for c in self.customers if c.is_active(as_of)]
def churned_customers(self, start=None, end=None):
"""Customers who churned in [start, end]."""
result = []
for c in self.customers:
if not c.churn_date:
continue
if start and c.churn_date < start:
continue
if end and c.churn_date > end:
continue
result.append(c)
return result
def arr_waterfall(self, period_start, period_end):
"""
Calculate ARR waterfall for a given period.
Returns dict with opening_arr, new_arr, expansion_arr, contraction_arr,
churned_arr, closing_arr, nrr, grr.
"""
# Opening: active at period start
opening_customers = [c for c in self.customers if c.is_active(period_start)]
opening_arr = sum(c.arr for c in opening_customers)
opening_ids = {c.customer_id for c in opening_customers}
# New: started during the period
new_customers = [
c for c in self.customers
if period_start < c.start_date <= period_end
]
new_arr = sum(c.arr for c in new_customers)
# Churned: were active at start, churn_date within period
churned = [
c for c in opening_customers
if c.churn_date and period_start < c.churn_date <= period_end
]
churned_arr = sum(c.arr for c in churned)
# Expansion and contraction: from customers active at opening
expansion = sum(
c.expansion_arr for c in opening_customers
if not c.is_churned() or (c.churn_date and c.churn_date > period_end)
)
contraction = sum(
c.contraction_arr for c in opening_customers
if not c.is_churned() or (c.churn_date and c.churn_date > period_end)
)
closing_arr = opening_arr + new_arr + expansion - contraction - churned_arr
grr = (opening_arr - contraction - churned_arr) / opening_arr if opening_arr else 0
nrr = (opening_arr + expansion - contraction - churned_arr) / opening_arr if opening_arr else 0
return {
"period_start": period_start.isoformat(),
"period_end": period_end.isoformat(),
"opening_arr": opening_arr,
"new_arr": new_arr,
"expansion_arr": expansion,
"contraction_arr": contraction,
"churned_arr": churned_arr,
"closing_arr": closing_arr,
"net_new_arr": new_arr + expansion - contraction - churned_arr,
"grr": max(0.0, grr),
"nrr": max(0.0, nrr),
}
def logo_churn_rate(self, period_start, period_end):
"""Logo churn rate for a period."""
opening = [c for c in self.customers if c.is_active(period_start)]
churned = [
c for c in opening
if c.churn_date and period_start < c.churn_date <= period_end
]
return len(churned) / len(opening) if opening else 0.0
def revenue_churn_rate(self, period_start, period_end):
"""Gross revenue churn rate for a period."""
opening = [c for c in self.customers if c.is_active(period_start)]
opening_arr = sum(c.arr for c in opening)
churned_arr = sum(
c.arr for c in opening
if c.churn_date and period_start < c.churn_date <= period_end
)
contraction = sum(c.contraction_arr for c in opening)
return (churned_arr + contraction) / opening_arr if opening_arr else 0.0
# ---------------------------------------------------------------------------
# Cohort analysis
# ---------------------------------------------------------------------------
class CohortAnalyzer:
def __init__(self, customers):
self.customers = customers
def build_cohorts(self):
"""Group customers by acquisition cohort (month)."""
cohorts = defaultdict(list)
for c in self.customers:
cohorts[c.cohort_month()].append(c)
return dict(sorted(cohorts.items()))
def retention_at_month(self, cohort_customers, months_after):
"""
What fraction of cohort ARR remains `months_after` months after acquisition?
"""
if not cohort_customers:
return None
opening_arr = sum(c.arr for c in cohort_customers)
if opening_arr == 0:
return None
earliest_start = min(c.start_date for c in cohort_customers)
check_date = earliest_start + timedelta(days=int(months_after * 30.44))
if check_date > date.today():
return None # Future — no data
retained_arr = sum(
c.arr for c in cohort_customers
if c.is_active(check_date)
)
return retained_arr / opening_arr
def retention_curve(self, cohort_customers, max_months=24):
"""Return retention at months 0, 3, 6, 9, 12, 18, 24."""
checkpoints = [0, 3, 6, 9, 12, 18, 24]
checkpoints = [m for m in checkpoints if m <= max_months]
curve = {}
for m in checkpoints:
rate = self.retention_at_month(cohort_customers, m)
if rate is not None:
curve[m] = rate
return curve
def cohort_report(self):
"""Returns dict: cohort → {size, opening_arr, retention_curve}."""
cohorts = self.build_cohorts()
report = {}
for cohort_month, customers in cohorts.items():
curve = self.retention_curve(customers)
report[cohort_month] = {
"customer_count": len(customers),
"opening_arr": sum(c.arr for c in customers),
"churned_count": sum(1 for c in customers if c.is_churned()),
"current_retention": curve.get(12, curve.get(max(curve.keys()) if curve else 0)),
"retention_curve": curve,
}
return report
def identify_at_risk(self, tenure_months_max=6, health_threshold=60):
"""
Identify at-risk customers based on:
- Low health score (if available)
- Short tenure (haven't proved long-term value)
- High contraction signals
"""
at_risk = []
for c in self.customers:
if c.is_churned():
continue
reasons = []
score = 0
# Health score signal
if c.health_score is not None and c.health_score < health_threshold:
reasons.append(f"Health score {c.health_score:.0f} < {health_threshold}")
score += 40
# Early tenure risk
tenure = c.tenure_months()
if tenure < tenure_months_max:
reasons.append(f"Tenure {tenure:.1f} months (< {tenure_months_max})")
score += 20
# Contraction signal
if c.contraction_arr > 0:
contraction_pct = c.contraction_arr / c.arr
reasons.append(f"Contraction {contraction_pct:.0%} of ARR")
score += 30
# No expansion in mature account
if tenure > 12 and c.expansion_arr == 0:
reasons.append("No expansion after 12+ months (stagnant)")
score += 10
if score > 0:
at_risk.append({
"customer_id": c.customer_id,
"name": c.name,
"segment": c.segment,
"arr": c.arr,
"tenure_months": round(tenure, 1),
"health_score": c.health_score,
"risk_score": score,
"risk_reasons": reasons,
})
return sorted(at_risk, key=lambda x: -x["risk_score"])
# ---------------------------------------------------------------------------
# Expansion analysis
# ---------------------------------------------------------------------------
class ExpansionAnalyzer:
def __init__(self, customers):
self.customers = customers
def expansion_summary(self):
active = [c for c in self.customers if not c.is_churned()]
expanding = [c for c in active if c.expansion_arr > 0]
contracting = [c for c in active if c.contraction_arr > 0]
total_arr = sum(c.arr for c in active)
total_expansion = sum(c.expansion_arr for c in active)
total_contraction = sum(c.contraction_arr for c in active)
return {
"active_customers": len(active),
"total_arr": total_arr,
"expanding_count": len(expanding),
"contracting_count": len(contracting),
"expansion_arr": total_expansion,
"contraction_arr": total_contraction,
"expansion_rate": total_expansion / total_arr if total_arr else 0,
"contraction_rate": total_contraction / total_arr if total_arr else 0,
"net_expansion_rate": (total_expansion - total_contraction) / total_arr if total_arr else 0,
}
def expansion_by_segment(self):
active = [c for c in self.customers if not c.is_churned()]
by_segment = defaultdict(lambda: {"arr": 0.0, "expansion": 0.0,
"contraction": 0.0, "count": 0})
for c in active:
seg = c.segment or "Unspecified"
by_segment[seg]["arr"] += c.arr
by_segment[seg]["expansion"] += c.expansion_arr
by_segment[seg]["contraction"] += c.contraction_arr
by_segment[seg]["count"] += 1
result = {}
for seg, data in by_segment.items():
arr = data["arr"]
result[seg] = {
"customer_count": data["count"],
"arr": arr,
"expansion_arr": data["expansion"],
"contraction_arr": data["contraction"],
"expansion_rate": data["expansion"] / arr if arr else 0,
"net_nrr_contribution": (arr + data["expansion"] - data["contraction"]) / arr if arr else 0,
}
return result
def top_expansion_candidates(self, min_tenure_months=6, min_arr=5000):
"""
Customers who are active, healthy tenure, but have zero expansion.
These are upsell/expansion targets.
"""
active = [c for c in self.customers if not c.is_churned()]
candidates = []
for c in active:
tenure = c.tenure_months()
if (tenure >= min_tenure_months
and c.arr >= min_arr
and c.expansion_arr == 0
and (c.health_score is None or c.health_score >= 60)):
candidates.append({
"customer_id": c.customer_id,
"name": c.name,
"segment": c.segment,
"arr": c.arr,
"tenure_months": round(tenure, 1),
"health_score": c.health_score,
})
return sorted(candidates, key=lambda x: -x["arr"])
# ---------------------------------------------------------------------------
# Reporting
# ---------------------------------------------------------------------------
def fmt_currency(value):
    """Format a dollar amount; handles negatives (e.g. negative net-new ARR)."""
    sign = "-" if value < 0 else ""
    v = abs(value)
    if v >= 1_000_000:
        return f"{sign}${v / 1_000_000:.2f}M"
    if v >= 1_000:
        return f"{sign}${v / 1_000:.1f}K"
    return f"{sign}${v:.0f}"
def fmt_pct(value):
return f"{value * 100:.1f}%"
def nrr_status(nrr):
if nrr >= 1.20:
return "✅ World-class"
if nrr >= 1.10:
return "✅ Healthy"
if nrr >= 1.00:
return "⚠️ Acceptable"
if nrr >= 0.90:
return "🔴 Concerning"
return "🔴 Crisis"
def grr_status(grr):
if grr >= 0.90:
return "✅ Strong"
if grr >= 0.85:
return "⚠️ Acceptable"
return "🔴 Below threshold"
def print_header(title):
width = 70
print()
print("=" * width)
print(f" {title}")
print("=" * width)
def print_section(title):
print(f"\n--- {title} ---")
def print_full_report(customers, period_start, period_end):
analyzer = RetentionAnalyzer(customers, as_of=period_end)
cohort_analyzer = CohortAnalyzer(customers)
expansion_analyzer = ExpansionAnalyzer(customers)
print_header("CHURN & RETENTION ANALYZER")
print(f" Analysis period: {period_start.isoformat()} → {period_end.isoformat()}")
print(f" Total customers in dataset: {len(customers)}")
active = analyzer.active_customers(period_end)
churned_in_period = analyzer.churned_customers(period_start, period_end)
print(f" Active at period end: {len(active)}")
print(f" Churned in period: {len(churned_in_period)}")
# ── ARR Waterfall
print_section("ARR WATERFALL")
wf = analyzer.arr_waterfall(period_start, period_end)
print(f" Opening ARR: {fmt_currency(wf['opening_arr'])}")
print(f" + New Logo ARR: +{fmt_currency(wf['new_arr'])}")
print(f" + Expansion ARR: +{fmt_currency(wf['expansion_arr'])}")
print(f" - Contraction ARR: -{fmt_currency(wf['contraction_arr'])}")
print(f" - Churned ARR: -{fmt_currency(wf['churned_arr'])}")
print(f" {'─'*42}")
print(f" Closing ARR: {fmt_currency(wf['closing_arr'])}")
print(f" Net New ARR: {'+' if wf['net_new_arr'] >= 0 else ''}{fmt_currency(wf['net_new_arr'])}")
# ── NRR / GRR
print_section("RETENTION METRICS")
nrr = wf["nrr"]
grr = wf["grr"]
logo_churn = analyzer.logo_churn_rate(period_start, period_end)
rev_churn = analyzer.revenue_churn_rate(period_start, period_end)
print(f" NRR (Net Revenue Retention): {fmt_pct(nrr)} {nrr_status(nrr)}")
print(f" GRR (Gross Revenue Retention): {fmt_pct(grr)} {grr_status(grr)}")
print(f" Logo Churn Rate (period): {fmt_pct(logo_churn)}")
print(f" Revenue Churn Rate (period): {fmt_pct(rev_churn)}")
if wf["opening_arr"] > 0:
expansion_rate = wf["expansion_arr"] / wf["opening_arr"]
print(f" Expansion Rate (period): {fmt_pct(expansion_rate)}")
print()
print(f" NRR Benchmark: >120% world-class | 100-120% healthy | <100% fix immediately")
# ── Expansion summary
print_section("EXPANSION REVENUE")
exp = expansion_analyzer.expansion_summary()
print(f" Expanding customers: {exp['expanding_count']} / {exp['active_customers']} ({fmt_pct(exp['expanding_count']/exp['active_customers']) if exp['active_customers'] else '—'})")
print(f" Contracting: {exp['contracting_count']} / {exp['active_customers']}")
print(f" Expansion ARR: {fmt_currency(exp['expansion_arr'])} ({fmt_pct(exp['expansion_rate'])} of base)")
print(f" Contraction ARR: {fmt_currency(exp['contraction_arr'])}")
print(f" Net Expansion Rate: {fmt_pct(exp['net_expansion_rate'])}")
# ── Segment breakdown
print_section("SEGMENT BREAKDOWN (NRR Components)")
seg_data = expansion_analyzer.expansion_by_segment()
col_w = [18, 8, 12, 10, 10, 10]
h = (f" {'Segment':<{col_w[0]}} {'Custs':>{col_w[1]}} {'ARR':>{col_w[2]}} "
f"{'Expansion':>{col_w[3]}} {'Contraction':>{col_w[4]}} {'NRR':>{col_w[5]}}")
print(h)
print(" " + "-" * (sum(col_w) + 5))
for seg, data in sorted(seg_data.items(), key=lambda x: -x[1]["arr"]):
print(f" {seg:<{col_w[0]}} {data['customer_count']:>{col_w[1]}} "
f"{fmt_currency(data['arr']):>{col_w[2]}} "
f"{fmt_currency(data['expansion_arr']):>{col_w[3]}} "
f"{fmt_currency(data['contraction_arr']):>{col_w[4]}} "
f"{fmt_pct(data['net_nrr_contribution']):>{col_w[5]}}")
# ── Cohort retention
print_section("COHORT RETENTION CURVES")
cohort_report = cohort_analyzer.cohort_report()
print(f" {'Cohort':<10} {'Custs':>6} {'Opening ARR':>13} {'Mo.3':>8} {'Mo.6':>8} {'Mo.12':>8}")
print(" " + "-" * 57)
for cohort, data in cohort_report.items():
curve = data["retention_curve"]
m3 = fmt_pct(curve[3]) if 3 in curve else " —"
m6 = fmt_pct(curve[6]) if 6 in curve else " —"
m12 = fmt_pct(curve[12]) if 12 in curve else " —"
print(f" {cohort:<10} {data['customer_count']:>6} "
f"{fmt_currency(data['opening_arr']):>13} "
f"{m3:>8} {m6:>8} {m12:>8}")
# ── At-risk accounts
print_section("AT-RISK ACCOUNTS")
at_risk = cohort_analyzer.identify_at_risk()
if at_risk:
print(f" {'Customer':<22} {'Segment':<14} {'ARR':>10} {'Tenure':>8} {'Risk':>6} Reason")
print(" " + "-" * 80)
for acct in at_risk[:10]: # Top 10
reason_short = acct["risk_reasons"][0] if acct["risk_reasons"] else ""
tenure_str = f"{acct['tenure_months']}mo"
print(f" {acct['name']:<22} {acct['segment']:<14} "
f"{fmt_currency(acct['arr']):>10} {tenure_str:>8} "
f"{acct['risk_score']:>5} {reason_short}")
if len(at_risk) > 10:
print(f" ... and {len(at_risk) - 10} more at-risk accounts")
else:
print(" ✅ No at-risk accounts identified")
# ── Expansion candidates
print_section("EXPANSION CANDIDATES (no expansion yet, healthy tenure)")
candidates = expansion_analyzer.top_expansion_candidates()
if candidates:
print(f" {'Customer':<22} {'Segment':<14} {'ARR':>10} {'Tenure':>8} Action")
print(" " + "-" * 70)
for c in candidates[:8]:
action = "Upsell review" if c["arr"] > 20000 else "Seat expansion call"
tenure_str = f"{c['tenure_months']}mo"
print(f" {c['name']:<22} {c['segment']:<14} "
f"{fmt_currency(c['arr']):>10} {tenure_str:>8} {action}")
else:
print(" ✅ All eligible accounts have expansion in motion")
# ── Red flags
print_section("HEALTH FLAGS")
flags = []
if nrr < 1.0:
flags.append("🔴 NRR below 100% — revenue base is shrinking. Fix before scaling sales.")
if grr < 0.85:
flags.append(f"🔴 GRR {fmt_pct(grr)} — gross retention below 85% threshold. Churn is a product/CS problem.")
if logo_churn > 0.05:
flags.append(f"⚠️ Logo churn {fmt_pct(logo_churn)} this period — run cohort analysis to find the pattern.")
if exp["expansion_rate"] < 0.10 and exp["active_customers"] > 10:
flags.append("⚠️ Expansion rate below 10% — upsell motion is weak or non-existent.")
churned_arr_pct = wf["churned_arr"] / wf["opening_arr"] if wf["opening_arr"] else 0
if churned_arr_pct > 0.10:
flags.append(f"🔴 Revenue churn at {fmt_pct(churned_arr_pct)} of opening ARR this period — high urgency.")
if len(at_risk) > len(active) * 0.20:
flags.append(f"⚠️ {len(at_risk)} of {len(active)} active accounts flagged at-risk ({fmt_pct(len(at_risk)/len(active) if active else 0)})")
if flags:
for f in flags:
print(f" {f}")
else:
print(" ✅ No critical health flags")
print()
# ---------------------------------------------------------------------------
# Sample data
# ---------------------------------------------------------------------------
SAMPLE_CSV = """customer_id,name,segment,arr,start_date,churn_date,expansion_arr,contraction_arr,health_score
C001,Acme Manufacturing,Enterprise,120000,2023-01-15,,45000,0,82
C002,TechStart Inc,Mid-Market,28000,2023-02-01,,8000,0,74
C003,Global Retail Co,Enterprise,250000,2023-01-05,,0,25000,45
C004,MedTech Solutions,Mid-Market,45000,2023-03-10,,15000,0,88
C005,FinServ Holdings,Enterprise,185000,2023-01-20,2023-09-15,0,0,
C006,StartupHub Network,SMB,12000,2023-04-01,,0,3000,55
C007,EduPlatform Inc,Mid-Market,32000,2023-02-15,,10000,0,91
C008,BioLab Analytics,Enterprise,95000,2023-01-10,,20000,0,78
C009,RegionalBank Corp,Enterprise,310000,2023-03-01,,75000,0,85
C010,CloudOps Systems,Mid-Market,38000,2023-05-01,2024-01-10,0,0,
C011,InsurTech Platform,Mid-Market,55000,2023-06-15,,0,0,62
C012,LegalAI Corp,SMB,18000,2023-07-01,,5000,0,79
C013,RetailChain Ltd,Enterprise,140000,2023-04-20,,0,20000,41
C014,DataPipeline Co,Mid-Market,42000,2023-08-01,,12000,0,83
C015,NanoTech Startup,SMB,9500,2023-09-15,2024-02-28,0,0,
C016,MedDevice Corp,Enterprise,220000,2023-02-28,,60000,0,92
C017,ConsultingFirm XYZ,SMB,15000,2023-10-01,,0,5000,38
C018,GovTech Solutions,Enterprise,175000,2023-11-15,,0,0,71
C019,AgriData Systems,Mid-Market,31000,2024-01-10,,8000,0,77
C020,HealthcarePlus,Mid-Market,62000,2024-02-01,,0,0,65
"""
# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------
def load_customers_from_csv(csv_text):
reader = csv.DictReader(StringIO(csv_text))
customers = []
errors = []
for i, row in enumerate(reader, start=2):
try:
c = Customer(
customer_id=row.get("customer_id", f"row_{i}"),
name=row.get("name", f"Customer {i}"),
segment=row.get("segment", ""),
arr=row.get("arr", 0),
start_date=row.get("start_date", ""),
churn_date=row.get("churn_date", None) or None,
expansion_arr=row.get("expansion_arr", 0) or 0,
contraction_arr=row.get("contraction_arr", 0) or 0,
health_score=row.get("health_score", None) or None,
)
customers.append(c)
except (ValueError, KeyError) as e:
errors.append(f" Row {i}: {e}")
if errors:
print("⚠️ Skipped rows with errors:")
for err in errors:
print(err)
return customers
def parse_period(period_str):
    """Parse 'YYYY-QN' or 'YYYY-MM' into (start_date, end_date)."""
    import calendar
    if not period_str:
        # Default: the current calendar quarter.
        today = date.today()
        q = (today.month - 1) // 3
        start = date(today.year, q * 3 + 1, 1)
        end_month = start.month + 2  # quarter starts are 1/4/7/10, so no year rollover
        end_day = calendar.monthrange(start.year, end_month)[1]
        return start, date(start.year, end_month, end_day)
    if "-Q" in period_str:
        year, qpart = period_str.split("-Q")
        year, q = int(year), int(qpart)
        start_month = (q - 1) * 3 + 1
        end_month = start_month + 2
        start = date(year, start_month, 1)
        end = date(year, end_month, calendar.monthrange(year, end_month)[1])
        return start, end
    # YYYY-MM
    year, month = period_str.split("-")
    year, month = int(year), int(month)
    start = date(year, month, 1)
    end = date(year, month, calendar.monthrange(year, month)[1])
    return start, end
def main():
parser = argparse.ArgumentParser(
description="Churn & Retention Analyzer — NRR, cohort analysis, at-risk detection"
)
parser.add_argument(
"--csv", metavar="FILE",
help="CSV file with customer data (uses sample data if not provided)"
)
parser.add_argument(
"--period", metavar="PERIOD",
help='Analysis period: "2026-Q1" or "2026-03" (defaults to current quarter)'
)
parser.add_argument(
"--output", choices=["summary", "full", "json"],
default="full",
help="Output format (default: full)"
)
args = parser.parse_args()
# Load data
if args.csv:
try:
with open(args.csv, "r", encoding="utf-8") as f:
csv_text = f.read()
except FileNotFoundError:
print(f"Error: File not found: {args.csv}", file=sys.stderr)
sys.exit(1)
else:
print("No --csv provided. Using sample customer data.\n")
csv_text = SAMPLE_CSV
customers = load_customers_from_csv(csv_text)
if not customers:
print("No customers loaded. Exiting.", file=sys.stderr)
sys.exit(1)
period_start, period_end = parse_period(args.period)
if args.output == "json":
analyzer = RetentionAnalyzer(customers, as_of=period_end)
cohort_analyzer = CohortAnalyzer(customers)
expansion_analyzer = ExpansionAnalyzer(customers)
wf = analyzer.arr_waterfall(period_start, period_end)
output = {
"period": {"start": period_start.isoformat(), "end": period_end.isoformat()},
"arr_waterfall": wf,
"logo_churn_rate": analyzer.logo_churn_rate(period_start, period_end),
"revenue_churn_rate": analyzer.revenue_churn_rate(period_start, period_end),
"cohort_report": {k: {**v, "retention_curve": {str(m): r for m, r in v["retention_curve"].items()}}
for k, v in cohort_analyzer.cohort_report().items()},
"at_risk_accounts": cohort_analyzer.identify_at_risk(),
"expansion_summary": expansion_analyzer.expansion_summary(),
"expansion_by_segment": expansion_analyzer.expansion_by_segment(),
"expansion_candidates": expansion_analyzer.top_expansion_candidates(),
}
print(json.dumps(output, indent=2))
elif args.output == "summary":
analyzer = RetentionAnalyzer(customers, as_of=period_end)
wf = analyzer.arr_waterfall(period_start, period_end)
print_header("NRR SUMMARY")
print(f" Period: {period_start.isoformat()} → {period_end.isoformat()}")
print(f" NRR: {fmt_pct(wf['nrr'])} {nrr_status(wf['nrr'])}")
print(f" GRR: {fmt_pct(wf['grr'])} {grr_status(wf['grr'])}")
print(f" Opening: {fmt_currency(wf['opening_arr'])}")
print(f" Closing: {fmt_currency(wf['closing_arr'])}")
print(f" Net New: {fmt_currency(wf['net_new_arr'])}")
print()
else:
print_full_report(customers, period_start, period_end)
if __name__ == "__main__":
main()
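A note on the retention math the analyzer above reports: NRR credits expansion against contraction and churn, while GRR ignores expansion entirely. A minimal, self-contained sketch using the standard definitions (the dollar figures below are illustrative, not taken from the sample data):

```python
def retention_metrics(opening_arr, expansion, contraction, churned):
    """Standard NRR/GRR definitions over one ARR-waterfall period."""
    nrr = (opening_arr + expansion - contraction - churned) / opening_arr
    grr = (opening_arr - contraction - churned) / opening_arr  # expansion excluded
    return nrr, grr

# Illustrative: $10M opening, $1.5M expansion, $300K contraction, $500K churned
nrr, grr = retention_metrics(10_000_000, 1_500_000, 300_000, 500_000)
print(f"NRR {nrr:.0%} | GRR {grr:.0%}")  # NRR 107% | GRR 92%
```

GRR can never exceed 100%, which is why the status thresholds above treat 90%+ as strong while NRR is only world-class above 120%.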
#!/usr/bin/env python3
"""
Revenue Forecast Model
======================
Pipeline-based revenue forecasting for B2B SaaS.
Models:
- Weighted pipeline (stage probability × deal value)
- Historical win rate adjustment (calibrate to actuals)
- Scenario analysis (conservative / base / upside)
- Monthly and quarterly projection with confidence ranges
Usage:
python revenue_forecast_model.py
python revenue_forecast_model.py --csv pipeline.csv
python revenue_forecast_model.py --scenario conservative
Input format (CSV):
deal_id, name, stage, arr_value, close_date, rep, segment
Stdlib only. No dependencies.
"""
import csv
import sys
import json
import argparse
import statistics
from datetime import date, datetime, timedelta
from collections import defaultdict
from io import StringIO
# ---------------------------------------------------------------------------
# Stage configuration
# ---------------------------------------------------------------------------
DEFAULT_STAGE_PROBABILITIES = {
"discovery": 0.10,
"qualification": 0.25,
"demo": 0.40,
"proposal": 0.55,
"poc": 0.65,
"negotiation": 0.80,
"verbal_commit": 0.92,
"closed_won": 1.00,
"closed_lost": 0.00,
}
SCENARIO_MULTIPLIERS = {
"conservative": 0.85, # Win rate 15% below historical
"base": 1.00, # Historical win rate
"upside": 1.15, # Win rate 15% above historical
}
# ---------------------------------------------------------------------------
# Data model
# ---------------------------------------------------------------------------
class Deal:
def __init__(self, deal_id, name, stage, arr_value, close_date, rep="", segment=""):
self.deal_id = deal_id
self.name = name
self.stage = stage.lower().replace(" ", "_").replace("/", "_")
self.arr_value = float(arr_value)
self.close_date = self._parse_date(close_date)
self.rep = rep
self.segment = segment
@staticmethod
def _parse_date(value):
for fmt in ("%Y-%m-%d", "%m/%d/%Y", "%d/%m/%Y", "%Y/%m/%d"):
try:
return datetime.strptime(str(value), fmt).date()
except ValueError:
continue
raise ValueError(f"Cannot parse date: {value!r}")
@property
def quarter(self):
q = (self.close_date.month - 1) // 3 + 1
return f"Q{q} {self.close_date.year}"
@property
def month_key(self):
return self.close_date.strftime("%Y-%m")
def weighted_value(self, stage_probs, scenario="base"):
prob = stage_probs.get(self.stage, 0.0)
multiplier = SCENARIO_MULTIPLIERS.get(scenario, 1.0)
# Clamp probability to [0, 1]
adjusted = min(1.0, max(0.0, prob * multiplier))
return self.arr_value * adjusted
def is_open(self):
return self.stage not in ("closed_won", "closed_lost")
def is_closed_won(self):
return self.stage == "closed_won"
# ---------------------------------------------------------------------------
# Win rate calibration
# ---------------------------------------------------------------------------
def calculate_historical_win_rates(deals):
"""
Calculate actual win rates per stage from closed deals.
Returns a dict: stage → win_rate (float).
Requires deals that were at each stage and are now closed won/lost.
"""
# In a real implementation, you'd have historical stage-at-point-in-time data.
# Here we approximate: among closed deals, what fraction were won?
closed = [d for d in deals if not d.is_open()]
if not closed:
return {}
won = [d for d in closed if d.is_closed_won()]
overall_rate = len(won) / len(closed) if closed else 0.0
# Stage-level calibration: adjust default probs by actual overall rate
# (In production: use CRM historical stage-level conversion data)
calibrated = {}
for stage, default_prob in DEFAULT_STAGE_PROBABILITIES.items():
if overall_rate > 0:
calibrated[stage] = min(1.0, default_prob * (overall_rate / 0.25))
else:
calibrated[stage] = default_prob
return calibrated
# ---------------------------------------------------------------------------
# Forecast engine
# ---------------------------------------------------------------------------
class ForecastEngine:
def __init__(self, deals, stage_probs=None):
self.deals = deals
self.stage_probs = stage_probs or DEFAULT_STAGE_PROBABILITIES
def open_deals(self):
return [d for d in self.deals if d.is_open()]
def closed_won_deals(self):
return [d for d in self.deals if d.is_closed_won()]
def pipeline_by_month(self, scenario="base"):
"""Returns dict: month_key → weighted ARR."""
result = defaultdict(float)
for deal in self.open_deals():
result[deal.month_key] += deal.weighted_value(self.stage_probs, scenario)
return dict(sorted(result.items()))
def pipeline_by_quarter(self, scenario="base"):
"""Returns dict: quarter → weighted ARR."""
result = defaultdict(float)
for deal in self.open_deals():
result[deal.quarter] += deal.weighted_value(self.stage_probs, scenario)
return dict(sorted(result.items()))
def coverage_ratio(self, quota, period_filter=None):
"""
Pipeline coverage = total pipeline ÷ quota.
period_filter: if set, only include deals with close_date in that period.
"""
pipeline = sum(
d.arr_value for d in self.open_deals()
if period_filter is None or d.quarter == period_filter
)
return pipeline / quota if quota else 0.0
def scenario_summary(self, periods=None):
"""
Returns dict: period → {conservative, base, upside, open_pipeline}.
periods: list of month_keys to include; if None, all months.
"""
summaries = {}
all_months = sorted(set(d.month_key for d in self.open_deals()))
target_months = periods or all_months
for month in target_months:
deals_in_month = [d for d in self.open_deals() if d.month_key == month]
if not deals_in_month:
continue
summaries[month] = {
"deal_count": len(deals_in_month),
"open_pipeline": sum(d.arr_value for d in deals_in_month),
"conservative": sum(d.weighted_value(self.stage_probs, "conservative") for d in deals_in_month),
"base": sum(d.weighted_value(self.stage_probs, "base") for d in deals_in_month),
"upside": sum(d.weighted_value(self.stage_probs, "upside") for d in deals_in_month),
}
return summaries
def rep_performance(self):
"""Returns dict: rep → {pipeline, weighted_base, deal_count, avg_deal_size}."""
rep_data = defaultdict(lambda: {"pipeline": 0.0, "weighted_base": 0.0,
"deal_count": 0, "deals": []})
for deal in self.open_deals():
rep_data[deal.rep]["pipeline"] += deal.arr_value
rep_data[deal.rep]["weighted_base"] += deal.weighted_value(self.stage_probs, "base")
rep_data[deal.rep]["deal_count"] += 1
rep_data[deal.rep]["deals"].append(deal.arr_value)
result = {}
for rep, data in rep_data.items():
deals = data["deals"]
result[rep] = {
"pipeline": data["pipeline"],
"weighted_base": data["weighted_base"],
"deal_count": data["deal_count"],
"avg_deal_size": statistics.mean(deals) if deals else 0.0,
}
return result
def segment_breakdown(self, scenario="base"):
"""Returns dict: segment → weighted ARR."""
result = defaultdict(float)
for deal in self.open_deals():
result[deal.segment or "unspecified"] += deal.weighted_value(self.stage_probs, scenario)
return dict(result)
def stage_distribution(self):
"""Returns dict: stage → {count, total_arr, avg_arr}."""
result = defaultdict(lambda: {"count": 0, "total_arr": 0.0})
for deal in self.open_deals():
result[deal.stage]["count"] += 1
result[deal.stage]["total_arr"] += deal.arr_value
out = {}
for stage, data in result.items():
out[stage] = {
"count": data["count"],
"total_arr": data["total_arr"],
"avg_arr": data["total_arr"] / data["count"] if data["count"] else 0,
"probability": self.stage_probs.get(stage, 0.0),
}
return out
def confidence_interval(self, scenario="base", iterations=1000):
"""
Monte Carlo simulation to generate confidence interval around base forecast.
Each deal wins/loses based on its probability; runs iterations times.
Returns (p10, p50, p90) of total expected ARR.
"""
import random
random.seed(42)
totals = []
for _ in range(iterations):
total = 0.0
for deal in self.open_deals():
prob = min(1.0, self.stage_probs.get(deal.stage, 0.0) * SCENARIO_MULTIPLIERS[scenario])
if random.random() < prob:
total += deal.arr_value
totals.append(total)
totals.sort()
n = len(totals)
return (
totals[int(n * 0.10)], # P10 (conservative)
totals[int(n * 0.50)], # P50 (median)
totals[int(n * 0.90)], # P90 (upside)
)
# ---------------------------------------------------------------------------
# Reporting
# ---------------------------------------------------------------------------
def fmt_currency(value):
    """Format a dollar amount; handles negatives defensively."""
    sign = "-" if value < 0 else ""
    v = abs(value)
    if v >= 1_000_000:
        return f"{sign}${v / 1_000_000:.2f}M"
    if v >= 1_000:
        return f"{sign}${v / 1_000:.1f}K"
    return f"{sign}${v:.0f}"
def fmt_pct(value):
return f"{value * 100:.1f}%"
def print_header(title):
width = 70
print()
print("=" * width)
print(f" {title}")
print("=" * width)
def print_section(title):
print(f"\n--- {title} ---")
def print_report(engine, quota=None, current_quarter=None):
open_deals = engine.open_deals()
won_deals = engine.closed_won_deals()
print_header("REVENUE FORECAST MODEL")
print(f" Generated: {date.today().isoformat()}")
print(f" Open deals: {len(open_deals)}")
print(f" Closed Won (in dataset): {len(won_deals)}")
total_pipeline = sum(d.arr_value for d in open_deals)
total_won = sum(d.arr_value for d in won_deals)
print(f" Total open pipeline: {fmt_currency(total_pipeline)}")
print(f" Total closed won: {fmt_currency(total_won)}")
# ── Coverage ratio
if quota:
print_section("PIPELINE COVERAGE")
q = current_quarter or "this quarter"
ratio = engine.coverage_ratio(quota, period_filter=current_quarter)
status = "✅ Healthy" if ratio >= 3.0 else ("⚠️ Thin" if ratio >= 2.0 else "🔴 Critical")
print(f" Quota target: {fmt_currency(quota)}")
print(f" Coverage ratio: {ratio:.1f}x {status}")
print(f" (Minimum healthy = 3x; < 2x = pipeline emergency)")
# ── Stage distribution
print_section("STAGE DISTRIBUTION")
stage_dist = engine.stage_distribution()
col_w = [28, 8, 14, 12, 10]
header = f" {'Stage':<{col_w[0]}} {'Deals':>{col_w[1]}} {'Pipeline':>{col_w[2]}} {'Avg Size':>{col_w[3]}} {'Win Prob':>{col_w[4]}}"
print(header)
print(" " + "-" * (sum(col_w) + 4))
for stage, data in sorted(stage_dist.items(), key=lambda x: -x[1]["total_arr"]):
print(f" {stage:<{col_w[0]}} {data['count']:>{col_w[1]}} "
f"{fmt_currency(data['total_arr']):>{col_w[2]}} "
f"{fmt_currency(data['avg_arr']):>{col_w[3]}} "
f"{fmt_pct(data['probability']):>{col_w[4]}}")
# ── Scenario forecast by month
print_section("MONTHLY FORECAST — ALL SCENARIOS")
summaries = engine.scenario_summary()
col_w2 = [10, 8, 14, 14, 14, 14]
h2 = (f" {'Month':<{col_w2[0]}} {'Deals':>{col_w2[1]}} "
f"{'Pipeline':>{col_w2[2]}} {'Conservative':>{col_w2[3]}} "
f"{'Base':>{col_w2[4]}} {'Upside':>{col_w2[5]}}")
print(h2)
print(" " + "-" * (sum(col_w2) + 5))
for month, data in summaries.items():
print(f" {month:<{col_w2[0]}} {data['deal_count']:>{col_w2[1]}} "
f"{fmt_currency(data['open_pipeline']):>{col_w2[2]}} "
f"{fmt_currency(data['conservative']):>{col_w2[3]}} "
f"{fmt_currency(data['base']):>{col_w2[4]}} "
f"{fmt_currency(data['upside']):>{col_w2[5]}}")
# ── Quarterly rollup
print_section("QUARTERLY FORECAST ROLLUP")
q_conservative = defaultdict(float)
q_base = defaultdict(float)
q_upside = defaultdict(float)
q_pipeline = defaultdict(float)
q_count = defaultdict(int)
for deal in open_deals:
q_conservative[deal.quarter] += deal.weighted_value(engine.stage_probs, "conservative")
q_base[deal.quarter] += deal.weighted_value(engine.stage_probs, "base")
q_upside[deal.quarter] += deal.weighted_value(engine.stage_probs, "upside")
q_pipeline[deal.quarter] += deal.arr_value
q_count[deal.quarter] += 1
quarters = sorted(q_base.keys())
col_w3 = [10, 8, 14, 14, 14, 14]
h3 = (f" {'Quarter':<{col_w3[0]}} {'Deals':>{col_w3[1]}} "
f"{'Pipeline':>{col_w3[2]}} {'Conservative':>{col_w3[3]}} "
f"{'Base':>{col_w3[4]}} {'Upside':>{col_w3[5]}}")
print(h3)
print(" " + "-" * (sum(col_w3) + 5))
for q in quarters:
print(f" {q:<{col_w3[0]}} {q_count[q]:>{col_w3[1]}} "
f"{fmt_currency(q_pipeline[q]):>{col_w3[2]}} "
f"{fmt_currency(q_conservative[q]):>{col_w3[3]}} "
f"{fmt_currency(q_base[q]):>{col_w3[4]}} "
f"{fmt_currency(q_upside[q]):>{col_w3[5]}}")
# ── Monte Carlo confidence interval
print_section("CONFIDENCE INTERVAL (Monte Carlo, 1,000 simulations)")
p10, p50, p90 = engine.confidence_interval("base")
print(f" P10 (conservative floor): {fmt_currency(p10)}")
print(f" P50 (median expected): {fmt_currency(p50)}")
print(f" P90 (upside ceiling): {fmt_currency(p90)}")
print(f" Range spread: {fmt_currency(p90 - p10)}")
# ── Rep performance
print_section("REP PIPELINE PERFORMANCE")
rep_perf = engine.rep_performance()
if rep_perf:
col_w4 = [20, 8, 14, 14, 12]
h4 = (f" {'Rep':<{col_w4[0]}} {'Deals':>{col_w4[1]}} "
f"{'Pipeline':>{col_w4[2]}} {'Weighted':>{col_w4[3]}} {'Avg Size':>{col_w4[4]}}")
print(h4)
print(" " + "-" * (sum(col_w4) + 4))
for rep, data in sorted(rep_perf.items(), key=lambda x: -x[1]["pipeline"]):
print(f" {rep:<{col_w4[0]}} {data['deal_count']:>{col_w4[1]}} "
f"{fmt_currency(data['pipeline']):>{col_w4[2]}} "
f"{fmt_currency(data['weighted_base']):>{col_w4[3]}} "
f"{fmt_currency(data['avg_deal_size']):>{col_w4[4]}}")
# ── Segment breakdown
print_section("SEGMENT BREAKDOWN (Base Forecast)")
seg = engine.segment_breakdown("base")
for segment, value in sorted(seg.items(), key=lambda x: -x[1]):
bar_len = int((value / total_pipeline) * 30) if total_pipeline else 0
bar = "█" * bar_len
print(f" {segment:<20} {fmt_currency(value):>12} {bar}")
# ── Red flags
print_section("FORECAST HEALTH FLAGS")
flags = []
if total_pipeline > 0:
coverage = total_pipeline / quota if quota else None
if coverage and coverage < 2.0:
flags.append("🔴 Pipeline coverage below 2x — serious shortfall risk this quarter")
elif coverage and coverage < 3.0:
flags.append("⚠️ Pipeline coverage below 3x — limited buffer for slippage")
# Stage concentration risk
early_stage_pct = sum(
d.arr_value for d in open_deals
if engine.stage_probs.get(d.stage, 0) < 0.30
) / total_pipeline
if early_stage_pct > 0.60:
flags.append(f"⚠️ {fmt_pct(early_stage_pct)} of pipeline in early stages (< 30% probability)")
# Deal concentration
deal_values = sorted([d.arr_value for d in open_deals], reverse=True)
if deal_values and deal_values[0] / total_pipeline > 0.25:
flags.append(f"⚠️ Top deal is {fmt_pct(deal_values[0]/total_pipeline)} of pipeline — concentration risk")
# Spread between scenarios
total_conservative = sum(d.weighted_value(engine.stage_probs, "conservative") for d in open_deals)
total_upside = sum(d.weighted_value(engine.stage_probs, "upside") for d in open_deals)
spread = (total_upside - total_conservative) / total_conservative if total_conservative else 0
if spread > 0.40:
flags.append(f"⚠️ High scenario spread ({fmt_pct(spread)}) — forecast confidence is low")
if flags:
for f in flags:
print(f" {f}")
else:
print(" ✅ No critical flags detected")
print()
# ---------------------------------------------------------------------------
# Sample data
# ---------------------------------------------------------------------------
SAMPLE_CSV = """deal_id,name,stage,arr_value,close_date,rep,segment
D001,Acme Corp ERP Integration,negotiation,85000,2026-03-15,Sarah Chen,Enterprise
D002,TechStart PLG Expansion,proposal,28000,2026-03-28,Marcus Webb,Mid-Market
D003,Global Retail Co,verbal_commit,220000,2026-03-10,Sarah Chen,Enterprise
D004,BioLab Analytics,poc,62000,2026-04-05,Jamie Park,Mid-Market
D005,FinServ Holdings,demo,150000,2026-04-20,Sarah Chen,Enterprise
D006,MidWest Logistics,qualification,35000,2026-04-30,Marcus Webb,Mid-Market
D007,Edu Platform Inc,negotiation,42000,2026-03-25,Jamie Park,SMB
D008,Healthcare Connect,proposal,95000,2026-05-15,Sarah Chen,Enterprise
D009,Startup Hub Network,demo,18000,2026-04-10,Marcus Webb,SMB
D010,CloudOps Systems,poc,75000,2026-05-01,Jamie Park,Mid-Market
D011,National Bank Corp,verbal_commit,310000,2026-03-31,Sarah Chen,Enterprise
D012,RetailTech Co,qualification,22000,2026-05-20,Marcus Webb,SMB
D013,InsurTech Platform,negotiation,88000,2026-04-15,Jamie Park,Mid-Market
D014,GovTech Solutions,proposal,175000,2026-06-01,Sarah Chen,Enterprise
D015,AgriData Systems,demo,31000,2026-05-10,Marcus Webb,Mid-Market
D016,Legal AI Corp,poc,55000,2026-04-25,Jamie Park,Mid-Market
D017,Closed Won Deal,closed_won,120000,2026-02-15,Sarah Chen,Enterprise
D018,Lost Deal,closed_lost,45000,2026-02-20,Marcus Webb,Mid-Market
"""
# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------
def load_deals_from_csv(csv_text):
    """Parse pipeline CSV text into Deal objects, collecting and reporting malformed rows."""
    reader = csv.DictReader(StringIO(csv_text))
    deals = []
    errors = []
    # Data rows start at file line 2: DictReader consumes the header as line 1.
    for i, row in enumerate(reader, start=2):
        try:
            deal = Deal(
                deal_id=row.get("deal_id", f"row_{i}"),
                name=row.get("name", ""),
                stage=row.get("stage", ""),
                arr_value=row.get("arr_value", 0),
                close_date=row.get("close_date", ""),
                rep=row.get("rep", ""),
                segment=row.get("segment", ""),
            )
            deals.append(deal)
        except (ValueError, KeyError) as e:
            errors.append(f" Row {i}: {e}")
    if errors:
        print("⚠️ Skipped rows with errors:")
        for err in errors:
            print(err)
    return deals


def main():
    parser = argparse.ArgumentParser(
        description="Revenue Forecast Model — pipeline-based ARR forecasting"
    )
    parser.add_argument(
        "--csv", metavar="FILE",
        help="CSV file with pipeline data (uses sample data if not provided)"
    )
    parser.add_argument(
        "--quota", type=float, default=1_000_000,
        help="Quarterly quota target in ARR (default: $1,000,000)"
    )
    parser.add_argument(
        "--quarter", metavar="QUARTER",
        help='Current quarter filter e.g. "Q2 2026" (optional)'
    )
    parser.add_argument(
        "--scenario", choices=["conservative", "base", "upside"],
        default="base",
        help="Primary scenario to report (default: base)"
    )
    parser.add_argument(
        "--json", action="store_true",
        help="Output forecast as JSON instead of formatted report"
    )
    args = parser.parse_args()

    # Load data: read the provided CSV, or fall back to bundled sample pipeline.
    if args.csv:
        try:
            with open(args.csv, "r", encoding="utf-8") as f:
                csv_text = f.read()
        except FileNotFoundError:
            print(f"Error: File not found: {args.csv}", file=sys.stderr)
            sys.exit(1)
    else:
        print("No --csv provided. Using sample pipeline data.\n")
        csv_text = SAMPLE_CSV

    deals = load_deals_from_csv(csv_text)
    if not deals:
        print("No deals loaded. Exiting.", file=sys.stderr)
        sys.exit(1)

    # Calibrate win rates from closed deals; fall back to defaults if there is no history.
    historical_probs = calculate_historical_win_rates(deals)
    stage_probs = historical_probs if historical_probs else DEFAULT_STAGE_PROBABILITIES

    engine = ForecastEngine(deals, stage_probs=stage_probs)

    if args.json:
        output = {
            "generated": date.today().isoformat(),
            "quota": args.quota,
            "open_pipeline": sum(d.arr_value for d in engine.open_deals()),
            "coverage_ratio": engine.coverage_ratio(args.quota, args.quarter),
            "monthly_forecast": engine.scenario_summary(),
            "quarterly_base": engine.pipeline_by_quarter("base"),
            "confidence_interval": dict(zip(
                ["p10", "p50", "p90"],
                engine.confidence_interval("base")
            )),
            "rep_performance": engine.rep_performance(),
            "segment_breakdown": engine.segment_breakdown("base"),
        }
        print(json.dumps(output, indent=2))
    else:
        print_report(engine, quota=args.quota, current_quarter=args.quarter)


if __name__ == "__main__":
    main()
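One detail worth calling out in the loader: `enumerate(reader, start=2)` reports skipped rows by their file line number, which works because `csv.DictReader` consumes the header as line 1, so the first data row is line 2. A minimal, self-contained sketch of that numbering (the sample IDs are illustrative, not from a real pipeline):

```python
import csv
from io import StringIO

csv_text = "deal_id,name,arr_value\nD001,Acme,50000\nD002,Beta,75000\n"
reader = csv.DictReader(StringIO(csv_text))

# The header occupies file line 1, so data rows are numbered from 2.
for line_no, row in enumerate(reader, start=2):
    print(line_no, row["deal_id"])
# → 2 D001
# → 3 D002
```

From a terminal, the full script can then be run with the flags defined above, e.g. `python revenue_forecast_model.py --csv pipeline.csv --quota 1500000 --json`.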
Install this Skill
Skills give your AI agent a consistent, structured approach to this task — better output than a one-off prompt.
npx skills add alirezarezvani/claude-skills --skill c-level-advisor/cro-advisor
Community skill by @alirezarezvani. Need a walkthrough? See the install guide →
Prefer no terminal? Download the ZIP and place it manually.
Details
- Category
- Leadership
- License
- MIT
- Author
- @alirezarezvani
- Source
- GitHub →
- Source file
- c-level-advisor/cro-advisor/SKILL.md