Revenue Operations Manager
Align sales, marketing, and customer success operations — pipeline health, forecasting accuracy, GTM efficiency metrics, and RevOps infrastructure.
What this skill does
Validate your growth targets with detailed analysis of sales pipeline health and revenue forecasting accuracy. Surface critical metrics such as pipeline coverage and deal velocity to identify risks and optimize performance. Use this during quarterly planning or revenue reviews whenever you need to align your sales and finance teams.
name: "revenue-operations"
description: Analyzes sales pipeline health, revenue forecasting accuracy, and go-to-market efficiency metrics for SaaS revenue optimization. Use when analyzing sales pipeline coverage, forecasting revenue, evaluating go-to-market performance, reviewing sales metrics, assessing pipeline analysis, tracking forecast accuracy with MAPE, calculating GTM efficiency, or measuring sales efficiency and unit economics for SaaS teams.
Revenue Operations
Pipeline analysis, forecast accuracy tracking, and GTM efficiency measurement for SaaS revenue teams.
Output formats: All scripts support `--format text` (human-readable) and `--format json` (dashboards/integrations).
Quick Start
# Analyze pipeline health and coverage
python scripts/pipeline_analyzer.py --input assets/sample_pipeline_data.json --format text
# Track forecast accuracy over multiple periods
python scripts/forecast_accuracy_tracker.py assets/sample_forecast_data.json --format text
# Calculate GTM efficiency metrics
python scripts/gtm_efficiency_calculator.py assets/sample_gtm_data.json --format text
Tools Overview
1. Pipeline Analyzer
Analyzes sales pipeline health including coverage ratios, stage conversion rates, deal velocity, aging risks, and concentration risks.
Input: JSON file with deals, quota, and stage configuration
Output: Coverage ratios, conversion rates, velocity metrics, aging flags, risk assessment
Usage:
python scripts/pipeline_analyzer.py --input pipeline.json --format text
Key Metrics Calculated:
- Pipeline Coverage Ratio — Total pipeline value / quota target (healthy: 3-4x)
- Stage Conversion Rates — Stage-to-stage progression rates
- Sales Velocity — (Opportunities x Avg Deal Size x Win Rate) / Avg Sales Cycle
- Deal Aging — Flags deals exceeding 2x average cycle time per stage
- Concentration Risk — Warns when >40% of pipeline is in a single deal
- Coverage Gap Analysis — Identifies quarters with insufficient pipeline
Input Schema:
{
"quota": 500000,
"stages": ["Discovery", "Qualification", "Proposal", "Negotiation", "Closed Won"],
"average_cycle_days": 45,
"deals": [
{
"id": "D001",
"name": "Acme Corp",
"stage": "Proposal",
"value": 85000,
"age_days": 32,
"close_date": "2025-03-15",
"owner": "rep_1"
}
]
}
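As a rough illustration (not the script itself), the coverage, velocity, and aging rules above can be sketched in a few lines of Python against this schema. Field names follow the schema; the deal values here are made up:

```python
# Sketch of the coverage, velocity, and aging math on the schema above.
# The shipped pipeline_analyzer.py may differ in details.
pipeline = {
    "quota": 500000,
    "average_cycle_days": 45,
    "deals": [
        {"id": "D001", "stage": "Proposal", "value": 85000, "age_days": 32},
        {"id": "D002", "stage": "Closed Won", "value": 72000, "age_days": 38},
        {"id": "D003", "stage": "Discovery", "value": 40000, "age_days": 95},
    ],
}

open_deals = [d for d in pipeline["deals"] if d["stage"] != "Closed Won"]
won_deals = [d for d in pipeline["deals"] if d["stage"] == "Closed Won"]

# Pipeline Coverage Ratio = open pipeline value / quota (healthy: 3-4x)
coverage = sum(d["value"] for d in open_deals) / pipeline["quota"]

# Sales Velocity = (opportunities x avg deal size x win rate) / avg cycle days
num_opps = len(pipeline["deals"])
avg_deal = sum(d["value"] for d in pipeline["deals"]) / num_opps
win_rate = len(won_deals) / num_opps
velocity_per_day = (num_opps * avg_deal * win_rate) / pipeline["average_cycle_days"]

# Deal Aging: flag open deals older than 2x the average cycle time
aging = [d["id"] for d in open_deals
         if d["age_days"] > 2 * pipeline["average_cycle_days"]]

print(f"coverage={coverage:.2f}x  velocity/day=${velocity_per_day:,.0f}  aging={aging}")
```

With these numbers the coverage lands at 0.25x (far below the 3-4x target), which is the kind of gap the analyzer's coverage rating is meant to surface.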
2. Forecast Accuracy Tracker
Tracks forecast accuracy over time using MAPE, detects systematic bias, analyzes trends, and provides category-level breakdowns.
Input: JSON file with forecast periods and optional category breakdowns
Output: MAPE score, bias analysis, trends, category breakdown, accuracy rating
Usage:
python scripts/forecast_accuracy_tracker.py forecast_data.json --format text
Key Metrics Calculated:
- MAPE — mean(|actual - forecast| / |actual|) x 100
- Forecast Bias — Over-forecasting (positive) vs under-forecasting (negative) tendency
- Weighted Accuracy — MAPE weighted by deal value for materiality
- Period Trends — Improving, stable, or declining accuracy over time
- Category Breakdown — Accuracy by rep, product, segment, or any custom dimension
Accuracy Ratings:
| Rating | MAPE Range | Interpretation |
|---|---|---|
| Excellent | <10% | Highly predictable, data-driven process |
| Good | 10-15% | Reliable forecasting with minor variance |
| Fair | 15-25% | Needs process improvement |
| Poor | >25% | Significant forecasting methodology gaps |
Input Schema:
{
"forecast_periods": [
{"period": "2025-Q1", "forecast": 480000, "actual": 520000},
{"period": "2025-Q2", "forecast": 550000, "actual": 510000}
],
"category_breakdowns": {
"by_rep": [
{"category": "Rep A", "forecast": 200000, "actual": 210000},
{"category": "Rep B", "forecast": 280000, "actual": 310000}
]
}
}
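A minimal sketch of the MAPE and bias formulas applied to the schema above (illustrative only; forecast_accuracy_tracker.py is the authoritative implementation):

```python
# MAPE and bias on the sample forecast periods from the schema above.
periods = [
    {"period": "2025-Q1", "forecast": 480000, "actual": 520000},
    {"period": "2025-Q2", "forecast": 550000, "actual": 510000},
]

# MAPE = mean(|actual - forecast| / |actual|) x 100
errors = [abs(p["actual"] - p["forecast"]) / abs(p["actual"]) for p in periods]
mape = 100 * sum(errors) / len(errors)

# Bias: mean signed error; positive = over-forecasting on average
signed = [(p["forecast"] - p["actual"]) / p["actual"] for p in periods]
bias = 100 * sum(signed) / len(signed)

print(f"MAPE={mape:.1f}%  bias={bias:+.1f}%")
```

Note how the two periods nearly cancel in the bias figure while both still count fully toward MAPE; that is why bias and MAPE are reported separately.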
3. GTM Efficiency Calculator
Calculates core SaaS GTM efficiency metrics with industry benchmarking, ratings, and improvement recommendations.
Input: JSON file with revenue, cost, and customer metrics
Output: Magic Number, LTV:CAC, CAC Payback, Burn Multiple, Rule of 40, NDR with ratings
Usage:
python scripts/gtm_efficiency_calculator.py gtm_data.json --format text
Key Metrics Calculated:
| Metric | Formula | Target |
|---|---|---|
| Magic Number | Net New ARR / Prior Period S&M Spend | >0.75 |
| LTV:CAC | (ARPA x Gross Margin / Churn Rate) / CAC | >3:1 |
| CAC Payback | CAC / (ARPA x Gross Margin) months | <18 months |
| Burn Multiple | Net Burn / Net New ARR | <2x |
| Rule of 40 | Revenue Growth % + FCF Margin % | >40% |
| Net Dollar Retention | (Begin ARR + Expansion - Contraction - Churn) / Begin ARR | >110% |
Input Schema:
{
"revenue": {
"current_arr": 5000000,
"prior_arr": 3800000,
"net_new_arr": 1200000,
"arpa_monthly": 2500,
"revenue_growth_pct": 31.6
},
"costs": {
"sales_marketing_spend": 1800000,
"cac": 18000,
"gross_margin_pct": 78,
"total_operating_expense": 6500000,
"net_burn": 1500000,
"fcf_margin_pct": 8.4
},
"customers": {
"beginning_arr": 3800000,
"expansion_arr": 600000,
"contraction_arr": 100000,
"churned_arr": 300000,
"annual_churn_rate_pct": 8
}
}
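The formulas in the table can be checked by hand against the sample schema. This sketch mirrors the table's definitions (it is not the calculator's source, and it omits fields the formulas do not use):

```python
# GTM efficiency formulas from the table above, applied to the sample schema.
data = {
    "revenue": {"net_new_arr": 1200000, "arpa_monthly": 2500,
                "revenue_growth_pct": 31.6},
    "costs": {"sales_marketing_spend": 1800000, "cac": 18000,
              "gross_margin_pct": 78, "net_burn": 1500000,
              "fcf_margin_pct": 8.4},
    "customers": {"beginning_arr": 3800000, "expansion_arr": 600000,
                  "contraction_arr": 100000, "churned_arr": 300000,
                  "annual_churn_rate_pct": 8},
}
rev, cost, cust = data["revenue"], data["costs"], data["customers"]
gm = cost["gross_margin_pct"] / 100

magic_number = rev["net_new_arr"] / cost["sales_marketing_spend"]      # target >0.75
ltv = (rev["arpa_monthly"] * 12) * gm / (cust["annual_churn_rate_pct"] / 100)
ltv_cac = ltv / cost["cac"]                                            # target >3:1
cac_payback_months = cost["cac"] / (rev["arpa_monthly"] * gm)          # target <18
burn_multiple = cost["net_burn"] / rev["net_new_arr"]                  # target <2x
rule_of_40 = rev["revenue_growth_pct"] + cost["fcf_margin_pct"]        # target >40
ndr = 100 * (cust["beginning_arr"] + cust["expansion_arr"]
             - cust["contraction_arr"] - cust["churned_arr"]) / cust["beginning_arr"]

print(f"magic={magic_number:.2f}  ltv:cac={ltv_cac:.1f}:1  "
      f"payback={cac_payback_months:.1f}mo  burn={burn_multiple:.2f}x  "
      f"rule40={rule_of_40:.1f}%  ndr={ndr:.1f}%")
```

On this sample, Magic Number (0.67) and NDR (105.3%) both land below their targets while CAC payback (9.2 months) is comfortably healthy, which is exactly the mixed picture the calculator's ratings are designed to flag.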
Revenue Operations Workflows
Weekly Pipeline Review
Use this workflow for your weekly pipeline inspection cadence.
1. Verify input data: Confirm the pipeline export is current and all required fields (stage, value, close_date, owner) are populated before proceeding.
2. Generate pipeline report:
   python scripts/pipeline_analyzer.py --input current_pipeline.json --format text
3. Cross-check output totals against your CRM source system to confirm data integrity.
4. Review key indicators:
   - Pipeline coverage ratio (is it above 3x quota?)
   - Deals aging beyond threshold (which deals need intervention?)
   - Concentration risk (are we over-reliant on a few large deals?)
   - Stage distribution (is there a healthy funnel shape?)
5. Document using template: Use assets/pipeline_review_template.md
6. Action items: Address aging deals, redistribute pipeline concentration, fill coverage gaps
Forecast Accuracy Review
Use monthly or quarterly to evaluate and improve forecasting discipline.
1. Verify input data: Confirm all forecast periods have corresponding actuals and no periods are missing before running.
2. Generate accuracy report:
   python scripts/forecast_accuracy_tracker.py forecast_history.json --format text
3. Cross-check actuals against closed-won records in your CRM before drawing conclusions.
4. Analyze patterns:
   - Is MAPE trending down (improving)?
   - Which reps or segments have the highest error rates?
   - Is there systematic over- or under-forecasting?
5. Document using template: Use assets/forecast_report_template.md
6. Improvement actions: Coach high-bias reps, adjust methodology, improve data hygiene
GTM Efficiency Audit
Use quarterly or during board prep to evaluate go-to-market efficiency.
1. Verify input data: Confirm revenue, cost, and customer figures reconcile with finance records before running.
2. Calculate efficiency metrics:
   python scripts/gtm_efficiency_calculator.py quarterly_data.json --format text
3. Cross-check computed ARR and spend totals against your finance system before sharing results.
4. Benchmark against targets:
   - Magic Number (>0.75)
   - LTV:CAC (>3:1)
   - CAC Payback (<18 months)
   - Rule of 40 (>40%)
5. Document using template: Use assets/gtm_dashboard_template.md
6. Strategic decisions: Adjust spend allocation, optimize channels, improve retention
Quarterly Business Review
Combine all three tools for a comprehensive QBR analysis.
- Run pipeline analyzer for forward-looking coverage
- Run forecast tracker for backward-looking accuracy
- Run GTM calculator for efficiency benchmarks
- Cross-reference pipeline health with forecast accuracy
- Align GTM efficiency metrics with growth targets
Reference Documentation
| Reference | Description |
|---|---|
| RevOps Metrics Guide | Complete metrics hierarchy, definitions, formulas, and interpretation |
| Pipeline Management Framework | Pipeline best practices, stage definitions, conversion benchmarks |
| GTM Efficiency Benchmarks | SaaS benchmarks by stage, industry standards, improvement strategies |
Templates
| Template | Use Case |
|---|---|
| Pipeline Review Template | Weekly/monthly pipeline inspection documentation |
| Forecast Report Template | Forecast accuracy reporting and trend analysis |
| GTM Dashboard Template | GTM efficiency dashboard for leadership review |
| Sample Pipeline Data | Example input for pipeline_analyzer.py |
| Expected Output | Reference output from pipeline_analyzer.py |
{
"coverage": {
"total_pipeline_value": 1105000,
"quota": 500000,
"coverage_ratio": 2.21,
"rating": "At Risk",
"target": "3.0x - 4.0x"
},
"stage_conversions": [
{
"from_stage": "Discovery",
"to_stage": "Qualification",
"from_count": 17,
"to_count": 12,
"conversion_rate_pct": 70.6
},
{
"from_stage": "Qualification",
"to_stage": "Proposal",
"from_count": 12,
"to_count": 9,
"conversion_rate_pct": 75.0
},
{
"from_stage": "Proposal",
"to_stage": "Negotiation",
"from_count": 9,
"to_count": 5,
"conversion_rate_pct": 55.6
},
{
"from_stage": "Negotiation",
"to_stage": "Closed Won",
"from_count": 5,
"to_count": 2,
"conversion_rate_pct": 40.0
}
],
"velocity": {
"num_opportunities": 17,
"avg_deal_size": 74588.24,
"win_rate_pct": 11.8,
"avg_cycle_days": 32.5,
"velocity_per_day": 4594.2,
"velocity_per_month": 137826.09
},
"aging": {
"global_aging_threshold_days": 90,
"stage_thresholds": {
"Discovery": 90,
"Qualification": 78,
"Proposal": 67,
"Negotiation": 56
},
"total_open_deals": 15,
"healthy_deals": 13,
"at_risk_deals": 2,
"aging_deals": [
{
"id": "D011",
"name": "Vertex Solutions",
"stage": "Proposal",
"age_days": 95,
"threshold_days": 67,
"days_over": 28,
"value": 110000
},
{
"id": "D014",
"name": "Horizon Telecom",
"stage": "Negotiation",
"age_days": 60,
"threshold_days": 56,
"days_over": 4,
"value": 250000
}
]
},
"risk": {
"overall_risk": "MEDIUM",
"risk_factors_count": 3,
"concentration_risks": [],
"has_concentration_risk": false,
"stage_distribution": {
"Discovery": {
"count": 5,
"value": 194000,
"pct_of_pipeline": 17.6
},
"Qualification": {
"count": 3,
"value": 150000,
"pct_of_pipeline": 13.6
},
"Proposal": {
"count": 4,
"value": 333000,
"pct_of_pipeline": 30.1
},
"Negotiation": {
"count": 3,
"value": 428000,
"pct_of_pipeline": 38.7
}
},
"empty_stages": [],
"coverage_gaps": [
{
"quarter": "2025-Q2",
"pipeline_value": 344000,
"quarterly_target": 125000.0,
"coverage_ratio": 2.75,
"gap": "Below 3x target"
}
]
}
}
Forecast Accuracy Report - [Period]
Report Details
- Prepared By: [Name]
- Report Date: [YYYY-MM-DD]
- Period Analyzed: [Start Period] to [End Period]
- Periods Covered: [N] periods
Executive Summary
| Metric | Value | Rating | Trend |
|---|---|---|---|
| MAPE | _% | | |
| Weighted MAPE | _% | | |
| Forecast Bias | _% | | |
| Bias Direction | | | |
Accuracy Rating:
- Excellent (<10%) / Good (10-15%) / Fair (15-25%) / Poor (>25%)
Key Finding: [1-2 sentence summary of forecast accuracy status]
Period-by-Period Analysis
| Period | Forecast | Actual | Variance | Error % | Bias |
|---|---|---|---|---|---|
| | $_ | $_ | $_ | _% | Over/Under |
| | $_ | $_ | $_ | _% | Over/Under |
| | $_ | $_ | $_ | _% | Over/Under |
| | $_ | $_ | $_ | _% | Over/Under |
| | $_ | $_ | $_ | _% | Over/Under |
| | $_ | $_ | $_ | _% | Over/Under |
Bias Analysis
Overall Bias
- Direction: [Over-forecasting / Under-forecasting / Balanced]
- Bias Magnitude: _%
- Over-forecast Periods: _ of _
- Under-forecast Periods: _ of _
- Bias Ratio: _ (1.0 = always over, 0.0 = always under, 0.5 = balanced)
Interpretation
[What does the bias pattern tell us about our forecasting process? Is it systematic or random?]
Root Cause
[Identify the primary drivers of bias: optimistic deal assessment, poor stage qualification, sandbagging, late-arriving deals, etc.]
Trend Analysis
Accuracy Trend
- Direction: [Improving / Stable / Declining]
- Early Period MAPE: _%
- Recent Period MAPE: _%
- MAPE Change: _% (positive = worsening, negative = improving)
Trend Chart (Text)
Period Error% Trend
Q1 __% ████████
Q2 __% ██████████
Q3 __% ██████
Q4     __%    ████████████

Category Breakdown
By Rep
| Rep | Forecast | Actual | Error % | Bias | Rating |
|---|---|---|---|---|---|
| | $_ | $_ | _% | | |
| | $_ | $_ | _% | | |
| | $_ | $_ | _% | | |
| | $_ | $_ | _% | | |
Overall Rep MAPE: _%
By Segment
| Segment | Forecast | Actual | Error % | Bias | Rating |
|---|---|---|---|---|---|
| Enterprise | $_ | $_ | _% | | |
| Mid-Market | $_ | $_ | _% | | |
| SMB | $_ | $_ | _% | | |
Overall Segment MAPE: _%
By Product (if applicable)
| Product | Forecast | Actual | Error % | Bias | Rating |
|---|---|---|---|---|---|
| | $_ | $_ | _% | | |
| | $_ | $_ | _% | | |
Recommendations
Immediate Actions (This Quarter)
- [Action] -- [Why and expected impact]
- [Action] -- [Why and expected impact]
- [Action] -- [Why and expected impact]
Process Improvements (Next Quarter)
- [Improvement] -- [Implementation plan]
- [Improvement] -- [Implementation plan]
Coaching Focus Areas
| Rep/Team | Issue | Coaching Action | Target |
|---|---|---|---|
Forecast Methodology Notes
Current Methodology
[Describe the current forecasting methodology: weighted pipeline, commit/upside categories, AI-assisted, etc.]
Methodology Changes This Period
[Any changes to the forecasting process or methodology during the reporting period]
Data Quality Issues
[Note any data quality issues that may affect accuracy: missing close dates, inconsistent stage definitions, CRM hygiene gaps]
Next Steps
| # | Action | Owner | Due Date |
|---|---|---|---|
| 1 | | | |
| 2 | | | |
| 3 | | | |
GTM Efficiency Dashboard - [Quarter/Period]
Dashboard Details
- Prepared By: [Name]
- Report Date: [YYYY-MM-DD]
- Period: [Quarter or Date Range]
- Company Stage: [Seed / Series A / Series B / Series C+ / Growth]
Metrics At A Glance
| Metric | Value | Rating | Target | Trend | vs. Last Period |
|---|---|---|---|---|---|
| Magic Number | _ | | >0.75 | | |
| LTV:CAC | _:1 | | >3:1 | | |
| CAC Payback | _ mo | | <18 mo | | |
| Burn Multiple | _x | | <2x | | |
| Rule of 40 | _% | | >40% | | |
| NDR | _% | | >110% | | |
Rating Legend: Green = Healthy | Yellow = Monitor | Red = Action Required
Overall GTM Health: [Strong / Healthy / Needs Attention / Critical]
Detailed Metric Analysis
Magic Number
| Component | Value |
|---|---|
| Net New ARR | $_ |
| Prior Period S&M Spend | $_ |
| Magic Number | _ |
- Rating: [Green / Yellow / Red]
- Percentile: [Top 10% / Top 25% / Median / Below Median]
- Trend: [Improving / Stable / Declining]
- Interpretation: [What does this metric tell us about GTM spend efficiency?]
LTV:CAC Ratio
| Component | Value |
|---|---|
| ARPA (Monthly) | $_ |
| ARPA (Annual) | $_ |
| Gross Margin | _% |
| Annual Churn Rate | _% |
| Customer LTV | $_ |
| Customer Acquisition Cost | $_ |
| LTV:CAC Ratio | _:1 |
- Rating: [Green / Yellow / Red]
- Percentile: [Top 10% / Top 25% / Median / Below Median]
- Trend: [Improving / Stable / Declining]
- Interpretation: [Are unit economics sustainable?]
CAC Payback Period
| Component | Value |
|---|---|
| CAC | $_ |
| Monthly Gross Margin Contribution | $_ |
| CAC Payback | _ months |
- Rating: [Green / Yellow / Red]
- Percentile: [Top 10% / Top 25% / Median / Below Median]
- Trend: [Improving / Stable / Declining]
- Interpretation: [How quickly are we recovering acquisition costs?]
Burn Multiple
| Component | Value |
|---|---|
| Net Burn | $_ |
| Net New ARR | $_ |
| Burn Multiple | _x |
- Rating: [Green / Yellow / Red]
- Percentile: [Top 10% / Top 25% / Median / Below Median]
- Trend: [Improving / Stable / Declining]
- Interpretation: [Is growth capital-efficient?]
Rule of 40
| Component | Value |
|---|---|
| Revenue Growth Rate | _% |
| FCF Margin | _% |
| Rule of 40 Score | _% |
- Rating: [Green / Yellow / Red]
- Percentile: [Top 10% / Top 25% / Median / Below Median]
- Trend: [Improving / Stable / Declining]
- Interpretation: [Is the growth-profitability balance healthy?]
Net Dollar Retention
| Component | Value |
|---|---|
| Beginning ARR | $_ |
| Expansion ARR | +$_ |
| Contraction ARR | -$_ |
| Churned ARR | -$_ |
| Ending ARR | $_ |
| NDR | _% |
- Rating: [Green / Yellow / Red]
- Percentile: [Top 10% / Top 25% / Median / Below Median]
- Trend: [Improving / Stable / Declining]
- Interpretation: [Are we growing revenue from the existing customer base?]
Quarterly Trend
| Metric | Q-3 | Q-2 | Q-1 | Current | Direction |
|---|---|---|---|---|---|
| Magic Number | _ | _ | _ | _ | |
| LTV:CAC | _:1 | _:1 | _:1 | _:1 | |
| CAC Payback | _ mo | _ mo | _ mo | _ mo | |
| Burn Multiple | _x | _x | _x | _x | |
| Rule of 40 | _% | _% | _% | _% | |
| NDR | _% | _% | _% | _% | |
Benchmark Comparison
| Metric | Our Value | Stage Median | Top Quartile | Gap to Top Quartile |
|---|---|---|---|---|
| Magic Number | _ | _ | _ | _ |
| LTV:CAC | _:1 | _:1 | _:1 | _ |
| CAC Payback | _ mo | _ mo | _ mo | _ mo |
| Burn Multiple | _x | _x | _x | _ |
| Rule of 40 | _% | _% | _% | _% |
| NDR | _% | _% | _% | _% |
Revenue Composition
ARR Bridge
Beginning ARR: $____________
+ New Logo ARR: $____________
+ Expansion ARR: $____________
- Contraction ARR: $____________
- Churned ARR: $____________
= Ending ARR: $____________
Net New ARR: $____________
Growth Rate:       ____________%

Cost Structure
S&M Spend: $____________ (___% of revenue)
R&D Spend: $____________ (___% of revenue)
G&A Spend: $____________ (___% of revenue)
Total OpEx: $____________
Net Burn: $____________
Gross Margin:      ____________%

Strategic Recommendations
Top 3 Priorities
[Priority]
- Current state: [Where we are]
- Target: [Where we need to be]
- Action plan: [How to get there]
- Expected impact: [Metric improvement]
- Timeline: [When]
[Priority]
- Current state:
- Target:
- Action plan:
- Expected impact:
- Timeline:
[Priority]
- Current state:
- Target:
- Action plan:
- Expected impact:
- Timeline:
Investment Recommendations
| Area | Current Spend | Recommended | Rationale |
|---|---|---|---|
| | $_ | $_ | |
| | $_ | $_ | |
| | $_ | $_ | |
Next Steps
| # | Action | Owner | Due Date | Success Metric |
|---|---|---|---|---|
| 1 | | | | |
| 2 | | | | |
| 3 | | | | |
| 4 | | | | |
| 5 | | | | |
Pipeline Review - [Date]
Review Period
- Review Type: Weekly / Monthly (circle one)
- Prepared By: [Name]
- Review Date: [YYYY-MM-DD]
- Period Covered: [Start Date] to [End Date]
Executive Summary
| Metric | Current | Last Period | Target | Status |
|---|---|---|---|---|
| Pipeline Coverage | _x | _x | 3-4x | |
| Total Pipeline Value | $_ | $_ | $_ | |
| Net Pipeline Change | $_ | $_ | >$0 | |
| Deals in Pipeline | _ | _ | _ | |
| Avg Deal Size | $_ | $_ | $_ | |
| Sales Velocity ($/mo) | $_ | $_ | $_ |
Overall Assessment: [1-2 sentence summary of pipeline health]
Coverage Analysis
By Quarter
| Quarter | Pipeline | Target | Coverage | Status |
|---|---|---|---|---|
| Current Quarter | $_ | $_ | _x | |
| Next Quarter | $_ | $_ | _x | |
| Q+2 | $_ | $_ | _x |
By Segment
| Segment | Pipeline | Target | Coverage | Notes |
|---|---|---|---|---|
| Enterprise | $_ | $_ | _x | |
| Mid-Market | $_ | $_ | _x | |
| SMB | $_ | $_ | _x |
Stage Distribution
| Stage | # Deals | Value | % of Pipeline | Conversion Rate |
|---|---|---|---|---|
| Discovery | _ | $_ | _% | _% |
| Qualification | _ | $_ | _% | _% |
| Proposal | _ | $_ | _% | _% |
| Negotiation | _ | $_ | _% | _% |
Funnel Health: [Healthy / Top-heavy / Bottom-heavy / Gaps identified]
Top Deals Review (Stage 3+)
| Deal | Stage | Value | Age | Close Date | Risk | Next Step |
|---|---|---|---|---|---|---|
| | | $_ | _d | | | |
| | | $_ | _d | | | |
| | | $_ | _d | | | |
| | | $_ | _d | | | |
| | | $_ | _d | | | |
Risk Assessment
Concentration Risk
- Largest deal as % of pipeline: _%
- Top 3 deals as % of pipeline: _%
- Risk Level: [Low / Medium / High]
- Mitigation: [Actions to diversify]
Aging Deals
| Deal | Stage | Age | Threshold | Days Over | Action Required |
|---|---|---|---|---|---|
| | | _d | _d | +_d | |
| | | _d | _d | +_d | |
Deals Pushed from Last Period
| Deal | Original Close | New Close | Times Pushed | Reason |
|---|---|---|---|---|
Pipeline Movement
Created This Period
| Deal | Source | Value | Stage | Expected Close |
|---|---|---|---|---|
| | | $_ | | |
| | | $_ | | |
| Total Created | | $_ | | |
Advanced This Period
| Deal | From Stage | To Stage | Value |
|---|---|---|---|
| | | | $_ |
| | | | $_ |
Closed Won This Period
| Deal | Value | Cycle Days | Source |
|---|---|---|---|
| | $_ | _d | |
| | $_ | _d | |
| Total Closed Won | $_ | | |
Closed Lost This Period
| Deal | Value | Stage Lost | Loss Reason |
|---|---|---|---|
| | $_ | | |
| | $_ | | |
| Total Closed Lost | $_ | | |
Action Items
| # | Action | Owner | Due Date | Priority |
|---|---|---|---|---|
| 1 | | | | |
| 2 | | | | |
| 3 | | | | |
| 4 | | | | |
| 5 | | | | |
Notes
[Additional context, observations, or discussion points for the review meeting]
{
"forecast_periods": [
{"period": "2024-Q1", "forecast": 420000, "actual": 445000},
{"period": "2024-Q2", "forecast": 480000, "actual": 460000},
{"period": "2024-Q3", "forecast": 510000, "actual": 525000},
{"period": "2024-Q4", "forecast": 550000, "actual": 510000},
{"period": "2025-Q1", "forecast": 520000, "actual": 540000},
{"period": "2025-Q2", "forecast": 580000, "actual": 560000}
],
"category_breakdowns": {
"by_rep": [
{"category": "Sarah Chen", "forecast": 210000, "actual": 225000},
{"category": "Marcus Johnson", "forecast": 185000, "actual": 160000},
{"category": "Priya Patel", "forecast": 125000, "actual": 135000},
{"category": "Alex Rivera", "forecast": 60000, "actual": 40000}
],
"by_segment": [
{"category": "Enterprise", "forecast": 320000, "actual": 310000},
{"category": "Mid-Market", "forecast": 180000, "actual": 175000},
{"category": "SMB", "forecast": 80000, "actual": 75000}
]
}
}
{
"revenue": {
"current_arr": 5000000,
"prior_arr": 3800000,
"net_new_arr": 1200000,
"arpa_monthly": 2500,
"revenue_growth_pct": 31.6
},
"costs": {
"sales_marketing_spend": 1800000,
"cac": 18000,
"gross_margin_pct": 78,
"total_operating_expense": 6500000,
"net_burn": 1500000,
"fcf_margin_pct": 8.4
},
"customers": {
"beginning_arr": 3800000,
"expansion_arr": 600000,
"contraction_arr": 100000,
"churned_arr": 300000,
"annual_churn_rate_pct": 8
}
}
{
"quota": 500000,
"stages": ["Discovery", "Qualification", "Proposal", "Negotiation", "Closed Won"],
"average_cycle_days": 45,
"deals": [
{
"id": "D001",
"name": "Acme Corp",
"stage": "Proposal",
"value": 85000,
"age_days": 32,
"close_date": "2025-03-15",
"owner": "rep_1"
},
{
"id": "D002",
"name": "TechFlow Inc",
"stage": "Discovery",
"value": 42000,
"age_days": 8,
"close_date": "2025-04-30",
"owner": "rep_2"
},
{
"id": "D003",
"name": "GlobalData Systems",
"stage": "Negotiation",
"value": 120000,
"age_days": 55,
"close_date": "2025-02-28",
"owner": "rep_1"
},
{
"id": "D004",
"name": "Pinnacle Software",
"stage": "Qualification",
"value": 35000,
"age_days": 18,
"close_date": "2025-04-15",
"owner": "rep_3"
},
{
"id": "D005",
"name": "Meridian Health",
"stage": "Proposal",
"value": 95000,
"age_days": 40,
"close_date": "2025-03-20",
"owner": "rep_2"
},
{
"id": "D006",
"name": "CloudVault",
"stage": "Discovery",
"value": 28000,
"age_days": 5,
"close_date": "2025-05-15",
"owner": "rep_1"
},
{
"id": "D007",
"name": "Nexus Financial",
"stage": "Closed Won",
"value": 72000,
"age_days": 38,
"close_date": "2025-01-31",
"owner": "rep_3"
},
{
"id": "D008",
"name": "Urban Analytics",
"stage": "Negotiation",
"value": 58000,
"age_days": 42,
"close_date": "2025-03-05",
"owner": "rep_2"
},
{
"id": "D009",
"name": "Redwood Logistics",
"stage": "Discovery",
"value": 31000,
"age_days": 12,
"close_date": "2025-05-01",
"owner": "rep_3"
},
{
"id": "D010",
"name": "Summit Enterprises",
"stage": "Qualification",
"value": 48000,
"age_days": 22,
"close_date": "2025-04-10",
"owner": "rep_1"
},
{
"id": "D011",
"name": "Vertex Solutions",
"stage": "Proposal",
"value": 110000,
"age_days": 95,
"close_date": "2025-03-01",
"owner": "rep_2"
},
{
"id": "D012",
"name": "DataBridge AI",
"stage": "Discovery",
"value": 55000,
"age_days": 3,
"close_date": "2025-06-15",
"owner": "rep_1"
},
{
"id": "D013",
"name": "Atlas Manufacturing",
"stage": "Qualification",
"value": 67000,
"age_days": 28,
"close_date": "2025-04-20",
"owner": "rep_3"
},
{
"id": "D014",
"name": "Horizon Telecom",
"stage": "Negotiation",
"value": 250000,
"age_days": 60,
"close_date": "2025-03-10",
"owner": "rep_1"
},
{
"id": "D015",
"name": "BlueShift Labs",
"stage": "Proposal",
"value": 43000,
"age_days": 35,
"close_date": "2025-03-25",
"owner": "rep_3"
},
{
"id": "D016",
"name": "Crestview Partners",
"stage": "Discovery",
"value": 38000,
"age_days": 15,
"close_date": "2025-05-20",
"owner": "rep_2"
},
{
"id": "D017",
"name": "Ironclad Security",
"stage": "Closed Won",
"value": 91000,
"age_days": 44,
"close_date": "2025-02-10",
"owner": "rep_1"
}
]
}
GTM Efficiency Benchmarks
SaaS benchmarks by funding stage, industry standards, and strategies for improving go-to-market efficiency.
Benchmarks by Funding Stage
Seed Stage ($0-$2M ARR)
| Metric | Red | Yellow | Green | Elite |
|---|---|---|---|---|
| Magic Number | <0.3 | 0.3-0.5 | >0.5 | >0.8 |
| LTV:CAC | <1.5:1 | 1.5-2.5:1 | >2.5:1 | >4:1 |
| CAC Payback | >30 mo | 24-30 mo | <24 mo | <15 mo |
| Burn Multiple | >5x | 3-5x | <3x | <2x |
| Rule of 40 | <0% | 0-20% | >20% | >40% |
| NDR | <90% | 90-100% | >100% | >110% |
Context: At seed stage, efficiency metrics are naturally less stable due to small sample sizes. Focus on directional improvement rather than absolute numbers. Burn multiple is the most critical metric -- investors want to see capital-efficient growth.
Series A ($2M-$10M ARR)
| Metric | Red | Yellow | Green | Elite |
|---|---|---|---|---|
| Magic Number | <0.4 | 0.4-0.6 | >0.6 | >0.9 |
| LTV:CAC | <2:1 | 2-3:1 | >3:1 | >5:1 |
| CAC Payback | >24 mo | 18-24 mo | <18 mo | <12 mo |
| Burn Multiple | >4x | 2.5-4x | <2.5x | <1.5x |
| Rule of 40 | <10% | 10-30% | >30% | >50% |
| NDR | <95% | 95-105% | >105% | >115% |
Context: Series A is where unit economics must prove out. LTV:CAC >3:1 validates product-market fit in the revenue model. Investors will scrutinize CAC payback to understand capital requirements.
Series B ($10M-$50M ARR)
| Metric | Red | Yellow | Green | Elite |
|---|---|---|---|---|
| Magic Number | <0.5 | 0.5-0.75 | >0.75 | >1.0 |
| LTV:CAC | <2.5:1 | 2.5-3.5:1 | >3.5:1 | >5:1 |
| CAC Payback | >22 mo | 15-22 mo | <15 mo | <10 mo |
| Burn Multiple | >3x | 2-3x | <2x | <1.5x |
| Rule of 40 | <20% | 20-35% | >35% | >50% |
| NDR | <100% | 100-110% | >110% | >120% |
Context: At Series B, the GTM machine should be scaling predictably. Magic Number >0.75 demonstrates that adding GTM spend produces proportional returns. NDR >110% proves land-and-expand motion works.
Series C+ ($50M-$200M ARR)
| Metric | Red | Yellow | Green | Elite |
|---|---|---|---|---|
| Magic Number | <0.5 | 0.5-0.75 | >0.75 | >1.0 |
| LTV:CAC | <3:1 | 3-4:1 | >4:1 | >6:1 |
| CAC Payback | >20 mo | 14-20 mo | <14 mo | <10 mo |
| Burn Multiple | >2.5x | 1.5-2.5x | <1.5x | <1x |
| Rule of 40 | <25% | 25-40% | >40% | >60% |
| NDR | <105% | 105-115% | >115% | >130% |
Context: Growth efficiency and path to profitability become paramount. The Rule of 40 is the primary board-level metric. Companies approaching IPO should target Rule of 40 >40% consistently.
Growth / Pre-IPO ($200M+ ARR)
| Metric | Red | Yellow | Green | Elite |
|---|---|---|---|---|
| Magic Number | <0.6 | 0.6-0.8 | >0.8 | >1.0 |
| LTV:CAC | <3:1 | 3-5:1 | >5:1 | >7:1 |
| CAC Payback | >18 mo | 12-18 mo | <12 mo | <8 mo |
| Burn Multiple | >2x | 1-2x | <1x | <0.5x |
| Rule of 40 | <30% | 30-45% | >45% | >65% |
| NDR | <110% | 110-120% | >120% | >140% |
Context: Pre-IPO and public companies are measured on absolute efficiency. FCF margin matters as much as growth rate. Best-in-class companies demonstrate both growth and profitability.
Industry Vertical Benchmarks
Horizontal SaaS (CRM, HR, Finance, Marketing)
| Metric | Median | Top Quartile |
|---|---|---|
| Magic Number | 0.65 | 0.90+ |
| LTV:CAC | 3.2:1 | 5.5:1+ |
| CAC Payback | 17 months | 11 months |
| Gross Margin | 72% | 80%+ |
| NDR | 108% | 120%+ |
| Win Rate | 22% | 32%+ |
Vertical SaaS (Healthcare, FinTech, PropTech)
| Metric | Median | Top Quartile |
|---|---|---|
| Magic Number | 0.55 | 0.80+ |
| LTV:CAC | 3.8:1 | 6.0:1+ |
| CAC Payback | 15 months | 10 months |
| Gross Margin | 68% | 76%+ |
| NDR | 112% | 125%+ |
| Win Rate | 25% | 38%+ |
Note: Vertical SaaS often has higher NDR (deeper embedding) and higher win rates (less competition) but lower gross margins (more services).
Infrastructure / DevTools
| Metric | Median | Top Quartile |
|---|---|---|
| Magic Number | 0.70 | 1.0+ |
| LTV:CAC | 4.0:1 | 7.0:1+ |
| CAC Payback | 14 months | 9 months |
| Gross Margin | 75% | 85%+ |
| NDR | 118% | 140%+ |
| Win Rate | 18% | 28%+ |
Note: Usage-based pricing in infrastructure drives exceptional NDR but more volatile revenue patterns.
Security / Compliance
| Metric | Median | Top Quartile |
|---|---|---|
| Magic Number | 0.60 | 0.85+ |
| LTV:CAC | 3.5:1 | 5.8:1+ |
| CAC Payback | 16 months | 11 months |
| Gross Margin | 74% | 82%+ |
| NDR | 115% | 130%+ |
| Win Rate | 20% | 30%+ |
Efficiency Improvement Strategies
Improving Magic Number
Current: <0.5 (Red) -- Target: >0.75 (Green)
- Channel ROI analysis: Audit spend by channel (paid, outbound, events, content). Cut the bottom 20% of channels by performance and reallocate.
- Sales productivity: Measure revenue per rep. Identify bottom-quartile performers for coaching or role change. Study top performers and systematize their practices.
- Funnel efficiency: Improve MQL-to-SQL conversion through better lead scoring. Fewer, higher-quality leads reduce wasted sales capacity.
- Ramp time reduction: Accelerate new rep ramp from an average of 6 months to 4 months through structured onboarding, shadowing, and certification.
- Territory optimization: Ensure territories are balanced by opportunity (not just geography). Over-served territories waste capacity.
Improving LTV:CAC
Current: <3:1 (Yellow) -- Target: >5:1 (Green)
Increase LTV:
- Reduce churn through proactive health scoring and intervention
- Build expansion playbooks for cross-sell and upsell
- Increase pricing through value-based packaging
- Improve product stickiness with integrations and workflows
Decrease CAC:
- Invest in organic channels (content, SEO, community)
- Implement product-led growth (PLG) motion
- Optimize paid spend through better targeting and attribution
- Leverage customer referrals and case studies
Improving CAC Payback
Current: >18 months (Yellow) -- Target: <12 months (Green)
- Increase ARPA: Package features to drive higher initial contract values. Annual prepay discounts accelerate cash collection.
- Improve gross margin: Reduce COGS through automation, self-serve onboarding, and tech-touch customer success.
- Reduce CAC: Apply the same CAC-side strategies listed under LTV:CAC improvement.
- Contract structure: Annual or multi-year contracts with upfront payment reduce the effective payback period.
Improving Burn Multiple
Current: >2x (Yellow) -- Target: <1.5x (Green)
- Revenue efficiency: Focus on the highest-ROI growth activities. Not all ARR is equal -- expansion ARR is typically much cheaper than new logo ARR.
- Operational efficiency: Automate repeatable processes (billing, provisioning, basic support). Keep headcount growth below revenue growth.
- Spending discipline: Implement zero-based budgeting for non-essential spend. Every dollar of burn should connect to revenue generation.
- Revenue acceleration: Sometimes the best way to improve the burn multiple is not cutting costs but accelerating revenue. If 5% more spend accelerates revenue growth by 20%, the burn multiple improves.
Improving NDR
Current: 100-110% (Yellow) -- Target: >120% (Green)
- Expansion playbooks: Define trigger events for upsell (usage thresholds, team growth, feature requests). Arm CSMs with expansion talk tracks.
- Usage-based pricing: Align pricing with customer value creation. As customers use more, they pay more, which naturally drives expansion.
- Product-led expansion: Build in-product prompts for upgrades and feature gating that shows the value of the next tier.
- Reduce contraction: Identify the reasons for downgrades -- often poor adoption of features customers are paying for.
- Reduce churn: Implement an early-warning system (health scores). Intervene before renewal, not at renewal.
- Multi-product strategy: Cross-sell additional products to existing customers. Second-product adoption reduces churn by 30-50%.
Metric Relationships and Trade-offs
Growth vs. Efficiency
The fundamental tension in SaaS is between growth rate and capital efficiency:
High Growth + High Burn = Blitzscaling (risky but fast)
High Growth + Low Burn = Efficient Growth (ideal)
Low Growth + Low Burn = Cash Cow (sustainable but limited)
Low Growth + High Burn = Trouble (restructure immediately)
Rule of 40 captures this balance: growth rate + margin should exceed 40%.
CAC Payback vs. Growth Rate
Shorter CAC payback enables faster reinvestment in growth. A company with 12-month payback can reinvest recovered CAC into new customer acquisition sooner than one with 24-month payback, creating a compounding advantage.
NDR vs. New Logo Acquisition
High NDR reduces dependence on new logo acquisition for growth:
- NDR of 120% means 20% growth from existing base before any new customers
- NDR of 100% means all growth must come from new customers (expensive)
- NDR of 80% means the company is shrinking and must acquire even more new customers just to replace lost revenue
Strategic implication: Invest in NDR improvement before scaling new logo acquisition. A dollar spent improving retention and expansion typically returns more than a dollar spent acquiring new customers.
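The arithmetic above can be made concrete: at a fixed growth target, NDR determines how much new-logo ARR is still required. A minimal sketch (figures hypothetical):

```python
def new_arr_needed(beginning_arr: float, target_growth: float, ndr: float) -> float:
    """New-logo ARR required to hit a growth target at a given NDR.

    Ending ARR = Beginning ARR x NDR + New-logo ARR, so the gap to the
    target is whatever retention and expansion do not cover.
    """
    target_ending = beginning_arr * (1 + target_growth)
    retained = beginning_arr * ndr
    return max(target_ending - retained, 0.0)

# $10M base, 40% growth target:
new_arr_needed(10.0, 0.40, 1.20)  # 2.0 -- NDR of 120% covers half the target
new_arr_needed(10.0, 0.40, 1.00)  # 4.0 -- all growth must come from new logos
new_arr_needed(10.0, 0.40, 0.80)  # 6.0 -- churn must be replaced before growing
```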
Benchmark Data Sources
The benchmarks in this guide are compiled from:
- Bessemer Cloud Index -- Public cloud company financial data
- KeyBanc SaaS Survey -- Annual survey of private SaaS companies
- OpenView SaaS Benchmarks -- Product-led growth focused benchmarks
- Iconiq Growth Analytics -- Private company growth and efficiency data
- SaaStr Annual Surveys -- Community-sourced SaaS metrics
- Battery Ventures Software Report -- Enterprise software metrics
Note: Benchmarks shift over time. In capital-constrained environments (higher interest rates), efficiency metrics (burn multiple, Rule of 40) receive more weight. In growth-oriented environments (lower interest rates), growth rate and market share gain importance.
Quarterly Board Reporting Template
When presenting GTM efficiency to the board, organize metrics as follows:
- Growth: ARR, net new ARR, growth rate, NDR
- Efficiency: Magic Number, LTV:CAC, CAC Payback, Burn Multiple
- Balance: Rule of 40 score and composition
- Pipeline: Coverage ratio, velocity, forecast accuracy
- Trends: Quarter-over-quarter change for each metric with directional indicators
- Benchmarks: How the company compares to stage-appropriate benchmarks
- Actions: Top 3 initiatives to improve weakest metrics
Pipeline Management Framework
Best practices for pipeline management including stage definitions, conversion benchmarks, velocity optimization, and inspection cadence.
Pipeline Stage Definitions
A well-defined pipeline requires clear, observable exit criteria at each stage. Subjective stages lead to inaccurate forecasting and unreliable conversion data.
Recommended Stage Model (B2B SaaS)
| Stage | Name | Exit Criteria | Probability | Typical Duration |
|---|---|---|---|---|
| S0 | Lead | Contact identified, initial interest signal | 5% | 0-7 days |
| S1 | Discovery | Pain identified, budget confirmed, stakeholder engaged | 10% | 7-14 days |
| S2 | Qualification | MEDDPICC criteria met, mutual action plan created | 20% | 14-21 days |
| S3 | Proposal | Solution presented, pricing delivered, champion confirmed | 40% | 7-14 days |
| S4 | Negotiation | Commercial terms discussed, legal engaged, verbal commitment | 60% | 7-21 days |
| S5 | Commit | Contract redlined, signature timeline confirmed | 80% | 3-7 days |
| S6 | Closed Won | Signed contract received | 100% | -- |
| SL | Closed Lost | Deal disposition recorded with loss reason | 0% | -- |
Stage Exit Criteria Best Practices
Discovery (S1) Exit Criteria:
- Pain point articulated by prospect (not assumed by rep)
- Budget range discussed (even if informal)
- Decision-making process understood
- Next meeting scheduled with clear agenda
Qualification (S2) Exit Criteria:
- MEDDPICC or BANT qualification framework completed
- Economic buyer identified (not just champion)
- Compelling event or timeline identified
- Mutual action plan (MAP) shared and agreed upon
- Technical requirements understood
Proposal (S3) Exit Criteria:
- Solution demo completed and well-received
- Pricing proposal delivered
- Champion validated proposal internally
- Competitive landscape understood
- No unresolved technical blockers
Negotiation (S4) Exit Criteria:
- Commercial terms discussed (not just pricing, but payment terms, SLA, etc.)
- Legal review initiated
- Security/procurement review started
- Verbal agreement on core terms
- Close date confirmed within 30 days
Commit (S5) Exit Criteria:
- Final contract sent for signature
- All legal redlines resolved
- Procurement approval obtained
- Signature expected within 7 business days
Conversion Benchmarks by Segment
SMB (ACV <$25K)
| Transition | Benchmark | Top Quartile |
|---|---|---|
| Lead to Discovery | 20-30% | 35%+ |
| Discovery to Qualification | 40-50% | 55%+ |
| Qualification to Proposal | 50-60% | 65%+ |
| Proposal to Negotiation | 55-65% | 70%+ |
| Negotiation to Close | 65-75% | 80%+ |
| Overall Win Rate | 20-30% | 35%+ |
| Avg Cycle Length | 14-30 days | <14 days |
Mid-Market (ACV $25K-$100K)
| Transition | Benchmark | Top Quartile |
|---|---|---|
| Lead to Discovery | 15-25% | 30%+ |
| Discovery to Qualification | 35-45% | 50%+ |
| Qualification to Proposal | 45-55% | 60%+ |
| Proposal to Negotiation | 50-60% | 65%+ |
| Negotiation to Close | 60-70% | 75%+ |
| Overall Win Rate | 15-25% | 30%+ |
| Avg Cycle Length | 30-60 days | <30 days |
Enterprise (ACV >$100K)
| Transition | Benchmark | Top Quartile |
|---|---|---|
| Lead to Discovery | 10-20% | 25%+ |
| Discovery to Qualification | 30-40% | 45%+ |
| Qualification to Proposal | 40-50% | 55%+ |
| Proposal to Negotiation | 45-55% | 60%+ |
| Negotiation to Close | 55-65% | 70%+ |
| Overall Win Rate | 10-20% | 25%+ |
| Avg Cycle Length | 60-120 days | <60 days |
Sales Velocity Optimization
Sales velocity = (# Opportunities x Avg Deal Size x Win Rate) / Avg Cycle Days
Each component is an optimization lever:
Lever 1: Increase Opportunity Volume
Strategies:
- Invest in inbound marketing (content, SEO, paid)
- Scale outbound SDR capacity
- Develop partner/channel sourcing
- Launch product-led growth (PLG) motion
- Implement customer referral programs
Measurement: Pipeline created ($) per week/month, by source
Lever 2: Increase Average Deal Size
Strategies:
- Multi-product bundling and packaging
- Usage-based pricing with growth triggers
- Land-and-expand with defined expansion playbooks
- Move upmarket with enterprise features
- Value-based pricing tied to customer outcomes
Measurement: ACV trend by quarter, by segment
Lever 3: Increase Win Rate
Strategies:
- Implement MEDDPICC qualification rigor
- Build competitive battle cards and train on them
- Create multi-threaded relationships (not single-threaded)
- Develop ROI/business case tools
- Invest in sales engineering and demo quality
- Win/loss analysis with structured debriefs
Measurement: Win rate by stage entry, by competitor, by rep
Lever 4: Decrease Sales Cycle Length
Strategies:
- Pre-qualify harder at S1/S2 to remove slow deals
- Mutual action plans with milestone dates
- Champion enablement (arm champions with internal selling materials)
- Parallel processing (legal/security review concurrent with evaluation)
- Standardized contracts and pre-approved terms
- Executive sponsor engagement for stuck deals
Measurement: Days in each stage, cycle length trend, stage-specific bottlenecks
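The four levers compound because velocity is multiplicative. A minimal sketch of the formula above (the deal counts and values are hypothetical):

```python
def sales_velocity(opportunities: int, avg_deal_size: float,
                   win_rate: float, cycle_days: float) -> float:
    """Revenue per day: (# Opportunities x Avg Deal Size x Win Rate) / Avg Cycle Days."""
    return (opportunities * avg_deal_size * win_rate) / cycle_days

base = sales_velocity(100, 50_000, 0.20, 60)      # ~$16,667/day
# Pulling two levers at once compounds: raising win rate from 0.20 to 0.24
# and cutting the cycle from 60 to 45 days lifts velocity ~60%.
improved = sales_velocity(100, 50_000, 0.24, 45)  # ~$26,667/day
```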
Pipeline Inspection Cadence
Daily (Rep Level)
Focus: Deal-level activity and next steps
Questions:
- What is the next step for each deal in S3+?
- Are any deals missing next steps or scheduled meetings?
- Which deals have not been updated in >3 days?
Weekly (Manager/Team Level)
Focus: Pipeline health and forecast accuracy
Review Format (45-60 minutes):
Coverage Check (10 min)
- Current pipeline vs. quota -- is coverage >3x?
- Pipeline created this week vs. target
- Net pipeline change (created minus closed minus lost)
Deal Inspection (25 min)
- Walk top 10 deals by value in S3+
- MEDDPICC validation for each commit deal
- Identify deals at risk (aging, single-threaded, no next step)
Forecast Call (10 min)
- Commit, best case, and pipeline forecast
- Changes from last week's forecast (what moved and why)
- Gaps to plan and remediation
Action Items (5 min)
- Deals needing executive engagement
- Pipeline generation actions for next week
- Coaching priorities
Monthly (Leadership Level)
Focus: Pipeline trends, velocity, and efficiency
Review Areas:
- Month-over-month pipeline growth trend
- Conversion rate trends by stage
- Sales velocity trend (improving or declining?)
- Forecast accuracy (MAPE) for the month
- Rep performance distribution (quartile analysis)
- Pipeline source mix health
Quarterly (Executive/Board Level)
Focus: GTM efficiency and strategic pipeline
Review Areas:
- Pipeline coverage for next 2-3 quarters
- LTV:CAC and Magic Number trends
- Sales efficiency ratio trends
- Market segment performance comparison
- New market/product pipeline contribution
- Competitive win/loss trends
Pipeline Hygiene
Deal Hygiene Standards
Close date accuracy: Close dates must be based on buyer commitment, not rep hope. Any deal pushed more than twice should be flagged for re-qualification.
Stage accuracy: Deals must meet exit criteria to be in a stage. No deal should be in Proposal (S3) without a pricing deliverable sent.
Amount accuracy: Deal amounts must reflect the current proposal, not aspirational upsell. Variance between deal value and proposal should be <10%.
Contact coverage: Deals >$50K should have 3+ contacts associated. Enterprise deals should have economic buyer, champion, and technical evaluator.
Activity recency: No deal should go 7+ days without logged activity. Deals without recent activity signal stalling.
Pipeline Cleanup Triggers
Run cleanup when:
- Pipeline-to-quota ratio drops below 2.5x
- Forecast accuracy (MAPE) exceeds 20%
- More than 15% of pipeline is >90 days old
- Average deal age exceeds 1.5x normal cycle time
Cleanup Process
- Flag all deals with close date in the past
- Flag all deals with no activity in 14+ days
- Flag all deals pushed 3+ times
- Rep self-assessment: keep, push, or close for each flagged deal
- Manager review and disposition
- Update CRM and recalculate metrics
Pipeline Risk Indicators
Concentration Risk
Definition: Over-reliance on a small number of large deals.
Thresholds:
- Single deal >40% of pipeline = HIGH risk
- Single deal >25% of pipeline = MEDIUM risk
- Top 3 deals >70% of pipeline = HIGH risk
Mitigation: Diversify pipeline across segments, deal sizes, and sources. Increase deal count even if average deal size decreases.
Stage Imbalance Risk
Definition: Pipeline is concentrated in early or late stages with gaps in between.
Healthy Distribution:
- Discovery/Qualification: 50-60% of pipeline value
- Proposal: 20-25% of pipeline value
- Negotiation/Commit: 15-20% of pipeline value
Warning Signs:
- >70% in early stages = insufficient progression
- >50% in late stages = insufficient pipeline generation
- Empty stages = broken funnel mechanics
Temporal Risk
Definition: Pipeline is concentrated in a single quarter or lacks coverage for future quarters.
Standard: Maintain 3x coverage for current quarter and 1.5x for next quarter.
Source Risk
Definition: Pipeline is overly dependent on a single source (e.g., 80% outbound, 0% inbound).
Healthy Mix (varies by stage):
- Inbound/Marketing: 30-40%
- Outbound/SDR: 30-40%
- Partner/Channel: 10-20%
- Expansion/Customer: 10-20%
RevOps Metrics Guide
Complete reference for Revenue Operations metrics hierarchy, definitions, formulas, interpretation guidelines, and common mistakes.
Metrics Hierarchy
Revenue Operations metrics are organized in a hierarchy from leading indicators (pipeline activity) through lagging indicators (efficiency outcomes):
Level 1: Activity Metrics (Leading)
├── Pipeline created ($, #)
├── Meetings booked
├── Proposals sent
└── Demo completion rate
Level 2: Pipeline Metrics (Mid-funnel)
├── Pipeline coverage ratio
├── Stage conversion rates
├── Sales velocity
├── Deal aging
└── Pipeline hygiene score
Level 3: Revenue Metrics (Outcomes)
├── Bookings (new, expansion, renewal)
├── Revenue (ARR, MRR, TCV)
├── Win rate
└── Average deal size
Level 4: Efficiency Metrics (Unit Economics)
├── Magic Number
├── LTV:CAC Ratio
├── CAC Payback Period
├── Burn Multiple
├── Rule of 40
└── Net Dollar Retention
Level 5: Strategic Metrics (Board-Level)
├── Revenue per employee
├── Gross margin trend
├── NRR cohort analysis
└── Customer health score
Core Metric Definitions
Pipeline Coverage Ratio
Formula: Total Weighted Pipeline / Quota Target
What it measures: Whether there is sufficient pipeline to meet revenue targets.
Interpretation:
- 4x+: Strong coverage, selective deal pursuit possible
- 3-4x: Healthy coverage, standard operations
- 2-3x: At risk, accelerate pipeline generation
- <2x: Critical, immediate pipeline intervention needed
Common Mistakes:
- Including closed-won deals in the pipeline total
- Not weighting by stage probability
- Using annual quota against quarterly pipeline
- Ignoring deal quality in favor of quantity
Best Practice: Measure coverage ratio weekly. Track by quarter to identify seasonal gaps early.
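A minimal sketch of the weighted calculation. The stage probabilities are assumptions taken from the stage model earlier in this guide; calibrate them against your own historical conversion data:

```python
# Illustrative stage probabilities (S1 Discovery ... S5 Commit).
STAGE_PROBABILITY = {"S1": 0.10, "S2": 0.20, "S3": 0.40, "S4": 0.60, "S5": 0.80}

def weighted_coverage(deals: list[dict], quota: float) -> float:
    """Stage-weighted open pipeline divided by quota (closed deals excluded)."""
    weighted = sum(d["amount"] * STAGE_PROBABILITY.get(d["stage"], 0.0)
                   for d in deals)
    return weighted / quota

deals = [
    {"amount": 500_000, "stage": "S2"},
    {"amount": 300_000, "stage": "S4"},
    {"amount": 200_000, "stage": "S5"},
]
weighted_coverage(deals, 150_000)  # (100K + 180K + 160K) / 150K ~= 2.9x -> at risk
```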
Stage Conversion Rates
Formula: # Deals advancing to Stage N+1 / # Deals entering Stage N
What it measures: Efficiency of progression through each pipeline stage.
Typical SaaS Conversion Benchmarks:
| Stage Transition | Median Rate | Top Quartile |
|---|---|---|
| Lead to Qualification | 15-25% | 30%+ |
| Qualification to Proposal | 40-50% | 60%+ |
| Proposal to Negotiation | 50-60% | 70%+ |
| Negotiation to Close | 60-70% | 80%+ |
| Overall Win Rate | 15-25% | 30%+ |
Common Mistakes:
- Not standardizing stage exit criteria (subjective stages)
- Comparing conversion rates across different sales motions (PLG vs enterprise)
- Ignoring stage skipping (deals that jump stages inflate later conversion rates)
- Not segmenting by deal size or segment
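Stage rates compound multiplicatively into the overall win rate, which is why a small lift at each stage matters. A minimal sketch using the median mid-funnel rates from the table above:

```python
def overall_win_rate(stage_rates: list[float]) -> float:
    """Overall win rate is the product of the stage-to-stage conversion rates."""
    rate = 1.0
    for r in stage_rates:
        rate *= r
    return rate

# Median rates, Qualification onward: 45% x 55% x 65%
overall_win_rate([0.45, 0.55, 0.65])  # ~0.16 -> ~16% of qualified deals close
```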
Sales Velocity
Formula: (# Opportunities x Avg Deal Size x Win Rate) / Avg Sales Cycle Days
What it measures: The rate at which the pipeline generates revenue, measured as revenue per day.
Components:
- # Opportunities -- Volume of qualified deals in pipeline
- Avg Deal Size -- Average contract value of won deals
- Win Rate -- Percentage of deals that close
- Avg Sales Cycle -- Days from opportunity creation to close
Optimization levers:
- Increase opportunity volume (marketing/SDR investment)
- Increase deal size (pricing, packaging, upsell)
- Increase win rate (sales enablement, competitive positioning)
- Decrease cycle length (champion building, MEDDPICC adherence)
Common Mistakes:
- Using all pipeline deals instead of qualified opportunities
- Not normalizing for segment (SMB velocity vs Enterprise velocity)
- Conflating calendar time with active selling time
- Ignoring velocity trend in favor of absolute number
MAPE (Mean Absolute Percentage Error)
Formula: mean(|Actual - Forecast| / |Actual|) x 100
What it measures: Average forecast error magnitude as a percentage.
Interpretation:
| MAPE | Rating | Action |
|---|---|---|
| <10% | Excellent | Maintain current methodology |
| 10-15% | Good | Minor calibration adjustments |
| 15-25% | Fair | Methodology review needed |
| >25% | Poor | Fundamental process overhaul |
Common Mistakes:
- Using forecast vs. target instead of forecast vs. actual
- Not distinguishing between bias (systematic) and variance (random)
- Measuring only at the aggregate level (masks individual rep errors)
- Comparing MAPE across different time horizons (monthly vs quarterly)
Forecast Bias
Formula: mean(Forecast - Actual) / mean(Actual) x 100
What it measures: Systematic tendency to over-forecast or under-forecast.
Types:
- Positive bias (over-forecasting): Forecast consistently exceeds actual. Often indicates optimistic deal assessment or insufficient qualification.
- Negative bias (under-forecasting): Actual consistently exceeds forecast. Often indicates conservative call culture, late-stage deals arriving unexpectedly, or poor pipeline visibility.
Healthy Range: Bias within +/- 5% of actual is considered well-calibrated.
Magic Number
Formula: Net New ARR / Prior Period S&M Spend
What it measures: Efficiency of sales & marketing spend in generating new revenue.
Interpretation:
- >1.0: Extremely efficient, consider increasing GTM investment
- 0.75-1.0: Healthy efficiency, optimize and scale
- 0.50-0.75: Acceptable, focus on channel/spend optimization
- <0.50: Inefficient, audit spend allocation and productivity
Common Mistakes:
- Using total revenue instead of net new ARR
- Including expansion ARR (Magic Number measures new logo efficiency)
- Using current period spend instead of prior period (lag effect)
- Not separating sales spend from marketing spend for diagnostics
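A minimal sketch of the formula, including the prior-period lag it requires (dollar figures hypothetical):

```python
def magic_number(net_new_arr: float, prior_period_sm_spend: float) -> float:
    """Net new ARR this period divided by S&M spend in the PRIOR period."""
    return net_new_arr / prior_period_sm_spend

# $2M net new ARR this quarter against $2.5M S&M spend last quarter:
magic_number(2_000_000, 2_500_000)  # 0.8 -> healthy efficiency, optimize and scale
```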
LTV:CAC Ratio
Formula: Customer Lifetime Value / Customer Acquisition Cost
Where:
- LTV = (ARPA x Gross Margin) / Churn Rate
- ARPA = Average Revenue Per Account (annualized)
- CAC = Total S&M Spend / New Customers Acquired
Target: >3:1 is healthy; >5:1 may indicate under-investment in growth
Common Mistakes:
- Using revenue instead of gross-margin-weighted revenue in LTV
- Not including all acquisition costs (SDR, marketing, sales engineering)
- Using blended churn instead of cohort-specific churn
- Comparing across segments without normalizing (enterprise LTV:CAC is naturally higher)
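A minimal sketch of the LTV and CAC components combined into the ratio (all inputs hypothetical):

```python
def ltv(arpa: float, gross_margin: float, churn_rate: float) -> float:
    """LTV = (ARPA x Gross Margin) / Annual Churn Rate."""
    return (arpa * gross_margin) / churn_rate

def cac(sm_spend: float, new_customers: int) -> float:
    """CAC = Total S&M Spend / New Customers Acquired."""
    return sm_spend / new_customers

# $12K ARPA, 80% gross margin, 10% annual churn; $2M spend for 100 new logos:
ratio = ltv(12_000, 0.80, 0.10) / cac(2_000_000, 100)  # $96K / $20K = 4.8:1
```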
CAC Payback Period
Formula: CAC / (ARPA_monthly x Gross Margin)
What it measures: Months to recover the cost of acquiring a customer.
Interpretation:
- <12 months: Excellent capital efficiency
- 12-18 months: Healthy, especially for mid-market/enterprise
- 18-24 months: Acceptable for enterprise, concerning for SMB
- >24 months: Capital-intensive, needs optimization
Common Mistakes:
- Using revenue instead of gross-margin contribution
- Ignoring expansion revenue in payback calculation (conservative approach)
- Comparing SMB payback to enterprise payback without context
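A minimal sketch of the payback formula (inputs hypothetical):

```python
def cac_payback_months(cac: float, arpa_monthly: float, gross_margin: float) -> float:
    """Months to recover CAC from gross-margin-weighted monthly revenue."""
    return cac / (arpa_monthly * gross_margin)

cac_payback_months(20_000, 1_000, 0.80)  # 25.0 months -> capital-intensive
cac_payback_months(12_000, 1_500, 0.80)  # 10.0 months -> excellent
```

Note the gross-margin term: using raw revenue instead understates payback, which is the first common mistake listed above.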
Burn Multiple
Formula: Net Burn / Net New ARR
What it measures: How much cash is consumed for each dollar of new ARR.
Interpretation (David Sacks framework):
- <1.0x: Amazing -- hyper-efficient growth
- 1.0-1.5x: Great -- strong capital efficiency
- 1.5-2.0x: Good -- healthy burn rate
- 2.0-3.0x: Suspect -- needs attention
- >3.0x: Bad -- unsustainable without course correction
Common Mistakes:
- Using gross burn instead of net burn
- Not annualizing ARR when using quarterly burn
- Ignoring the denominator quality (all new ARR is not equal)
Rule of 40
Formula: Revenue Growth Rate (%) + Free Cash Flow Margin (%)
What it measures: Balance between growth and profitability.
Interpretation:
- >60%: Elite SaaS company
- 40-60%: Strong performance
- 20-40%: Acceptable, optimize one dimension
- <20%: Needs significant improvement
Common Mistakes:
- Using EBITDA margin instead of FCF margin
- Comparing early-stage (growth-heavy) with late-stage (margin-heavy)
- Not considering the composition (80% growth + -40% margin vs 30% + 10%)
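The composition point can be shown directly: two companies with the same score can carry very different risk. A minimal sketch:

```python
def rule_of_40(growth_rate_pct: float, fcf_margin_pct: float) -> float:
    """Revenue growth rate (%) plus free-cash-flow margin (%)."""
    return growth_rate_pct + fcf_margin_pct

# Same score, very different profiles:
rule_of_40(80, -40)  # 40 -- growth-heavy, burn-funded
rule_of_40(30, 10)   # 40 -- balanced, self-sustaining
```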
Net Dollar Retention (NDR)
Formula: (Beginning ARR + Expansion - Contraction - Churn) / Beginning ARR x 100
What it measures: Revenue retention and expansion from existing customers.
Interpretation:
- >130%: World-class expansion (Snowflake, Datadog)
- 120-130%: Excellent land-and-expand
- 110-120%: Strong retention with moderate expansion
- 100-110%: Stable base, limited expansion
- <100%: Net revenue contraction -- critical concern
Common Mistakes:
- Including new logos in the calculation
- Not normalizing for cohort age (newer cohorts expand differently)
- Confusing gross retention with net retention
- Using logo retention as a proxy for dollar retention
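A minimal sketch of the NDR formula, with new-logo ARR deliberately excluded (cohort figures hypothetical):

```python
def ndr(beginning_arr: float, expansion: float,
        contraction: float, churn: float) -> float:
    """(Beginning ARR + Expansion - Contraction - Churn) / Beginning ARR x 100."""
    return (beginning_arr + expansion - contraction - churn) / beginning_arr * 100

# $10M base cohort over twelve months: $2.5M expansion,
# $300K contraction, $700K churned:
ndr(10_000_000, 2_500_000, 300_000, 700_000)  # 115.0 -> strong retention
```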
Metric Interdependencies
Understanding how metrics relate prevents conflicting optimizations:
Magic Number and LTV:CAC -- Both use S&M spend but measure different horizons. Magic Number is period-specific; LTV:CAC is lifetime.
Burn Multiple and Rule of 40 -- Both measure efficiency but from different angles. Burn Multiple is cash-focused; Rule of 40 balances growth with profitability.
Pipeline Coverage and Sales Velocity -- High coverage with low velocity means pipeline is stagnating. Both must be healthy.
NDR and LTV -- NDR directly impacts LTV. Improving NDR is the highest-leverage way to improve LTV:CAC.
Win Rate and Deal Size -- Often inversely correlated. Moving upmarket increases deal size but may reduce win rate.
Measurement Cadence
| Metric | Cadence | Owner |
|---|---|---|
| Pipeline Coverage | Weekly | Sales Leadership |
| Stage Conversion | Bi-weekly | Sales Ops |
| Sales Velocity | Monthly | RevOps |
| Forecast Accuracy (MAPE) | Monthly/Quarterly | RevOps |
| Magic Number | Quarterly | CRO/CFO |
| LTV:CAC | Quarterly | Finance/RevOps |
| CAC Payback | Quarterly | Finance |
| Burn Multiple | Quarterly | CFO |
| Rule of 40 | Quarterly/Annual | CEO/Board |
| NDR | Quarterly | CS/RevOps |
#!/usr/bin/env python3
"""Forecast Accuracy Tracker - Measures forecast accuracy and bias for SaaS revenue teams.
Calculates MAPE (Mean Absolute Percentage Error), detects systematic forecasting
bias, analyzes accuracy trends, and provides category-level breakdowns.
Usage:
python forecast_accuracy_tracker.py forecast_data.json --format text
python forecast_accuracy_tracker.py forecast_data.json --format json
"""
import argparse
import json
import sys
from typing import Any
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def calculate_mape(periods: list[dict]) -> float:
"""Calculate Mean Absolute Percentage Error.
Formula: mean(|actual - forecast| / |actual|) x 100
Args:
periods: List of dicts with 'forecast' and 'actual' keys.
Returns:
MAPE as a percentage.
"""
if not periods:
return 0.0
errors = []
for p in periods:
actual = p["actual"]
forecast = p["forecast"]
if actual != 0:
errors.append(abs(actual - forecast) / abs(actual))
if not errors:
return 0.0
return (sum(errors) / len(errors)) * 100
def calculate_weighted_mape(periods: list[dict]) -> float:
"""Calculate value-weighted MAPE.
Weights each period's error by its actual value, giving more importance
to larger periods.
Args:
periods: List of dicts with 'forecast' and 'actual' keys.
Returns:
Weighted MAPE as a percentage.
"""
if not periods:
return 0.0
total_actual = sum(abs(p["actual"]) for p in periods)
if total_actual == 0:
return 0.0
weighted_errors = 0.0
for p in periods:
actual = p["actual"]
forecast = p["forecast"]
if actual != 0:
weight = abs(actual) / total_actual
weighted_errors += weight * (abs(actual - forecast) / abs(actual))
return weighted_errors * 100
def get_accuracy_rating(mape: float) -> dict[str, str]:
"""Return accuracy rating based on MAPE threshold.
Ratings:
Excellent: <10%
Good: 10-15%
Fair: 15-25%
Poor: >25%
"""
if mape < 10:
return {"rating": "Excellent", "description": "Highly predictable, data-driven process"}
elif mape < 15:
return {"rating": "Good", "description": "Reliable forecasting with minor variance"}
elif mape < 25:
return {"rating": "Fair", "description": "Needs process improvement"}
else:
return {"rating": "Poor", "description": "Significant forecasting methodology gaps"}
def analyze_bias(periods: list[dict]) -> dict[str, Any]:
"""Analyze systematic forecasting bias.
Positive bias = over-forecasting (forecast > actual, i.e., actual fell short)
Negative bias = under-forecasting (forecast < actual, i.e., actual exceeded)
Args:
periods: List of dicts with 'forecast' and 'actual' keys.
Returns:
Bias analysis with direction, magnitude, and ratio.
"""
if not periods:
        return {
            "direction": "None",
            "avg_bias_amount": 0.0,
            "bias_pct": 0.0,
            "over_forecast_count": 0,
            "under_forecast_count": 0,
            "exact_count": 0,
            "bias_ratio": 0.0,
        }
over_count = 0
under_count = 0
exact_count = 0
total_bias = 0.0
for p in periods:
diff = p["forecast"] - p["actual"]
total_bias += diff
if diff > 0:
over_count += 1
elif diff < 0:
under_count += 1
else:
exact_count += 1
avg_bias = total_bias / len(periods)
total_actual = sum(p["actual"] for p in periods)
bias_pct = safe_divide(total_bias, total_actual) * 100
if over_count > under_count:
direction = "Over-forecasting"
elif under_count > over_count:
direction = "Under-forecasting"
else:
direction = "Balanced"
bias_ratio = safe_divide(over_count, over_count + under_count)
return {
"direction": direction,
"avg_bias_amount": round(avg_bias, 2),
"bias_pct": round(bias_pct, 1),
"over_forecast_count": over_count,
"under_forecast_count": under_count,
"exact_count": exact_count,
"bias_ratio": round(bias_ratio, 2),
}
def analyze_trend(periods: list[dict]) -> dict[str, Any]:
"""Analyze period-over-period accuracy trend.
Determines if forecast accuracy is improving, stable, or declining
by comparing error rates across consecutive periods.
Args:
periods: List of dicts with 'period', 'forecast', and 'actual' keys.
Returns:
Trend analysis with direction and period details.
"""
if len(periods) < 2:
return {
"trend": "Insufficient data",
"period_errors": [],
"improving_periods": 0,
"declining_periods": 0,
}
period_errors = []
for p in periods:
actual = p["actual"]
forecast = p["forecast"]
if actual != 0:
error_pct = abs(actual - forecast) / abs(actual) * 100
else:
error_pct = 0.0
period_errors.append({
"period": p.get("period", "Unknown"),
"error_pct": round(error_pct, 1),
"forecast": forecast,
"actual": actual,
})
improving = 0
declining = 0
for i in range(1, len(period_errors)):
if period_errors[i]["error_pct"] < period_errors[i - 1]["error_pct"]:
improving += 1
elif period_errors[i]["error_pct"] > period_errors[i - 1]["error_pct"]:
declining += 1
if improving > declining:
trend = "Improving"
elif declining > improving:
trend = "Declining"
else:
trend = "Stable"
# Calculate recent vs historical MAPE
midpoint = len(periods) // 2
if midpoint > 0:
early_mape = calculate_mape(periods[:midpoint])
recent_mape = calculate_mape(periods[midpoint:])
mape_change = recent_mape - early_mape
else:
early_mape = 0.0
recent_mape = 0.0
mape_change = 0.0
return {
"trend": trend,
"period_errors": period_errors,
"improving_periods": improving,
"declining_periods": declining,
"early_mape": round(early_mape, 1),
"recent_mape": round(recent_mape, 1),
"mape_change": round(mape_change, 1),
}
def analyze_categories(category_breakdowns: dict) -> dict[str, Any]:
"""Analyze accuracy by category (rep, product, segment, etc.).
Args:
category_breakdowns: Dict of category_name -> list of
{category, forecast, actual} dicts.
Returns:
Category-level MAPE and accuracy analysis.
"""
results = {}
for category_name, entries in category_breakdowns.items():
category_results = []
for entry in entries:
actual = entry["actual"]
forecast = entry["forecast"]
if actual != 0:
error_pct = abs(actual - forecast) / abs(actual) * 100
else:
error_pct = 0.0
diff = forecast - actual
if diff > 0:
bias = "Over"
elif diff < 0:
bias = "Under"
else:
bias = "Exact"
rating = get_accuracy_rating(error_pct)
category_results.append({
"category": entry["category"],
"forecast": forecast,
"actual": actual,
"error_pct": round(error_pct, 1),
"bias": bias,
"variance": round(diff, 2),
"rating": rating["rating"],
})
# Sort by error percentage (worst first)
category_results.sort(key=lambda x: x["error_pct"], reverse=True)
overall_mape = calculate_mape(entries)
results[category_name] = {
"entries": category_results,
"overall_mape": round(overall_mape, 1),
"overall_rating": get_accuracy_rating(overall_mape)["rating"],
}
return results
def generate_recommendations(
mape: float, bias: dict, trend: dict, categories: dict
) -> list[str]:
"""Generate actionable recommendations based on analysis results.
Args:
mape: Overall MAPE percentage.
bias: Bias analysis results.
trend: Trend analysis results.
categories: Category analysis results.
Returns:
List of recommendation strings.
"""
recommendations = []
# MAPE-based recommendations
if mape > 25:
recommendations.append(
"CRITICAL: MAPE exceeds 25%. Implement structured forecasting methodology "
"(e.g., weighted pipeline with stage-based probabilities)."
)
elif mape > 15:
recommendations.append(
"Forecast accuracy needs improvement. Consider implementing deal-level "
"forecasting with commit/upside/pipeline categories."
)
# Bias-based recommendations
if bias["direction"] == "Over-forecasting" and abs(bias["bias_pct"]) > 10:
recommendations.append(
f"Systematic over-forecasting detected ({bias['bias_pct']}% bias). "
"Review deal qualification criteria and apply more conservative "
"stage probabilities."
)
elif bias["direction"] == "Under-forecasting" and abs(bias["bias_pct"]) > 10:
recommendations.append(
f"Systematic under-forecasting detected ({bias['bias_pct']}% bias). "
"Review upside deals more carefully and improve pipeline visibility."
)
# Trend-based recommendations
if trend["trend"] == "Declining":
recommendations.append(
"Forecast accuracy is declining over time. Schedule a forecasting "
"methodology review and retrain the team on forecasting best practices."
)
elif trend["trend"] == "Improving":
recommendations.append(
"Forecast accuracy is improving. Continue current methodology and "
"document best practices for consistency."
)
# Category-based recommendations
for cat_name, cat_data in categories.items():
worst_entries = [
e for e in cat_data["entries"] if e["error_pct"] > 25
]
if worst_entries:
names = ", ".join(e["category"] for e in worst_entries[:3])
recommendations.append(
f"High error rates in {cat_name}: {names}. "
f"Provide targeted coaching on forecasting discipline."
)
if not recommendations:
recommendations.append(
"Forecasting performance is strong. Maintain current processes "
"and continue monitoring for drift."
)
return recommendations
def track_forecast_accuracy(data: dict) -> dict[str, Any]:
"""Run complete forecast accuracy analysis.
Args:
data: Forecast data with periods and optional category breakdowns.
Returns:
Complete forecast accuracy analysis results.
"""
periods = data["forecast_periods"]
mape = calculate_mape(periods)
weighted_mape = calculate_weighted_mape(periods)
rating = get_accuracy_rating(mape)
bias = analyze_bias(periods)
trend = analyze_trend(periods)
categories = {}
if "category_breakdowns" in data:
categories = analyze_categories(data["category_breakdowns"])
recommendations = generate_recommendations(mape, bias, trend, categories)
return {
"mape": round(mape, 1),
"weighted_mape": round(weighted_mape, 1),
"accuracy_rating": rating,
"bias": bias,
"trend": trend,
"category_breakdowns": categories,
"recommendations": recommendations,
"periods_analyzed": len(periods),
}
def format_currency(value: float) -> str:
"""Format a number as currency."""
if abs(value) >= 1_000_000:
return f"${value / 1_000_000:,.1f}M"
elif abs(value) >= 1_000:
return f"${value / 1_000:,.1f}K"
return f"${value:,.0f}"
def format_text_report(results: dict) -> str:
"""Format analysis results as a human-readable text report."""
lines = []
lines.append("=" * 70)
lines.append("FORECAST ACCURACY REPORT")
lines.append("=" * 70)
# Overall accuracy
lines.append("")
lines.append("OVERALL ACCURACY")
lines.append("-" * 40)
lines.append(f" MAPE: {results['mape']}%")
lines.append(f" Weighted MAPE: {results['weighted_mape']}%")
lines.append(f" Rating: {results['accuracy_rating']['rating']}")
lines.append(f" Assessment: {results['accuracy_rating']['description']}")
lines.append(f" Periods Analyzed: {results['periods_analyzed']}")
# Bias analysis
bias = results["bias"]
lines.append("")
lines.append("FORECAST BIAS")
lines.append("-" * 40)
lines.append(f" Direction: {bias['direction']}")
lines.append(f" Bias %: {bias['bias_pct']}%")
lines.append(f" Avg Bias Amount: {format_currency(bias['avg_bias_amount'])}")
lines.append(f" Over-forecast: {bias['over_forecast_count']} periods")
lines.append(f" Under-forecast: {bias['under_forecast_count']} periods")
lines.append(f" Bias Ratio: {bias['bias_ratio']}")
# Trend analysis
trend = results["trend"]
lines.append("")
lines.append("ACCURACY TREND")
lines.append("-" * 40)
lines.append(f" Trend: {trend['trend']}")
lines.append(f" Improving: {trend['improving_periods']} periods")
lines.append(f" Declining: {trend['declining_periods']} periods")
if trend.get("early_mape") is not None and trend["trend"] != "Insufficient data":
lines.append(f" Early MAPE: {trend['early_mape']}%")
lines.append(f" Recent MAPE: {trend['recent_mape']}%")
lines.append(f" MAPE Change: {trend['mape_change']:+.1f}%")
if trend.get("period_errors"):
lines.append("")
lines.append(" PERIOD DETAIL:")
for pe in trend["period_errors"]:
lines.append(
f" {pe['period']:12s} "
f"Forecast: {format_currency(pe['forecast']):>10s} "
f"Actual: {format_currency(pe['actual']):>10s} "
f"Error: {pe['error_pct']}%"
)
# Category breakdowns
if results["category_breakdowns"]:
lines.append("")
lines.append("CATEGORY BREAKDOWN")
lines.append("-" * 40)
for cat_name, cat_data in results["category_breakdowns"].items():
lines.append(
f"\n {cat_name.upper()} (Overall MAPE: {cat_data['overall_mape']}% "
f"- {cat_data['overall_rating']})"
)
for entry in cat_data["entries"]:
lines.append(
f" {entry['category']:20s} "
f"Error: {entry['error_pct']:5.1f}% "
f"Bias: {entry['bias']:5s} "
f"Rating: {entry['rating']}"
)
# Recommendations
lines.append("")
lines.append("RECOMMENDATIONS")
lines.append("-" * 40)
for i, rec in enumerate(results["recommendations"], 1):
lines.append(f" {i}. {rec}")
lines.append("")
lines.append("=" * 70)
return "\n".join(lines)
def main() -> None:
"""Main entry point for forecast accuracy tracker CLI."""
parser = argparse.ArgumentParser(
description="Track and analyze forecast accuracy for SaaS revenue teams."
)
parser.add_argument(
"input",
help="Path to JSON file containing forecast data",
)
parser.add_argument(
"--format",
choices=["json", "text"],
default="text",
help="Output format: json or text (default: text)",
)
args = parser.parse_args()
try:
with open(args.input, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {args.input}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {args.input}: {e}", file=sys.stderr)
sys.exit(1)
if "forecast_periods" not in data:
print("Error: Missing required field 'forecast_periods' in input data", file=sys.stderr)
sys.exit(1)
results = track_forecast_accuracy(data)
if args.format == "json":
print(json.dumps(results, indent=2))
else:
print(format_text_report(results))
if __name__ == "__main__":
main()
#!/usr/bin/env python3
"""GTM Efficiency Calculator - Calculates go-to-market efficiency metrics for SaaS.
Computes Magic Number, LTV:CAC, CAC Payback, Burn Multiple, Rule of 40,
and Net Dollar Retention with industry benchmarking and ratings.
Usage:
python gtm_efficiency_calculator.py gtm_data.json --format text
python gtm_efficiency_calculator.py gtm_data.json --format json
"""
import argparse
import json
import sys
from typing import Any
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
# --- Benchmark tables ---
# Each benchmark defines green/yellow/red thresholds
# and optional percentile placement guidance
BENCHMARKS = {
"magic_number": {
"green": {"min": 0.75, "label": ">0.75 - Efficient GTM spend"},
"yellow": {"min": 0.50, "max": 0.75, "label": "0.50-0.75 - Acceptable efficiency"},
"red": {"max": 0.50, "label": "<0.50 - Inefficient GTM spend"},
"elite": 1.0,
"description": "Net New ARR / Prior Period S&M Spend",
},
"ltv_cac_ratio": {
"green": {"min": 3.0, "label": ">3:1 - Strong unit economics"},
"yellow": {"min": 1.0, "max": 3.0, "label": "1:1-3:1 - Marginal unit economics"},
"red": {"max": 1.0, "label": "<1:1 - Unsustainable unit economics"},
"elite": 5.0,
"description": "Customer LTV / Customer Acquisition Cost",
},
"cac_payback_months": {
"green": {"max": 18, "label": "<18 months - Healthy payback"},
"yellow": {"min": 18, "max": 24, "label": "18-24 months - Acceptable payback"},
"red": {"min": 24, "label": ">24 months - Capital intensive"},
"elite": 12,
"description": "CAC / (ARPA x Gross Margin) in months",
},
"burn_multiple": {
"green": {"max": 2.0, "label": "<2x - Capital efficient growth"},
"yellow": {"min": 2.0, "max": 4.0, "label": "2-4x - Moderate burn"},
"red": {"min": 4.0, "label": ">4x - Unsustainable burn"},
"elite": 1.0,
"description": "Net Burn / Net New ARR",
},
"rule_of_40": {
"green": {"min": 40, "label": ">40% - Strong balance of growth & profitability"},
"yellow": {"min": 20, "max": 40, "label": "20-40% - Acceptable balance"},
"red": {"max": 20, "label": "<20% - Needs improvement"},
"elite": 60,
"description": "Revenue Growth % + FCF Margin %",
},
"ndr_pct": {
"green": {"min": 110, "label": ">110% - Strong expansion revenue"},
"yellow": {"min": 100, "max": 110, "label": "100-110% - Stable base"},
"red": {"max": 100, "label": "<100% - Net revenue contraction"},
"elite": 130,
"description": "(Begin ARR + Expansion - Contraction - Churn) / Begin ARR",
},
}
def rate_metric(metric_name: str, value: float) -> dict[str, str]:
"""Rate a metric as Green/Yellow/Red based on benchmark thresholds.
Args:
metric_name: Key into BENCHMARKS dict.
value: The metric value to rate.
Returns:
Dict with rating color, label, and percentile guidance.
"""
bench = BENCHMARKS.get(metric_name)
if not bench:
return {"rating": "Unknown", "label": "No benchmark available"}
# For metrics where lower is better (cac_payback, burn_multiple)
lower_is_better = metric_name in ("cac_payback_months", "burn_multiple")
if lower_is_better:
if "max" in bench["green"] and value <= bench["green"]["max"]:
rating = "Green"
label = bench["green"]["label"]
elif "min" in bench.get("yellow", {}) and "max" in bench.get("yellow", {}):
if bench["yellow"]["min"] <= value <= bench["yellow"]["max"]:
rating = "Yellow"
label = bench["yellow"]["label"]
else:
rating = "Red"
label = bench["red"]["label"]
else:
rating = "Red"
label = bench["red"]["label"]
else:
if "min" in bench["green"] and value >= bench["green"]["min"]:
rating = "Green"
label = bench["green"]["label"]
elif "min" in bench.get("yellow", {}) and "max" in bench.get("yellow", {}):
if bench["yellow"]["min"] <= value <= bench["yellow"]["max"]:
rating = "Yellow"
label = bench["yellow"]["label"]
else:
rating = "Red"
label = bench["red"]["label"]
else:
rating = "Red"
label = bench["red"]["label"]
# Percentile placement (simplified)
elite = bench.get("elite", 0)
if lower_is_better:
if elite > 0 and value > 0:
if value <= elite:
percentile = "Top 10%"
elif rating == "Green":
percentile = "Top 25%"
elif rating == "Yellow":
percentile = "Median"
else:
percentile = "Below median"
else:
percentile = "N/A"
else:
if elite > 0:
if value >= elite:
percentile = "Top 10%"
elif rating == "Green":
percentile = "Top 25%"
elif rating == "Yellow":
percentile = "Median"
else:
percentile = "Below median"
else:
percentile = "N/A"
return {
"rating": rating,
"label": label,
"percentile": percentile,
}
def calculate_magic_number(net_new_arr: float, sm_spend: float) -> dict[str, Any]:
"""Calculate Magic Number.
Formula: Net New ARR / Prior Period S&M Spend
Target: >0.75
Args:
net_new_arr: Net new annual recurring revenue in the period.
sm_spend: Sales & marketing spend in the prior period.
Returns:
Magic number value with rating and benchmark.
"""
value = safe_divide(net_new_arr, sm_spend)
benchmark = rate_metric("magic_number", value)
return {
"value": round(value, 2),
"net_new_arr": net_new_arr,
"sm_spend": sm_spend,
"formula": "Net New ARR / Prior Period S&M Spend",
"target": ">0.75",
**benchmark,
}
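As a quick sanity check of the Magic Number formula above, here it is applied to hypothetical figures (these numbers are illustrative, not from the sample data):

```python
# Magic Number = Net New ARR / prior-period S&M spend (hypothetical figures)
net_new_arr = 1_200_000
sm_spend = 1_500_000
magic_number = net_new_arr / sm_spend
print(round(magic_number, 2))  # 0.8 -> Green against the >0.75 benchmark
```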
def calculate_ltv_cac(
arpa_monthly: float,
gross_margin_pct: float,
annual_churn_rate_pct: float,
cac: float,
) -> dict[str, Any]:
"""Calculate LTV:CAC Ratio.
LTV = ARPA_monthly x 12 x Gross Margin / Annual Churn Rate
Ratio = LTV / CAC
Target: >3:1
Args:
arpa_monthly: Average revenue per account per month.
gross_margin_pct: Gross margin as percentage (e.g., 78 for 78%).
annual_churn_rate_pct: Annual churn rate as percentage (e.g., 8 for 8%).
cac: Customer acquisition cost.
Returns:
LTV:CAC ratio with component values, rating, and benchmark.
"""
gross_margin = gross_margin_pct / 100
churn_rate = annual_churn_rate_pct / 100
arpa_annual = arpa_monthly * 12
ltv = safe_divide(arpa_annual * gross_margin, churn_rate)
ratio = safe_divide(ltv, cac)
benchmark = rate_metric("ltv_cac_ratio", ratio)
return {
"ratio": round(ratio, 1),
"ltv": round(ltv, 2),
"cac": cac,
"arpa_monthly": arpa_monthly,
"arpa_annual": arpa_annual,
"gross_margin_pct": gross_margin_pct,
"annual_churn_rate_pct": annual_churn_rate_pct,
"formula": "LTV (ARPA x Gross Margin / Churn Rate) / CAC",
"target": ">3:1",
**benchmark,
}
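A worked sketch of the same LTV approximation (margin-adjusted annual ARPA over churn) with hypothetical inputs:

```python
# LTV:CAC with hypothetical figures: $500/mo ARPA, 80% margin,
# 10% annual churn, $12K CAC
arpa_monthly = 500
gross_margin = 0.80
annual_churn = 0.10
cac = 12_000
ltv = arpa_monthly * 12 * gross_margin / annual_churn  # ~$48,000
ratio = ltv / cac
print(f"{ratio:.1f}:1")  # 4.0:1 -> Green against the >3:1 benchmark
```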
def calculate_cac_payback(
cac: float, arpa_monthly: float, gross_margin_pct: float
) -> dict[str, Any]:
"""Calculate CAC Payback Period.
Formula: CAC / (ARPA_monthly x Gross Margin) in months
Target: <18 months
Args:
cac: Customer acquisition cost.
arpa_monthly: Average revenue per account per month.
gross_margin_pct: Gross margin as percentage.
Returns:
CAC payback months with rating and benchmark.
"""
gross_margin = gross_margin_pct / 100
monthly_contribution = arpa_monthly * gross_margin
payback_months = safe_divide(cac, monthly_contribution)
benchmark = rate_metric("cac_payback_months", payback_months)
return {
"months": round(payback_months, 1),
"cac": cac,
"arpa_monthly": arpa_monthly,
"gross_margin_pct": gross_margin_pct,
"monthly_contribution": round(monthly_contribution, 2),
"formula": "CAC / (ARPA_monthly x Gross Margin)",
"target": "<18 months",
**benchmark,
}
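The payback formula amounts to counting months of gross-margin contribution needed to recover acquisition cost. A minimal sketch with hypothetical figures:

```python
# CAC payback: months of margin-adjusted ARPA needed to recover CAC
cac = 6_000
arpa_monthly = 500
gross_margin = 0.80
monthly_contribution = arpa_monthly * gross_margin  # $400/month
payback_months = cac / monthly_contribution
print(round(payback_months, 1))  # 15.0 -> Green against the <18 month target
```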
def calculate_burn_multiple(net_burn: float, net_new_arr: float) -> dict[str, Any]:
"""Calculate Burn Multiple.
Formula: Net Burn / Net New ARR
Target: <2x (lower is better)
Args:
net_burn: Net cash burn in the period.
net_new_arr: Net new ARR added in the period.
Returns:
Burn multiple with rating and benchmark.
"""
value = safe_divide(net_burn, net_new_arr)
benchmark = rate_metric("burn_multiple", value)
return {
"value": round(value, 2),
"net_burn": net_burn,
"net_new_arr": net_new_arr,
"formula": "Net Burn / Net New ARR",
"target": "<2x",
**benchmark,
}
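Burn Multiple reads as cash burned per dollar of net new ARR, so lower is better. With hypothetical figures:

```python
# Burn Multiple = Net Burn / Net New ARR (hypothetical figures)
net_burn = 1_800_000
net_new_arr = 1_200_000
burn_multiple = net_burn / net_new_arr
print(round(burn_multiple, 2))  # 1.5 -> Green against the <2x benchmark
```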
def calculate_rule_of_40(
revenue_growth_pct: float, fcf_margin_pct: float
) -> dict[str, Any]:
"""Calculate Rule of 40.
Formula: Revenue Growth % + FCF Margin %
Target: >40%
Args:
revenue_growth_pct: Year-over-year revenue growth percentage.
fcf_margin_pct: Free cash flow margin percentage.
Returns:
Rule of 40 score with rating and benchmark.
"""
value = revenue_growth_pct + fcf_margin_pct
benchmark = rate_metric("rule_of_40", value)
return {
"value": round(value, 1),
"revenue_growth_pct": revenue_growth_pct,
"fcf_margin_pct": fcf_margin_pct,
"formula": "Revenue Growth % + FCF Margin %",
"target": ">40%",
**benchmark,
}
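Note that the FCF margin term can be negative: fast growth can offset cash burn. A hypothetical 55% grower burning 10% of revenue still clears the bar:

```python
# Rule of 40 = Revenue Growth % + FCF Margin % (hypothetical figures)
revenue_growth_pct = 55.0
fcf_margin_pct = -10.0  # burning cash at 10% of revenue
score = revenue_growth_pct + fcf_margin_pct
print(score)  # 45.0 -> Green against the >40% benchmark
```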
def calculate_ndr(
beginning_arr: float,
expansion_arr: float,
contraction_arr: float,
churned_arr: float,
) -> dict[str, Any]:
"""Calculate Net Dollar Retention.
Formula: (Beginning ARR + Expansion - Contraction - Churn) / Beginning ARR
Target: >110%
Args:
beginning_arr: ARR at start of period.
expansion_arr: Expansion revenue from existing customers.
contraction_arr: Revenue lost from downgrades.
churned_arr: Revenue lost from customer churn.
Returns:
NDR percentage with rating and benchmark.
"""
ending_arr = beginning_arr + expansion_arr - contraction_arr - churned_arr
ndr_pct = safe_divide(ending_arr, beginning_arr) * 100
benchmark = rate_metric("ndr_pct", ndr_pct)
return {
"ndr_pct": round(ndr_pct, 1),
"beginning_arr": beginning_arr,
"expansion_arr": expansion_arr,
"contraction_arr": contraction_arr,
"churned_arr": churned_arr,
"ending_arr": round(ending_arr, 2),
"formula": "(Begin ARR + Expansion - Contraction - Churn) / Begin ARR",
"target": ">110%",
**benchmark,
}
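A worked NDR example using hypothetical ARR movements over one period:

```python
# NDR = (Begin ARR + Expansion - Contraction - Churn) / Begin ARR
beginning_arr = 10_000_000
expansion, contraction, churn = 1_500_000, 300_000, 500_000
ending_arr = beginning_arr + expansion - contraction - churn  # $10.7M
ndr_pct = ending_arr / beginning_arr * 100
print(round(ndr_pct, 1))  # 107.0 -> Yellow: stable base, limited expansion
```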
def generate_recommendations(metrics: dict) -> list[str]:
"""Generate strategic recommendations based on GTM efficiency metrics.
Args:
metrics: Dict of all calculated metric results.
Returns:
List of recommendation strings.
"""
recs = []
# Magic Number
mn = metrics["magic_number"]
if mn["rating"] == "Red":
recs.append(
f"Magic Number is {mn['value']} (target >0.75). GTM spend is inefficient. "
"Audit channel ROI, optimize sales productivity, and consider reducing "
"low-performing spend."
)
elif mn["rating"] == "Yellow":
recs.append(
f"Magic Number is {mn['value']}. GTM efficiency is acceptable but can improve. "
"Focus on sales enablement and pipeline quality over quantity."
)
# LTV:CAC
lc = metrics["ltv_cac"]
if lc["rating"] == "Red":
recs.append(
f"LTV:CAC ratio is {lc['ratio']}:1 (target >3:1). Unit economics are unsustainable. "
"Reduce CAC through better targeting, improve retention to increase LTV, "
"or increase ARPA through pricing optimization."
)
elif lc["rating"] == "Yellow":
recs.append(
f"LTV:CAC ratio is {lc['ratio']}:1. Unit economics are marginal. "
"Focus on reducing churn and expanding within existing accounts."
)
# CAC Payback
cp = metrics["cac_payback"]
if cp["rating"] == "Red":
recs.append(
f"CAC payback is {cp['months']} months (target <18). Capital recovery is too slow. "
"Reduce acquisition costs or increase gross-margin-weighted ARPA."
)
# Burn Multiple
bm = metrics["burn_multiple"]
if bm["rating"] == "Red":
recs.append(
f"Burn multiple is {bm['value']}x (target <2x). Cash consumption relative to "
"growth is unsustainable. Prioritize operating efficiency and path to profitability."
)
# Rule of 40
r40 = metrics["rule_of_40"]
if r40["rating"] == "Red":
recs.append(
f"Rule of 40 score is {r40['value']}% (target >40%). Balance of growth and "
"profitability needs improvement. Either accelerate growth or improve margins."
)
# NDR
ndr = metrics["ndr"]
if ndr["rating"] == "Red":
recs.append(
f"NDR is {ndr['ndr_pct']}% (target >110%). Net revenue is contracting from "
"the existing base. Prioritize churn reduction and expansion playbooks."
)
elif ndr["rating"] == "Yellow":
recs.append(
f"NDR is {ndr['ndr_pct']}%. Base is stable but not expanding. "
"Invest in cross-sell/upsell motions and customer success capacity."
)
# Positive summary if everything is green
green_count = sum(
1 for m in metrics.values()
if isinstance(m, dict) and m.get("rating") == "Green"
)
total_metrics = 6
if green_count == total_metrics:
recs.append(
"All GTM efficiency metrics are in healthy ranges. Maintain current "
"trajectory and optimize for best-in-class performance."
)
elif green_count >= 4:
recs.append(
f"{green_count}/{total_metrics} metrics are green. GTM efficiency is generally "
"healthy. Address the yellow/red areas for continuous improvement."
)
return recs
def calculate_all_metrics(data: dict) -> dict[str, Any]:
"""Calculate all GTM efficiency metrics from input data.
Args:
data: Input data with revenue, costs, and customers sections.
Returns:
Complete GTM efficiency analysis results.
"""
revenue = data["revenue"]
costs = data["costs"]
customers = data["customers"]
metrics = {
"magic_number": calculate_magic_number(
net_new_arr=revenue["net_new_arr"],
sm_spend=costs["sales_marketing_spend"],
),
"ltv_cac": calculate_ltv_cac(
arpa_monthly=revenue["arpa_monthly"],
gross_margin_pct=costs["gross_margin_pct"],
annual_churn_rate_pct=customers["annual_churn_rate_pct"],
cac=costs["cac"],
),
"cac_payback": calculate_cac_payback(
cac=costs["cac"],
arpa_monthly=revenue["arpa_monthly"],
gross_margin_pct=costs["gross_margin_pct"],
),
"burn_multiple": calculate_burn_multiple(
net_burn=costs["net_burn"],
net_new_arr=revenue["net_new_arr"],
),
"rule_of_40": calculate_rule_of_40(
revenue_growth_pct=revenue["revenue_growth_pct"],
fcf_margin_pct=costs["fcf_margin_pct"],
),
"ndr": calculate_ndr(
beginning_arr=customers["beginning_arr"],
expansion_arr=customers["expansion_arr"],
contraction_arr=customers["contraction_arr"],
churned_arr=customers["churned_arr"],
),
}
metrics["recommendations"] = generate_recommendations(metrics)
return metrics
def format_currency(value: float) -> str:
"""Format a number as currency."""
if abs(value) >= 1_000_000:
return f"${value / 1_000_000:,.1f}M"
elif abs(value) >= 1_000:
return f"${value / 1_000:,.1f}K"
return f"${value:,.0f}"
def format_text_report(results: dict) -> str:
"""Format analysis results as a human-readable text report."""
lines = []
lines.append("=" * 70)
lines.append("GTM EFFICIENCY REPORT")
lines.append("=" * 70)
# Metric summary table
metrics_order = [
("magic_number", "Magic Number", lambda m: f"{m['value']}"),
("ltv_cac", "LTV:CAC Ratio", lambda m: f"{m['ratio']}:1"),
("cac_payback", "CAC Payback", lambda m: f"{m['months']} months"),
("burn_multiple", "Burn Multiple", lambda m: f"{m['value']}x"),
("rule_of_40", "Rule of 40", lambda m: f"{m['value']}%"),
("ndr", "Net Dollar Retention", lambda m: f"{m['ndr_pct']}%"),
]
lines.append("")
lines.append("METRICS SUMMARY")
lines.append("-" * 70)
lines.append(f" {'Metric':25s} {'Value':>12s} {'Rating':>8s} {'Target':>15s}")
    lines.append(f" {'-' * 25:25s} {'-' * 12:>12s} {'-' * 8:>8s} {'-' * 15:>15s}")
for key, name, fmt_fn in metrics_order:
m = results[key]
lines.append(
f" {name:25s} {fmt_fn(m):>12s} {m['rating']:>8s} {m['target']:>15s}"
)
# Detailed breakdown
lines.append("")
lines.append("DETAILED BREAKDOWN")
lines.append("-" * 70)
# Magic Number
mn = results["magic_number"]
lines.append("")
lines.append(f" MAGIC NUMBER: {mn['value']}")
lines.append(f" Net New ARR: {format_currency(mn['net_new_arr'])}")
lines.append(f" S&M Spend: {format_currency(mn['sm_spend'])}")
lines.append(f" Rating: {mn['rating']} - {mn['label']}")
lines.append(f" Percentile: {mn['percentile']}")
# LTV:CAC
lc = results["ltv_cac"]
lines.append("")
lines.append(f" LTV:CAC RATIO: {lc['ratio']}:1")
lines.append(f" Customer LTV: {format_currency(lc['ltv'])}")
lines.append(f" CAC: {format_currency(lc['cac'])}")
lines.append(f" ARPA (Monthly): {format_currency(lc['arpa_monthly'])}")
lines.append(f" Gross Margin: {lc['gross_margin_pct']}%")
lines.append(f" Churn Rate: {lc['annual_churn_rate_pct']}%")
lines.append(f" Rating: {lc['rating']} - {lc['label']}")
lines.append(f" Percentile: {lc['percentile']}")
# CAC Payback
cp = results["cac_payback"]
lines.append("")
lines.append(f" CAC PAYBACK: {cp['months']} months")
lines.append(f" CAC: {format_currency(cp['cac'])}")
lines.append(f" Monthly Contribution:{format_currency(cp['monthly_contribution'])}")
lines.append(f" Rating: {cp['rating']} - {cp['label']}")
lines.append(f" Percentile: {cp['percentile']}")
# Burn Multiple
bm = results["burn_multiple"]
lines.append("")
lines.append(f" BURN MULTIPLE: {bm['value']}x")
lines.append(f" Net Burn: {format_currency(bm['net_burn'])}")
lines.append(f" Net New ARR: {format_currency(bm['net_new_arr'])}")
lines.append(f" Rating: {bm['rating']} - {bm['label']}")
lines.append(f" Percentile: {bm['percentile']}")
# Rule of 40
r40 = results["rule_of_40"]
lines.append("")
lines.append(f" RULE OF 40: {r40['value']}%")
lines.append(f" Revenue Growth: {r40['revenue_growth_pct']}%")
lines.append(f" FCF Margin: {r40['fcf_margin_pct']}%")
lines.append(f" Rating: {r40['rating']} - {r40['label']}")
lines.append(f" Percentile: {r40['percentile']}")
# NDR
ndr = results["ndr"]
lines.append("")
lines.append(f" NET DOLLAR RETENTION: {ndr['ndr_pct']}%")
lines.append(f" Beginning ARR: {format_currency(ndr['beginning_arr'])}")
lines.append(f" Expansion: +{format_currency(ndr['expansion_arr'])}")
lines.append(f" Contraction: -{format_currency(ndr['contraction_arr'])}")
lines.append(f" Churn: -{format_currency(ndr['churned_arr'])}")
lines.append(f" Ending ARR: {format_currency(ndr['ending_arr'])}")
lines.append(f" Rating: {ndr['rating']} - {ndr['label']}")
lines.append(f" Percentile: {ndr['percentile']}")
# Recommendations
lines.append("")
lines.append("RECOMMENDATIONS")
lines.append("-" * 70)
for i, rec in enumerate(results["recommendations"], 1):
lines.append(f" {i}. {rec}")
lines.append("")
lines.append("=" * 70)
return "\n".join(lines)
def main() -> None:
"""Main entry point for GTM efficiency calculator CLI."""
parser = argparse.ArgumentParser(
description="Calculate GTM efficiency metrics for SaaS revenue teams."
)
parser.add_argument(
"input",
help="Path to JSON file containing GTM data",
)
parser.add_argument(
"--format",
choices=["json", "text"],
default="text",
help="Output format: json or text (default: text)",
)
args = parser.parse_args()
try:
with open(args.input, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {args.input}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {args.input}: {e}", file=sys.stderr)
sys.exit(1)
required_sections = ["revenue", "costs", "customers"]
for section in required_sections:
if section not in data:
print(
f"Error: Missing required section '{section}' in input data",
file=sys.stderr,
)
sys.exit(1)
results = calculate_all_metrics(data)
if args.format == "json":
print(json.dumps(results, indent=2))
else:
print(format_text_report(results))
if __name__ == "__main__":
main()
#!/usr/bin/env python3
"""Pipeline Analyzer - Analyzes sales pipeline health for SaaS revenue teams.
Calculates pipeline coverage ratios, stage conversion rates, sales velocity,
deal aging risks, and concentration risks from pipeline data.
Usage:
python pipeline_analyzer.py --input pipeline.json --format text
python pipeline_analyzer.py --input pipeline.json --format json
"""
import argparse
import json
import sys
from datetime import datetime, date
from typing import Any
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def parse_date(date_str: str) -> date:
"""Parse a date string in YYYY-MM-DD format."""
return datetime.strptime(date_str, "%Y-%m-%d").date()
def get_quarter(d: date) -> str:
"""Return the quarter string for a given date (e.g., '2025-Q1')."""
quarter = (d.month - 1) // 3 + 1
return f"{d.year}-Q{quarter}"
def calculate_coverage_ratio(deals: list[dict], quota: float) -> dict[str, Any]:
"""Calculate pipeline coverage ratio against quota.
Target: 3-4x pipeline coverage for healthy pipeline.
"""
total_pipeline = sum(d["value"] for d in deals if d["stage"] != "Closed Won")
ratio = safe_divide(total_pipeline, quota)
if ratio >= 4.0:
rating = "Strong"
elif ratio >= 3.0:
rating = "Healthy"
elif ratio >= 2.0:
rating = "At Risk"
else:
rating = "Critical"
return {
"total_pipeline_value": total_pipeline,
"quota": quota,
"coverage_ratio": round(ratio, 2),
"rating": rating,
"target": "3.0x - 4.0x",
}
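The coverage calculation sums only open pipeline, excluding "Closed Won". A minimal sketch with a hypothetical deal list:

```python
# Coverage ratio = open pipeline / quota (hypothetical deals)
deals = [
    {"stage": "Proposal", "value": 2_000_000},
    {"stage": "Negotiation", "value": 1_500_000},
    {"stage": "Closed Won", "value": 600_000},  # excluded from open pipeline
]
quota = 1_000_000
open_pipeline = sum(d["value"] for d in deals if d["stage"] != "Closed Won")
print(open_pipeline / quota)  # 3.5 -> "Healthy" (3.0x-4.0x band)
```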
def calculate_stage_conversion_rates(
deals: list[dict], stages: list[str]
) -> list[dict[str, Any]]:
"""Calculate stage-to-stage conversion rates.
Measures the percentage of deals that progress from one stage to the next.
"""
stage_order = {stage: i for i, stage in enumerate(stages)}
stage_counts: dict[str, int] = {stage: 0 for stage in stages}
for deal in deals:
stage = deal["stage"]
if stage in stage_order:
stage_idx = stage_order[stage]
# A deal at stage N has passed through all stages 0..N
for i in range(stage_idx + 1):
stage_counts[stages[i]] += 1
conversions = []
for i in range(len(stages) - 1):
from_stage = stages[i]
to_stage = stages[i + 1]
from_count = stage_counts[from_stage]
to_count = stage_counts[to_stage]
rate = safe_divide(to_count, from_count) * 100
conversions.append({
"from_stage": from_stage,
"to_stage": to_stage,
"from_count": from_count,
"to_count": to_count,
"conversion_rate_pct": round(rate, 1),
})
return conversions
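The counting logic above treats a deal sitting at stage N as having passed through every stage up to and including N. A tiny hypothetical funnel makes the cumulative counts concrete:

```python
# Cumulative stage counts: a deal at stage N is counted at stages 0..N
stages = ["Discovery", "Proposal", "Closed Won"]
deals = [{"stage": "Discovery"}, {"stage": "Proposal"}, {"stage": "Closed Won"}]
order = {s: i for i, s in enumerate(stages)}
counts = {s: 0 for s in stages}
for d in deals:
    for i in range(order[d["stage"]] + 1):
        counts[stages[i]] += 1
print(counts)  # {'Discovery': 3, 'Proposal': 2, 'Closed Won': 1}
# Discovery -> Proposal conversion: 2/3, about 66.7%
```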
def calculate_sales_velocity(deals: list[dict]) -> dict[str, Any]:
"""Calculate sales velocity.
Formula: (# opportunities x avg deal size x win rate) / avg sales cycle length
Result is revenue per day.
"""
if not deals:
return {
"num_opportunities": 0,
"avg_deal_size": 0,
"win_rate_pct": 0,
"avg_cycle_days": 0,
"velocity_per_day": 0,
"velocity_per_month": 0,
}
    won_deals = [d for d in deals if d["stage"] == "Closed Won"]
    # Velocity inputs below consider all deals, open and closed
    all_considered = deals
num_opportunities = len(all_considered)
avg_deal_size = safe_divide(
sum(d["value"] for d in all_considered), num_opportunities
)
win_rate = safe_divide(len(won_deals), num_opportunities)
avg_cycle_days = safe_divide(
sum(d["age_days"] for d in all_considered), num_opportunities
)
velocity_per_day = safe_divide(
num_opportunities * avg_deal_size * win_rate, avg_cycle_days
)
return {
"num_opportunities": num_opportunities,
"avg_deal_size": round(avg_deal_size, 2),
"win_rate_pct": round(win_rate * 100, 1),
"avg_cycle_days": round(avg_cycle_days, 1),
"velocity_per_day": round(velocity_per_day, 2),
"velocity_per_month": round(velocity_per_day * 30, 2),
}
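The velocity formula reduces to expected revenue per day from the current funnel. With hypothetical inputs:

```python
# Sales velocity = (# opps x avg deal size x win rate) / avg cycle days
num_opportunities = 100
avg_deal_size = 25_000
win_rate = 0.25
avg_cycle_days = 50
velocity_per_day = num_opportunities * avg_deal_size * win_rate / avg_cycle_days
print(velocity_per_day)  # 12500.0 -> $12.5K/day, ~$375K per 30-day month
```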
def analyze_deal_aging(
deals: list[dict], average_cycle_days: int, stages: list[str]
) -> dict[str, Any]:
"""Analyze deal aging and flag stale deals.
    Flags open deals whose age exceeds a stage-specific threshold,
    falling back to 2x the average cycle time for unrecognized stages.
"""
aging_threshold = average_cycle_days * 2
num_stages = len(stages)
stage_order = {stage: i for i, stage in enumerate(stages)}
# Stage-specific thresholds: early stages get more time, later stages less
stage_thresholds: dict[str, int] = {}
for i, stage in enumerate(stages):
if stage == "Closed Won":
continue
        # Progressive thresholds: the first stage gets 2x the average cycle,
        # tapering linearly toward 1x for the last stage
progress = safe_divide(i, num_stages - 1)
threshold = int(average_cycle_days * (1.0 + (1.0 - progress)))
stage_thresholds[stage] = threshold
aging_deals = []
healthy_deals = 0
at_risk_deals = 0
for deal in deals:
if deal["stage"] == "Closed Won":
continue
stage = deal["stage"]
age = deal["age_days"]
threshold = stage_thresholds.get(stage, aging_threshold)
if age > threshold:
at_risk_deals += 1
aging_deals.append({
"id": deal["id"],
"name": deal["name"],
"stage": stage,
"age_days": age,
"threshold_days": threshold,
"days_over": age - threshold,
"value": deal["value"],
})
else:
healthy_deals += 1
aging_deals.sort(key=lambda x: x["days_over"], reverse=True)
return {
"global_aging_threshold_days": aging_threshold,
"stage_thresholds": stage_thresholds,
"total_open_deals": healthy_deals + at_risk_deals,
"healthy_deals": healthy_deals,
"at_risk_deals": at_risk_deals,
"aging_deals": aging_deals,
}
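The stage-threshold math above can be previewed in isolation. This sketch reuses the same linear taper with a hypothetical 45-day average cycle: early stages get roughly 2x the cycle before a deal is flagged, later stages closer to 1x.

```python
# Stage-specific aging thresholds for a hypothetical 45-day cycle
stages = ["Discovery", "Proposal", "Negotiation", "Closed Won"]
average_cycle_days = 45
n = len(stages)
thresholds = {}
for i, stage in enumerate(stages):
    if stage == "Closed Won":
        continue
    progress = i / (n - 1)
    thresholds[stage] = int(average_cycle_days * (1.0 + (1.0 - progress)))
print(thresholds)  # Discovery gets ~90 days, Negotiation ~60
```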
def assess_pipeline_risk(
deals: list[dict], quota: float, stages: list[str]
) -> dict[str, Any]:
"""Assess overall pipeline risk.
Checks for:
- Concentration risk (>40% in single deal)
- Stage distribution health
- Coverage gap by quarter
"""
open_deals = [d for d in deals if d["stage"] != "Closed Won"]
total_pipeline = sum(d["value"] for d in open_deals)
# Concentration risk
concentration_risks = []
for deal in open_deals:
pct = safe_divide(deal["value"], total_pipeline) * 100
if pct > 40:
concentration_risks.append({
"id": deal["id"],
"name": deal["name"],
"value": deal["value"],
"pct_of_pipeline": round(pct, 1),
"risk_level": "HIGH",
})
elif pct > 25:
concentration_risks.append({
"id": deal["id"],
"name": deal["name"],
"value": deal["value"],
"pct_of_pipeline": round(pct, 1),
"risk_level": "MEDIUM",
})
has_concentration_risk = any(
r["risk_level"] == "HIGH" for r in concentration_risks
)
# Stage distribution
stage_distribution: dict[str, dict] = {}
for stage in stages:
if stage == "Closed Won":
continue
stage_deals = [d for d in open_deals if d["stage"] == stage]
count = len(stage_deals)
value = sum(d["value"] for d in stage_deals)
stage_distribution[stage] = {
"count": count,
"value": value,
"pct_of_pipeline": round(safe_divide(value, total_pipeline) * 100, 1),
}
# Check for empty stages (unhealthy funnel)
empty_stages = [
stage for stage, data in stage_distribution.items() if data["count"] == 0
]
# Coverage gap by quarter
quarterly_coverage: dict[str, float] = {}
for deal in open_deals:
try:
close_date = parse_date(deal["close_date"])
quarter = get_quarter(close_date)
quarterly_coverage[quarter] = (
quarterly_coverage.get(quarter, 0) + deal["value"]
)
except (ValueError, KeyError):
pass
quarterly_target = quota / 4
coverage_gaps = []
for quarter, value in sorted(quarterly_coverage.items()):
coverage = safe_divide(value, quarterly_target)
if coverage < 3.0:
coverage_gaps.append({
"quarter": quarter,
"pipeline_value": value,
"quarterly_target": quarterly_target,
"coverage_ratio": round(coverage, 2),
"gap": "Below 3x target",
})
# Overall risk rating
risk_factors = 0
if has_concentration_risk:
risk_factors += 2
if len(empty_stages) > 0:
risk_factors += 1
if len(coverage_gaps) > 0:
risk_factors += 1
if safe_divide(total_pipeline, quota) < 3.0:
risk_factors += 2
if risk_factors >= 4:
overall_risk = "HIGH"
elif risk_factors >= 2:
overall_risk = "MEDIUM"
else:
overall_risk = "LOW"
return {
"overall_risk": overall_risk,
"risk_factors_count": risk_factors,
"concentration_risks": concentration_risks,
"has_concentration_risk": has_concentration_risk,
"stage_distribution": stage_distribution,
"empty_stages": empty_stages,
"coverage_gaps": coverage_gaps,
}
def analyze_pipeline(data: dict) -> dict[str, Any]:
"""Run complete pipeline analysis.
Args:
data: Pipeline data with deals, quota, stages, and average_cycle_days.
Returns:
Complete analysis results dictionary.
"""
deals = data["deals"]
quota = data["quota"]
stages = data["stages"]
average_cycle_days = data.get("average_cycle_days", 45)
return {
"coverage": calculate_coverage_ratio(deals, quota),
"stage_conversions": calculate_stage_conversion_rates(deals, stages),
"velocity": calculate_sales_velocity(deals),
"aging": analyze_deal_aging(deals, average_cycle_days, stages),
"risk": assess_pipeline_risk(deals, quota, stages),
}
def format_currency(value: float) -> str:
"""Format a number as currency."""
    if abs(value) >= 1_000_000:
        return f"${value / 1_000_000:,.1f}M"
    elif abs(value) >= 1_000:
        return f"${value / 1_000:,.1f}K"
return f"${value:,.0f}"
def format_text_report(results: dict) -> str:
"""Format analysis results as a human-readable text report."""
lines = []
lines.append("=" * 70)
lines.append("PIPELINE ANALYSIS REPORT")
lines.append("=" * 70)
# Coverage
cov = results["coverage"]
lines.append("")
lines.append("PIPELINE COVERAGE")
lines.append("-" * 40)
lines.append(f" Total Pipeline: {format_currency(cov['total_pipeline_value'])}")
lines.append(f" Quota Target: {format_currency(cov['quota'])}")
lines.append(f" Coverage Ratio: {cov['coverage_ratio']}x (Target: {cov['target']})")
lines.append(f" Rating: {cov['rating']}")
# Stage Conversions
lines.append("")
lines.append("STAGE CONVERSION RATES")
lines.append("-" * 40)
for conv in results["stage_conversions"]:
lines.append(
f" {conv['from_stage']} -> {conv['to_stage']}: "
f"{conv['conversion_rate_pct']}% "
f"({conv['to_count']}/{conv['from_count']})"
)
# Velocity
vel = results["velocity"]
lines.append("")
lines.append("SALES VELOCITY")
lines.append("-" * 40)
lines.append(f" Opportunities: {vel['num_opportunities']}")
lines.append(f" Avg Deal Size: {format_currency(vel['avg_deal_size'])}")
lines.append(f" Win Rate: {vel['win_rate_pct']}%")
lines.append(f" Avg Cycle: {vel['avg_cycle_days']} days")
lines.append(f" Velocity/Day: {format_currency(vel['velocity_per_day'])}")
lines.append(f" Velocity/Month: {format_currency(vel['velocity_per_month'])}")
# Aging
aging = results["aging"]
lines.append("")
lines.append("DEAL AGING ANALYSIS")
lines.append("-" * 40)
lines.append(f" Total Open Deals: {aging['total_open_deals']}")
lines.append(f" Healthy: {aging['healthy_deals']}")
lines.append(f" At Risk: {aging['at_risk_deals']}")
if aging["aging_deals"]:
lines.append("")
lines.append(" AGING DEALS (needs attention):")
for deal in aging["aging_deals"]:
lines.append(
f" - {deal['name']} ({deal['stage']}): "
f"{deal['age_days']}d (threshold: {deal['threshold_days']}d, "
f"+{deal['days_over']}d over) | {format_currency(deal['value'])}"
)
# Risk
risk = results["risk"]
lines.append("")
lines.append("PIPELINE RISK ASSESSMENT")
lines.append("-" * 40)
lines.append(f" Overall Risk: {risk['overall_risk']}")
lines.append(f" Risk Factors: {risk['risk_factors_count']}")
if risk["concentration_risks"]:
lines.append("")
lines.append(" CONCENTRATION RISKS:")
for cr in risk["concentration_risks"]:
lines.append(
f" - {cr['name']}: {format_currency(cr['value'])} "
f"({cr['pct_of_pipeline']}% of pipeline) [{cr['risk_level']}]"
)
if risk["empty_stages"]:
lines.append("")
lines.append(f" EMPTY STAGES: {', '.join(risk['empty_stages'])}")
lines.append("")
lines.append(" STAGE DISTRIBUTION:")
for stage, data in risk["stage_distribution"].items():
bar = "#" * max(1, int(data["pct_of_pipeline"] / 2))
lines.append(
f" {stage:20s} {data['count']:3d} deals "
f"{format_currency(data['value']):>10s} "
f"{data['pct_of_pipeline']:5.1f}% {bar}"
)
if risk["coverage_gaps"]:
lines.append("")
lines.append(" COVERAGE GAPS BY QUARTER:")
for gap in risk["coverage_gaps"]:
lines.append(
f" - {gap['quarter']}: {gap['coverage_ratio']}x coverage "
f"({format_currency(gap['pipeline_value'])} vs "
f"{format_currency(gap['quarterly_target'])} target)"
)
lines.append("")
lines.append("=" * 70)
return "\n".join(lines)
def main() -> None:
"""Main entry point for pipeline analyzer CLI."""
parser = argparse.ArgumentParser(
description="Analyze sales pipeline health for SaaS revenue teams."
)
parser.add_argument(
"--input",
required=True,
help="Path to JSON file containing pipeline data",
)
parser.add_argument(
"--format",
choices=["json", "text"],
default="text",
help="Output format: json or text (default: text)",
)
args = parser.parse_args()
try:
with open(args.input, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {args.input}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {args.input}: {e}", file=sys.stderr)
sys.exit(1)
# Validate required fields
required_fields = ["deals", "quota", "stages"]
for field in required_fields:
if field not in data:
print(f"Error: Missing required field '{field}' in input data", file=sys.stderr)
sys.exit(1)
results = analyze_pipeline(data)
if args.format == "json":
print(json.dumps(results, indent=2))
else:
print(format_text_report(results))
if __name__ == "__main__":
main()
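For reference, an `--input` file might look like the following. This is a minimal sketch: the top-level keys come straight from the script's required-field validation (`deals`, `quota`, `stages`, with `average_cycle_days` optional and defaulting to 45), while the per-deal fields (`name`, `stage`, `value`, `age_days`) are assumptions inferred from the text report, not a documented schema.

```python
import json
import tempfile

# Hypothetical minimal pipeline input; per-deal fields are illustrative.
sample = {
    "quota": 500_000,
    "stages": ["Discovery", "Proposal", "Negotiation", "Closed Won"],
    "average_cycle_days": 60,  # optional; the analyzer defaults to 45
    "deals": [
        {"name": "Acme Corp", "stage": "Proposal", "value": 75_000, "age_days": 30},
        {"name": "Globex", "stage": "Discovery", "value": 40_000, "age_days": 12},
    ],
}

# Write the file that would be passed via --input
with tempfile.NamedTemporaryFile(
    "w", suffix=".json", delete=False
) as f:
    json.dump(sample, f, indent=2)
    path = f.name

# Reload and apply the same required-field check main() performs
with open(path) as f:
    data = json.load(f)
for field in ["deals", "quota", "stages"]:
    assert field in data, f"missing required field: {field}"
print(f"wrote valid pipeline input to {path}")
```

A file shaped like this would pass the analyzer's validation step; any additional keys your stage-metric functions expect would need to be added alongside these.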
Install this Skill

Skills give your AI agent a consistent, structured approach to this task, producing better output than a one-off prompt.

npx skills add alirezarezvani/claude-skills --skill business-growth/revenue-operations

Community skill by @alirezarezvani. If you prefer not to use a terminal, download the ZIP and place it manually.
Details

- Category: Finance
- License: MIT
- Author: @alirezarezvani
- Source: GitHub
- Source file: business-growth/revenue-operations/SKILL.md