Customer Success Manager
Customer retention strategy, health score tracking, expansion playbooks, and churn prevention — a CSM toolkit for growing SaaS companies.
What this skill does
Protect and grow your SaaS revenue by tracking customer health and spotting churn risks before they become problems. Get clear retention insights and prioritized expansion recommendations to identify exactly which accounts need attention or are ready for upsells. Use this when analyzing customer accounts, planning renewal strategies, or looking for new revenue opportunities within your existing base.
name: customer-success-manager
description: Monitors customer health, predicts churn risk, and identifies expansion opportunities using weighted scoring models for SaaS customer success. Use when analyzing customer accounts, reviewing retention metrics, scoring at-risk customers, or when the user mentions churn, customer health scores, upsell opportunities, expansion revenue, retention analysis, or customer analytics. Runs three Python CLI tools to produce deterministic health scores, churn risk tiers, and prioritized expansion recommendations across Enterprise, Mid-Market, and SMB segments.
license: MIT
metadata:
  version: 1.0.0
  author: Alireza Rezvani
  category: business-growth
  domain: customer-success
  updated: 2026-02-06
  python-tools: health_score_calculator.py, churn_risk_analyzer.py, expansion_opportunity_scorer.py
  tech-stack: customer-success, saas-metrics, health-scoring
Customer Success Manager
Production-grade customer success analytics with multi-dimensional health scoring, churn risk prediction, and expansion opportunity identification. Three Python CLI tools provide deterministic, repeatable analysis using standard library only — no external dependencies, no API calls, no ML models.
Table of Contents
- Input Requirements
- Output Formats
- How to Use
- Scripts
- Reference Guides
- Templates
- Best Practices
- Limitations
Input Requirements
All scripts accept a JSON file as a positional input argument. See assets/sample_customer_data.json for complete schema examples and sample data.
Health Score Calculator
Required fields per customer object: customer_id, name, segment, arr, and nested objects usage (login_frequency, feature_adoption, dau_mau_ratio), engagement (support_ticket_volume, meeting_attendance, nps_score, csat_score), support (open_tickets, escalation_rate, avg_resolution_hours), relationship (executive_sponsor_engagement, multi_threading_depth, renewal_sentiment), and previous_period scores for trend analysis.
Churn Risk Analyzer
Required fields per customer object: customer_id, name, segment, arr, contract_end_date, and nested objects usage_decline, engagement_drop, support_issues, relationship_signals, and commercial_factors.
Expansion Opportunity Scorer
Required fields per customer object: customer_id, name, segment, arr, and nested objects contract (licensed_seats, active_seats, plan_tier, available_tiers), product_usage (per-module adoption flags and usage percentages), and departments (current and potential).
Output Formats
All scripts support two output formats via the --format flag:
- text (default): Human-readable formatted output for terminal viewing
- json: Machine-readable JSON output for integrations and pipelines
How to Use
Quick Start
# Health scoring
python scripts/health_score_calculator.py assets/sample_customer_data.json
python scripts/health_score_calculator.py assets/sample_customer_data.json --format json
# Churn risk analysis
python scripts/churn_risk_analyzer.py assets/sample_customer_data.json
python scripts/churn_risk_analyzer.py assets/sample_customer_data.json --format json
# Expansion opportunity scoring
python scripts/expansion_opportunity_scorer.py assets/sample_customer_data.json
python scripts/expansion_opportunity_scorer.py assets/sample_customer_data.json --format json
Workflow Integration
# 1. Score customer health across portfolio
python scripts/health_score_calculator.py customer_portfolio.json --format json > health_results.json
# Verify: confirm health_results.json contains the expected number of customer records before continuing
# 2. Identify at-risk accounts
python scripts/churn_risk_analyzer.py customer_portfolio.json --format json > risk_results.json
# Verify: confirm risk_results.json is non-empty and risk tiers are present for each customer
# 3. Find expansion opportunities in healthy accounts
python scripts/expansion_opportunity_scorer.py customer_portfolio.json --format json > expansion_results.json
# Verify: confirm expansion_results.json lists opportunities ranked by priority
# 4. Prepare QBR using templates
# Reference: assets/qbr_template.md
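The hand-off between steps 1-3 can be sketched in Python: join the health and risk results to shortlist accounts worth running through the expansion scorer. The health schema below matches the sample report shown later in this document; the risk file's per-customer `tier` field name is an assumption — check your actual analyzer output.

```python
import json

# Shortlist for expansion review: healthy accounts that are not high churn
# risk. "classification" matches the sample health report in this document;
# "tier" is an assumed field name for the churn analyzer's JSON output.
def healthy_low_risk(health_path, risk_path):
    with open(health_path) as f:
        health = {c["customer_id"]: c["classification"]
                  for c in json.load(f)["customers"]}
    with open(risk_path) as f:
        risk = {c["customer_id"]: c.get("tier", "unknown")
                for c in json.load(f)["customers"]}
    return [cid for cid, cls in health.items()
            if cls == "green" and risk.get(cid) in ("low", "medium")]
```

Run it on health_results.json and risk_results.json from steps 1 and 2; anything it returns is a candidate for the expansion scorer and the QBR preparation in step 4.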
Error handling: If a script exits with an error, check that:
- The input JSON matches the required schema for that script (see Input Requirements above)
- All required fields are present and correctly typed
- Python 3.7+ is being used (python --version)
- Output files from prior steps are non-empty before piping into subsequent steps
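A quick structural pre-flight can catch most schema problems before the scripts run. The helper below is illustrative, not part of the toolkit, and checks only the top-level fields from Input Requirements (not nested keys):

```python
# Illustrative pre-flight check for the health calculator's input, covering
# only the top-level required fields (nested objects are not validated here).
REQUIRED = ["customer_id", "name", "segment", "arr",
            "usage", "engagement", "support", "relationship"]

def missing_fields(portfolio):
    """Return (customer_index, field_name) pairs for absent required fields."""
    problems = []
    for i, cust in enumerate(portfolio.get("customers", [])):
        for field in REQUIRED:
            if field not in cust:
                problems.append((i, field))
    return problems
```

Load the input with json.load, call missing_fields, and fix anything it reports before invoking the script.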
Scripts
1. health_score_calculator.py
Purpose: Multi-dimensional customer health scoring with trend analysis and segment-aware benchmarking.
Dimensions and Weights:
| Dimension | Weight | Metrics |
|---|---|---|
| Usage | 30% | Login frequency, feature adoption, DAU/MAU ratio |
| Engagement | 25% | Support ticket volume, meeting attendance, NPS/CSAT |
| Support | 20% | Open tickets, escalation rate, avg resolution time |
| Relationship | 25% | Executive sponsor engagement, multi-threading depth, renewal sentiment |
Classification:
- Green (75-100): Healthy — customer achieving value
- Yellow (50-74): Needs attention — monitor closely
- Red (0-49): At risk — immediate intervention required
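The roll-up implied by the weight table and bands above can be sketched as follows. The function names are illustrative, not the calculator's internals, but on the CUST-001 dimension scores from the sample report later in this document the sketch reproduces the same 86.2 overall:

```python
# Weighted roll-up matching the dimension table: usage 30%, engagement 25%,
# support 20%, relationship 25%.
WEIGHTS = {"usage": 0.30, "engagement": 0.25, "support": 0.20, "relationship": 0.25}

def overall_score(dimensions):
    """Weighted average of per-dimension scores (each 0-100)."""
    return round(sum(WEIGHTS[d] * dimensions[d] for d in WEIGHTS), 1)

def classify(score):
    """Map a 0-100 score onto the Green/Yellow/Red bands above."""
    if score >= 75:
        return "green"
    if score >= 50:
        return "yellow"
    return "red"

# CUST-001 dimension scores from the sample report.
cust_001 = {"usage": 91.6, "engagement": 82.0, "support": 78.5, "relationship": 90.1}
print(overall_score(cust_001), classify(overall_score(cust_001)))  # 86.2 green
```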
Usage:
python scripts/health_score_calculator.py customer_data.json
python scripts/health_score_calculator.py customer_data.json --format json
2. churn_risk_analyzer.py
Purpose: Identify at-risk accounts with behavioral signal detection and tier-based intervention recommendations.
Risk Signal Weights:
| Signal Category | Weight | Indicators |
|---|---|---|
| Usage Decline | 30% | Login trend, feature adoption change, DAU/MAU change |
| Engagement Drop | 25% | Meeting cancellations, response time, NPS change |
| Support Issues | 20% | Open escalations, unresolved critical, satisfaction trend |
| Relationship Signals | 15% | Champion left, sponsor change, competitor mentions |
| Commercial Factors | 10% | Contract type, pricing complaints, budget cuts |
Risk Tiers:
- Critical (80-100): Immediate executive escalation
- High (60-79): Urgent CSM intervention
- Medium (40-59): Proactive outreach
- Low (0-39): Standard monitoring
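A sketch of the composite score and tier mapping these tables describe, using the same 30/25/20/15/10 weights (names are illustrative, not the analyzer's internals):

```python
# Signal-category weights from the table above; they sum to 1.0.
RISK_WEIGHTS = {"usage_decline": 0.30, "engagement_drop": 0.25,
                "support_issues": 0.20, "relationship_signals": 0.15,
                "commercial_factors": 0.10}

def composite_risk(signals):
    """Weighted 0-100 risk score from per-category signal scores."""
    return sum(RISK_WEIGHTS[k] * signals[k] for k in RISK_WEIGHTS)

def risk_tier(score):
    """Map a composite risk score onto the intervention tiers above."""
    if score >= 80:
        return "critical"
    if score >= 60:
        return "high"
    if score >= 40:
        return "medium"
    return "low"
```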
Usage:
python scripts/churn_risk_analyzer.py customer_data.json
python scripts/churn_risk_analyzer.py customer_data.json --format json
3. expansion_opportunity_scorer.py
Purpose: Identify upsell, cross-sell, and expansion opportunities with revenue estimation and priority ranking.
Expansion Types:
- Upsell: Upgrade to higher tier or more of existing product
- Cross-sell: Add new product modules
- Expansion: Additional seats or departments
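As one concrete sketch of the seat-expansion case: the scorer's input exposes licensed_seats and active_seats, so a near-fully-utilized account can be flagged as upsell-ready. The per-seat price and the 90% trigger below are assumptions for illustration, not the tool's actual thresholds:

```python
# Hypothetical seat-expansion signal: near-full utilization of licensed seats.
# price_per_seat (monthly) and the 0.9 threshold are illustrative assumptions.
def seat_expansion_estimate(contract, price_per_seat, growth_seats=10):
    """Return an estimated annual expansion, or None if utilization is low."""
    utilization = contract["active_seats"] / contract["licensed_seats"]
    if utilization >= 0.9:
        return {"type": "expansion",
                "est_annual_revenue": growth_seats * price_per_seat * 12}
    return None

# CUST-001 in the sample data has 95 of 100 seats active, so it is flagged.
print(seat_expansion_estimate({"licensed_seats": 100, "active_seats": 95},
                              price_per_seat=100))
```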
Usage:
python scripts/expansion_opportunity_scorer.py customer_data.json
python scripts/expansion_opportunity_scorer.py customer_data.json --format json
Reference Guides
| Reference | Description |
|---|---|
| references/health-scoring-framework.md | Complete health scoring methodology, dimension definitions, weighting rationale, threshold calibration |
| references/cs-playbooks.md | Intervention playbooks for each risk tier, onboarding, renewal, expansion, and escalation procedures |
| references/cs-metrics-benchmarks.md | Industry benchmarks for NRR, GRR, churn rates, health scores, expansion rates by segment and industry |
Templates
| Template | Purpose |
|---|---|
| assets/qbr_template.md | Quarterly Business Review presentation structure |
| assets/success_plan_template.md | Customer success plan with goals, milestones, and metrics |
| assets/onboarding_checklist_template.md | 90-day onboarding checklist with phase gates |
| assets/executive_business_review_template.md | Executive stakeholder review for strategic accounts |
Best Practices
- Combine signals: Use all three scripts together for a complete customer picture
- Act on trends, not snapshots: A declining Green is more urgent than a stable Yellow
- Calibrate thresholds: Adjust segment benchmarks based on your product and industry per references/health-scoring-framework.md
- Prepare with data: Run scripts before every QBR and executive meeting; reference references/cs-playbooks.md for intervention guidance
Limitations
- No real-time data: Scripts analyze point-in-time snapshots from JSON input files
- No CRM integration: Data must be exported manually from your CRM/CS platform
- Deterministic only: No predictive ML — scoring is algorithmic based on weighted signals
- Threshold tuning: Default thresholds are industry-standard but may need calibration for your business
- Revenue estimates: Expansion revenue estimates are approximations based on usage patterns
Last Updated: February 2026 Tools: 3 Python CLI tools Dependencies: Python 3.7+ standard library only
Executive Business Review
Customer: [Customer Name] Date: [Review Date] Prepared for: [Executive Name, Title] Prepared by: [CSM Name] | [VP Customer Success Name] Classification: [Strategic / Enterprise / Key Account]
1. Partnership Summary
| Metric | Value |
|---|---|
| Partnership Duration | [X months/years] |
| Current ARR | $[Amount] |
| Lifetime Value to Date | $[Amount] |
| Current Plan | [Tier] |
| Licensed Seats | [Number] |
| Active Seats | [Number] |
| Health Score | [Score]/100 ([Green/Yellow/Red]) |
| NPS Score | [Score] |
| Renewal Date | [Date] ([X] days remaining) |
2. Strategic Alignment
Customer's Business Priorities (This Year)
- [Priority 1] -- [How our solution supports this]
- [Priority 2] -- [How our solution supports this]
- [Priority 3] -- [How our solution supports this]
Alignment Assessment
| Business Priority | Our Contribution | Alignment Score |
|---|---|---|
| [Priority 1] | [Specific contribution] | [Strong / Moderate / Weak] |
| [Priority 2] | [Specific contribution] | [Strong / Moderate / Weak] |
| [Priority 3] | [Specific contribution] | [Strong / Moderate / Weak] |
3. Value Delivered
Quantified Business Impact
| Outcome | Metric | Before | After | Business Value |
|---|---|---|---|---|
| [e.g., Operational efficiency] | [Hours saved/week] | [Baseline] | [Current] | $[Estimated value] |
| [e.g., Revenue acceleration] | [Deal velocity] | [Baseline] | [Current] | $[Estimated value] |
| [e.g., Risk reduction] | [Error rate] | [Baseline] | [Current] | $[Estimated value] |
Total Estimated Business Value: $[Amount] ROI: [X]x return on investment
Key Achievements This Period
- [Achievement 1 with measurable outcome]
- [Achievement 2 with measurable outcome]
- [Achievement 3 with measurable outcome]
4. Adoption and Engagement Scorecard
Platform Utilisation
| Module | Adoption Status | Usage Depth | Benchmark | Assessment |
|---|---|---|---|---|
| [Module 1] | Fully Adopted | [High/Med/Low] | [Benchmark] | [Above/At/Below] |
| [Module 2] | Partially Adopted | [High/Med/Low] | [Benchmark] | [Above/At/Below] |
| [Module 3] | Not Adopted | -- | -- | Opportunity |
Engagement Health
| Indicator | Current | Previous Period | Trend |
|---|---|---|---|
| Executive Engagement | [Score] | [Score] | [Up/Down/Stable] |
| Stakeholder Breadth | [# contacts] | [# contacts] | [Up/Down/Stable] |
| Meeting Participation | [%] | [%] | [Up/Down/Stable] |
| Feature Request Activity | [Count] | [Count] | [Up/Down/Stable] |
5. Account Health Overview
Health Score Trend (Last 4 Quarters)
| Quarter | Overall | Usage | Engagement | Support | Relationship |
|---|---|---|---|---|---|
| [Q-3] | [Score] | [Score] | [Score] | [Score] | [Score] |
| [Q-2] | [Score] | [Score] | [Score] | [Score] | [Score] |
| [Q-1] | [Score] | [Score] | [Score] | [Score] | [Score] |
| Current | [Score] | [Score] | [Score] | [Score] | [Score] |
Risk Assessment
| Risk Factor | Level | Details | Mitigation |
|---|---|---|---|
| [Risk 1] | [High/Med/Low] | [Description] | [Action] |
| [Risk 2] | [High/Med/Low] | [Description] | [Action] |
6. Support and Service Quality
| Metric | This Period | SLA Target | Status |
|---|---|---|---|
| Total Tickets | [Number] | -- | |
| Avg First Response | [Hours] | [Hours] | [Met / Not Met] |
| Avg Resolution Time | [Hours] | [Hours] | [Met / Not Met] |
| Escalations | [Number] | 0 | |
| CSAT Score | [Score] | [Target] | [Above / Below] |
| Critical Issues | [Number] | 0 | |
Notable Support Interactions
- [Summary of any significant support events and resolution]
7. Product Roadmap Alignment
Features Delivered (Relevant to This Customer)
| Feature | Release Date | Customer Impact |
|---|---|---|
| [Feature 1] | [Date] | [How it helps them] |
| [Feature 2] | [Date] | [How it helps them] |
Upcoming Features (Customer-Relevant)
| Feature | Expected Release | Expected Impact |
|---|---|---|
| [Feature 1] | [Quarter] | [Business value] |
| [Feature 2] | [Quarter] | [Business value] |
Customer Feature Requests
| Request | Priority | Status | Business Case |
|---|---|---|---|
| [Request 1] | [P1/P2/P3] | [Status] | [Why it matters] |
| [Request 2] | [P1/P2/P3] | [Status] | [Why it matters] |
8. Growth and Expansion Opportunity
Current Whitespace Analysis
| Opportunity | Type | Est. Revenue | Effort | Priority |
|---|---|---|---|---|
| [Opportunity 1] | [Upsell/Cross-sell/Expansion] | $[Amount] | [Low/Med/High] | [1-5] |
| [Opportunity 2] | [Upsell/Cross-sell/Expansion] | $[Amount] | [Low/Med/High] | [1-5] |
| [Opportunity 3] | [Upsell/Cross-sell/Expansion] | $[Amount] | [Low/Med/High] | [1-5] |
Total Expansion Opportunity: $[Amount]
Recommended Next Steps for Growth
- [Specific expansion recommendation with business justification]
- [Specific expansion recommendation with business justification]
9. Renewal Outlook
| Factor | Assessment |
|---|---|
| Overall Renewal Confidence | [High / Medium / Low] |
| Budget Availability | [Confirmed / Expected / Uncertain] |
| Sponsor Support | [Strong / Moderate / Weak] |
| Competitive Threat | [None / Low / Medium / High] |
| Value Perception | [Strong / Moderate / Weak] |
| Contract Satisfaction | [Satisfied / Neutral / Concerned] |
Renewal Strategy
[2-3 sentences on the approach for securing renewal, including any specific actions needed]
10. Executive-Level Action Items
| Action | Owner | Due Date | Priority | Impact |
|---|---|---|---|---|
| [Action 1] | [Name, Title] | [Date] | [Critical/High/Med] | [Expected outcome] |
| [Action 2] | [Name, Title] | [Date] | [Critical/High/Med] | [Expected outcome] |
| [Action 3] | [Name, Title] | [Date] | [Critical/High/Med] | [Expected outcome] |
Appendix
Stakeholder Map
| Name | Title | Influence | Sentiment | Last Contact |
|---|---|---|---|---|
| [Name] | [Title] | [Decision Maker / Influencer / User] | [Positive / Neutral / Negative] | [Date] |
| [Name] | [Title] | [Decision Maker / Influencer / User] | [Positive / Neutral / Negative] | [Date] |
Competitive Landscape (If Applicable)
- Known competitors in evaluation: [List]
- Our differentiators: [Key strengths vs. competition]
- Risk mitigation: [Actions to defend position]
Confidential -- For Internal and Customer Executive Use Only Next Executive Review: [Date]
{
"report": "customer_health_scores",
"summary": {
"total_customers": 4,
"average_score": 78.8,
"green_count": 3,
"yellow_count": 1,
"red_count": 0
},
"customers": [
{
"customer_id": "CUST-001",
"name": "Acme Corp",
"segment": "enterprise",
"arr": 120000,
"overall_score": 86.2,
"classification": "green",
"dimensions": {
"usage": {
"score": 91.6,
"weight": "30%",
"classification": "green"
},
"engagement": {
"score": 82.0,
"weight": "25%",
"classification": "green"
},
"support": {
"score": 78.5,
"weight": "20%",
"classification": "green"
},
"relationship": {
"score": 90.1,
"weight": "25%",
"classification": "green"
}
},
"trends": {
"usage": "improving",
"engagement": "improving",
"support": "stable",
"relationship": "improving",
"overall": "improving"
},
"recommendations": []
},
{
"customer_id": "CUST-002",
"name": "TechStart Inc",
"segment": "smb",
"arr": 18000,
"overall_score": 53.7,
"classification": "yellow",
"dimensions": {
"usage": {
"score": 52.5,
"weight": "30%",
"classification": "yellow"
},
"engagement": {
"score": 61.6,
"weight": "25%",
"classification": "yellow"
},
"support": {
"score": 63.2,
"weight": "20%",
"classification": "yellow"
},
"relationship": {
"score": 39.5,
"weight": "25%",
"classification": "red"
}
},
"trends": {
"usage": "stable",
"engagement": "improving",
"support": "stable",
"relationship": "declining",
"overall": "stable"
},
"recommendations": [
"Login frequency below target -- schedule product engagement session",
"NPS below threshold -- conduct a feedback deep-dive with customer",
"CSAT is critically low -- escalate to support leadership",
"Single-threaded relationship -- expand contacts across departments",
"Renewal sentiment is negative -- initiate save plan immediately"
]
},
{
"customer_id": "CUST-003",
"name": "GlobalTrade Solutions",
"segment": "mid-market",
"arr": 55000,
"overall_score": 79.7,
"classification": "green",
"dimensions": {
"usage": {
"score": 85.6,
"weight": "30%",
"classification": "green"
},
"engagement": {
"score": 79.6,
"weight": "25%",
"classification": "green"
},
"support": {
"score": 72.0,
"weight": "20%",
"classification": "green"
},
"relationship": {
"score": 79.0,
"weight": "25%",
"classification": "green"
}
},
"trends": {
"usage": "improving",
"engagement": "improving",
"support": "improving",
"relationship": "improving",
"overall": "improving"
},
"recommendations": []
},
{
"customer_id": "CUST-004",
"name": "HealthFirst Medical",
"segment": "enterprise",
"arr": 200000,
"overall_score": 95.7,
"classification": "green",
"dimensions": {
"usage": {
"score": 100.0,
"weight": "30%",
"classification": "green"
},
"engagement": {
"score": 92.0,
"weight": "25%",
"classification": "green"
},
"support": {
"score": 88.7,
"weight": "20%",
"classification": "green"
},
"relationship": {
"score": 100.0,
"weight": "25%",
"classification": "green"
}
},
"trends": {
"usage": "improving",
"engagement": "improving",
"support": "stable",
"relationship": "improving",
"overall": "improving"
},
"recommendations": []
}
]
}
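A downstream consumer of the report above (for example, a script that drafts outreach tasks) can filter on classification; the field names below match the JSON sample exactly:

```python
import json

# Collect non-green accounts and their recommended actions from a health
# report shaped like the sample above.
def attention_list(report):
    return [(c["name"], c["recommendations"])
            for c in report["customers"]
            if c["classification"] != "green"]

sample = json.loads("""{"customers": [
  {"name": "TechStart Inc", "classification": "yellow",
   "recommendations": ["Renewal sentiment is negative -- initiate save plan immediately"]},
  {"name": "Acme Corp", "classification": "green", "recommendations": []}
]}""")
for name, recs in attention_list(sample):
    print(f"{name}: {len(recs)} recommended action(s)")
```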
Customer Onboarding Checklist (90-Day)
Customer: [Customer Name] Segment: [Enterprise / Mid-Market / SMB] CSM: [CSM Name] Kickoff Date: [Date] Target Go-Live: [Date] Target First Value Date: [Date -- must be within 30 days]
Phase 1: Welcome and Setup (Days 1-14)
Pre-Kickoff Preparation (Day 0)
- Review signed contract and SOW for scope and commitments
- Research customer's industry, business model, and competitive landscape
- Review handoff notes from sales team (pain points, decision drivers, stakeholders)
- Prepare welcome package (login credentials, documentation links, support contacts)
- Create customer workspace in CS platform
- Schedule kickoff meeting with all required attendees
- Prepare kickoff deck with agenda and success plan draft
Kickoff Meeting (Day 1-2)
- Conduct kickoff meeting with customer stakeholders
- Confirm business objectives and success criteria
- Identify key stakeholders and their roles (sponsor, champion, technical lead, users)
- Align on communication cadence and preferred channels
- Review onboarding timeline and milestones
- Set expectations for time commitment from customer team
- Share and agree on success plan (mutual accountability)
- Schedule recurring check-in meetings
Kickoff Meeting Notes:
[Document key takeaways, concerns raised, decisions made]
Technical Setup (Days 3-7)
- Provision customer environment (tenant, workspace, permissions)
- Configure SSO/authentication if applicable
- Set up integrations with customer's existing tools
- Import or migrate existing data (if applicable)
- Validate data integrity post-migration
- Configure role-based access and permissions
- Set up monitoring and alerting
Technical Setup Owner: [SE / Implementation team name] Technical Setup Notes:
[Document configuration decisions, customizations, issues]
Admin Training (Days 7-10)
- Deliver admin training session (system configuration, user management)
- Provide admin documentation and quick reference guide
- Ensure admins can independently manage basic operations
- Set up admin support escalation path
Initial User Training (Days 10-14)
- Deliver core user training (session 1: basic navigation and key workflows)
- Provide user quickstart guide and video resources
- Set up user support channel (Slack, email, in-app chat)
- Confirm all target users have active accounts
- Track initial login completion rate
Training Completion Rate: [___%] of target users
Phase 2: Activation (Days 15-30)
User Activation (Days 15-20)
- Monitor daily active user metrics
- Follow up with users who have not logged in
- Conduct follow-up training for users needing additional help
- Address any usability issues or confusion reported
- Validate that core workflows are functioning as expected
- Collect early feedback from champion and key users
Activation Rate: [___%] of licensed users active
First Value Milestone (Days 20-30)
- Define and track first value milestone (specific to customer objectives)
- Verify customer has completed their first meaningful workflow
- Document value delivered (even if small -- establish the pattern)
- Share "first win" with executive sponsor
- Celebrate the milestone with the customer team
First Value Milestone: [Describe the specific milestone] Date Achieved: [Date]
30-Day Review (Day 28-30)
- Conduct 30-day review meeting with customer
- Review activation metrics (logins, usage, adoption)
- Assess progress against success plan milestones
- Identify any blockers or concerns
- Adjust onboarding plan if needed
- Confirm transition from setup phase to adoption phase
- Set goals for days 31-60
30-Day Health Score: [Score]/100 -- [Green/Yellow/Red]
Phase 3: Adoption (Days 31-60)
Feature Expansion (Days 31-45)
- Introduce additional features beyond core workflows
- Deliver advanced training session (session 2: power features)
- Enable at least one integration with customer's existing tools
- Identify and address feature adoption gaps
- Share best practices from similar customers
Usage Benchmarking (Days 45-55)
- Compare customer's usage against segment benchmarks
- Identify underperforming areas and create enablement plan
- Share usage report with customer champion
- Discuss usage targets for the next 30 days
Current vs. Benchmark:
| Metric | Current | Benchmark | Gap |
|---|---|---|---|
| Feature Adoption | [%] | [%] | [+/-] |
| Daily Active Users | [#] | [#] | [+/-] |
| Key Workflow Completion | [%] | [%] | [+/-] |
60-Day Check-in (Day 55-60)
- Conduct 60-day check-in meeting
- Review adoption metrics and progress
- Discuss any roadblocks to deeper adoption
- Begin identifying advanced use cases
- Set goals for days 61-90
Phase 4: Optimisation (Days 61-90)
Advanced Use Cases (Days 61-75)
- Conduct use case discovery workshop with customer
- Identify 2-3 advanced use cases beyond initial scope
- Build implementation plan for advanced use cases
- Begin pilot of advanced use cases with power users
ROI Measurement (Days 75-85)
- Collect data for ROI measurement against baseline
- Build ROI summary document
- Share ROI results with executive sponsor
- Document customer testimonial or case study opportunity (if willing)
ROI Summary:
| Metric | Baseline | Current | Improvement |
|---|---|---|---|
| [Metric 1] | [Value] | [Value] | [% change] |
| [Metric 2] | [Value] | [Value] | [% change] |
90-Day Executive Review (Days 85-90)
- Prepare 90-day executive review presentation
- Include: value delivered, adoption metrics, ROI, next steps
- Conduct review meeting with executive sponsor
- Transition from onboarding to ongoing success management
- Establish ongoing success plan with quarterly milestones
- Confirm ongoing meeting cadence
- Introduce expansion opportunities if appropriate
90-Day Health Score: [Score]/100 -- [Green/Yellow/Red]
Onboarding Completion Gate
The following criteria must be met to consider onboarding complete:
- User activation rate above 80%
- First value milestone achieved within 30 days
- Core workflows actively used by target users
- Executive sponsor confirms satisfaction
- Health score is Yellow (50+) or better
- Success plan established with ongoing milestones
- Recurring meeting cadence confirmed
- Support escalation path understood by customer
Onboarding Status: [Complete / In Progress / Blocked] Completion Date: [Date] Handoff to Steady-State CSM: [Date if different CSM]
Notes
Risks and Blockers
| Risk/Blocker | Impact | Mitigation | Status |
|---|---|---|---|
| [Item] | [High/Med/Low] | [Action] | [Open/Resolved] |
Key Decisions
| Date | Decision | Made By | Impact |
|---|---|---|---|
| [Date] | [Decision] | [Name] | [Description] |
Template Version: 1.0 Last Updated: February 2026
Quarterly Business Review (QBR)
Customer: [Customer Name] Date: [QBR Date] Prepared by: [CSM Name] Attendees: [List attendees and titles]
1. Executive Summary
Overall Relationship Status: [Green / Yellow / Red] Health Score: [Score]/100 Key Theme: [One sentence summarizing the quarter]
Quarter Highlights
- [Highlight 1: major achievement or milestone]
- [Highlight 2: value delivered]
- [Highlight 3: initiative completed]
Areas of Focus
- [Focus area 1]
- [Focus area 2]
2. Value Delivered This Quarter
Business Outcomes Achieved
| Objective | Target | Actual | Status |
|---|---|---|---|
| [Objective 1] | [Target metric] | [Actual metric] | [On Track / At Risk / Achieved] |
| [Objective 2] | [Target metric] | [Actual metric] | [On Track / At Risk / Achieved] |
| [Objective 3] | [Target metric] | [Actual metric] | [On Track / At Risk / Achieved] |
ROI Summary
| Metric | Before | After | Improvement |
|---|---|---|---|
| [Metric 1, e.g., Time savings] | [Baseline] | [Current] | [% change] |
| [Metric 2, e.g., Cost reduction] | [Baseline] | [Current] | [% change] |
| [Metric 3, e.g., Revenue impact] | [Baseline] | [Current] | [% change] |
Estimated Total Value Delivered: $[Amount]
3. Product Usage and Adoption
Usage Metrics
| Metric | Last Quarter | This Quarter | Trend |
|---|---|---|---|
| Monthly Active Users | [Number] | [Number] | [Up/Down/Stable] |
| Feature Adoption Rate | [%] | [%] | [Up/Down/Stable] |
| DAU/MAU Ratio | [Ratio] | [Ratio] | [Up/Down/Stable] |
| Seat Utilization | [%] | [%] | [Up/Down/Stable] |
Feature Adoption Breakdown
| Feature/Module | Status | Usage Level | Notes |
|---|---|---|---|
| [Feature 1] | Active | [High/Med/Low] | |
| [Feature 2] | Active | [High/Med/Low] | |
| [Feature 3] | Not Adopted | -- | [Reason / Opportunity] |
Adoption Recommendations
- [Recommendation for increasing adoption of underused features]
- [Recommendation for enabling new use cases]
4. Support Summary
| Metric | This Quarter | Previous Quarter | Benchmark |
|---|---|---|---|
| Total Tickets | [Number] | [Number] | [Segment avg] |
| Avg Resolution Time | [Hours] | [Hours] | [SLA target] |
| Escalations | [Number] | [Number] | [Target: 0] |
| CSAT Score | [Score] | [Score] | [Target] |
Open Issues
| Issue | Priority | Status | ETA |
|---|---|---|---|
| [Issue 1] | [P1/P2/P3] | [In Progress / Pending] | [Date] |
5. Success Plan Progress
Current Success Plan Goals
| Goal | Timeline | Progress | Status |
|---|---|---|---|
| [Goal 1] | [Date] | [%] | [On Track / At Risk / Complete] |
| [Goal 2] | [Date] | [%] | [On Track / At Risk / Complete] |
| [Goal 3] | [Date] | [%] | [On Track / At Risk / Complete] |
Next Quarter Goals (Proposed)
- [Goal 1 with specific measurable outcome]
- [Goal 2 with specific measurable outcome]
- [Goal 3 with specific measurable outcome]
6. Product Roadmap Highlights
Recently Released (Relevant to [Customer Name])
- [Feature/enhancement 1] -- [How it benefits them]
- [Feature/enhancement 2] -- [How it benefits them]
Coming Next Quarter
- [Upcoming feature 1] -- [Expected benefit]
- [Upcoming feature 2] -- [Expected benefit]
Feature Requests Status
| Request | Priority | Status | Expected Release |
|---|---|---|---|
| [Request 1] | [High/Med/Low] | [Planned / In Development / Under Review] | [Quarter] |
7. Growth Opportunities
Expansion Discussion Points
- [Opportunity 1: e.g., additional seats for new team]
- [Opportunity 2: e.g., new module that addresses identified need]
- [Opportunity 3: e.g., tier upgrade for advanced capabilities]
Estimated Value of Expansion: $[Amount] additional ARR
8. Action Items
| Action | Owner | Due Date | Priority |
|---|---|---|---|
| [Action 1] | [Name] | [Date] | [High/Med/Low] |
| [Action 2] | [Name] | [Date] | [High/Med/Low] |
| [Action 3] | [Name] | [Date] | [High/Med/Low] |
| [Action 4] | [Name] | [Date] | [High/Med/Low] |
9. Contract and Renewal
Contract Start: [Date] Renewal Date: [Date] Current ARR: $[Amount] Days to Renewal: [Number]
Renewal Readiness
- Value documented and communicated
- Executive sponsor aligned
- Open issues resolved or plan in place
- Pricing and terms discussed
- Expansion proposal prepared (if applicable)
Next QBR Date: [Date] Next Check-in: [Date]
{
"customers": [
{
"customer_id": "CUST-001",
"name": "Acme Corp",
"segment": "enterprise",
"arr": 120000,
"contract_end_date": "2026-12-31",
"usage": {
"login_frequency": 85,
"feature_adoption": 72,
"dau_mau_ratio": 0.45
},
"engagement": {
"support_ticket_volume": 3,
"meeting_attendance": 90,
"nps_score": 8,
"csat_score": 4.2
},
"support": {
"open_tickets": 2,
"escalation_rate": 0.05,
"avg_resolution_hours": 18
},
"relationship": {
"executive_sponsor_engagement": 80,
"multi_threading_depth": 4,
"renewal_sentiment": "positive"
},
"previous_period": {
"usage_score": 70,
"engagement_score": 65,
"support_score": 75,
"relationship_score": 60,
"overall_score": 67
},
"usage_decline": {
"login_trend": 5,
"feature_adoption_change": 3,
"dau_mau_change": 0.02
},
"engagement_drop": {
"meeting_cancellations": 0,
"response_time_days": 1,
"nps_change": 1
},
"support_issues": {
"open_escalations": 0,
"unresolved_critical": 0,
"satisfaction_trend": "improving"
},
"relationship_signals": {
"champion_left": false,
"sponsor_change": false,
"competitor_mentions": 0
},
"commercial_factors": {
"contract_type": "annual",
"pricing_complaints": false,
"budget_cuts_mentioned": false
},
"contract": {
"licensed_seats": 100,
"active_seats": 95,
"plan_tier": "professional",
"available_tiers": ["professional", "enterprise", "enterprise_plus"]
},
"product_usage": {
"core_platform": {"adopted": true, "usage_pct": 85},
"analytics_module": {"adopted": true, "usage_pct": 60},
"integrations_module": {"adopted": false, "usage_pct": 0},
"api_access": {"adopted": true, "usage_pct": 40},
"advanced_reporting": {"adopted": false, "usage_pct": 0}
},
"departments": {
"current": ["engineering", "product"],
"potential": ["marketing", "sales", "support"]
}
},
{
"customer_id": "CUST-002",
"name": "TechStart Inc",
"segment": "smb",
"arr": 18000,
"contract_end_date": "2026-04-15",
"usage": {
"login_frequency": 40,
"feature_adoption": 30,
"dau_mau_ratio": 0.15
},
"engagement": {
"support_ticket_volume": 8,
"meeting_attendance": 50,
"nps_score": 5,
"csat_score": 3.0
},
"support": {
"open_tickets": 6,
"escalation_rate": 0.18,
"avg_resolution_hours": 42
},
"relationship": {
"executive_sponsor_engagement": 30,
"multi_threading_depth": 1,
"renewal_sentiment": "negative"
},
"previous_period": {
"usage_score": 55,
"engagement_score": 50,
"support_score": 60,
"relationship_score": 45,
"overall_score": 52
},
"usage_decline": {
"login_trend": -25,
"feature_adoption_change": -18,
"dau_mau_change": -0.12
},
"engagement_drop": {
"meeting_cancellations": 3,
"response_time_days": 8,
"nps_change": -4
},
"support_issues": {
"open_escalations": 2,
"unresolved_critical": 1,
"satisfaction_trend": "declining"
},
"relationship_signals": {
"champion_left": true,
"sponsor_change": false,
"competitor_mentions": 3
},
"commercial_factors": {
"contract_type": "month-to-month",
"pricing_complaints": true,
"budget_cuts_mentioned": true
},
"contract": {
"licensed_seats": 20,
"active_seats": 8,
"plan_tier": "starter",
"available_tiers": ["starter", "professional", "enterprise"]
},
"product_usage": {
"core_platform": {"adopted": true, "usage_pct": 35},
"analytics_module": {"adopted": false, "usage_pct": 0},
"integrations_module": {"adopted": false, "usage_pct": 0},
"api_access": {"adopted": false, "usage_pct": 0},
"advanced_reporting": {"adopted": false, "usage_pct": 0}
},
"departments": {
"current": ["engineering"],
"potential": ["product", "design"]
}
},
{
"customer_id": "CUST-003",
"name": "GlobalTrade Solutions",
"segment": "mid-market",
"arr": 55000,
"contract_end_date": "2026-09-30",
"usage": {
"login_frequency": 70,
"feature_adoption": 58,
"dau_mau_ratio": 0.35
},
"engagement": {
"support_ticket_volume": 5,
"meeting_attendance": 75,
"nps_score": 7,
"csat_score": 3.8
},
"support": {
"open_tickets": 3,
"escalation_rate": 0.10,
"avg_resolution_hours": 30
},
"relationship": {
"executive_sponsor_engagement": 60,
"multi_threading_depth": 3,
"renewal_sentiment": "neutral"
},
"previous_period": {
"usage_score": 68,
"engagement_score": 70,
"support_score": 65,
"relationship_score": 62,
"overall_score": 66
},
"usage_decline": {
"login_trend": -8,
"feature_adoption_change": -5,
"dau_mau_change": -0.03
},
"engagement_drop": {
"meeting_cancellations": 1,
"response_time_days": 3,
"nps_change": -1
},
"support_issues": {
"open_escalations": 1,
"unresolved_critical": 0,
"satisfaction_trend": "stable"
},
"relationship_signals": {
"champion_left": false,
"sponsor_change": true,
"competitor_mentions": 1
},
"commercial_factors": {
"contract_type": "annual",
"pricing_complaints": false,
"budget_cuts_mentioned": false
},
"contract": {
"licensed_seats": 50,
"active_seats": 48,
"plan_tier": "professional",
"available_tiers": ["professional", "enterprise", "enterprise_plus"]
},
"product_usage": {
"core_platform": {"adopted": true, "usage_pct": 78},
"analytics_module": {"adopted": true, "usage_pct": 45},
"integrations_module": {"adopted": true, "usage_pct": 55},
"api_access": {"adopted": false, "usage_pct": 0},
"advanced_reporting": {"adopted": false, "usage_pct": 0}
},
"departments": {
"current": ["operations", "finance"],
"potential": ["logistics", "compliance"]
}
},
{
"customer_id": "CUST-004",
"name": "HealthFirst Medical",
"segment": "enterprise",
"arr": 200000,
"contract_end_date": "2027-03-15",
"usage": {
"login_frequency": 92,
"feature_adoption": 88,
"dau_mau_ratio": 0.55
},
"engagement": {
"support_ticket_volume": 2,
"meeting_attendance": 95,
"nps_score": 9,
"csat_score": 4.6
},
"support": {
"open_tickets": 1,
"escalation_rate": 0.02,
"avg_resolution_hours": 12
},
"relationship": {
"executive_sponsor_engagement": 92,
"multi_threading_depth": 6,
"renewal_sentiment": "positive"
},
"previous_period": {
"usage_score": 85,
"engagement_score": 82,
"support_score": 88,
"relationship_score": 80,
"overall_score": 84
},
"usage_decline": {
"login_trend": 3,
"feature_adoption_change": 5,
"dau_mau_change": 0.03
},
"engagement_drop": {
"meeting_cancellations": 0,
"response_time_days": 1,
"nps_change": 0
},
"support_issues": {
"open_escalations": 0,
"unresolved_critical": 0,
"satisfaction_trend": "improving"
},
"relationship_signals": {
"champion_left": false,
"sponsor_change": false,
"competitor_mentions": 0
},
"commercial_factors": {
"contract_type": "multi-year",
"pricing_complaints": false,
"budget_cuts_mentioned": false
},
"contract": {
"licensed_seats": 250,
"active_seats": 240,
"plan_tier": "enterprise",
"available_tiers": ["professional", "enterprise", "enterprise_plus"]
},
"product_usage": {
"core_platform": {"adopted": true, "usage_pct": 92},
"analytics_module": {"adopted": true, "usage_pct": 80},
"integrations_module": {"adopted": true, "usage_pct": 70},
"api_access": {"adopted": true, "usage_pct": 65},
"advanced_reporting": {"adopted": true, "usage_pct": 50},
"security_module": {"adopted": false, "usage_pct": 0},
"audit_module": {"adopted": false, "usage_pct": 0}
},
"departments": {
"current": ["clinical", "operations", "IT", "compliance"],
"potential": ["research", "finance", "HR"]
}
}
]
}
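The skill's scripts consume this schema using the standard library only. A minimal loading sketch — the top-level `customers` key is an assumption (the excerpt shows only the tail of the file), and a trimmed inline record stands in for reading `assets/sample_customer_data.json`:

```python
import json

# In practice the scripts read assets/sample_customer_data.json;
# a trimmed inline record stands in for the file here, and the
# top-level "customers" key is assumed from the array structure above.
raw = """
{
  "customers": [
    {"customer_id": "CUST-002", "name": "TechStart Inc",
     "segment": "smb", "arr": 18000,
     "contract": {"licensed_seats": 20, "active_seats": 8}}
  ]
}
"""
data = json.loads(raw)

for customer in data["customers"]:
    contract = customer["contract"]
    utilisation = contract["active_seats"] / contract["licensed_seats"]
    print(f"{customer['customer_id']}: ARR ${customer['arr']:,}, "
          f"seat utilisation {utilisation:.0%}")
```

Swapping the inline string for `json.load(open(path))` gives the same dict shape the three CLI tools expect.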
Customer Success Plan
Customer: [Customer Name]
CSM: [CSM Name]
Account Executive: [AE Name]
Plan Created: [Date]
Last Updated: [Date]
Review Cadence: [Monthly / Quarterly]
1. Customer Overview
| Field | Details |
|---|---|
| Industry | [Industry] |
| Company Size | [Employees] |
| Segment | [Enterprise / Mid-Market / SMB] |
| ARR | $[Amount] |
| Contract Start | [Date] |
| Renewal Date | [Date] |
| Plan Tier | [Tier name] |
| Licensed Seats | [Number] |
Key Stakeholders
| Name | Title | Role | Engagement Level |
|---|---|---|---|
| [Name] | [Title] | Executive Sponsor | [High / Medium / Low] |
| [Name] | [Title] | Day-to-Day Champion | [High / Medium / Low] |
| [Name] | [Title] | Technical Lead | [High / Medium / Low] |
| [Name] | [Title] | End User Lead | [High / Medium / Low] |
2. Business Objectives
Primary Business Objectives
| # | Objective | Success Metric | Target | Timeline |
|---|---|---|---|---|
| 1 | [e.g., Reduce manual reporting time] | [Hours saved per week] | [Target number] | [Date] |
| 2 | [e.g., Improve team collaboration] | [Project completion rate] | [Target %] | [Date] |
| 3 | [e.g., Increase revenue visibility] | [Forecast accuracy] | [Target %] | [Date] |
Why These Objectives Matter
- Objective 1: [Business context -- why this matters to the customer's overall strategy]
- Objective 2: [Business context]
- Objective 3: [Business context]
3. Success Milestones
Phase 1: Foundation (Days 1-30)
| Milestone | Target Date | Status | Owner | Notes |
|---|---|---|---|---|
| Technical setup complete | [Date] | [ ] | [Name] | |
| Admin training delivered | [Date] | [ ] | CSM | |
| Core team onboarded | [Date] | [ ] | CSM | |
| First value milestone achieved | [Date] | [ ] | [Name] | |
| Data migration validated | [Date] | [ ] | SE | |
Phase 2: Adoption (Days 31-90)
| Milestone | Target Date | Status | Owner | Notes |
|---|---|---|---|---|
| 80% user adoption | [Date] | [ ] | CSM | |
| Key workflows live | [Date] | [ ] | [Name] | |
| Integrations configured | [Date] | [ ] | SE | |
| First ROI measurement | [Date] | [ ] | CSM | |
| 30-day review complete | [Date] | [ ] | CSM | |
Phase 3: Value Realisation (Days 91-180)
| Milestone | Target Date | Status | Owner | Notes |
|---|---|---|---|---|
| Objective 1 progress measurable | [Date] | [ ] | [Name] | |
| Advanced features adopted | [Date] | [ ] | CSM | |
| QBR completed | [Date] | [ ] | CSM | |
| Executive alignment confirmed | [Date] | [ ] | CSM | |
Phase 4: Optimisation and Growth (Days 181-365)
| Milestone | Target Date | Status | Owner | Notes |
|---|---|---|---|---|
| All objectives on track | [Date] | [ ] | CSM | |
| ROI documented for renewal | [Date] | [ ] | CSM | |
| Expansion opportunities identified | [Date] | [ ] | CSM + AE | |
| Renewal conversation initiated | [Date] | [ ] | CSM + AE | |
4. Health Score Tracking
| Date | Overall Score | Usage | Engagement | Support | Relationship | Classification |
|---|---|---|---|---|---|---|
| [Date] | [Score] | [Score] | [Score] | [Score] | [Score] | [Green/Yellow/Red] |
| [Date] | [Score] | [Score] | [Score] | [Score] | [Score] | [Green/Yellow/Red] |
5. Risk Register
| Risk | Probability | Impact | Mitigation | Owner | Status |
|---|---|---|---|---|---|
| [e.g., Executive sponsor departure] | [High/Med/Low] | [High/Med/Low] | [Multi-thread relationships] | CSM | [Active/Resolved] |
| [e.g., Low adoption in team X] | [High/Med/Low] | [High/Med/Low] | [Targeted training session] | CSM | [Active/Resolved] |
| [e.g., Budget review next quarter] | [High/Med/Low] | [High/Med/Low] | [Document ROI before review] | CSM | [Active/Resolved] |
6. Communication Plan
| Activity | Frequency | Participants | Purpose |
|---|---|---|---|
| Status check-in | [Weekly / Bi-weekly] | CSM + Champion | Tactical progress review |
| Strategic review | [Monthly] | CSM + Stakeholders | Objective alignment |
| QBR | [Quarterly] | CSM + Executive Sponsor | Executive business review |
| Technical review | [As needed] | SE + Technical Lead | Architecture and integration |
| Renewal planning | [90 days before] | CSM + AE + Sponsor | Contract discussion |
7. Product Adoption Plan
Current State
| Module/Feature | Status | Usage Level | Target Usage | Gap |
|---|---|---|---|---|
| [Module 1] | Adopted | [%] | [%] | [Actions needed] |
| [Module 2] | Adopted | [%] | [%] | [Actions needed] |
| [Module 3] | Not Adopted | 0% | [%] | [Enablement plan] |
Enablement Activities
| Activity | Target Date | Audience | Expected Outcome |
|---|---|---|---|
| [Training session] | [Date] | [Team/Group] | [Metric improvement] |
| [Workshop] | [Date] | [Team/Group] | [New workflow adoption] |
| [Office hours] | [Ongoing] | [All users] | [Question resolution] |
8. Expansion Roadmap
| Opportunity | Type | Estimated Value | Timeline | Prerequisites |
|---|---|---|---|---|
| [e.g., Additional seats] | Expansion | $[Amount] | [Quarter] | [Usage > 90%] |
| [e.g., Tier upgrade] | Upsell | $[Amount] | [Quarter] | [Feature requests] |
| [e.g., New module] | Cross-sell | $[Amount] | [Quarter] | [Use case validated] |
9. Notes and Updates
[Date] - [Author]
[Update notes, key decisions, changes to plan]
[Date] - [Author]
[Update notes, key decisions, changes to plan]
Next Review Date: [Date]
Plan Owner: [CSM Name]
Customer Success Metrics and Benchmarks
Industry benchmarks for key customer success metrics, segmented by company size, customer segment, and industry vertical.
Core SaaS Metrics
Net Revenue Retention (NRR)
NRR measures revenue retained from existing customers including expansion, contraction, and churn. It is the single most important metric for SaaS customer success.
Formula: (Starting ARR + Expansion - Contraction - Churn) / Starting ARR * 100
| Performance Level | NRR Range | Interpretation |
|---|---|---|
| Best-in-class | > 130% | Strong expansion engine, very low churn |
| Excellent | 120-130% | Healthy growth from existing customers |
| Good | 110-120% | Solid retention with moderate expansion |
| Target | > 110% | Minimum for sustainable growth |
| Acceptable | 100-110% | Revenue stable but limited expansion |
| Below target | 90-100% | Churn exceeds expansion |
| Concerning | < 90% | Significant revenue erosion |
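The NRR formula translates directly into code. A minimal sketch with illustrative figures:

```python
def nrr(starting_arr, expansion, contraction, churn):
    """Net Revenue Retention as a percentage of starting ARR."""
    return (starting_arr + expansion - contraction - churn) / starting_arr * 100

# $1M starting book, $150K expansion, $20K contraction, $50K churned
print(nrr(1_000_000, 150_000, 20_000, 50_000))  # 108.0 -> "Acceptable" band
```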
Benchmarks by Segment:
| Customer Segment | Median NRR | Top Quartile | Bottom Quartile |
|---|---|---|---|
| Enterprise (>$100K ARR) | 115% | 130%+ | 105% |
| Mid-Market ($25K-$100K) | 108% | 120% | 98% |
| SMB (<$25K ARR) | 95% | 105% | 85% |
Gross Revenue Retention (GRR)
GRR measures revenue retained without counting expansion. It isolates the churn and contraction signal.
Formula: (Starting ARR - Contraction - Churn) / Starting ARR * 100
| Performance Level | GRR Range | Interpretation |
|---|---|---|
| Best-in-class | > 95% | Minimal churn, highly sticky product |
| Excellent | 92-95% | Strong retention |
| Good | 90-92% | Healthy with room to improve |
| Target | > 90% | Industry standard target |
| Acceptable | 85-90% | Moderate churn, needs focus |
| Below target | 80-85% | High churn impacting growth |
| Concerning | < 80% | Urgent retention problem |
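GRR is the same calculation with expansion dropped, so the two formulas differ by a single term. A sketch using the same illustrative figures:

```python
def grr(starting_arr, contraction, churn):
    """Gross Revenue Retention: the churn signal with expansion excluded."""
    return (starting_arr - contraction - churn) / starting_arr * 100

# Same book as the NRR example: $20K contraction, $50K churned
print(grr(1_000_000, 20_000, 50_000))  # 93.0 -> "Excellent" band
```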
Benchmarks by Segment:
| Customer Segment | Median GRR | Top Quartile | Bottom Quartile |
|---|---|---|---|
| Enterprise | 95% | 98% | 90% |
| Mid-Market | 90% | 95% | 85% |
| SMB | 82% | 90% | 75% |
Health Score Benchmarks
Portfolio Health Distribution (Target)
A healthy CS portfolio should have the following approximate distribution:
| Classification | Target Distribution | Alert Threshold |
|---|---|---|
| Green (Healthy) | 60-70% | < 50% triggers portfolio review |
| Yellow (Attention) | 20-30% | > 35% signals systemic issues |
| Red (At Risk) | 5-10% | > 15% requires executive intervention |
Average Health Score by Segment
| Segment | Target Average | Industry Median | Top Quartile |
|---|---|---|---|
| Enterprise | > 78 | 72 | 82 |
| Mid-Market | > 75 | 68 | 78 |
| SMB | > 70 | 65 | 75 |
Health Score by Dimension (Industry Medians)
| Dimension | Enterprise | Mid-Market | SMB |
|---|---|---|---|
| Usage | 72 | 68 | 60 |
| Engagement | 70 | 62 | 55 |
| Support | 78 | 72 | 65 |
| Relationship | 68 | 60 | 50 |
Churn Metrics
Logo Churn Rate (Annual)
| Performance Level | Rate | Interpretation |
|---|---|---|
| Best-in-class | < 5% | Exceptional retention |
| Excellent | 5-8% | Very strong |
| Good | 8-12% | Healthy |
| Acceptable | 12-15% | Room for improvement |
| Below target | 15-20% | Significant churn problem |
| Concerning | > 20% | Urgent -- product-market fit issues likely |
Benchmarks by Segment:
| Segment | Median Annual Logo Churn | Top Quartile | Bottom Quartile |
|---|---|---|---|
| Enterprise | 5% | 2% | 10% |
| Mid-Market | 10% | 5% | 18% |
| SMB | 20% | 12% | 35% |
Churn Leading Indicators
The following metrics have the highest predictive power for churn events:
| Indicator | Lead Time | Correlation with Churn |
|---|---|---|
| Login frequency decline (>30%) | 60-90 days | Very High |
| NPS drop (>3 points) | 30-60 days | High |
| Executive sponsor departure | 30-90 days | Very High |
| Support escalation rate increase | 30-60 days | High |
| Meeting cancellation increase | 30-45 days | Moderate-High |
| Feature adoption decline | 60-90 days | Moderate |
| Competitor mentions | 30-60 days | Moderate |
Expansion Metrics
Expansion Revenue Rate
| Performance Level | Rate | Notes |
|---|---|---|
| Best-in-class | > 30% of total revenue | Strong land-and-expand motion |
| Excellent | 25-30% | Effective expansion engine |
| Good | 20-25% | Solid upsell/cross-sell |
| Target | > 20% | Minimum for healthy growth |
| Below target | 10-20% | Expansion motion needs development |
| Concerning | < 10% | Missing significant expansion opportunity |
Expansion by Type
| Expansion Type | Typical Contribution | Average Deal Size |
|---|---|---|
| Seat Expansion | 40-50% of expansion | 15-25% of contract value |
| Tier Upsell | 25-35% of expansion | 40-80% of contract value |
| Module Cross-sell | 15-25% of expansion | 10-20% of contract value |
| Department Expansion | 5-15% of expansion | 50-100% of contract value |
Expansion Readiness Indicators
| Signal | Interpretation |
|---|---|
| Seat utilisation > 90% | Ready for seat expansion |
| Feature requests for higher tier | Upsell opportunity |
| Usage of 70%+ of current modules | Ready for cross-sell |
| New department interest | Department expansion play |
| Customer referral activity | Strong relationship, open to expansion |
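These readiness signals map onto fields already present in the sample schema (`contract.licensed_seats`, `contract.active_seats`, `product_usage`, `departments.potential`). A hedged sketch — the function name is illustrative, and reading "70%+ of current modules" as an adoption count is an assumption:

```python
def expansion_signals(customer):
    """Return which expansion signals from the table fire for one account.

    Expects the contract / product_usage shape used in the sample data.
    """
    signals = []
    contract = customer["contract"]
    if contract["active_seats"] / contract["licensed_seats"] > 0.90:
        signals.append("seat_expansion")
    modules = customer["product_usage"]
    adopted = sum(1 for m in modules.values() if m["adopted"])
    if adopted / len(modules) >= 0.70:
        signals.append("module_cross_sell")
    if customer.get("departments", {}).get("potential"):
        signals.append("department_expansion")
    return signals

# CUST-001-style account: 95/100 seats used, 3 of 5 modules adopted
acct = {
    "contract": {"licensed_seats": 100, "active_seats": 95},
    "product_usage": {
        "core_platform": {"adopted": True, "usage_pct": 85},
        "analytics_module": {"adopted": True, "usage_pct": 60},
        "integrations_module": {"adopted": False, "usage_pct": 0},
        "api_access": {"adopted": True, "usage_pct": 40},
        "advanced_reporting": {"adopted": False, "usage_pct": 0},
    },
    "departments": {"potential": ["marketing", "sales"]},
}
print(expansion_signals(acct))
```

Here the account clears the 90% seat threshold and has untapped departments, but at 3 of 5 modules (60%) it does not yet qualify for cross-sell.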
Engagement Metrics
Customer Engagement Score (CES) Benchmarks
| Metric | Target | Median | Warning |
|---|---|---|---|
| Meeting attendance rate | > 80% | 72% | < 50% |
| Average NPS | > 50 | 35 | < 20 |
| Average CSAT | > 4.2/5 | 3.8/5 | < 3.0/5 |
| Response time (days) | < 2 | 3 | > 5 |
| QBR completion rate | > 90% | 75% | < 60% |
Time to First Value (TTFV)
| Segment | Target TTFV | Median TTFV | Warning Threshold |
|---|---|---|---|
| Enterprise | < 30 days | 45 days | > 60 days |
| Mid-Market | < 21 days | 30 days | > 45 days |
| SMB | < 14 days | 21 days | > 30 days |
CSM Operational Metrics
Portfolio Management
| Metric | Enterprise CSM | Mid-Market CSM | SMB CSM (Tech-Touch) |
|---|---|---|---|
| Accounts per CSM | 10-25 | 30-60 | 100-300+ |
| ARR per CSM | $2M-$5M | $2M-$4M | $1M-$3M |
| Touch frequency | Weekly to bi-weekly | Bi-weekly to monthly | Quarterly, automated |
| QBR frequency | Quarterly | Semi-annually | Annually |
| Health score reviews | Weekly | Bi-weekly | Monthly |
CSM Activity Benchmarks
| Activity | Target per Month | Purpose |
|---|---|---|
| Strategic calls | 2-4 per account | Relationship building |
| Health score reviews | 4 (weekly) | Portfolio monitoring |
| QBR preparation | 3-5 per quarter | Executive engagement |
| Escalation handling | < 2 per month | Issue resolution |
| Expansion conversations | 1-2 per account | Revenue growth |
Industry-Specific Benchmarks
By Industry Vertical
| Industry | Median NRR | Median GRR | Median Logo Churn |
|---|---|---|---|
| Infrastructure/DevOps | 125% | 95% | 5% |
| Cybersecurity | 120% | 93% | 7% |
| HR Tech | 110% | 90% | 12% |
| MarTech | 105% | 87% | 15% |
| FinTech | 115% | 92% | 8% |
| HealthTech | 112% | 91% | 10% |
| EdTech | 100% | 85% | 18% |
| eCommerce Tools | 108% | 88% | 14% |
By Company Stage
| Stage | Median NRR | Median GRR | Notes |
|---|---|---|---|
| Early Stage (<$10M ARR) | 100% | 85% | Focus on product-market fit |
| Growth ($10M-$50M ARR) | 110% | 90% | Building CS function |
| Scale ($50M-$200M ARR) | 118% | 93% | Mature CS operations |
| Enterprise (>$200M ARR) | 115% | 95% | Optimisation phase |
Metric Relationships
Key Correlations
| If This Metric Moves | This Also Tends to Move | Direction |
|---|---|---|
| Health score down | Churn probability up | Inverse |
| NPS up | NRR up | Direct |
| TTFV down | GRR up | Inverse |
| Feature adoption up | Expansion rate up | Direct |
| Escalation rate up | NPS down | Inverse |
| Multi-threading depth up | GRR up | Direct |
The SaaS Retention Equation
Sustainable Growth requires: NRR > 110% AND GRR > 90%
If NRR is high but GRR is low: You are churning customers and replacing the lost revenue with expansion from the accounts that remain. Not sustainable.
If GRR is high but NRR is low: You retain well but do not expand. Leaving money on the table.
Both high: Healthy, compounding growth from existing customers.
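The retention equation reduces to a simple two-condition classifier. A sketch — the label strings are illustrative:

```python
def retention_health(nrr_pct, grr_pct):
    """Classify an NRR/GRR combination per the retention equation."""
    if nrr_pct > 110 and grr_pct > 90:
        return "sustainable"
    if nrr_pct > 110:
        return "expansion masking churn"
    if grr_pct > 90:
        return "retaining but not expanding"
    return "at risk"

print(retention_health(118, 93))  # sustainable
print(retention_health(115, 85))  # expansion masking churn
```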
Last Updated: February 2026
Sources: Industry surveys, SaaS benchmarking reports, customer success community data (2024-2025 data cycles).
Customer Success Playbooks
Comprehensive intervention, onboarding, renewal, expansion, and escalation playbooks for SaaS customer success management.
Risk Tier Intervention Playbooks
Critical Risk (Score 80-100)
Situation: Customer is at imminent risk of churn. Multiple severe warning signals detected. Requires immediate executive-level intervention.
Timeline: Act within 48 hours.
Steps:
Executive Escalation (Day 0)
- Alert VP of Customer Success and account executive immediately
- Brief internal leadership on situation, warning signals, and ARR at risk
- Identify any pending support issues and fast-track resolution
Customer Contact (Day 1-2)
- Schedule executive-to-executive call (VP CS to customer VP/C-level)
- Frame the conversation around understanding their challenges, not defending your product
- Listen more than talk -- capture the real objections
Save Plan Creation (Day 2-3)
- Create a detailed save plan with specific value milestones tied to their business outcomes
- Include timeline, owners, and measurable success criteria
- Get internal alignment on any concessions (pricing, features, roadmap commitments)
Rescue Team Assignment (Day 3-5)
- Assign a dedicated rescue team: CSM + Solutions Engineer + Support Lead
- Daily internal stand-up (15 min max) on account status
- Solutions Engineer to conduct technical health check
Execution and Monitoring (Week 2-4)
- Execute save plan with weekly customer check-ins
- Track progress against milestones
- Prepare competitive displacement defence if competitor involvement detected
Resolution Assessment (Week 4)
- Evaluate whether the situation is stabilising
- If improving: transition to High-risk monitoring cadence
- If not improving: escalate to CEO/GM for final intervention
Success Criteria: Risk score drops below 60 within 30 days. Customer confirms continued partnership intent.
High Risk (Score 60-79)
Situation: Customer showing clear signs of dissatisfaction or disengagement. Still salvageable with focused CSM intervention.
Timeline: Act within 1 week.
Steps:
Root Cause Analysis (Day 1-3)
- Review all health score dimensions to identify the primary drivers
- Pull support ticket history for patterns
- Check product usage trends for the past 90 days
CSM Outreach (Day 3-5)
- Schedule a dedicated call with the customer (not a routine check-in)
- Open with empathy: "I've noticed some changes and want to make sure we're supporting you properly"
- Identify the top 3 customer concerns
30-Day Recovery Plan (Day 5-7)
- Build a 30-day recovery plan with measurable checkpoints every week
- Include specific actions for each concern identified
- Share the plan with the customer for mutual commitment
Re-Engage Executive Sponsor (Week 2)
- Request a meeting with the executive sponsor
- Align on business outcomes and how your product supports them
- Confirm continued sponsorship and address any political changes
Support Fast-Track (Ongoing)
- Escalate any pending support tickets internally
- Assign a support point of contact for this account
- Provide weekly status updates on open issues
Progress Review (Week 3-4)
- Review all metrics for improvement
- Adjust plan if specific interventions are not working
- If score drops to Critical: escalate to executive playbook
Success Criteria: Risk score drops below 40 within 30 days. No new warning signals emerge.
Medium Risk (Score 40-59)
Situation: Early warning signs detected. Customer may not be aware of emerging issues. Proactive outreach prevents escalation.
Timeline: Act within 2 weeks.
Steps:
Data Review (Day 1-5)
- Analyse which dimension(s) are pulling the score down
- Review recent support interactions for sentiment clues
- Check for any known product issues affecting this customer
Proactive Check-In (Week 1-2)
- Schedule a "value check-in" call (position it as routine, not reactive)
- Share relevant success stories from similar customers
- Propose a training session or product walkthrough for underutilised features
Value Reinforcement (Week 2-3)
- Send a customised ROI summary showing value delivered
- Highlight feature releases relevant to their use case
- Connect them with your customer community or user group
Monitoring (Week 3-4)
- Increase monitoring frequency to bi-weekly
- Watch for improvement or continued decline
- If declining: move to High-risk playbook
Success Criteria: Score stabilises above 50 or improves. No escalation to High risk.
Low Risk (Score 0-39)
Situation: Customer is healthy. Standard success cadence applies. Focus on value reinforcement and expansion readiness.
Timeline: Standard touch cadence.
Steps:
Maintain Cadence
- Enterprise: Monthly strategic reviews, quarterly QBRs
- Mid-Market: Bi-monthly check-ins, semi-annual reviews
- SMB: Quarterly automated health updates, annual review
Proactive Communication
- Share product updates and release notes
- Invite to webinars, conferences, and community events
- Share relevant industry insights and benchmarks
Expansion Readiness
- Monitor for expansion signals (usage approaching limits, new use cases)
- Prepare expansion proposals when timing is right
- Position premium features and modules relevant to their needs
Renewal Preparation
- Begin renewal preparation 90 days before contract end
- Build renewal proposal with value delivered summary
- Identify any terms or pricing adjustments needed
Success Criteria: Customer remains in Green classification. Expansion conversations initiated when appropriate.
Onboarding Playbook
Phase 1: Welcome and Setup (Day 1-14)
| Day | Activity | Owner | Deliverable |
|---|---|---|---|
| 1 | Welcome email and introduction | CSM | Welcome package sent |
| 1-2 | Kickoff call | CSM + SE | Success plan drafted |
| 3-5 | Technical setup and configuration | SE | Environment configured |
| 5-7 | Admin training session | CSM | Admins trained |
| 7-10 | Data migration (if applicable) | SE | Data validated |
| 10-14 | Initial user training | CSM | Core team trained |
Phase 2: Activation (Day 15-30)
| Day | Activity | Owner | Deliverable |
|---|---|---|---|
| 15 | Activation check -- are users logging in? | CSM | Usage report |
| 15-20 | Follow-up training for laggards | CSM | All users active |
| 20-25 | First business outcome milestone | CSM | Milestone achieved |
| 25-30 | 30-day review call | CSM | Review documented |
Critical Milestone: Time to First Value must be under 30 days.
Phase 3: Adoption (Day 31-60)
| Day | Activity | Owner | Deliverable |
|---|---|---|---|
| 30-40 | Feature adoption expansion | CSM | New features in use |
| 40-50 | Integration setup (if applicable) | SE | Integrations live |
| 50-60 | Usage benchmarking vs. peers | CSM | Benchmark report |
Phase 4: Optimisation (Day 61-90)
| Day | Activity | Owner | Deliverable |
|---|---|---|---|
| 60-70 | Advanced use case workshop | CSM + SE | New use cases identified |
| 70-80 | ROI measurement | CSM | ROI documented |
| 80-90 | 90-day executive review | CSM | Transition to steady-state |
Gate: Handoff from onboarding to ongoing CSM management. Health score must be Yellow or better.
Renewal Playbook
120 Days Before Renewal
- Review contract terms and pricing
- Assess current health score and trajectory
- Identify any outstanding issues or concerns
- Begin internal alignment on renewal strategy
90 Days Before Renewal
- Schedule renewal conversation with customer
- Prepare value delivered summary (ROI, usage stats, milestones achieved)
- Draft renewal proposal with recommended terms
- If at-risk: escalate and begin risk mitigation
60 Days Before Renewal
- Present renewal proposal to customer
- Negotiate terms if needed
- Address any concerns raised during the process
- Escalate blockers to leadership
30 Days Before Renewal
- Finalise contract terms
- Obtain signatures
- Plan for any post-renewal actions (expansion, migration)
- Update CRM with renewal details
Post-Renewal
- Confirm renewed contract in systems
- Send thank-you and updated success plan
- Schedule next QBR
- Identify expansion opportunities
Expansion Playbook
Identifying Expansion Signals
| Signal | Expansion Type | Priority |
|---|---|---|
| Seat utilisation > 90% | Seat expansion | High |
| Requests for features in higher tier | Tier upsell | High |
| New department inquiries | Department expansion | Medium |
| High adoption of existing modules | Module cross-sell | Medium |
| Customer referencing competitors for missing features | Cross-sell | High |
Expansion Conversation Framework
- Discovery: "I noticed your team has been getting great value from [feature]. Have you considered how [new module] could help with [related business outcome]?"
- Value Framing: "Companies similar to yours who adopted [module] saw [specific metric improvement]."
- Proposal: "Based on your current usage, here's what the expansion would look like..."
- Stakeholder Alignment: Involve the economic buyer early. The champion can advocate, but the budget holder decides.
- Close: Coordinate with sales/account executive for commercial negotiation.
Escalation Procedures
Internal Escalation Matrix
| Trigger | Escalation Level | Response Time |
|---|---|---|
| Health score drops to Red | VP Customer Success | 24 hours |
| Executive sponsor leaves | Director CS + AE | 48 hours |
| Critical bug affecting customer | VP Engineering + VP CS | 4 hours |
| Customer mentions competitor evaluation | VP CS + VP Sales | 24 hours |
| Renewal at risk (60 days or less) | CRO/VP Sales | 24 hours |
| Customer threatens legal action | Legal + VP CS | Immediate |
Escalation Communication Template
Subject: [ESCALATION] {Customer Name} -- {Brief Description}
Body:
- Customer: {name}, {segment}, ${ARR}
- Health Score: {score} ({classification})
- Renewal Date: {date}
- Issue Summary: {2-3 sentences}
- Warning Signals: {list}
- Recommended Action: {specific next step}
- Urgency: {critical/high/medium}
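Rendered programmatically, the template might look like the sketch below; the dict keys and sample values are illustrative, not a schema the bundled tools enforce:

```python
def escalation_email(c):
    """Render the escalation template for one account dict (illustrative keys)."""
    return "\n".join([
        f"Subject: [ESCALATION] {c['name']} -- {c['issue']}",
        "",
        f"- Customer: {c['name']}, {c['segment']}, ${c['arr']:,}",
        f"- Health Score: {c['score']} ({c['classification']})",
        f"- Renewal Date: {c['renewal_date']}",
        f"- Issue Summary: {c['summary']}",
        f"- Warning Signals: {', '.join(c['signals'])}",
        f"- Recommended Action: {c['action']}",
        f"- Urgency: {c['urgency']}",
    ])

msg = escalation_email({
    "name": "TechStart Inc", "segment": "SMB", "arr": 18000,
    "issue": "Renewal at risk",
    "score": 32, "classification": "Red", "renewal_date": "2026-04-15",
    "summary": "Champion departed, usage down 25%, competitor evaluation underway.",
    "signals": ["champion_left", "login_trend -25%", "pricing_complaints"],
    "action": "VP CS call within 24 hours; fast-track open escalations.",
    "urgency": "critical",
})
print(msg)
```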
Last Updated: February 2026
Health Scoring Framework
Complete methodology for multi-dimensional customer health scoring in SaaS customer success.
Overview
Customer health scoring is the foundation of proactive customer success management. A well-calibrated health score enables CSMs to prioritise their portfolio, identify emerging risks before they become churn events, and allocate resources where they will have the greatest impact.
This framework uses a weighted, multi-dimensional approach that scores customers across four key areas: usage, engagement, support, and relationship. Each dimension contributes to an overall health score (0-100) that classifies accounts as Green (healthy), Yellow (needs attention), or Red (at risk).
Scoring Dimensions
1. Usage (Weight: 30%)
Usage metrics are the strongest leading indicator of customer health. Customers who are not using the product are not deriving value and are at elevated churn risk.
| Metric | Definition | Scoring Method |
|---|---|---|
| Login Frequency | Percentage of expected login days with actual logins | (actual / target) * 100, capped at 100 |
| Feature Adoption | Percentage of available features actively used | (adopted / available) * 100, capped at 100 |
| DAU/MAU Ratio | Daily active users divided by monthly active users | (actual / target) * 100, capped at 100 |
Sub-weights within Usage:
- Login Frequency: 35%
- Feature Adoption: 40%
- DAU/MAU Ratio: 25%
Why 30% weight: Usage is the most objective, data-driven signal. Declining usage almost always precedes churn. However, some customers may have seasonal usage patterns, which is why it is not weighted even higher.
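The capped-ratio rule used by all three usage metrics can be sketched as a one-liner, so over-performance on one metric cannot mask weakness elsewhere:

```python
def ratio_score(actual, target):
    """(actual / target) * 100, capped at 100 -- the usage scoring rule."""
    if target <= 0:
        return 0.0
    return min(actual / target * 100, 100.0)

# Mid-Market login frequency: actual 70% against a target of 80%
print(ratio_score(70, 80))   # 87.5
print(ratio_score(120, 80))  # 100.0 (capped)
```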
2. Engagement (Weight: 25%)
Engagement measures how actively the customer participates in the relationship beyond just product usage.
| Metric | Definition | Scoring Method |
|---|---|---|
| Support Ticket Volume | Number of support tickets in the period | Inverse score: (1 - actual/max) * 100 |
| Meeting Attendance | Percentage of scheduled meetings attended | (actual / target) * 100, capped at 100 |
| NPS Score | Net Promoter Score response (0-10) | (actual / target) * 100, capped at 100 |
| CSAT Score | Customer Satisfaction score (1-5) | (actual / target) * 100, capped at 100 |
Sub-weights within Engagement:
- Support Ticket Volume: 20% (inverse -- fewer tickets is better)
- Meeting Attendance: 30%
- NPS Score: 25%
- CSAT Score: 25%
Why 25% weight: Engagement signals complement usage data. A customer who attends meetings but does not use the product may be in an evaluation phase. A customer who uses the product but skips meetings may be becoming self-sufficient -- or disengaging.
3. Support (Weight: 20%)
Support health measures the quality of the customer's support experience, which directly impacts satisfaction and renewal likelihood.
| Metric | Definition | Scoring Method |
|---|---|---|
| Open Tickets | Number of currently unresolved tickets | Inverse score: (1 - actual/max) * 100 |
| Escalation Rate | Percentage of tickets escalated | Inverse score: (1 - actual/max) * 100 |
| Avg Resolution Time | Average hours to resolve tickets | Inverse score: (1 - actual/max) * 100 |
Sub-weights within Support:
- Open Tickets: 35%
- Escalation Rate: 35%
- Resolution Time: 30%
Why 20% weight: Support issues are lagging indicators -- they tell you there is already a problem. However, unresolved support issues are a strong predictor of churn, especially when combined with declining engagement.
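The inverse rule shared by all three support metrics might be sketched as follows; the worst-case ceiling is a calibration input you choose per metric, not a fixed constant from the framework:

```python
def inverse_score(actual, worst_case):
    """(1 - actual / max) * 100 -- higher raw values mean worse health."""
    if worst_case <= 0:
        return 0.0
    return max((1 - actual / worst_case) * 100, 0.0)

# 3 open tickets against an assumed worst-case ceiling of 10
print(inverse_score(3, 10))   # 70.0
print(inverse_score(15, 10))  # 0.0 (floored, not negative)
```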
4. Relationship (Weight: 25%)
Relationship health measures the strength and depth of the human connection between the customer and your organisation.
| Metric | Definition | Scoring Method |
|---|---|---|
| Executive Sponsor Engagement | Engagement level of exec sponsor (0-100) | (actual / target) * 100, capped at 100 |
| Multi-Threading Depth | Number of stakeholder contacts | (actual / target) * 100, capped at 100 |
| Renewal Sentiment | Qualitative sentiment assessment | Mapped to score: positive=100, neutral=60, negative=20, unknown=50 |
Sub-weights within Relationship:
- Executive Sponsor Engagement: 35%
- Multi-Threading Depth: 30%
- Renewal Sentiment: 35%
Why 25% weight: Relationship strength is the most important defence against competitive displacement. A customer with strong relationships will give you more chances to fix problems. A customer with weak relationships may leave without warning.
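Putting the four dimension weights together, the overall score is a straightforward weighted blend. A sketch (rounding to one decimal is an assumption, not specified by the framework):

```python
# Top-level dimension weights from the framework (sum to 1.0)
WEIGHTS = {"usage": 0.30, "engagement": 0.25, "support": 0.20, "relationship": 0.25}

def overall_health(dimension_scores):
    """Weighted blend of the four dimension scores (each 0-100)."""
    return round(sum(dimension_scores[d] * w for d, w in WEIGHTS.items()), 1)

# A CUST-003-style previous-period snapshot
print(overall_health({"usage": 68, "engagement": 70,
                      "support": 65, "relationship": 62}))  # 66.4
```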
Classification Thresholds
Standard Thresholds
| Classification | Score Range | Meaning | Action |
|---|---|---|---|
| Green | 75-100 | Customer is healthy and achieving value | Standard cadence, focus on expansion |
| Yellow | 50-74 | Customer needs attention | Increase touch frequency, investigate root causes |
| Red | 0-49 | Customer is at risk | Immediate intervention, create save plan |
Segment-Adjusted Thresholds
Enterprise customers typically have higher expectations and more complex deployments, which means a higher bar for "healthy." SMB customers may have simpler use cases and lower engagement expectations.
| Segment | Green Threshold | Yellow Threshold | Red Threshold |
|---|---|---|---|
| Enterprise | 75-100 | 50-74 | 0-49 |
| Mid-Market | 70-100 | 45-69 | 0-44 |
| SMB | 65-100 | 40-64 | 0-39 |
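The segment-adjusted table above reduces to two per-segment floors: at or above the Green floor is Green, at or above the Yellow floor is Yellow, everything else is Red. A sketch:

```python
# Per-segment classification floors from the table above
GREEN_FLOOR = {"enterprise": 75, "mid-market": 70, "smb": 65}
YELLOW_FLOOR = {"enterprise": 50, "mid-market": 45, "smb": 40}

def classify(score: float, segment: str) -> str:
    seg = segment.lower()
    # Unknown segments fall back to the strictest (Enterprise) thresholds
    if score >= GREEN_FLOOR.get(seg, 75):
        return "green"
    if score >= YELLOW_FLOOR.get(seg, 50):
        return "yellow"
    return "red"

print(classify(68, "smb"), classify(68, "enterprise"))  # green yellow
```

Note how the same score of 68 is Green for an SMB customer but Yellow for an Enterprise one, which is the point of segment adjustment.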
Segment-Specific Benchmarks
Each metric target is calibrated per segment. Enterprise customers are expected to have higher login frequency, attendance, and sponsor engagement. SMB customers have lower targets but still meaningful thresholds.
Example Calibration:
- Enterprise login frequency target: 90% (high-touch, deeply embedded)
- Mid-Market login frequency target: 80% (balanced engagement)
- SMB login frequency target: 70% (self-serve oriented)
Trend Analysis
A single health score snapshot is useful. A health score trend is actionable.
Trend Classification
| Trend | Criteria | Implication |
|---|---|---|
| Improving | Current > Previous by 5+ points | Positive trajectory, reinforce what is working |
| Stable | Within +/- 5 points | Maintain current approach |
| Declining | Current < Previous by 5+ points | Investigate and intervene |
| No Data | No previous period available | Establish baseline |
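The trend rules above amount to a five-point band around the previous score:

```python
# Trend classification per the table above: a +/- 5 point band is "stable"
def classify_trend(current: float, previous: float = None) -> str:
    if previous is None:
        return "no_data"  # no previous period -- establish a baseline
    delta = current - previous
    if delta >= 5:
        return "improving"
    if delta <= -5:
        return "declining"
    return "stable"

print(classify_trend(72, 80))  # declining
```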
Trend Priority Matrix
| Current Score | Trend | Priority |
|---|---|---|
| Green | Declining | HIGH -- intervene before it drops further |
| Yellow | Declining | CRITICAL -- trajectory leads to Red |
| Yellow | Improving | MEDIUM -- reinforce positive momentum |
| Red | Improving | HIGH -- support the recovery |
| Red | Stable | CRITICAL -- needs new intervention approach |
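The matrix above is a small lookup. Combinations it does not list (for example, Green and Stable) can reasonably default to standard priority, which is an assumption of this sketch rather than a rule from the matrix:

```python
# (classification, trend) -> priority, per the matrix above
PRIORITY_MATRIX = {
    ("green", "declining"): "HIGH",
    ("yellow", "declining"): "CRITICAL",
    ("yellow", "improving"): "MEDIUM",
    ("red", "improving"): "HIGH",
    ("red", "stable"): "CRITICAL",
}

def review_priority(classification: str, trend: str) -> str:
    # Unlisted combinations fall back to standard cadence
    return PRIORITY_MATRIX.get((classification, trend), "STANDARD")

print(review_priority("yellow", "declining"))  # CRITICAL
```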
Calibration Guidelines
When to Recalibrate
- After major product changes: New features may change what "good usage" looks like
- Seasonal patterns: Some industries have cyclical usage (retail holiday season, fiscal year end)
- Portfolio composition changes: If you add many SMB customers, the overall averages shift
- After churn events: Review whether the health score predicted the churn
Calibration Process
- Export health scores for all customers over the past 12 months
- Identify all churn events in the same period
- Calculate the average health score of churned customers 90, 60, and 30 days before churn
- Adjust thresholds so that churned customers would have been classified as Yellow or Red at least 60 days before churn
- Validate with a holdout set of recent data
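Step 3 of the process above can be sketched as a lookback over historical scores. The data shapes here (a score history keyed by customer and snapshot date, plus a churn-date lookup) are assumptions for illustration:

```python
from datetime import date, timedelta
from statistics import mean

def pre_churn_averages(history, churn_dates, windows=(90, 60, 30)):
    """Average health score of churned customers N days before churn.

    history: {customer_id: {snapshot_date: score}}
    churn_dates: {customer_id: churn_date}
    """
    averages = {}
    for days in windows:
        scores = []
        for cid, churned_on in churn_dates.items():
            snapshot = history.get(cid, {}).get(churned_on - timedelta(days=days))
            if snapshot is not None:
                scores.append(snapshot)
        averages[days] = round(mean(scores), 1) if scores else None
    return averages

history = {"c1": {date(2025, 10, 3): 58, date(2025, 11, 2): 49, date(2025, 12, 2): 41}}
print(pre_churn_averages(history, {"c1": date(2026, 1, 1)}))  # {90: 58, 60: 49, 30: 41}
```

If the 90-day average of churned customers sits in your Green band, the thresholds are too lenient and should be raised until those customers would have shown Yellow or Red at least 60 days out.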
Common Calibration Pitfalls
- Threshold creep: Gradually lowering Green thresholds to make the portfolio look healthier
- Over-weighting lagging indicators: Support metrics react after the damage is done
- Ignoring segment differences: Using one threshold for all segments
- Sentiment bias: Over-relying on subjective renewal sentiment
Implementation Checklist
- Define data sources for each metric (CRM, product analytics, support system)
- Establish data refresh frequency (daily for usage, weekly for engagement)
- Configure segment benchmarks for your customer base
- Set initial thresholds using industry defaults (provided above)
- Run a 30-day pilot with manual review of edge cases
- Calibrate thresholds based on pilot results
- Automate scoring and alerting
- Review and recalibrate quarterly
Last Updated: February 2026
#!/usr/bin/env python3
"""
Churn Risk Analyzer
Identifies at-risk customer accounts by scoring behavioral signals across
usage decline, engagement drop, support issues, relationship signals, and
commercial factors. Produces risk tiers with intervention playbooks and
time-to-renewal urgency multipliers.
Usage:
python churn_risk_analyzer.py customer_data.json
python churn_risk_analyzer.py customer_data.json --format json
"""
import argparse
import json
import sys
from datetime import datetime
from typing import Any, Dict, List, Optional, Tuple
# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------
RISK_SIGNAL_WEIGHTS: Dict[str, float] = {
"usage_decline": 0.30,
"engagement_drop": 0.25,
"support_issues": 0.20,
"relationship_signals": 0.15,
"commercial_factors": 0.10,
}
RISK_TIERS: List[Dict[str, Any]] = [
{"name": "critical", "min": 80, "max": 100, "label": "CRITICAL", "action": "Immediate executive escalation"},
{"name": "high", "min": 60, "max": 79, "label": "HIGH", "action": "Urgent CSM intervention"},
{"name": "medium", "min": 40, "max": 59, "label": "MEDIUM", "action": "Proactive outreach"},
{"name": "low", "min": 0, "max": 39, "label": "LOW", "action": "Standard monitoring"},
]
WARNING_SEVERITY: Dict[str, int] = {
"critical": 4,
"high": 3,
"medium": 2,
"low": 1,
}
# Intervention playbooks per tier
INTERVENTION_PLAYBOOKS: Dict[str, List[str]] = {
"critical": [
"Schedule executive-to-executive call within 48 hours",
"Create detailed save plan with specific value milestones",
"Offer concessions or contract restructuring if needed",
"Assign dedicated rescue team (CSM + Solutions Engineer)",
"Daily internal stand-up on account status until stabilised",
"Prepare competitive displacement defence strategy",
],
"high": [
"Schedule urgent CSM call within 1 week",
"Conduct root cause analysis on declining metrics",
"Build 30-day recovery plan with measurable checkpoints",
"Re-engage executive sponsor for alignment meeting",
"Accelerate any pending feature requests or bug fixes",
"Increase touch frequency to weekly until improvement",
],
"medium": [
"Schedule proactive check-in within 2 weeks",
"Share relevant success stories and best practices",
"Propose training session or product walkthrough",
"Review current usage against success plan goals",
"Identify and address any unvoiced concerns",
"Bi-weekly monitoring until score improves to Low",
],
"low": [
"Maintain standard touch cadence",
"Share product updates and new feature announcements",
"Monitor health score trends monthly",
"Proactively share relevant industry insights",
"Prepare for upcoming renewal conversations (if within 90 days)",
],
}
SATISFACTION_TREND_SCORES: Dict[str, float] = {
"improving": 10.0,
"stable": 30.0,
"declining": 70.0,
"critical": 95.0,
}
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Return numerator / denominator, or *default* when denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def clamp(value: float, lo: float = 0.0, hi: float = 100.0) -> float:
"""Clamp *value* between *lo* and *hi*."""
return max(lo, min(hi, value))
def days_until(date_str: Optional[str]) -> Optional[int]:
"""Return days from today until *date_str* (ISO format), or None."""
if not date_str:
return None
try:
target = datetime.strptime(date_str[:10], "%Y-%m-%d")
delta = (target - datetime.now()).days
return max(delta, 0)
except (ValueError, TypeError):
return None
def renewal_urgency_multiplier(days_remaining: Optional[int]) -> float:
"""Return a multiplier (1.0 - 1.5) based on proximity to renewal.
Closer renewals amplify the risk score.
"""
if days_remaining is None:
return 1.0
if days_remaining <= 30:
return 1.5
elif days_remaining <= 60:
return 1.35
elif days_remaining <= 90:
return 1.2
elif days_remaining <= 180:
return 1.1
return 1.0
def get_risk_tier(score: float) -> Dict[str, Any]:
    """Return the risk tier dict matching the score.

    Tiers are ordered highest-first, so the first threshold met wins. Testing
    against the lower bound only also covers fractional scores that would
    otherwise fall between integer tier bounds (e.g. 79.5).
    """
    for tier in RISK_TIERS:
        if score >= tier["min"]:
            return tier
    return RISK_TIERS[-1]  # default to low
# ---------------------------------------------------------------------------
# Signal Scoring
# ---------------------------------------------------------------------------
def score_usage_decline(data: Dict[str, Any]) -> Tuple[float, List[Dict[str, str]]]:
"""Score usage decline signals (0-100, higher = more risk)."""
warnings: List[Dict[str, str]] = []
login_trend = data.get("login_trend", 0) # negative = decline
feature_change = data.get("feature_adoption_change", 0)
dau_mau_change = data.get("dau_mau_change", 0)
# Convert declines to risk scores (0-100)
login_risk = clamp(abs(min(login_trend, 0)) * 3.0) # -33% => 100
feature_risk = clamp(abs(min(feature_change, 0)) * 4.0) # -25% => 100
dau_mau_risk = clamp(abs(min(dau_mau_change, 0)) * 500) # -0.20 => 100
score = round(login_risk * 0.40 + feature_risk * 0.35 + dau_mau_risk * 0.25, 1)
if login_trend <= -20:
warnings.append({"severity": "critical", "signal": f"Login frequency dropped {abs(login_trend)}%"})
elif login_trend <= -10:
warnings.append({"severity": "high", "signal": f"Login frequency declined {abs(login_trend)}%"})
elif login_trend < -5:
warnings.append({"severity": "medium", "signal": f"Login frequency dipping {abs(login_trend)}%"})
if feature_change <= -15:
warnings.append({"severity": "high", "signal": f"Feature adoption dropped {abs(feature_change)}%"})
elif feature_change < -5:
warnings.append({"severity": "medium", "signal": f"Feature adoption declining {abs(feature_change)}%"})
if dau_mau_change <= -0.10:
warnings.append({"severity": "high", "signal": f"DAU/MAU ratio fell by {abs(dau_mau_change):.2f}"})
return score, warnings
def score_engagement_drop(data: Dict[str, Any]) -> Tuple[float, List[Dict[str, str]]]:
"""Score engagement drop signals (0-100, higher = more risk)."""
warnings: List[Dict[str, str]] = []
cancellations = data.get("meeting_cancellations", 0)
response_days = data.get("response_time_days", 1)
nps_change = data.get("nps_change", 0)
cancel_risk = clamp(cancellations * 25.0) # 4 cancellations => 100
response_risk = clamp((response_days - 1) * 15.0) # 1 day baseline; 7+ days => 90+
nps_risk = clamp(abs(min(nps_change, 0)) * 20.0) # -5 => 100
score = round(cancel_risk * 0.30 + response_risk * 0.35 + nps_risk * 0.35, 1)
if cancellations >= 3:
warnings.append({"severity": "critical", "signal": f"{cancellations} meeting cancellations -- customer disengaging"})
elif cancellations >= 2:
warnings.append({"severity": "high", "signal": f"{cancellations} meeting cancellations recently"})
if response_days >= 7:
warnings.append({"severity": "critical", "signal": f"Customer response time: {response_days} days -- going dark"})
elif response_days >= 4:
warnings.append({"severity": "high", "signal": f"Customer response time increasing: {response_days} days"})
if nps_change <= -4:
warnings.append({"severity": "critical", "signal": f"NPS dropped by {abs(nps_change)} points"})
elif nps_change <= -2:
warnings.append({"severity": "high", "signal": f"NPS declined by {abs(nps_change)} points"})
return score, warnings
def score_support_issues(data: Dict[str, Any]) -> Tuple[float, List[Dict[str, str]]]:
"""Score support-related risk signals (0-100, higher = more risk)."""
warnings: List[Dict[str, str]] = []
escalations = data.get("open_escalations", 0)
critical_unresolved = data.get("unresolved_critical", 0)
sat_trend = data.get("satisfaction_trend", "stable").lower()
esc_risk = clamp(escalations * 35.0) # 3 escalations => 100
critical_risk = clamp(critical_unresolved * 50.0) # 2 unresolved critical => 100
sat_risk = SATISFACTION_TREND_SCORES.get(sat_trend, 30.0)
score = round(esc_risk * 0.35 + critical_risk * 0.35 + sat_risk * 0.30, 1)
if critical_unresolved >= 2:
warnings.append({"severity": "critical", "signal": f"{critical_unresolved} unresolved critical support tickets"})
elif critical_unresolved >= 1:
warnings.append({"severity": "high", "signal": "Unresolved critical support ticket"})
if escalations >= 2:
warnings.append({"severity": "high", "signal": f"{escalations} open escalations"})
elif escalations >= 1:
warnings.append({"severity": "medium", "signal": "Open support escalation"})
if sat_trend == "critical":
warnings.append({"severity": "critical", "signal": "Support satisfaction at critical levels"})
elif sat_trend == "declining":
warnings.append({"severity": "high", "signal": "Support satisfaction trending down"})
return score, warnings
def score_relationship_signals(data: Dict[str, Any]) -> Tuple[float, List[Dict[str, str]]]:
"""Score relationship risk signals (0-100, higher = more risk)."""
warnings: List[Dict[str, str]] = []
risk_points = 0.0
champion_left = data.get("champion_left", False)
sponsor_change = data.get("sponsor_change", False)
competitor_mentions = data.get("competitor_mentions", 0)
if champion_left:
risk_points += 45.0
warnings.append({"severity": "critical", "signal": "Internal champion has left the organisation"})
if sponsor_change:
risk_points += 30.0
warnings.append({"severity": "high", "signal": "Executive sponsor change detected"})
if competitor_mentions >= 3:
risk_points += 35.0
warnings.append({"severity": "critical", "signal": f"Customer mentioned competitors {competitor_mentions} times"})
elif competitor_mentions >= 1:
risk_points += competitor_mentions * 12.0
warnings.append({"severity": "medium", "signal": f"Customer mentioned competitor {competitor_mentions} time(s)"})
score = clamp(risk_points)
return round(score, 1), warnings
def score_commercial_factors(data: Dict[str, Any]) -> Tuple[float, List[Dict[str, str]]]:
"""Score commercial risk factors (0-100, higher = more risk)."""
warnings: List[Dict[str, str]] = []
risk_points = 0.0
contract_type = data.get("contract_type", "annual").lower()
pricing_complaints = data.get("pricing_complaints", False)
budget_cuts = data.get("budget_cuts_mentioned", False)
if contract_type == "month-to-month":
risk_points += 30.0
warnings.append({"severity": "medium", "signal": "Month-to-month contract -- low switching cost"})
elif contract_type == "quarterly":
risk_points += 15.0
if pricing_complaints:
risk_points += 35.0
warnings.append({"severity": "high", "signal": "Customer has raised pricing complaints"})
if budget_cuts:
risk_points += 40.0
warnings.append({"severity": "high", "signal": "Customer mentioned budget cuts or cost reduction"})
score = clamp(risk_points)
return round(score, 1), warnings
# ---------------------------------------------------------------------------
# Main Analysis
# ---------------------------------------------------------------------------
def analyse_churn_risk(customer: Dict[str, Any]) -> Dict[str, Any]:
"""Analyse churn risk for a single customer."""
usage_score, usage_warnings = score_usage_decline(customer.get("usage_decline", {}))
engagement_score, engagement_warnings = score_engagement_drop(customer.get("engagement_drop", {}))
support_score, support_warnings = score_support_issues(customer.get("support_issues", {}))
relationship_score, relationship_warnings = score_relationship_signals(customer.get("relationship_signals", {}))
commercial_score, commercial_warnings = score_commercial_factors(customer.get("commercial_factors", {}))
# Weighted raw score
raw_score = (
usage_score * RISK_SIGNAL_WEIGHTS["usage_decline"]
+ engagement_score * RISK_SIGNAL_WEIGHTS["engagement_drop"]
+ support_score * RISK_SIGNAL_WEIGHTS["support_issues"]
+ relationship_score * RISK_SIGNAL_WEIGHTS["relationship_signals"]
+ commercial_score * RISK_SIGNAL_WEIGHTS["commercial_factors"]
)
# Apply renewal urgency multiplier
remaining = days_until(customer.get("contract_end_date"))
multiplier = renewal_urgency_multiplier(remaining)
adjusted_score = clamp(round(raw_score * multiplier, 1))
tier = get_risk_tier(adjusted_score)
# Collect and sort warnings by severity
all_warnings = usage_warnings + engagement_warnings + support_warnings + relationship_warnings + commercial_warnings
all_warnings.sort(key=lambda w: WARNING_SEVERITY.get(w["severity"], 0), reverse=True)
playbook = INTERVENTION_PLAYBOOKS.get(tier["name"], [])
return {
"customer_id": customer.get("customer_id", "unknown"),
"name": customer.get("name", "Unknown"),
"segment": customer.get("segment", "unknown"),
"arr": customer.get("arr", 0),
"risk_score": adjusted_score,
"raw_score": round(raw_score, 1),
"risk_tier": tier["name"],
"risk_label": tier["label"],
"urgency_multiplier": multiplier,
"days_to_renewal": remaining,
"signal_scores": {
"usage_decline": {"score": usage_score, "weight": "30%"},
"engagement_drop": {"score": engagement_score, "weight": "25%"},
"support_issues": {"score": support_score, "weight": "20%"},
"relationship_signals": {"score": relationship_score, "weight": "15%"},
"commercial_factors": {"score": commercial_score, "weight": "10%"},
},
"warning_signals": all_warnings,
"recommended_actions": playbook,
}
# ---------------------------------------------------------------------------
# Output Formatting
# ---------------------------------------------------------------------------
def format_text(results: List[Dict[str, Any]]) -> str:
"""Format results as human-readable text."""
lines: List[str] = []
lines.append("=" * 72)
lines.append("CHURN RISK ANALYSIS REPORT")
lines.append("=" * 72)
lines.append("")
total = len(results)
critical_count = sum(1 for r in results if r["risk_tier"] == "critical")
high_count = sum(1 for r in results if r["risk_tier"] == "high")
medium_count = sum(1 for r in results if r["risk_tier"] == "medium")
low_count = sum(1 for r in results if r["risk_tier"] == "low")
total_arr_at_risk = sum(r["arr"] for r in results if r["risk_tier"] in ("critical", "high"))
lines.append(f"Portfolio Summary: {total} customers analysed")
lines.append(f" Critical Risk: {critical_count}")
lines.append(f" High Risk: {high_count}")
lines.append(f" Medium Risk: {medium_count}")
lines.append(f" Low Risk: {low_count}")
lines.append(f" ARR at Risk (Critical + High): ${total_arr_at_risk:,.0f}")
lines.append("")
# Sort by risk score descending
sorted_results = sorted(results, key=lambda r: r["risk_score"], reverse=True)
for r in sorted_results:
lines.append("-" * 72)
lines.append(f"Customer: {r['name']} ({r['customer_id']})")
lines.append(f"Segment: {r['segment'].title()} | ARR: ${r['arr']:,.0f}")
renewal_str = f"{r['days_to_renewal']} days" if r["days_to_renewal"] is not None else "N/A"
lines.append(f"Risk Score: {r['risk_score']}/100 [{r['risk_label']}] | Renewal: {renewal_str}")
if r["urgency_multiplier"] > 1.0:
lines.append(f" ** Urgency multiplier applied: {r['urgency_multiplier']}x (renewal approaching)")
lines.append("")
lines.append(" Signal Scores:")
for signal_name, signal_data in r["signal_scores"].items():
display_name = signal_name.replace("_", " ").title()
lines.append(f" {display_name:25s} {signal_data['score']:6.1f}/100 ({signal_data['weight']})")
if r["warning_signals"]:
lines.append("")
lines.append(" Warning Signals:")
for w in r["warning_signals"]:
severity_tag = w["severity"].upper()
lines.append(f" [{severity_tag}] {w['signal']}")
if r["recommended_actions"]:
lines.append("")
lines.append(" Recommended Actions:")
for i, action in enumerate(r["recommended_actions"], 1):
lines.append(f" {i}. {action}")
lines.append("")
lines.append("=" * 72)
return "\n".join(lines)
def format_json(results: List[Dict[str, Any]]) -> str:
"""Format results as JSON."""
total = len(results)
output = {
"report": "churn_risk_analysis",
"summary": {
"total_customers": total,
"critical_count": sum(1 for r in results if r["risk_tier"] == "critical"),
"high_count": sum(1 for r in results if r["risk_tier"] == "high"),
"medium_count": sum(1 for r in results if r["risk_tier"] == "medium"),
"low_count": sum(1 for r in results if r["risk_tier"] == "low"),
"total_arr_at_risk": sum(r["arr"] for r in results if r["risk_tier"] in ("critical", "high")),
},
"customers": sorted(results, key=lambda r: r["risk_score"], reverse=True),
}
return json.dumps(output, indent=2)
# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------
def main() -> None:
parser = argparse.ArgumentParser(
description="Analyse churn risk with behavioral signal detection and intervention recommendations."
)
parser.add_argument("input_file", help="Path to JSON file containing customer data")
parser.add_argument(
"--format",
choices=["text", "json"],
default="text",
dest="output_format",
help="Output format (default: text)",
)
args = parser.parse_args()
try:
with open(args.input_file, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {args.input_file}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {args.input_file}: {e}", file=sys.stderr)
sys.exit(1)
customers = data.get("customers", [])
if not customers:
print("Error: No customer records found in input file.", file=sys.stderr)
sys.exit(1)
results = [analyse_churn_risk(c) for c in customers]
if args.output_format == "json":
print(format_json(results))
else:
print(format_text(results))
if __name__ == "__main__":
main()
#!/usr/bin/env python3
"""
Expansion Opportunity Scorer
Analyses customer product adoption depth, maps whitespace for unused
features/products, estimates revenue opportunities, and prioritises
expansion plays by effort vs impact.
Usage:
python expansion_opportunity_scorer.py customer_data.json
python expansion_opportunity_scorer.py customer_data.json --format json
"""
import argparse
import json
import sys
from typing import Any, Dict, List, Optional, Tuple
# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------
# Tier pricing multipliers (relative to current plan price)
TIER_UPLIFT: Dict[str, float] = {
"starter": 1.0,
"professional": 1.8,
"enterprise": 3.0,
"enterprise_plus": 4.5,
}
# Module revenue estimates as a fraction of base ARR
MODULE_REVENUE_FRACTION: Dict[str, float] = {
"core_platform": 0.00, # Already included in base
"analytics_module": 0.15,
"integrations_module": 0.12,
"api_access": 0.10,
"advanced_reporting": 0.18,
"security_module": 0.20,
"automation_module": 0.15,
"collaboration_module": 0.10,
"data_export": 0.08,
"custom_workflows": 0.22,
"sso_module": 0.08,
"audit_module": 0.10,
}
# Effort classification for different expansion types
EFFORT_MAP: Dict[str, str] = {
"upsell_tier": "medium",
"cross_sell_module": "low",
"seat_expansion": "low",
"department_expansion": "high",
}
# Usage thresholds for recommendations
HIGH_USAGE_THRESHOLD = 75 # % usage indicates readiness for more
LOW_ADOPTION_THRESHOLD = 30 # % usage is too low to push expansion there
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Return numerator / denominator, or *default* when denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def clamp(value: float, lo: float = 0.0, hi: float = 100.0) -> float:
"""Clamp *value* between *lo* and *hi*."""
return max(lo, min(hi, value))
def estimate_seat_expansion_revenue(
arr: float, licensed: int, active: int, segment: str
) -> Tuple[float, str]:
"""Estimate revenue from seat expansion.
Returns (estimated_revenue, rationale).
"""
utilisation = safe_divide(active, licensed)
if utilisation >= 0.90:
# Near capacity -- likely needs more seats
growth_factor = {"enterprise": 0.25, "mid-market": 0.20, "smb": 0.15}
factor = growth_factor.get(segment.lower(), 0.15)
revenue = round(arr * factor, 0)
return revenue, f"Seat utilisation at {utilisation:.0%} -- likely needs {int(licensed * factor)} additional seats"
return 0.0, f"Seat utilisation at {utilisation:.0%} -- not yet at expansion threshold"
def estimate_tier_upgrade_revenue(
arr: float, current_tier: str, available_tiers: List[str]
) -> Tuple[float, Optional[str], str]:
"""Estimate revenue from tier upgrade.
Returns (estimated_revenue, target_tier, rationale).
"""
    current_mult = TIER_UPLIFT.get(current_tier.lower(), 1.0)
    # Consider only tiers above the current one and recommend the next tier
    # up (never skip tiers), regardless of the order of *available_tiers*.
    upgrades = [t for t in available_tiers if TIER_UPLIFT.get(t.lower(), 1.0) > current_mult]
    if not upgrades:
        return 0.0, None, "Already on highest tier"
    best_tier = min(upgrades, key=lambda t: TIER_UPLIFT.get(t.lower(), 1.0))
    # Incremental ARR from re-pricing the customer's base ARR at the new tier
    base_arr = safe_divide(arr, current_mult)
    incremental = base_arr * TIER_UPLIFT.get(best_tier.lower(), 1.0) - arr
    rationale = f"Upgrade from {current_tier} to {best_tier} adds ${incremental:,.0f} ARR"
    return round(incremental, 0), best_tier, rationale
def estimate_module_revenue(
arr: float, product_usage: Dict[str, Dict[str, Any]]
) -> List[Dict[str, Any]]:
"""Identify cross-sell opportunities from unadopted modules.
Returns list of opportunity dicts.
"""
opportunities: List[Dict[str, Any]] = []
for module_name, module_data in product_usage.items():
adopted = module_data.get("adopted", False)
usage_pct = module_data.get("usage_pct", 0)
fraction = MODULE_REVENUE_FRACTION.get(module_name.lower(), 0.10)
if not adopted and fraction > 0:
revenue = round(arr * fraction, 0)
opportunities.append({
"module": module_name,
"type": "cross_sell",
"estimated_revenue": revenue,
"effort": "low",
"rationale": f"Module not adopted -- ${revenue:,.0f} potential ARR",
})
elif adopted and usage_pct < LOW_ADOPTION_THRESHOLD and fraction > 0:
# Already adopted but underutilised -- focus on enablement, not expansion
pass # Skip -- needs enablement, not a sales motion
return opportunities
def estimate_department_expansion_revenue(
arr: float,
current_departments: List[str],
potential_departments: List[str],
segment: str,
) -> List[Dict[str, Any]]:
"""Estimate revenue from expanding to new departments."""
opportunities: List[Dict[str, Any]] = []
current_set = {d.lower() for d in current_departments}
per_dept_estimate = safe_divide(arr, max(len(current_departments), 1))
for dept in potential_departments:
if dept.lower() not in current_set:
# Estimate each new department at the average per-department ARR
revenue = round(per_dept_estimate * 0.8, 0) # Slight discount for new dept
opportunities.append({
"department": dept,
"type": "expansion",
"estimated_revenue": revenue,
"effort": "high",
"rationale": f"Expand to {dept} department -- est. ${revenue:,.0f} ARR",
})
return opportunities
# ---------------------------------------------------------------------------
# Priority Scoring
# ---------------------------------------------------------------------------
def priority_score(revenue: float, effort: str) -> float:
"""Calculate priority score (higher = better).
Favours high revenue with low effort.
"""
effort_multiplier = {"low": 3.0, "medium": 2.0, "high": 1.0}
mult = effort_multiplier.get(effort.lower(), 1.0)
# Normalise revenue to a 0-100 scale (assume max single opportunity is $200k)
rev_score = clamp(safe_divide(revenue, 2000.0)) # $200k => 100
return round(rev_score * mult, 1)
# ---------------------------------------------------------------------------
# Main Analysis
# ---------------------------------------------------------------------------
def analyse_expansion(customer: Dict[str, Any]) -> Dict[str, Any]:
"""Analyse expansion opportunities for a single customer."""
arr = customer.get("arr", 0)
segment = customer.get("segment", "mid-market").lower()
contract = customer.get("contract", {})
product_usage = customer.get("product_usage", {})
departments = customer.get("departments", {})
all_opportunities: List[Dict[str, Any]] = []
# 1. Seat expansion
licensed = contract.get("licensed_seats", 0)
active = contract.get("active_seats", 0)
seat_rev, seat_rationale = estimate_seat_expansion_revenue(arr, licensed, active, segment)
if seat_rev > 0:
all_opportunities.append({
"type": "expansion",
"category": "seat_expansion",
"estimated_revenue": seat_rev,
"effort": "low",
"rationale": seat_rationale,
"priority_score": priority_score(seat_rev, "low"),
})
# 2. Tier upgrade
current_tier = contract.get("plan_tier", "").lower()
available_tiers = contract.get("available_tiers", [])
tier_rev, target_tier, tier_rationale = estimate_tier_upgrade_revenue(arr, current_tier, available_tiers)
if tier_rev > 0 and target_tier:
all_opportunities.append({
"type": "upsell",
"category": "tier_upgrade",
"target_tier": target_tier,
"estimated_revenue": tier_rev,
"effort": "medium",
"rationale": tier_rationale,
"priority_score": priority_score(tier_rev, "medium"),
})
# 3. Module cross-sell
module_opps = estimate_module_revenue(arr, product_usage)
for opp in module_opps:
opp["category"] = "module_cross_sell"
opp["priority_score"] = priority_score(opp["estimated_revenue"], opp["effort"])
all_opportunities.append(opp)
# 4. Department expansion
current_depts = departments.get("current", [])
potential_depts = departments.get("potential", [])
dept_opps = estimate_department_expansion_revenue(arr, current_depts, potential_depts, segment)
for opp in dept_opps:
opp["category"] = "department_expansion"
opp["priority_score"] = priority_score(opp["estimated_revenue"], opp["effort"])
all_opportunities.append(opp)
# Sort by priority score descending
all_opportunities.sort(key=lambda o: o["priority_score"], reverse=True)
# Adoption depth summary
total_modules = len(product_usage)
adopted_modules = sum(1 for m in product_usage.values() if m.get("adopted", False))
avg_usage = round(
safe_divide(
sum(m.get("usage_pct", 0) for m in product_usage.values() if m.get("adopted", False)),
max(adopted_modules, 1),
),
1,
)
total_estimated_revenue = sum(o["estimated_revenue"] for o in all_opportunities)
return {
"customer_id": customer.get("customer_id", "unknown"),
"name": customer.get("name", "Unknown"),
"segment": segment,
"arr": arr,
"adoption_summary": {
"total_modules": total_modules,
"adopted_modules": adopted_modules,
"adoption_rate": round(safe_divide(adopted_modules, total_modules) * 100, 1) if total_modules > 0 else 0,
"avg_usage_pct": avg_usage,
"seat_utilisation": round(safe_divide(active, max(licensed, 1)) * 100, 1),
"current_tier": current_tier,
"departments_covered": len(current_depts),
"departments_potential": len(potential_depts),
},
"total_estimated_revenue": round(total_estimated_revenue, 0),
"opportunity_count": len(all_opportunities),
"opportunities": all_opportunities,
}
# ---------------------------------------------------------------------------
# Output Formatting
# ---------------------------------------------------------------------------
def format_text(results: List[Dict[str, Any]]) -> str:
    """Format results as human-readable text."""
    lines: List[str] = []
    lines.append("=" * 72)
    lines.append("EXPANSION OPPORTUNITY REPORT")
    lines.append("=" * 72)
    lines.append("")
    total_rev = sum(r["total_estimated_revenue"] for r in results)
    total_opps = sum(r["opportunity_count"] for r in results)
    lines.append(f"Portfolio Summary: {len(results)} customers")
    lines.append(f" Total Expansion Revenue Potential: ${total_rev:,.0f}")
    lines.append(f" Total Opportunities Identified: {total_opps}")
    lines.append("")
    # Sort customers by total estimated revenue descending
    sorted_results = sorted(results, key=lambda r: r["total_estimated_revenue"], reverse=True)
    for r in sorted_results:
        lines.append("-" * 72)
        lines.append(f"Customer: {r['name']} ({r['customer_id']})")
        lines.append(f"Segment: {r['segment'].title()} | Current ARR: ${r['arr']:,.0f}")
        lines.append(f"Total Expansion Potential: ${r['total_estimated_revenue']:,.0f} ({r['opportunity_count']} opportunities)")
        lines.append("")
        adoption = r["adoption_summary"]
        lines.append(" Adoption Summary:")
        lines.append(f" Modules Adopted: {adoption['adopted_modules']}/{adoption['total_modules']} ({adoption['adoption_rate']}%)")
        lines.append(f" Avg Module Usage: {adoption['avg_usage_pct']}%")
        lines.append(f" Seat Utilisation: {adoption['seat_utilisation']}%")
        lines.append(f" Current Tier: {adoption['current_tier'].title()}")
        lines.append(f" Departments: {adoption['departments_covered']} active, {adoption['departments_potential']} potential")
        if r["opportunities"]:
            lines.append("")
            lines.append(" Opportunities (ranked by priority):")
            for i, opp in enumerate(r["opportunities"], 1):
                opp_type = opp.get("type", "unknown").title()
                category = opp.get("category", "").replace("_", " ").title()
                rev = opp.get("estimated_revenue", 0)
                effort = opp.get("effort", "unknown").title()
                pri = opp.get("priority_score", 0)
                lines.append(f" {i}. [{opp_type}] {category}")
                lines.append(f" Revenue: ${rev:,.0f} | Effort: {effort} | Priority: {pri}")
                lines.append(f" {opp.get('rationale', '')}")
        else:
            lines.append("")
            lines.append(" No expansion opportunities identified at this time.")
        lines.append("")
    lines.append("=" * 72)
    return "\n".join(lines)
def format_json(results: List[Dict[str, Any]]) -> str:
    """Format results as JSON."""
    total_rev = sum(r["total_estimated_revenue"] for r in results)
    total_opps = sum(r["opportunity_count"] for r in results)
    output = {
        "report": "expansion_opportunities",
        "summary": {
            "total_customers": len(results),
            "total_estimated_revenue": total_rev,
            "total_opportunities": total_opps,
        },
        "customers": sorted(results, key=lambda r: r["total_estimated_revenue"], reverse=True),
    }
    return json.dumps(output, indent=2)
# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------
def main() -> None:
    parser = argparse.ArgumentParser(
        description="Score expansion opportunities with adoption analysis and revenue estimation."
    )
    parser.add_argument("input_file", help="Path to JSON file containing customer data")
    parser.add_argument(
        "--format",
        choices=["text", "json"],
        default="text",
        dest="output_format",
        help="Output format (default: text)",
    )
    args = parser.parse_args()
    try:
        with open(args.input_file, "r") as f:
            data = json.load(f)
    except FileNotFoundError:
        print(f"Error: File not found: {args.input_file}", file=sys.stderr)
        sys.exit(1)
    except json.JSONDecodeError as e:
        print(f"Error: Invalid JSON in {args.input_file}: {e}", file=sys.stderr)
        sys.exit(1)
    customers = data.get("customers", [])
    if not customers:
        print("Error: No customer records found in input file.", file=sys.stderr)
        sys.exit(1)
    results = [analyse_expansion(c) for c in customers]
    if args.output_format == "json":
        print(format_json(results))
    else:
        print(format_text(results))
if __name__ == "__main__":
    main()
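To illustrate the roll-up performed in the `format_json` summary block above, here is a minimal, self-contained sketch. The two sample records are invented for the example and carry only the fields the roll-up reads; real records come from `analyse_expansion`:

```python
def summarize(results):
    """Portfolio roll-up mirroring the summary block in format_json."""
    return {
        "total_customers": len(results),
        "total_estimated_revenue": sum(r["total_estimated_revenue"] for r in results),
        "total_opportunities": sum(r["opportunity_count"] for r in results),
    }

# Two invented per-customer records with only the fields the roll-up reads.
sample = [
    {"total_estimated_revenue": 12000.0, "opportunity_count": 2},
    {"total_estimated_revenue": 4500.0, "opportunity_count": 1},
]
print(summarize(sample))
# {'total_customers': 2, 'total_estimated_revenue': 16500.0, 'total_opportunities': 3}
```

Because the roll-up only sums and counts, it stays deterministic for a given input file, which is what makes the report repeatable.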
#!/usr/bin/env python3
"""
Customer Health Score Calculator
Multi-dimensional weighted health scoring across usage, engagement, support,
and relationship dimensions. Produces Red/Yellow/Green classification with
trend analysis and segment-aware benchmarking.
Usage:
    python health_score_calculator.py customer_data.json
    python health_score_calculator.py customer_data.json --format json
"""
import argparse
import json
import sys
from typing import Any, Dict, List, Optional, Tuple
# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------
DIMENSION_WEIGHTS: Dict[str, float] = {
    "usage": 0.30,
    "engagement": 0.25,
    "support": 0.20,
    "relationship": 0.25,
}
# Segment-specific classification bands: class -> (min_score, max_score)
SEGMENT_THRESHOLDS: Dict[str, Dict[str, Tuple[int, int]]] = {
    "enterprise": {"green": (75, 100), "yellow": (50, 74), "red": (0, 49)},
    "mid-market": {"green": (70, 100), "yellow": (45, 69), "red": (0, 44)},
    "smb": {"green": (65, 100), "yellow": (40, 64), "red": (0, 39)},
}
# Benchmarks per segment for normalising raw metrics
SEGMENT_BENCHMARKS: Dict[str, Dict[str, Any]] = {
    "enterprise": {
        "login_frequency_target": 90,
        "feature_adoption_target": 80,
        "dau_mau_target": 0.50,
        "support_ticket_volume_max": 5,
        "meeting_attendance_target": 95,
        "nps_target": 9,
        "csat_target": 4.5,
        "open_tickets_max": 10,
        "escalation_rate_max": 0.25,
        "avg_resolution_hours_max": 72,
        "exec_sponsor_target": 90,
        "multi_threading_target": 5,
    },
    "mid-market": {
        "login_frequency_target": 80,
        "feature_adoption_target": 70,
        "dau_mau_target": 0.40,
        "support_ticket_volume_max": 8,
        "meeting_attendance_target": 85,
        "nps_target": 8,
        "csat_target": 4.0,
        "open_tickets_max": 15,
        "escalation_rate_max": 0.30,
        "avg_resolution_hours_max": 96,
        "exec_sponsor_target": 75,
        "multi_threading_target": 3,
    },
    "smb": {
        "login_frequency_target": 70,
        "feature_adoption_target": 60,
        "dau_mau_target": 0.30,
        "support_ticket_volume_max": 10,
        "meeting_attendance_target": 75,
        "nps_target": 7,
        "csat_target": 3.8,
        "open_tickets_max": 20,
        "escalation_rate_max": 0.40,
        "avg_resolution_hours_max": 120,
        "exec_sponsor_target": 60,
        "multi_threading_target": 2,
    },
}
RENEWAL_SENTIMENT_SCORES: Dict[str, float] = {
    "positive": 100.0,
    "neutral": 60.0,
    "negative": 20.0,
    "unknown": 50.0,
}
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
    """Return numerator / denominator, or *default* when denominator is zero."""
    if denominator == 0:
        return default
    return numerator / denominator
def clamp(value: float, lo: float = 0.0, hi: float = 100.0) -> float:
    """Clamp *value* between *lo* and *hi*."""
    return max(lo, min(hi, value))
def get_benchmarks(segment: str) -> Dict[str, Any]:
    """Return benchmarks for the given segment, falling back to mid-market."""
    return SEGMENT_BENCHMARKS.get(segment.lower(), SEGMENT_BENCHMARKS["mid-market"])
def get_thresholds(segment: str) -> Dict[str, Tuple[int, int]]:
    """Return classification thresholds for the given segment."""
    return SEGMENT_THRESHOLDS.get(segment.lower(), SEGMENT_THRESHOLDS["mid-market"])
def classify(score: float, segment: str) -> str:
    """Return 'green', 'yellow', or 'red' classification."""
    thresholds = get_thresholds(segment)
    if score >= thresholds["green"][0]:
        return "green"
    elif score >= thresholds["yellow"][0]:
        return "yellow"
    return "red"
def trend_direction(current: float, previous: Optional[float]) -> str:
    """Return 'improving', 'declining', 'stable', or 'no_data' (±5-point band)."""
    if previous is None:
        return "no_data"
    diff = current - previous
    if diff > 5:
        return "improving"
    elif diff < -5:
        return "declining"
    return "stable"
# ---------------------------------------------------------------------------
# Dimension Scoring
# ---------------------------------------------------------------------------
def score_usage(data: Dict[str, Any], benchmarks: Dict[str, Any]) -> Tuple[float, List[str]]:
    """Score the usage dimension (0-100).
    Metrics: login_frequency, feature_adoption, dau_mau_ratio.
    """
    recommendations: List[str] = []
    login = clamp(safe_divide(data.get("login_frequency", 0), benchmarks["login_frequency_target"]) * 100)
    adoption = clamp(safe_divide(data.get("feature_adoption", 0), benchmarks["feature_adoption_target"]) * 100)
    dau_mau = clamp(safe_divide(data.get("dau_mau_ratio", 0), benchmarks["dau_mau_target"]) * 100)
    score = round(login * 0.35 + adoption * 0.40 + dau_mau * 0.25, 1)
    if login < 60:
        recommendations.append("Login frequency below target -- schedule product engagement session")
    if adoption < 50:
        recommendations.append("Feature adoption is low -- recommend guided feature walkthrough")
    if dau_mau < 50:
        recommendations.append("DAU/MAU ratio indicates shallow usage -- investigate stickiness barriers")
    return score, recommendations
def score_engagement(data: Dict[str, Any], benchmarks: Dict[str, Any]) -> Tuple[float, List[str]]:
    """Score the engagement dimension (0-100).
    Metrics: support_ticket_volume (inverse), meeting_attendance, nps_score, csat_score.
    """
    recommendations: List[str] = []
    # Lower ticket volume is better -- invert
    ticket_vol = data.get("support_ticket_volume", 0)
    ticket_score = clamp((1.0 - safe_divide(ticket_vol, benchmarks["support_ticket_volume_max"])) * 100)
    attendance = clamp(safe_divide(data.get("meeting_attendance", 0), benchmarks["meeting_attendance_target"]) * 100)
    nps_raw = data.get("nps_score", 5)
    nps_score = clamp(safe_divide(nps_raw, benchmarks["nps_target"]) * 100)
    csat_raw = data.get("csat_score", 3.0)
    csat_score = clamp(safe_divide(csat_raw, benchmarks["csat_target"]) * 100)
    score = round(ticket_score * 0.20 + attendance * 0.30 + nps_score * 0.25 + csat_score * 0.25, 1)
    if attendance < 60:
        recommendations.append("Meeting attendance is low -- re-evaluate meeting cadence and agenda value")
    if nps_raw < 7:
        recommendations.append("NPS below threshold -- conduct a feedback deep-dive with customer")
    if csat_raw < 3.5:
        recommendations.append("CSAT is critically low -- escalate to support leadership")
    return score, recommendations
def score_support(data: Dict[str, Any], benchmarks: Dict[str, Any]) -> Tuple[float, List[str]]:
    """Score the support dimension (0-100).
    Metrics: open_tickets (inverse), escalation_rate (inverse), avg_resolution_hours (inverse).
    """
    recommendations: List[str] = []
    open_tix = data.get("open_tickets", 0)
    open_score = clamp((1.0 - safe_divide(open_tix, benchmarks["open_tickets_max"])) * 100)
    esc_rate = data.get("escalation_rate", 0)
    esc_score = clamp((1.0 - safe_divide(esc_rate, benchmarks["escalation_rate_max"])) * 100)
    res_hours = data.get("avg_resolution_hours", 0)
    res_score = clamp((1.0 - safe_divide(res_hours, benchmarks["avg_resolution_hours_max"])) * 100)
    score = round(open_score * 0.35 + esc_score * 0.35 + res_score * 0.30, 1)
    if open_tix > benchmarks["open_tickets_max"] * 0.5:
        recommendations.append("Open ticket count elevated -- prioritise ticket resolution")
    if esc_rate > benchmarks["escalation_rate_max"] * 0.5:
        recommendations.append("Escalation rate too high -- review support process and training")
    if res_hours > benchmarks["avg_resolution_hours_max"] * 0.5:
        recommendations.append("Resolution time nearing SLA limit -- engage support leadership")
    return score, recommendations
def score_relationship(data: Dict[str, Any], benchmarks: Dict[str, Any]) -> Tuple[float, List[str]]:
    """Score the relationship dimension (0-100).
    Metrics: executive_sponsor_engagement, multi_threading_depth, renewal_sentiment.
    """
    recommendations: List[str] = []
    exec_score = clamp(safe_divide(data.get("executive_sponsor_engagement", 0), benchmarks["exec_sponsor_target"]) * 100)
    threading = data.get("multi_threading_depth", 1)
    thread_score = clamp(safe_divide(threading, benchmarks["multi_threading_target"]) * 100)
    sentiment_str = data.get("renewal_sentiment", "unknown").lower()
    sentiment_score = RENEWAL_SENTIMENT_SCORES.get(sentiment_str, 50.0)
    score = round(exec_score * 0.35 + thread_score * 0.30 + sentiment_score * 0.35, 1)
    if exec_score < 50:
        recommendations.append("Executive sponsor engagement is weak -- schedule executive alignment meeting")
    if threading < 2:
        recommendations.append("Single-threaded relationship -- expand contacts across departments")
    if sentiment_str == "negative":
        recommendations.append("Renewal sentiment is negative -- initiate save plan immediately")
    return score, recommendations
# ---------------------------------------------------------------------------
# Main Scoring
# ---------------------------------------------------------------------------
def calculate_health_score(customer: Dict[str, Any]) -> Dict[str, Any]:
    """Calculate the overall health score for a single customer."""
    segment = customer.get("segment", "mid-market").lower()
    benchmarks = get_benchmarks(segment)
    # Score each dimension
    usage_score, usage_recs = score_usage(customer.get("usage", {}), benchmarks)
    engagement_score, engagement_recs = score_engagement(customer.get("engagement", {}), benchmarks)
    support_score, support_recs = score_support(customer.get("support", {}), benchmarks)
    relationship_score, relationship_recs = score_relationship(customer.get("relationship", {}), benchmarks)
    # Weighted overall
    overall = round(
        usage_score * DIMENSION_WEIGHTS["usage"]
        + engagement_score * DIMENSION_WEIGHTS["engagement"]
        + support_score * DIMENSION_WEIGHTS["support"]
        + relationship_score * DIMENSION_WEIGHTS["relationship"],
        1,
    )
    classification = classify(overall, segment)
    # Trend analysis
    prev = customer.get("previous_period", {})
    trends = {
        "usage": trend_direction(usage_score, prev.get("usage_score")),
        "engagement": trend_direction(engagement_score, prev.get("engagement_score")),
        "support": trend_direction(support_score, prev.get("support_score")),
        "relationship": trend_direction(relationship_score, prev.get("relationship_score")),
    }
    overall_prev = prev.get("overall_score")
    trends["overall"] = trend_direction(overall, overall_prev)
    # Combine recommendations
    all_recs = usage_recs + engagement_recs + support_recs + relationship_recs
    return {
        "customer_id": customer.get("customer_id", "unknown"),
        "name": customer.get("name", "Unknown"),
        "segment": segment,
        "arr": customer.get("arr", 0),
        "overall_score": overall,
        "classification": classification,
        "dimensions": {
            "usage": {"score": usage_score, "weight": "30%", "classification": classify(usage_score, segment)},
            "engagement": {"score": engagement_score, "weight": "25%", "classification": classify(engagement_score, segment)},
            "support": {"score": support_score, "weight": "20%", "classification": classify(support_score, segment)},
            "relationship": {"score": relationship_score, "weight": "25%", "classification": classify(relationship_score, segment)},
        },
        "trends": trends,
        "recommendations": all_recs,
    }
# ---------------------------------------------------------------------------
# Output Formatting
# ---------------------------------------------------------------------------
CLASSIFICATION_LABELS = {
    "green": "HEALTHY",
    "yellow": "NEEDS ATTENTION",
    "red": "AT RISK",
}
def format_text(results: List[Dict[str, Any]]) -> str:
    """Format results as human-readable text."""
    lines: List[str] = []
    lines.append("=" * 72)
    lines.append("CUSTOMER HEALTH SCORE REPORT")
    lines.append("=" * 72)
    lines.append("")
    # Portfolio summary
    total = len(results)
    green_count = sum(1 for r in results if r["classification"] == "green")
    yellow_count = sum(1 for r in results if r["classification"] == "yellow")
    red_count = sum(1 for r in results if r["classification"] == "red")
    avg_score = round(safe_divide(sum(r["overall_score"] for r in results), total), 1)
    lines.append(f"Portfolio Summary: {total} customers")
    lines.append(f" Average Health Score: {avg_score}/100")
    lines.append(f" Green (Healthy): {green_count}")
    lines.append(f" Yellow (Attention): {yellow_count}")
    lines.append(f" Red (At Risk): {red_count}")
    lines.append("")
    arrow = {"improving": "+", "declining": "-", "stable": "=", "no_data": "?"}
    for r in results:
        label = CLASSIFICATION_LABELS.get(r["classification"], "UNKNOWN")
        lines.append("-" * 72)
        lines.append(f"Customer: {r['name']} ({r['customer_id']})")
        lines.append(f"Segment: {r['segment'].title()} | ARR: ${r['arr']:,.0f}")
        lines.append(f"Overall Score: {r['overall_score']}/100 [{label}]")
        lines.append("")
        lines.append(" Dimension Scores:")
        for dim_name, dim_data in r["dimensions"].items():
            dim_label = CLASSIFICATION_LABELS.get(dim_data["classification"], "")
            lines.append(f" {dim_name.title():15s} {dim_data['score']:6.1f}/100 ({dim_data['weight']}) [{dim_label}]")
        lines.append("")
        lines.append(" Trends:")
        for dim_name, direction in r["trends"].items():
            lines.append(f" {dim_name.title():15s} {arrow.get(direction, '?')} {direction}")
        if r["recommendations"]:
            lines.append("")
            lines.append(" Recommendations:")
            for i, rec in enumerate(r["recommendations"], 1):
                lines.append(f" {i}. {rec}")
        lines.append("")
    lines.append("=" * 72)
    return "\n".join(lines)
def format_json(results: List[Dict[str, Any]]) -> str:
    """Format results as JSON."""
    total = len(results)
    output = {
        "report": "customer_health_scores",
        "summary": {
            "total_customers": total,
            "average_score": round(safe_divide(sum(r["overall_score"] for r in results), total), 1),
            "green_count": sum(1 for r in results if r["classification"] == "green"),
            "yellow_count": sum(1 for r in results if r["classification"] == "yellow"),
            "red_count": sum(1 for r in results if r["classification"] == "red"),
        },
        "customers": results,
    }
    return json.dumps(output, indent=2)
# ---------------------------------------------------------------------------
# CLI
# ---------------------------------------------------------------------------
def main() -> None:
    parser = argparse.ArgumentParser(
        description="Calculate multi-dimensional customer health scores with trend analysis."
    )
    parser.add_argument("input_file", help="Path to JSON file containing customer data")
    parser.add_argument(
        "--format",
        choices=["text", "json"],
        default="text",
        dest="output_format",
        help="Output format (default: text)",
    )
    args = parser.parse_args()
    try:
        with open(args.input_file, "r") as f:
            data = json.load(f)
    except FileNotFoundError:
        print(f"Error: File not found: {args.input_file}", file=sys.stderr)
        sys.exit(1)
    except json.JSONDecodeError as e:
        print(f"Error: Invalid JSON in {args.input_file}: {e}", file=sys.stderr)
        sys.exit(1)
    customers = data.get("customers", [])
    if not customers:
        print("Error: No customer records found in input file.", file=sys.stderr)
        sys.exit(1)
    results = [calculate_health_score(c) for c in customers]
    if args.output_format == "json":
        print(format_json(results))
    else:
        print(format_text(results))
if __name__ == "__main__":
    main()
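To make the weighted roll-up concrete, here is a small self-contained sketch of how dimension scores combine under `DIMENSION_WEIGHTS` and how `classify` bands the result for the enterprise segment. The constants are copied from the definitions above; the dimension scores for the sample account are invented:

```python
# Copies of the constants defined above (enterprise bands only).
DIMENSION_WEIGHTS = {"usage": 0.30, "engagement": 0.25, "support": 0.20, "relationship": 0.25}
ENTERPRISE_BANDS = {"green": (75, 100), "yellow": (50, 74), "red": (0, 49)}

def overall(scores):
    """Weighted overall score, mirroring calculate_health_score."""
    return round(sum(scores[d] * w for d, w in DIMENSION_WEIGHTS.items()), 1)

def band(score):
    """Band an overall score, mirroring classify() for enterprise."""
    if score >= ENTERPRISE_BANDS["green"][0]:
        return "green"
    if score >= ENTERPRISE_BANDS["yellow"][0]:
        return "yellow"
    return "red"

# Invented dimension scores for one account.
scores = {"usage": 80.0, "engagement": 70.0, "support": 60.0, "relationship": 50.0}
print(overall(scores), band(overall(scores)))
# 66.0 yellow
```

Note that a strong usage score cannot mask a weak relationship score: with these weights, the account above lands in the yellow band and would surface in the report's attention list.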