Campaign Analytics Expert
Analyze marketing campaign performance, calculate ROI, interpret attribution models, and surface actionable insights from ad and content data.
What this skill does
Gain a clear picture of marketing performance across every channel to understand exactly where your budget drives results. You will receive precise ROI calculations, conversion rates, and attribution insights that reveal which efforts actually lead to sales. Reach for this whenever you need to optimize ad spend, report on funnel health, or make data-backed decisions about future strategies.
name: “campaign-analytics” description: Analyzes campaign performance with multi-touch attribution, funnel conversion analysis, and ROI calculation for marketing optimization. Use when analyzing marketing campaigns, ad performance, attribution models, conversion rates, or calculating marketing ROI, ROAS, CPA, and campaign metrics across channels. license: MIT metadata: version: 1.0.0 author: Alireza Rezvani category: marketing domain: campaign-analytics updated: 2026-02-06 python-tools: attribution_analyzer.py, funnel_analyzer.py, campaign_roi_calculator.py tech-stack: marketing-analytics, attribution-modeling
Campaign Analytics
Production-grade campaign performance analysis with multi-touch attribution modeling, funnel conversion analysis, and ROI calculation. Three Python CLI tools provide deterministic, repeatable analytics using standard library only — no external dependencies, no API calls, no ML models.
Input Requirements
All scripts accept a JSON file as positional input argument. See assets/sample_campaign_data.json for complete examples.
Attribution Analyzer
{
"journeys": [
{
"journey_id": "j1",
"touchpoints": [
{"channel": "organic_search", "timestamp": "2025-10-01T10:00:00", "interaction": "click"},
{"channel": "email", "timestamp": "2025-10-05T14:30:00", "interaction": "open"},
{"channel": "paid_search", "timestamp": "2025-10-08T09:15:00", "interaction": "click"}
],
"converted": true,
"revenue": 500.00
}
]
}
Funnel Analyzer
{
"funnel": {
"stages": ["Awareness", "Interest", "Consideration", "Intent", "Purchase"],
"counts": [10000, 5200, 2800, 1400, 420]
}
}
Campaign ROI Calculator
{
"campaigns": [
{
"name": "Spring Email Campaign",
"channel": "email",
"spend": 5000.00,
"revenue": 25000.00,
"impressions": 50000,
"clicks": 2500,
"leads": 300,
"customers": 45
}
]
}
Input Validation
Before running scripts, verify your JSON is valid and matches the expected schema. Common errors:
- Missing required keys (e.g.,
journeys,funnel.stages,campaigns) → script exits with a descriptiveKeyError - Mismatched array lengths in funnel data (
stagesandcountsmust be the same length) → raisesValueError - Non-numeric monetary values in ROI data → raises
TypeError
Use python -m json.tool your_file.json to validate JSON syntax before passing it to any script.
Output Formats
All scripts support two output formats via the --format flag:
--format text(default): Human-readable tables and summaries for review--format json: Machine-readable JSON for integrations and pipelines
Typical Analysis Workflow
For a complete campaign review, run the three scripts in sequence:
# Step 1 — Attribution: understand which channels drive conversions
python scripts/attribution_analyzer.py campaign_data.json --model time-decay
# Step 2 — Funnel: identify where prospects drop off on the path to conversion
python scripts/funnel_analyzer.py funnel_data.json
# Step 3 — ROI: calculate profitability and benchmark against industry standards
python scripts/campaign_roi_calculator.py campaign_data.json
Use attribution results to identify top-performing channels, then focus funnel analysis on those channels’ segments, and finally validate ROI metrics to prioritize budget reallocation.
How to Use
Attribution Analysis
# Run all 5 attribution models
python scripts/attribution_analyzer.py campaign_data.json
# Run a specific model
python scripts/attribution_analyzer.py campaign_data.json --model time-decay
# JSON output for pipeline integration
python scripts/attribution_analyzer.py campaign_data.json --format json
# Custom time-decay half-life (default: 7 days)
python scripts/attribution_analyzer.py campaign_data.json --model time-decay --half-life 14
Funnel Analysis
# Basic funnel analysis
python scripts/funnel_analyzer.py funnel_data.json
# JSON output
python scripts/funnel_analyzer.py funnel_data.json --format json
Campaign ROI Calculation
# Calculate ROI metrics for all campaigns
python scripts/campaign_roi_calculator.py campaign_data.json
# JSON output
python scripts/campaign_roi_calculator.py campaign_data.json --format json
Scripts
1. attribution_analyzer.py
Implements five industry-standard attribution models to allocate conversion credit across marketing channels:
| Model | Description | Best For |
|---|---|---|
| First-Touch | 100% credit to first interaction | Brand awareness campaigns |
| Last-Touch | 100% credit to last interaction | Direct response campaigns |
| Linear | Equal credit to all touchpoints | Balanced multi-channel evaluation |
| Time-Decay | More credit to recent touchpoints | Short sales cycles |
| Position-Based | 40/20/40 split (first/middle/last) | Full-funnel marketing |
2. funnel_analyzer.py
Analyzes conversion funnels to identify bottlenecks and optimization opportunities:
- Stage-to-stage conversion rates and drop-off percentages
- Automatic bottleneck identification (largest absolute and relative drops)
- Overall funnel conversion rate
- Segment comparison when multiple segments are provided
3. campaign_roi_calculator.py
Calculates comprehensive ROI metrics with industry benchmarking:
- ROI: Return on investment percentage
- ROAS: Return on ad spend ratio
- CPA: Cost per acquisition
- CPL: Cost per lead
- CAC: Customer acquisition cost
- CTR: Click-through rate
- CVR: Conversion rate (leads to customers)
- Flags underperforming campaigns against industry benchmarks
Reference Guides
| Guide | Location | Purpose |
|---|---|---|
| Attribution Models Guide | references/attribution-models-guide.md | Deep dive into 5 models with formulas, pros/cons, selection criteria |
| Campaign Metrics Benchmarks | references/campaign-metrics-benchmarks.md | Industry benchmarks by channel and vertical for CTR, CPC, CPM, CPA, ROAS |
| Funnel Optimization Framework | references/funnel-optimization-framework.md | Stage-by-stage optimization strategies, common bottlenecks, best practices |
Best Practices
- Use multiple attribution models — Compare at least 3 models to triangulate channel value; no single model tells the full story.
- Set appropriate lookback windows — Match your time-decay half-life to your average sales cycle length.
- Segment your funnels — Compare segments (channel, cohort, geography) to identify performance drivers.
- Benchmark against your own history first — Industry benchmarks provide context, but historical data is the most relevant comparison.
- Run ROI analysis at regular intervals — Weekly for active campaigns, monthly for strategic review.
- Include all costs — Factor in creative, tooling, and labor costs alongside media spend for accurate ROI.
- Document A/B tests rigorously — Use the provided template to ensure statistical validity and clear decision criteria.
Limitations
- No statistical significance testing — Scripts provide descriptive metrics only; p-value calculations require external tools.
- Standard library only — No advanced statistical libraries. Suitable for most campaign sizes but not optimized for datasets exceeding 100K journeys.
- Offline analysis — Scripts analyze static JSON snapshots; no real-time data connections or API integrations.
- Single-currency — All monetary values assumed to be in the same currency; no currency conversion support.
- Simplified time-decay — Exponential decay based on configurable half-life; does not account for weekday/weekend or seasonal patterns.
- No cross-device tracking — Attribution operates on provided journey data as-is; cross-device identity resolution must be handled upstream.
Related Skills
- analytics-tracking: For setting up tracking. NOT for analyzing data (that’s this skill).
- ab-test-setup: For designing experiments to test what analytics reveals.
- marketing-ops: For routing insights to the right execution skill.
- paid-ads: For optimizing ad spend based on analytics findings.
A/B Test Analysis
Test Name: [Descriptive test name] Test ID: [Internal tracking ID] Date: [Start Date] - [End Date] Status: [Planning / Running / Complete / Inconclusive]
Hypothesis
If [we change X], then [Y will happen], because [rationale based on data or insight].
Test Design
| Parameter | Detail |
|---|---|
| Variable Tested | [What is being changed] |
| Control (A) | [Description of control variant] |
| Variant (B) | [Description of test variant] |
| Primary Metric | [The main metric being measured] |
| Secondary Metrics | [Additional metrics to monitor] |
| Traffic Split | [50/50, 70/30, etc.] |
| Minimum Sample Size | [Required sample per variant for statistical significance] |
| Minimum Detectable Effect | [Smallest meaningful difference, e.g., 5% lift] |
| Confidence Level | [95% or 99%] |
| Expected Duration | [X days/weeks based on traffic and sample size] |
Targeting
| Criterion | Value |
|---|---|
| Audience | [Who sees the test] |
| Channel | [Where the test runs] |
| Device | [All / Desktop / Mobile] |
| Geography | [Regions included] |
| Exclusions | [Who is excluded and why] |
Results
Primary Metric: [Metric Name]
| Variant | Sample Size | Conversions | Rate | Lift vs Control |
|---|---|---|---|---|
| Control (A) | % | - | ||
| Variant (B) | % | % |
Statistical Significance: [Yes/No] at [X]% confidence P-value: [X.XXX]
Secondary Metrics
| Metric | Control (A) | Variant (B) | Lift | Significant? |
|---|---|---|---|---|
| [Metric 1] | % | [Yes/No] | ||
| [Metric 2] | % | [Yes/No] | ||
| [Metric 3] | % | [Yes/No] |
Segment Analysis
| Segment | Control Rate | Variant Rate | Lift | Notes |
|---|---|---|---|---|
| Desktop | % | % | % | |
| Mobile | % | % | % | |
| New Visitors | % | % | % | |
| Returning Visitors | % | % | % | |
| [Custom Segment] | % | % | % |
Revenue Impact Estimate
| Metric | Value |
|---|---|
| Projected Annual Lift | [X]% |
| Projected Additional Revenue | $[X] |
| Projected Additional Conversions | [X] |
| Confidence in Estimate | [High/Medium/Low] |
Decision
Winner: [Control / Variant / Inconclusive]
Rationale: [Why this decision was made, citing specific metrics and statistical significance]
Implementation Plan:
- [Step 1: e.g., Roll out variant to 100% of traffic]
- [Step 2: e.g., Update creative assets across campaigns]
- [Step 3: e.g., Monitor for X days post-implementation]
- [Step 4: e.g., Document learnings in knowledge base]
Learnings
What we learned:
- [Key learning 1]
- [Key learning 2]
- [Key learning 3]
Follow-up tests to consider:
- [Next test idea based on results]
- [Next test idea based on results]
Quality Checks
- Sample size reached minimum threshold
- Test ran for at least 1 full business cycle (7 days minimum)
- No external factors (holidays, outages, promotions) affected results
- Segments were balanced between variants
- No sample ratio mismatch (SRM) detected
- Results reviewed by at least 2 team members
Template from campaign-analytics skill. Statistical significance calculations require external tools (e.g., online calculators or scipy).
Campaign Performance Report
Report Period: [Start Date] - [End Date] Prepared By: [Name] Date: [Report Date]
Executive Summary
[2-3 sentence summary of overall campaign performance, key wins, and areas of concern.]
Portfolio Overview
| Metric | This Period | Previous Period | Change |
|---|---|---|---|
| Total Spend | $ | $ | % |
| Total Revenue | $ | $ | % |
| Total Profit | $ | $ | % |
| Portfolio ROI | % | % | pp |
| Portfolio ROAS | x | x | % |
| Total Leads | % | ||
| Total Customers | % | ||
| Blended CPA | $ | $ | % |
| Blended CPL | $ | $ | % |
Channel Performance
| Channel | Spend | Revenue | ROI | ROAS | CPA | Leads | Customers |
|---|---|---|---|---|---|---|---|
| $ | $ | % | x | $ | |||
| Paid Search | $ | $ | % | x | $ | ||
| Paid Social | $ | $ | % | x | $ | ||
| Display | $ | $ | % | x | $ | ||
| Organic | $ | $ | % | x | $ | ||
| Total | $ | $ | % | x | $ |
Top Performing Campaigns
1. [Campaign Name]
- Channel: [Channel]
- Spend: $[Amount] | Revenue: $[Amount] | ROI: [X]%
- Key Success Factor: [What made this campaign successful]
2. [Campaign Name]
- Channel: [Channel]
- Spend: $[Amount] | Revenue: $[Amount] | ROI: [X]%
- Key Success Factor: [What made this campaign successful]
3. [Campaign Name]
- Channel: [Channel]
- Spend: $[Amount] | Revenue: $[Amount] | ROI: [X]%
- Key Success Factor: [What made this campaign successful]
Underperforming Campaigns
[Campaign Name]
- Channel: [Channel]
- Issue: [Description of underperformance]
- Benchmark Comparison: [How it compares to benchmarks]
- Recommended Action: [Specific action to take]
[Campaign Name]
- Channel: [Channel]
- Issue: [Description of underperformance]
- Benchmark Comparison: [How it compares to benchmarks]
- Recommended Action: [Specific action to take]
Attribution Analysis
| Channel | First-Touch | Last-Touch | Linear | Time-Decay | Position-Based |
|---|---|---|---|---|---|
| [Channel 1] | $[X] | $[X] | $[X] | $[X] | $[X] |
| [Channel 2] | $[X] | $[X] | $[X] | $[X] | $[X] |
| [Channel 3] | $[X] | $[X] | $[X] | $[X] | $[X] |
Key Insight: [What does the attribution analysis tell us about channel value that single-model analysis would miss?]
Funnel Analysis
| Stage | Count | Conversion Rate | Drop-off | vs. Previous Period |
|---|---|---|---|---|
| Awareness | - | - | % | |
| Interest | % | % | pp | |
| Consideration | % | % | pp | |
| Intent | % | % | pp | |
| Purchase | % | % | pp |
Overall Funnel Conversion: [X]% Primary Bottleneck: [Stage transition with largest drop-off] Recommended Focus: [What to optimize next]
Budget Allocation Recommendations
Based on this period's performance data:
| Channel | Current Allocation | Recommended Allocation | Rationale |
|---|---|---|---|
| [Channel] | [X]% ($[X]) | [X]% ($[X]) | [Reason] |
| [Channel] | [X]% ($[X]) | [X]% ($[X]) | [Reason] |
| [Channel] | [X]% ($[X]) | [X]% ($[X]) | [Reason] |
Action Items
| Priority | Action | Owner | Deadline | Expected Impact |
|---|---|---|---|---|
| High | [Action] | [Name] | [Date] | [Impact] |
| High | [Action] | [Name] | [Date] | [Impact] |
| Medium | [Action] | [Name] | [Date] | [Impact] |
| Low | [Action] | [Name] | [Date] | [Impact] |
Next Period Goals
| Metric | Current | Target | Strategy |
|---|---|---|---|
| Portfolio ROI | [X]% | [X]% | [How] |
| ROAS | [X]x | [X]x | [How] |
| CPA | $[X] | $[X] | [How] |
| Lead Volume | [X] | [X] | [How] |
Report generated using campaign-analytics toolkit. Data source: [Source system/platform].
Channel Performance Comparison
Period: [Start Date] - [End Date] Compared Against: [Previous period / Industry benchmarks / Both] Prepared By: [Name]
Summary
[1-2 sentence overview: which channels are performing best, which need attention, and the overall channel mix health.]
Channel Scorecard
| Channel | Spend | Revenue | Profit | ROI | ROAS | CTR | CPA | CPL | Grade |
|---|---|---|---|---|---|---|---|---|---|
| $ | $ | $ | % | x | % | $ | $ | [A-F] | |
| Paid Search | $ | $ | $ | % | x | % | $ | $ | [A-F] |
| Paid Social | $ | $ | $ | % | x | % | $ | $ | [A-F] |
| Display | $ | $ | $ | % | x | % | $ | $ | [A-F] |
| Organic Search | $ | $ | $ | % | x | % | $ | $ | [A-F] |
| Organic Social | $ | $ | $ | % | x | % | $ | $ | [A-F] |
| Referral | $ | $ | $ | % | x | % | $ | $ | [A-F] |
| Direct | $ | $ | $ | % | x | % | $ | $ | [A-F] |
| Total | $ | $ | $ | % | x | % | $ | $ |
Grading Scale:
- A: Exceeds all benchmarks
- B: Meets or exceeds target benchmarks
- C: Between low and target benchmarks
- D: Below low benchmark on 1+ key metrics
- F: Underperforming on multiple metrics or unprofitable
Channel Deep Dives
[Channel Name]
Performance Summary: [1-2 sentences]
| Metric | Actual | Target | Benchmark | vs. Target | vs. Benchmark |
|---|---|---|---|---|---|
| Spend | $ | $ | - | % | - |
| Revenue | $ | $ | - | % | - |
| ROI | % | % | % | pp | pp |
| ROAS | x | x | x | % | % |
| CTR | % | % | % | pp | pp |
| CPA | $ | $ | $ | % | % |
| CPL | $ | $ | $ | % | % |
| CPC | $ | $ | $ | % | % |
Trend (Last 3 Periods):
| Period | Spend | Revenue | ROI | ROAS | Key Event |
|---|---|---|---|---|---|
| [Period 1] | $ | $ | % | x | [Note] |
| [Period 2] | $ | $ | % | x | [Note] |
| [Current] | $ | $ | % | x | [Note] |
Assessment: [Improving / Stable / Declining]
Action Items:
- [Specific action for this channel]
- [Specific action for this channel]
[Repeat deep dive section for each channel]
Attribution View
How each channel is valued under different attribution models:
| Channel | First-Touch | Last-Touch | Linear | Time-Decay | Position-Based |
|---|---|---|---|---|---|
| [Channel 1] | $ (X%) | $ (X%) | $ (X%) | $ (X%) | $ (X%) |
| [Channel 2] | $ (X%) | $ (X%) | $ (X%) | $ (X%) | $ (X%) |
| [Channel 3] | $ (X%) | $ (X%) | $ (X%) | $ (X%) | $ (X%) |
Insight: [Which channels are over/undervalued by single-touch models?]
Funnel Performance by Channel
| Stage | [Ch 1] | [Ch 2] | [Ch 3] | [Ch 4] | Overall |
|---|---|---|---|---|---|
| Awareness | [Count] | [Count] | [Count] | [Count] | [Count] |
| Interest | [Rate]% | [Rate]% | [Rate]% | [Rate]% | [Rate]% |
| Consideration | [Rate]% | [Rate]% | [Rate]% | [Rate]% | [Rate]% |
| Intent | [Rate]% | [Rate]% | [Rate]% | [Rate]% | [Rate]% |
| Purchase | [Rate]% | [Rate]% | [Rate]% | [Rate]% | [Rate]% |
| Overall | [Rate]% | [Rate]% | [Rate]% | [Rate]% | [Rate]% |
Best Funnel: [Channel with highest overall conversion rate] Biggest Bottleneck: [Channel + stage transition with worst drop-off]
Budget Allocation Analysis
Current vs. Optimal Allocation
| Channel | Current % | Current $ | Recommended % | Recommended $ | Rationale |
|---|---|---|---|---|---|
| [Channel] | % | $ | % | $ | [Why] |
| [Channel] | % | $ | % | $ | [Why] |
| [Channel] | % | $ | % | $ | [Why] |
| [Channel] | % | $ | % | $ | [Why] |
| Total | 100% | $ | 100% | $ |
Reallocation Impact Estimate
| Scenario | Projected Revenue | Projected ROI | Change vs Current |
|---|---|---|---|
| Current allocation | $ | % | - |
| Recommended allocation | $ | % | +% |
| Aggressive growth | $ | % | +% |
| Cost optimization | $ | % | +% |
Competitive Context
| Metric | Our Performance | Industry Average | Gap |
|---|---|---|---|
| Channel Mix Diversity | [X channels active] | [X channels] | |
| Overall ROAS | [X]x | [X]x | |
| Paid vs Organic Split | [X/X]% | [X/X]% | |
| Digital vs Traditional | [X/X]% | [X/X]% |
Recommendations
Immediate Actions (This Week)
- [Action] -- [Expected impact], [Owner]
- [Action] -- [Expected impact], [Owner]
Short-Term (This Month)
- [Action] -- [Expected impact], [Owner]
- [Action] -- [Expected impact], [Owner]
Strategic (This Quarter)
- [Action] -- [Expected impact], [Owner]
- [Action] -- [Expected impact], [Owner]
Template from campaign-analytics skill. Populate with data from attribution_analyzer.py, funnel_analyzer.py, and campaign_roi_calculator.py.
{
"_description": "Expected output from running the 3 scripts against sample_campaign_data.json with --format json",
"attribution_analyzer": {
"_command": "python scripts/attribution_analyzer.py assets/sample_campaign_data.json --format json",
"summary": {
"total_journeys": 8,
"converted_journeys": 6,
"conversion_rate": 75.0,
"total_revenue": 3700.0,
"channels_observed": [
"direct", "display", "email", "organic_search",
"organic_social", "paid_search", "paid_social", "referral"
]
},
"models": {
"first-touch": {
"organic_search": 700.0,
"paid_social": 1200.0,
"display": 350.0,
"organic_social": 800.0,
"referral": 650.0
},
"last-touch": {
"paid_search": 1500.0,
"direct": 2000.0,
"organic_search": 200.0
},
"linear": {
"organic_search": 666.67,
"email": 1003.33,
"paid_search": 718.33,
"paid_social": 300.0,
"direct": 460.0,
"display": 175.0,
"organic_social": 160.0,
"referral": 216.67
},
"time-decay": {
"organic_search": 582.38,
"email": 1053.68,
"paid_search": 881.03,
"paid_social": 178.4,
"direct": 638.82,
"display": 140.62,
"organic_social": 78.48,
"referral": 146.59
},
"position-based": {
"organic_search": 520.0,
"paid_search": 688.33,
"email": 456.67,
"paid_social": 480.0,
"direct": 800.0,
"display": 175.0,
"organic_social": 320.0,
"referral": 260.0
}
}
},
"funnel_analyzer": {
"_command": "python scripts/funnel_analyzer.py assets/sample_campaign_data.json --format json",
"_note": "Uses segment comparison mode since 'segments' key is present in the data",
"rankings": [
{"rank": 1, "segment": "organic", "overall_conversion_rate": 5.6, "total_entries": 5000, "total_conversions": 280},
{"rank": 2, "segment": "paid", "overall_conversion_rate": 3.0, "total_entries": 3000, "total_conversions": 90},
{"rank": 3, "segment": "email", "overall_conversion_rate": 2.5, "total_entries": 2000, "total_conversions": 50}
],
"key_findings": {
"all_segments_bottleneck_absolute": "Awareness -> Interest",
"all_segments_bottleneck_relative": "Intent -> Purchase",
"best_performing_segment": "organic (5.6% overall conversion)",
"worst_performing_segment": "email (2.5% overall conversion)"
}
},
"campaign_roi_calculator": {
"_command": "python scripts/campaign_roi_calculator.py assets/sample_campaign_data.json --format json",
"portfolio_summary": {
"total_campaigns": 5,
"total_spend": 34000.0,
"total_revenue": 99000.0,
"total_profit": 65000.0,
"portfolio_roi_pct": 191.18,
"portfolio_roas": 2.91,
"blended_ctr_pct": 1.04,
"blended_cpl": 27.64,
"blended_cpa": 161.9,
"top_performer": "Spring Email Campaign",
"underperforming_campaigns": [
"Spring Email Campaign",
"Facebook Awareness Q1",
"LinkedIn B2B Outreach"
]
},
"channel_summary": {
"email": {"spend": 5000.0, "revenue": 25000.0, "roi_pct": 400.0, "roas": 5.0},
"paid_search": {"spend": 12000.0, "revenue": 48000.0, "roi_pct": 300.0, "roas": 4.0},
"paid_social": {"spend": 14000.0, "revenue": 17000.0, "roi_pct": 21.43, "roas": 1.21},
"display": {"spend": 3000.0, "revenue": 9000.0, "roi_pct": 200.0, "roas": 3.0}
},
"key_findings": {
"most_profitable_channel": "paid_search ($36,000 profit)",
"highest_roas_channel": "email (5.0x ROAS)",
"unprofitable_campaign": "LinkedIn B2B Outreach (-$1,000 loss)",
"best_ctr": "Spring Email Campaign (5.0%)"
}
}
}
{
"journeys": [
{
"journey_id": "j001",
"touchpoints": [
{"channel": "organic_search", "timestamp": "2025-10-01T10:00:00", "interaction": "click"},
{"channel": "email", "timestamp": "2025-10-05T14:30:00", "interaction": "open"},
{"channel": "paid_search", "timestamp": "2025-10-08T09:15:00", "interaction": "click"}
],
"converted": true,
"revenue": 500.00
},
{
"journey_id": "j002",
"touchpoints": [
{"channel": "paid_social", "timestamp": "2025-10-02T11:00:00", "interaction": "click"},
{"channel": "organic_search", "timestamp": "2025-10-06T16:45:00", "interaction": "click"},
{"channel": "email", "timestamp": "2025-10-09T08:00:00", "interaction": "click"},
{"channel": "direct", "timestamp": "2025-10-10T13:20:00", "interaction": "visit"}
],
"converted": true,
"revenue": 1200.00
},
{
"journey_id": "j003",
"touchpoints": [
{"channel": "display", "timestamp": "2025-10-03T09:30:00", "interaction": "view"},
{"channel": "paid_search", "timestamp": "2025-10-07T10:00:00", "interaction": "click"}
],
"converted": true,
"revenue": 350.00
},
{
"journey_id": "j004",
"touchpoints": [
{"channel": "organic_social", "timestamp": "2025-10-01T08:00:00", "interaction": "click"},
{"channel": "email", "timestamp": "2025-10-04T12:00:00", "interaction": "click"},
{"channel": "paid_search", "timestamp": "2025-10-08T14:00:00", "interaction": "click"},
{"channel": "email", "timestamp": "2025-10-11T09:00:00", "interaction": "click"},
{"channel": "direct", "timestamp": "2025-10-12T16:00:00", "interaction": "visit"}
],
"converted": true,
"revenue": 800.00
},
{
"journey_id": "j005",
"touchpoints": [
{"channel": "paid_social", "timestamp": "2025-10-05T10:00:00", "interaction": "click"},
{"channel": "display", "timestamp": "2025-10-08T11:30:00", "interaction": "view"}
],
"converted": false,
"revenue": 0
},
{
"journey_id": "j006",
"touchpoints": [
{"channel": "referral", "timestamp": "2025-10-06T14:00:00", "interaction": "click"},
{"channel": "email", "timestamp": "2025-10-10T09:30:00", "interaction": "click"},
{"channel": "paid_search", "timestamp": "2025-10-13T11:00:00", "interaction": "click"}
],
"converted": true,
"revenue": 650.00
},
{
"journey_id": "j007",
"touchpoints": [
{"channel": "organic_search", "timestamp": "2025-10-04T08:30:00", "interaction": "click"}
],
"converted": true,
"revenue": 200.00
},
{
"journey_id": "j008",
"touchpoints": [
{"channel": "paid_social", "timestamp": "2025-10-07T13:00:00", "interaction": "click"},
{"channel": "organic_search", "timestamp": "2025-10-09T10:00:00", "interaction": "click"},
{"channel": "email", "timestamp": "2025-10-12T15:00:00", "interaction": "click"}
],
"converted": false,
"revenue": 0
}
],
"funnel": {
"stages": ["Awareness", "Interest", "Consideration", "Intent", "Purchase"],
"counts": [10000, 5200, 2800, 1400, 420]
},
"segments": {
"organic": {
"counts": [5000, 2800, 1600, 850, 280]
},
"paid": {
"counts": [3000, 1500, 750, 350, 90]
},
"email": {
"counts": [2000, 900, 450, 200, 50]
}
},
"stages": ["Awareness", "Interest", "Consideration", "Intent", "Purchase"],
"campaigns": [
{
"name": "Spring Email Campaign",
"channel": "email",
"spend": 5000.00,
"revenue": 25000.00,
"impressions": 50000,
"clicks": 2500,
"leads": 300,
"customers": 45
},
{
"name": "Google Search - Brand",
"channel": "paid_search",
"spend": 12000.00,
"revenue": 48000.00,
"impressions": 200000,
"clicks": 8000,
"leads": 600,
"customers": 120
},
{
"name": "Facebook Awareness Q1",
"channel": "paid_social",
"spend": 8000.00,
"revenue": 12000.00,
"impressions": 500000,
"clicks": 5000,
"leads": 200,
"customers": 25
},
{
"name": "Display Retargeting",
"channel": "display",
"spend": 3000.00,
"revenue": 9000.00,
"impressions": 800000,
"clicks": 1200,
"leads": 80,
"customers": 15
},
{
"name": "LinkedIn B2B Outreach",
"channel": "paid_social",
"spend": 6000.00,
"revenue": 5000.00,
"impressions": 120000,
"clicks": 600,
"leads": 50,
"customers": 5
}
]
}
Attribution Models Guide
Comprehensive reference for multi-touch attribution modeling in marketing analytics. This guide covers the five standard attribution models, their mathematical foundations, selection criteria, and practical application guidelines.
Overview
Attribution modeling answers the question: Which marketing touchpoints deserve credit for conversions? When a customer interacts with multiple channels before converting, attribution models distribute conversion credit across those touchpoints using different rules.
No single model is "correct." Each reveals different aspects of channel performance. Best practice is to run multiple models and compare results to build a complete picture.
Model 1: First-Touch Attribution
How It Works
All conversion credit (100%) goes to the first touchpoint in the customer journey.
Formula
Credit(channel) = Revenue * 1.0 (if channel is first touchpoint)
Credit(channel) = 0 (otherwise)When to Use
- Brand awareness campaigns: Measures which channels bring new prospects into the funnel
- Top-of-funnel optimization: Identifies the best channels for initial discovery
- New market entry: Evaluating which channels generate first contact in new segments
Pros
- Simple to understand and implement
- Clearly identifies awareness-driving channels
- Useful for budget allocation toward customer acquisition
Cons
- Ignores all touchpoints after the first
- Overvalues awareness channels, undervalues conversion channels
- Does not reflect the reality of multi-touch customer journeys
Best For
Marketing teams focused on expanding reach and entering new markets where understanding initial discovery channels is the priority.
Model 2: Last-Touch Attribution
How It Works
All conversion credit (100%) goes to the last touchpoint before conversion.
Formula
Credit(channel) = Revenue * 1.0 (if channel is last touchpoint)
Credit(channel) = 0 (otherwise)When to Use
- Direct response campaigns: Measures which channels close deals
- Bottom-of-funnel optimization: Identifies the most effective conversion channels
- Short sales cycles: When customers typically convert within 1-2 interactions
Pros
- Simple to implement (default in many analytics platforms)
- Highlights channels that directly drive conversions
- Useful for performance marketing optimization
Cons
- Ignores all touchpoints before the last
- Overvalues conversion channels, undervalues awareness channels
- Can lead to cutting awareness spending that actually feeds the pipeline
Best For
Performance marketing teams running direct-response campaigns where the final interaction is the primary lever.
Model 3: Linear Attribution
How It Works
Conversion credit is split equally across all touchpoints in the journey.
Formula
Credit(channel) = Revenue / N (for each of N touchpoints)When to Use
- Balanced multi-channel evaluation: When all touchpoints are considered equally valuable
- Long sales cycles: Where multiple interactions are required
- Content marketing: Where each piece of content plays a role in nurturing
Pros
- Fair distribution across all channels
- Recognizes the contribution of every touchpoint
- Good starting point for teams new to multi-touch attribution
Cons
- Treats all touchpoints equally, which rarely reflects reality
- Does not account for the relative importance of different positions in the journey
- Can dilute the signal of truly impactful touchpoints
Best For
Teams running consistent multi-channel campaigns where every touchpoint is intentionally designed to contribute to conversion.
Model 4: Time-Decay Attribution
How It Works
Touchpoints closer to conversion receive exponentially more credit. Uses a half-life parameter: a touchpoint occurring one half-life before conversion gets 50% of the credit of the converting touchpoint.
Formula
Weight(touchpoint) = e^(-lambda * days_before_conversion)
where lambda = ln(2) / half_life_days
Credit(channel) = Revenue * (Weight / Sum_of_all_weights)Configurable Parameters
| Parameter | Default | Description |
|---|---|---|
| half_life_days | 7 | Days for weight to decay by 50% |
Guidance on Half-Life Selection
| Sales Cycle Length | Recommended Half-Life |
|---|---|
| 1-3 days (impulse) | 1-2 days |
| 1-2 weeks (considered) | 5-7 days |
| 1-3 months (B2B) | 14-21 days |
| 3-6 months (enterprise) | 30-45 days |
| 6-12 months (complex B2B) | 60-90 days |
When to Use
- Short-to-medium sales cycles: Where recent interactions are more influential
- Promotional campaigns: Where urgency and recency matter
- E-commerce: Where the last few interactions before purchase are most impactful
Pros
- Accounts for recency, which aligns with many buying behaviors
- More sophisticated than first/last-touch
- Configurable half-life allows tuning to specific business contexts
Cons
- May undervalue early-stage awareness that planted the seed
- Half-life selection is subjective and requires testing
- More complex to explain to stakeholders
Best For
E-commerce and B2C companies with identifiable sales cycles where recent interactions carry more decision weight.
Model 5: Position-Based Attribution (U-Shaped)
How It Works
40% of credit goes to the first touchpoint, 40% to the last touchpoint, and the remaining 20% is split equally among middle touchpoints.
Formula
Credit(first_channel) = Revenue * 0.40
Credit(last_channel) = Revenue * 0.40
Credit(middle_channel) = Revenue * 0.20 / (N - 2) (for each middle touchpoint)
Special cases:
- 1 touchpoint: 100% credit
- 2 touchpoints: 50% eachWhen to Use
- Full-funnel marketing: Values both awareness (first) and conversion (last)
- Mature marketing programs: With established multi-channel strategies
- B2B marketing: Where both lead generation and deal closure are distinct priorities
Pros
- Recognizes the importance of first and last interactions
- Still gives credit to middle nurturing touchpoints
- Provides a balanced view of the full journey
Cons
- The 40/20/40 split is arbitrary (some businesses may need 30/40/30 or other splits)
- Middle touchpoints get relatively little credit
- May not suit businesses where middle interactions are the primary differentiator
Best For
B2B and enterprise marketing teams running coordinated campaigns across the full customer journey from awareness through conversion.
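The split rules above, including both special cases, fit in a minimal sketch (channel names are illustrative; the list is assumed already sorted by timestamp):

```python
def position_based_credit(channels, revenue):
    """40% to first, 40% to last, 20% split evenly among middle touchpoints."""
    credits = {}
    n = len(channels)
    if n == 1:
        return {channels[0]: revenue}          # single touch: 100% credit
    if n == 2:
        for ch in channels:                    # two touches: 50% each
            credits[ch] = credits.get(ch, 0.0) + revenue * 0.5
        return credits
    credits[channels[0]] = credits.get(channels[0], 0.0) + revenue * 0.4
    credits[channels[-1]] = credits.get(channels[-1], 0.0) + revenue * 0.4
    middle_share = revenue * 0.2 / (n - 2)
    for ch in channels[1:-1]:
        credits[ch] = credits.get(ch, 0.0) + middle_share
    return credits

# Three-touch journey: 40% / 20% / 40% of $100
split = position_based_credit(["organic_search", "email", "paid_search"], 100.0)
```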
Model Comparison Matrix
| Criteria | First-Touch | Last-Touch | Linear | Time-Decay | Position-Based |
|---|---|---|---|---|---|
| Complexity | Low | Low | Low | Medium | Medium |
| Awareness bias | High | None | Neutral | Low | Medium |
| Conversion bias | None | High | Neutral | High | Medium |
| Multi-touch fairness | Poor | Poor | Good | Good | Good |
| Best sales cycle | Any | Short | Long | Short-Medium | Any |
| Stakeholder clarity | High | High | High | Medium | Medium |
Practical Guidelines
Running Multiple Models
Always run at least 3 models and look for channels that rank highly across multiple models. These are your most reliable performers. Channels that rank well in only one model may be overvalued by that model's bias.
Interpreting Divergent Results
When models disagree significantly on a channel's value:
- High in first-touch, low in last-touch: The channel is strong for awareness but does not close. Pair it with stronger conversion channels.
- Low in first-touch, high in last-touch: The channel closes deals but does not generate new prospects. Ensure upstream awareness channels feed it.
- High in linear, low in first/last: The channel plays a critical nurturing role. Cutting it may break the journey without immediately visible impact.
Common Pitfalls
- Over-relying on last-touch: Most analytics platforms default to last-touch, which chronically undervalues awareness spending.
- Ignoring non-converting journeys: Attribution only counts converted journeys. Channels that contribute to unconverted journeys may still have value.
- Confusing correlation with causation: Attribution shows correlation between touchpoints and conversion, not definitive causation.
- Insufficient data volume: Models require statistically meaningful journey counts. With fewer than 100 journeys, results are unreliable.
Data Requirements
Minimum Data
| Field | Required | Description |
|---|---|---|
| journey_id | Yes | Unique identifier for each customer journey |
| touchpoints | Yes | Array of channel interactions with timestamps |
| converted | Yes | Boolean indicating whether the journey converted |
| revenue | Recommended | Conversion value for credit allocation |
Touchpoint Fields
| Field | Required | Description |
|---|---|---|
| channel | Yes | Marketing channel name |
| timestamp | Yes | ISO-format timestamp of the interaction |
| interaction | Optional | Type of interaction (click, view, open, etc.) |
Further Reading
- Google Analytics attribution model comparison documentation
- Facebook/Meta attribution window settings and their impact
- HubSpot multi-touch revenue attribution methodology
- Bizible/Marketo B2B attribution best practices
Campaign Metrics Benchmarks
Industry benchmark reference for marketing campaign performance metrics. Use these benchmarks to contextualize your campaign results, identify underperformance, and set realistic targets.
How to Use This Reference
- Find your industry vertical and channel combination
- Compare your actual metrics to the benchmark ranges
- Use the assessment scale: Below Low = underperforming, Low-Target = below target, Target-High = good, Above High = excellent
- Adjust targets based on your historical performance (your own data is always the best benchmark)
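The assessment scale can be expressed as a small helper. The `lower_is_better` flag is an extension for cost metrics (CPC, CPA, CPL) where the scale inverts; it is not part of the scale stated above:

```python
def assess_metric(value, low, target, high, lower_is_better=False):
    """Map a metric against its (low, target, high) benchmark band."""
    if lower_is_better:
        # For cost metrics, negate so "lower" maps onto "higher is better".
        value, low, target, high = -value, -high, -target, -low
    if value < low:
        return "underperforming"
    if value < target:
        return "below target"
    if value <= high:
        return "good"
    return "excellent"

# Paid search CTR of 4.2% against the (1.5, 3.5, 7.0) band:
assessment = assess_metric(4.2, 1.5, 3.5, 7.0)  # "good"
```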
Click-Through Rate (CTR) Benchmarks
CTR = (Clicks / Impressions) * 100
By Channel (Cross-Industry Average)
| Channel | Low | Target | High | Notes |
|---|---|---|---|---|
| Email | 1.0% | 2.5% | 5.0% | Highly dependent on list quality and segmentation |
| Paid Search (Google) | 1.5% | 3.5% | 7.0% | Brand keywords typically 5-10%, generic 1-3% |
| Paid Social (Facebook) | 0.5% | 1.2% | 3.0% | Video ads trend higher, static lower |
| Paid Social (LinkedIn) | 0.3% | 0.8% | 2.0% | B2B focused, lower volume but higher intent |
| Display Ads | 0.05% | 0.10% | 0.50% | Retargeting typically 0.5-1.0% |
| Organic Search | 1.5% | 3.0% | 8.0% | Position 1 averages 28-31% CTR |
| Organic Social | 0.5% | 1.5% | 4.0% | Platform algorithm changes affect significantly |
| Referral | 1.0% | 3.0% | 6.0% | Quality of referring site matters greatly |
| Direct | 2.0% | 4.0% | 8.0% | Highest intent channel |
By Industry (Paid Search)
| Industry | Average CTR | Low | High |
|---|---|---|---|
| B2B | 2.4% | 1.5% | 4.0% |
| E-commerce | 2.7% | 1.8% | 5.0% |
| Education | 3.3% | 2.0% | 6.0% |
| Finance & Insurance | 2.9% | 1.5% | 5.5% |
| Healthcare | 3.3% | 2.0% | 5.0% |
| Legal | 2.9% | 1.5% | 5.0% |
| Real Estate | 3.7% | 2.5% | 6.0% |
| Retail | 2.5% | 1.5% | 5.0% |
| SaaS | 2.1% | 1.2% | 3.5% |
| Technology | 2.1% | 1.0% | 4.0% |
| Travel & Hospitality | 4.7% | 3.0% | 8.0% |
Cost Per Click (CPC) Benchmarks
CPC = Spend / Clicks
By Channel (USD)
| Channel | Low | Target | High | Notes |
|---|---|---|---|---|
| Google Search | $0.50 | $2.50 | $8.00 | Legal/finance can exceed $50 per click |
| Google Display | $0.10 | $0.50 | $2.00 | Programmatic can be lower |
| Facebook | $0.30 | $1.00 | $3.00 | B2C typically lower than B2B |
| LinkedIn | $2.00 | $5.50 | $12.00 | Highest CPC among social platforms |
| Instagram | $0.40 | $1.20 | $3.50 | Stories ads trending lower |
| Twitter/X | $0.20 | $0.80 | $2.50 | High variability by topic |
| TikTok | $0.10 | $0.50 | $2.00 | Rapidly evolving, currently lower |
By Industry (Google Ads)
| Industry | Average CPC | Range |
|---|---|---|
| Automotive | $2.46 | $1.00-$6.00 |
| B2B | $3.33 | $1.50-$8.00 |
| E-commerce | $1.16 | $0.50-$3.00 |
| Education | $2.40 | $1.00-$5.00 |
| Finance & Insurance | $3.44 | $1.00-$50.00 |
| Healthcare | $2.62 | $1.00-$6.00 |
| Legal | $6.75 | $2.00-$100.00 |
| Real Estate | $2.37 | $1.00-$5.00 |
| SaaS/Technology | $3.80 | $1.50-$10.00 |
| Travel | $1.53 | $0.50-$4.00 |
Cost Per Mille / Thousand Impressions (CPM) Benchmarks
CPM = (Spend / Impressions) * 1000
By Channel (USD)
| Channel | Low | Target | High | Notes |
|---|---|---|---|---|
| Facebook | $3.00 | $8.00 | $15.00 | Q4 holiday season can exceed $20 |
| Instagram | $4.00 | $10.00 | $18.00 | Reels ads trending lower |
| LinkedIn | $8.00 | $25.00 | $50.00 | Premium B2B audience |
| Google Display | $1.00 | $3.50 | $8.00 | Programmatic ranges widely |
| TikTok | $2.00 | $6.00 | $12.00 | Growing platform, rates increasing |
| YouTube | $4.00 | $10.00 | $20.00 | Pre-roll vs discovery ads vary |
| Programmatic Display | $0.50 | $2.00 | $6.00 | Dependent on targeting precision |
Cost Per Acquisition (CPA) Benchmarks
CPA = Spend / Customers Acquired
By Channel (USD)
| Channel | Low | Target | High | Notes |
|---|---|---|---|---|
| Email | $5 | $15 | $40 | Existing list; acquisition cost amortized |
| Paid Search | $20 | $50 | $150 | Highly dependent on industry and competition |
| Paid Social | $15 | $40 | $100 | Retargeting typically lower |
| Display | $30 | $75 | $200 | Awareness-focused; higher CPA expected |
| Organic Search | $5 | $20 | $60 | Excludes SEO investment costs |
| Organic Social | $10 | $30 | $80 | Content production costs excluded |
| Referral | $10 | $25 | $70 | Referral incentive costs included |
By Industry (Across Channels)
| Industry | Average CPA | Acceptable Range |
|---|---|---|
| B2B SaaS | $150-$400 | $75-$700 |
| E-commerce | $25-$80 | $10-$150 |
| Education | $40-$120 | $20-$250 |
| Finance | $75-$200 | $30-$500 |
| Healthcare | $50-$150 | $25-$300 |
| Legal | $100-$300 | $50-$700 |
| Real Estate | $60-$180 | $30-$350 |
| Retail | $15-$50 | $8-$100 |
| Travel | $20-$70 | $10-$150 |
Cost Per Lead (CPL) Benchmarks
CPL = Spend / Leads Generated
By Channel (USD)
| Channel | Low | Target | High |
|---|---|---|---|
| Email | $3 | $10 | $25 |
| Paid Search | $15 | $35 | $90 |
| Paid Social (Facebook) | $8 | $20 | $50 |
| Paid Social (LinkedIn) | $25 | $75 | $150 |
| Display | $20 | $50 | $120 |
| Content Marketing | $10 | $30 | $80 |
| Webinars | $30 | $70 | $150 |
By Industry
| Industry | Average CPL | Range |
|---|---|---|
| B2B SaaS | $50-$150 | $25-$300 |
| E-commerce | $10-$30 | $5-$60 |
| Education | $25-$70 | $15-$150 |
| Financial Services | $40-$120 | $20-$250 |
| Healthcare | $30-$90 | $15-$180 |
| Manufacturing | $50-$120 | $25-$200 |
| Technology | $40-$100 | $20-$200 |
Return on Ad Spend (ROAS) Benchmarks
ROAS = Revenue / Ad Spend
By Channel
| Channel | Low | Target | High | Notes |
|---|---|---|---|---|
| Email | 30x | 42x | 60x | Highest ROAS channel when list is healthy |
| Paid Search (Brand) | 8x | 15x | 30x | Brand terms have high ROAS |
| Paid Search (Generic) | 2x | 4x | 8x | Competitive; ROAS varies widely |
| Paid Social | 1.5x | 3x | 6x | Retargeting typically 4-10x |
| Display | 0.5x | 1.5x | 3x | Often used for awareness; lower direct ROAS |
| Organic Search | 5x | 10x | 20x | Excludes SEO investment amortization |
| Organic Social | 3x | 6x | 12x | Excludes content production costs |
By Industry
| Industry | Minimum Viable ROAS | Target ROAS |
|---|---|---|
| E-commerce (low margin) | 4x | 8x+ |
| E-commerce (high margin) | 2x | 4x+ |
| SaaS | 3x | 6x+ |
| B2B Services | 5x | 10x+ |
| Retail | 3x | 5x+ |
| DTC Brands | 2.5x | 5x+ |
ROAS Calculation Notes
- Breakeven ROAS = 1 / Profit Margin (e.g., 25% margin = 4x breakeven)
- Target ROAS should be at least 2x the breakeven ROAS for sustainable growth
- Always include all costs (media, creative, tools, labor) for true ROAS
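The breakeven arithmetic above in code form (margins as fractions; the 2x safety multiple mirrors the guideline above):

```python
def breakeven_roas(profit_margin):
    """Breakeven ROAS = 1 / profit margin, e.g. 0.25 margin -> 4x."""
    if profit_margin <= 0:
        raise ValueError("profit margin must be a positive fraction")
    return 1.0 / profit_margin

def target_roas(profit_margin, safety_multiple=2.0):
    """Sustainable target: at least 2x the breakeven ROAS."""
    return breakeven_roas(profit_margin) * safety_multiple

# A 25% margin implies a 4.0x breakeven and an 8.0x target.
```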
Conversion Rate Benchmarks
Landing Page Conversion Rate
| Industry | Low | Average | High |
|---|---|---|---|
| B2B SaaS | 2.0% | 4.5% | 9.0% |
| E-commerce | 1.5% | 3.0% | 6.0% |
| Education | 2.5% | 5.5% | 10.0% |
| Finance | 2.0% | 5.0% | 11.0% |
| Healthcare | 2.0% | 4.0% | 8.0% |
| Legal | 3.0% | 7.0% | 13.0% |
| Real Estate | 2.0% | 4.5% | 8.0% |
| Travel | 2.0% | 4.0% | 9.0% |
Email Conversion Rates
| Metric | Low | Average | High |
|---|---|---|---|
| Open Rate | 15% | 22% | 35% |
| Click Rate | 1.0% | 2.5% | 5.0% |
| Click-to-Open Rate | 8% | 12% | 20% |
| Unsubscribe Rate | 0.1% | 0.2% | 0.5% |
Seasonal Adjustments
Campaign benchmarks fluctuate by season. Apply these adjustment factors to normalize your comparisons:
| Quarter | CPC Adjustment | CPM Adjustment | CVR Adjustment |
|---|---|---|---|
| Q1 (Jan-Mar) | -10% to -15% | -15% to -20% | Baseline |
| Q2 (Apr-Jun) | Baseline | Baseline | Baseline |
| Q3 (Jul-Sep) | +5% to +10% | +5% to +10% | -5% |
| Q4 (Oct-Dec) | +15% to +30% | +20% to +40% | +10% to +20% |
Key seasonal events:
- Black Friday/Cyber Monday: CPMs can increase 50-100%
- January: Lowest competition, good for testing
- Back-to-School (Aug-Sep): Education and retail spike
- Tax Season (Jan-Apr): Finance vertical spike
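A quick way to apply the adjustment factors above when comparing a peak-quarter metric to baseline (the +20% Q4 factor is illustrative, drawn from the Q4 CPC range in the table):

```python
def normalize_to_baseline(observed, seasonal_adjustment):
    """Strip a seasonal adjustment factor out of an observed metric.

    seasonal_adjustment is the quarter's expected shift as a fraction,
    e.g. +0.20 for a Q4 CPC running 20% above baseline.
    """
    return observed / (1 + seasonal_adjustment)

# A $3.00 Q4 CPC with a +20% seasonal factor is ~$2.50 at baseline.
baseline_cpc = normalize_to_baseline(3.00, 0.20)
```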
Using Benchmarks Effectively
Do
- Compare against your own historical data first, then industry benchmarks
- Account for seasonality when comparing time periods
- Consider your funnel position (awareness vs conversion campaigns have different benchmarks)
- Update benchmarks annually as industry norms shift
Do Not
- Treat benchmarks as absolute targets (your business context matters more)
- Compare across industries without adjustment
- Ignore sample size (small campaigns have high variance)
- Use benchmarks to justify cutting channels without understanding their full-funnel role
Funnel Optimization Framework
A stage-by-stage guide to diagnosing and improving marketing and sales funnel performance. Use this framework alongside the funnel_analyzer.py tool to identify bottlenecks and implement targeted optimizations.
The Standard Marketing Funnel
AWARENESS (Impressions, Reach)
|
INTEREST (Clicks, Engagement)
|
CONSIDERATION (Leads, Sign-ups)
|
INTENT (Demos, Trials, Cart Adds)
|
PURCHASE (Customers, Revenue)
|
RETENTION (Repeat, Upsell, Referral)
Each transition between stages represents a conversion point. The funnel analyzer measures these transitions and identifies where the largest drop-offs occur.
Stage-by-Stage Optimization
Stage 1: Awareness to Interest
What it measures: How effectively you capture attention and generate initial engagement.
Healthy conversion rate: 2-8% (varies widely by channel)
Common bottlenecks:
- Poor targeting: Reaching the wrong audience
- Weak creative: Ads that do not stand out or communicate value
- Message-market mismatch: Content that does not resonate with the audience's needs
- Low brand recognition: No trust or familiarity established
Optimization tactics:
| Tactic | Expected Impact | Effort |
|---|---|---|
| Audience refinement (lookalike, interest targeting) | High | Medium |
| Creative testing (3-5 variants per campaign) | High | Medium |
| Headline optimization (clear value proposition) | Medium | Low |
| Channel diversification (test new platforms) | Medium | High |
| Retargeting past engagers | Medium | Low |
Key metrics to track:
- Impressions and reach
- CTR by creative variant
- Cost per engagement
- Brand lift (if measured)
Stage 2: Interest to Consideration
What it measures: How well you convert initial interest into genuine evaluation.
Healthy conversion rate: 10-30%
Common bottlenecks:
- Landing page disconnect: The page does not match the ad promise
- Poor user experience: Slow load times, confusing layout, mobile issues
- Missing social proof: No testimonials, case studies, or trust signals
- Unclear value proposition: Visitor does not understand "what's in it for me"
- Friction in lead capture: Too many form fields, unclear CTA
Optimization tactics:
| Tactic | Expected Impact | Effort |
|---|---|---|
| Landing page A/B testing | High | Medium |
| Message match (ad copy = page headline) | High | Low |
| Reduce form fields to essential only | High | Low |
| Add social proof (logos, testimonials, numbers) | Medium | Low |
| Improve page load speed (<3 seconds) | Medium | Medium |
| Mobile optimization | Medium | Medium |
| Add exit-intent offers | Low-Medium | Low |
Key metrics to track:
- Landing page conversion rate
- Bounce rate
- Time on page
- Form abandonment rate
Stage 3: Consideration to Intent
What it measures: How effectively you move evaluated prospects toward a purchase decision.
Healthy conversion rate: 15-40%
Common bottlenecks:
- Insufficient nurturing: Leads go cold without follow-up
- Lack of differentiation: Prospects do not understand why you are better than alternatives
- Missing information: Pricing, features, or comparisons not available
- Sales-marketing misalignment: MQLs are not meeting sales expectations
- Poor timing: Follow-up is too slow or too aggressive
Optimization tactics:
| Tactic | Expected Impact | Effort |
|---|---|---|
| Email nurture sequences (5-7 touchpoints) | High | Medium |
| Lead scoring to prioritize sales outreach | High | High |
| Comparison content (vs. competitors) | Medium | Medium |
| Free trial or demo offers | High | Medium |
| Case studies relevant to prospect's industry | Medium | Medium |
| Retargeting with mid-funnel content | Medium | Low |
| Pricing transparency | Medium | Low |
Key metrics to track:
- MQL to SQL conversion rate
- Lead response time
- Email engagement rates (nurture sequences)
- Content engagement (case studies, comparisons)
Stage 4: Intent to Purchase
What it measures: How well you convert ready-to-buy prospects into paying customers.
Healthy conversion rate: 20-50%
Common bottlenecks:
- Complex purchase process: Too many steps, unclear pricing, difficult checkout
- Lack of urgency: No reason to buy now
- Unaddressed objections: Common concerns not proactively handled
- Poor sales process: Inconsistent follow-up, inadequate discovery
- Payment friction: Limited payment options, security concerns
Optimization tactics:
| Tactic | Expected Impact | Effort |
|---|---|---|
| Simplify checkout/purchase flow | High | Medium |
| Add urgency (limited-time offers, scarcity) | Medium | Low |
| Address objections in sales collateral | Medium | Medium |
| Offer guarantees (money-back, free trial extension) | Medium | Low |
| Cart abandonment emails (3-email sequence) | High | Low |
| Live chat or chatbot support at checkout | Medium | Medium |
| Multiple payment options | Low-Medium | Medium |
| Customer success stories at point of purchase | Medium | Low |
Key metrics to track:
- Cart abandonment rate
- Checkout completion rate
- Average deal cycle length
- Win rate (B2B)
- Average order value
Stage 5: Purchase to Retention
What it measures: How well you retain customers and expand their lifetime value.
Healthy retention rate: 70-95% annually (varies by business model)
Common bottlenecks:
- Poor onboarding: Customers do not achieve value quickly
- Lack of engagement: No ongoing communication or community
- Product/service issues: Unmet expectations post-purchase
- No expansion path: No upsell, cross-sell, or referral programs
- Competitor poaching: Better offers from alternatives
Optimization tactics:
| Tactic | Expected Impact | Effort |
|---|---|---|
| Structured onboarding (first 30/60/90 days) | High | High |
| Regular check-ins and health scoring | High | Medium |
| Loyalty programs | Medium | Medium |
| Referral incentives | Medium | Low |
| Cross-sell/upsell email sequences | Medium | Medium |
| Customer community building | Medium | High |
| Proactive support based on usage patterns | High | High |
Key metrics to track:
- Customer retention rate
- Net Promoter Score (NPS)
- Customer Lifetime Value (CLV)
- Expansion revenue
- Churn rate and reasons
Bottleneck Diagnosis Framework
When the funnel analyzer identifies a bottleneck, use this diagnostic framework:
Step 1: Quantify the Problem
- What is the conversion rate at this stage?
- How does it compare to your historical average?
- How does it compare to industry benchmarks?
- What is the absolute number of prospects lost?
Step 2: Segment the Data
Look at the bottleneck broken down by:
- Channel: Is the drop-off worse for certain traffic sources?
- Device: Mobile vs desktop performance gaps
- Geography: Regional differences
- Cohort: Has it changed over time?
- Campaign: Specific campaigns performing worse
Step 3: Identify Root Cause
| Symptom | Likely Root Cause | Diagnostic Action |
|---|---|---|
| High bounce rate | Message mismatch or UX issue | Review landing page vs ad |
| High time on page but low conversion | Confusion or missing CTA | Heatmap analysis |
| Drop-off at form | Too many fields or unclear value | Form analytics review |
| Long time between stages | Insufficient nurturing | Review email engagement |
| Drop-off after pricing page | Pricing concerns | Test pricing presentation |
| High cart abandonment | Checkout friction | Checkout flow analysis |
Step 4: Prioritize Fixes
Use the ICE scoring framework:
- Impact (1-10): How much will fixing this improve the bottleneck?
- Confidence (1-10): How confident are you that this fix will work?
- Ease (1-10): How easy is this to implement?
Score = (Impact + Confidence + Ease) / 3
Prioritize fixes with the highest ICE score.
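The ICE formula above as a small helper; the two fixes and their scores are hypothetical:

```python
def ice_score(impact, confidence, ease):
    """ICE = (Impact + Confidence + Ease) / 3, each component scored 1-10."""
    for v in (impact, confidence, ease):
        if not 1 <= v <= 10:
            raise ValueError("each ICE component must be between 1 and 10")
    return (impact + confidence + ease) / 3

# Two hypothetical fixes for a form-stage bottleneck:
fixes = {
    "reduce form fields": ice_score(7, 8, 9),    # high confidence, easy -> 8.0
    "rebuild checkout flow": ice_score(9, 6, 3), # high impact, hard -> 6.0
}
prioritized = sorted(fixes, key=fixes.get, reverse=True)
```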
Funnel Math and Revenue Impact
Calculating the Revenue Impact of Funnel Improvements
A useful way to prioritize is to calculate how much revenue each percentage point of improvement is worth at each stage.
Formula:
Revenue Impact = Current_Revenue * (1 / Current_Conversion_Rate) * Improvement_Percentage
Example:
| Stage | Current Rate | +1pp Improvement | Revenue Impact |
|---|---|---|---|
| Awareness -> Interest | 5.0% | 6.0% | +20% more leads entering funnel |
| Interest -> Consideration | 25% | 26% | +4% more MQLs |
| Consideration -> Intent | 30% | 31% | +3.3% more SQLs |
| Intent -> Purchase | 40% | 41% | +2.5% more customers |
Key insight: Improvements at the top of the funnel have a multiplied effect on downstream stages. But improvements at the bottom of the funnel convert to revenue faster.
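The table above follows directly from the formula; in this sketch rates are fractions (0.05 means 5%, 0.01 means +1pp) and the stage names are shorthand:

```python
def revenue_lift(current_rate, improvement_pp):
    """Relative lift in downstream volume from a +improvement_pp change
    at one funnel stage: improvement_pp / current_rate."""
    return improvement_pp / current_rate

# Reproduce the +1pp column of the table above:
lifts = {stage: revenue_lift(rate, 0.01) for stage, rate in [
    ("awareness->interest", 0.05),      # +20.0%
    ("interest->consideration", 0.25),  # +4.0%
    ("consideration->intent", 0.30),    # ~+3.3%
    ("intent->purchase", 0.40),         # +2.5%
]}
```

This makes the key insight concrete: the same one-point gain is worth proportionally more at stages with lower baseline conversion rates.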
Common Anti-Patterns
1. Optimizing the Wrong Stage
Fixing a bottom-of-funnel problem when the real issue is top-of-funnel volume. Always diagnose the full funnel before optimizing.
2. Ignoring Segment Differences
Aggregate funnel metrics can hide that one segment performs well while another is broken. Always segment before optimizing.
3. Over-Optimizing for Conversion Rate
Increasing conversion rate by narrowing the funnel (stricter targeting, higher-intent-only leads) can reduce total volume. Balance rate and volume.
4. Single-Metric Focus
Optimizing CTR without watching CPA, or optimizing CPA without watching volume. Always track paired metrics.
5. Not Accounting for Time Lag
B2B funnels can take weeks or months. Measuring a campaign's funnel performance too early produces incomplete data.
Segment Comparison Best Practices
When using the funnel analyzer's segment comparison feature:
- Compare meaningful segments: Channel, campaign type, audience demographic, or time period
- Ensure comparable volume: Do not compare a segment with 100 entries to one with 10,000
- Look for stage-specific differences: Two segments may have similar overall rates but different bottlenecks
- Use insights to inform targeting: If one segment converts better at a specific stage, understand why and apply those lessons
Recommended Review Cadence
| Review Type | Frequency | Focus |
|---|---|---|
| Campaign funnel check | Weekly | Active campaign stage rates |
| Full funnel audit | Monthly | Overall funnel health, bottleneck shifts |
| Segment deep-dive | Monthly | Channel and cohort comparisons |
| Strategic funnel review | Quarterly | Funnel structure, stage definitions, benchmark updates |
| Annual funnel redesign | Annually | Stage definitions, measurement methodology, tool updates |
#!/usr/bin/env python3
"""
Attribution Analyzer - Multi-touch attribution modeling for marketing campaigns.
Implements 5 attribution models:
- first-touch: 100% credit to first interaction
- last-touch: 100% credit to last interaction
- linear: Equal credit across all touchpoints
- time-decay: Exponential decay favoring recent touchpoints
- position-based: 40% first, 40% last, 20% split among middle
Usage:
python attribution_analyzer.py data.json
python attribution_analyzer.py data.json --model time-decay
python attribution_analyzer.py data.json --model time-decay --half-life 14
python attribution_analyzer.py data.json --format json
"""
import argparse
import json
import sys
from datetime import datetime
from typing import Any, Dict, List
MODELS = ["first-touch", "last-touch", "linear", "time-decay", "position-based"]
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def parse_timestamp(ts: str) -> datetime:
"""Parse an ISO-format timestamp string into a datetime object."""
for fmt in ("%Y-%m-%dT%H:%M:%S", "%Y-%m-%d %H:%M:%S", "%Y-%m-%d"):
try:
return datetime.strptime(ts, fmt)
except ValueError:
continue
raise ValueError(f"Cannot parse timestamp: {ts}")
def first_touch_attribution(journeys: List[Dict]) -> Dict[str, float]:
"""First-touch: 100% credit to the first touchpoint in each journey."""
credits: Dict[str, float] = {}
for journey in journeys:
if not journey.get("converted", False):
continue
touchpoints = journey.get("touchpoints", [])
if not touchpoints:
continue
sorted_tp = sorted(touchpoints, key=lambda t: parse_timestamp(t["timestamp"]))
channel = sorted_tp[0]["channel"]
revenue = journey.get("revenue", 1.0)
credits[channel] = credits.get(channel, 0.0) + revenue
return credits
def last_touch_attribution(journeys: List[Dict]) -> Dict[str, float]:
"""Last-touch: 100% credit to the last touchpoint in each journey."""
credits: Dict[str, float] = {}
for journey in journeys:
if not journey.get("converted", False):
continue
touchpoints = journey.get("touchpoints", [])
if not touchpoints:
continue
sorted_tp = sorted(touchpoints, key=lambda t: parse_timestamp(t["timestamp"]))
channel = sorted_tp[-1]["channel"]
revenue = journey.get("revenue", 1.0)
credits[channel] = credits.get(channel, 0.0) + revenue
return credits
def linear_attribution(journeys: List[Dict]) -> Dict[str, float]:
"""Linear: Equal credit split across all touchpoints in each journey."""
credits: Dict[str, float] = {}
for journey in journeys:
if not journey.get("converted", False):
continue
touchpoints = journey.get("touchpoints", [])
if not touchpoints:
continue
revenue = journey.get("revenue", 1.0)
share = safe_divide(revenue, len(touchpoints))
for tp in touchpoints:
channel = tp["channel"]
credits[channel] = credits.get(channel, 0.0) + share
return credits
def time_decay_attribution(journeys: List[Dict], half_life_days: float = 7.0) -> Dict[str, float]:
"""Time-decay: Exponential decay giving more credit to recent touchpoints.
Uses a configurable half-life (in days). Touchpoints closer to conversion
receive exponentially more credit.
"""
import math
credits: Dict[str, float] = {}
decay_rate = math.log(2) / half_life_days
for journey in journeys:
if not journey.get("converted", False):
continue
touchpoints = journey.get("touchpoints", [])
if not touchpoints:
continue
revenue = journey.get("revenue", 1.0)
sorted_tp = sorted(touchpoints, key=lambda t: parse_timestamp(t["timestamp"]))
conversion_time = parse_timestamp(sorted_tp[-1]["timestamp"])
# Calculate raw weights
weights: List[float] = []
for tp in sorted_tp:
tp_time = parse_timestamp(tp["timestamp"])
days_before = (conversion_time - tp_time).total_seconds() / 86400.0
weight = math.exp(-decay_rate * days_before)
weights.append(weight)
total_weight = sum(weights)
if total_weight == 0:
continue
for i, tp in enumerate(sorted_tp):
channel = tp["channel"]
share = safe_divide(weights[i], total_weight) * revenue
credits[channel] = credits.get(channel, 0.0) + share
return credits
def position_based_attribution(journeys: List[Dict]) -> Dict[str, float]:
"""Position-based: 40% first, 40% last, 20% split among middle touchpoints."""
credits: Dict[str, float] = {}
for journey in journeys:
if not journey.get("converted", False):
continue
touchpoints = journey.get("touchpoints", [])
if not touchpoints:
continue
revenue = journey.get("revenue", 1.0)
sorted_tp = sorted(touchpoints, key=lambda t: parse_timestamp(t["timestamp"]))
if len(sorted_tp) == 1:
channel = sorted_tp[0]["channel"]
credits[channel] = credits.get(channel, 0.0) + revenue
elif len(sorted_tp) == 2:
first_channel = sorted_tp[0]["channel"]
last_channel = sorted_tp[-1]["channel"]
credits[first_channel] = credits.get(first_channel, 0.0) + revenue * 0.5
credits[last_channel] = credits.get(last_channel, 0.0) + revenue * 0.5
else:
first_channel = sorted_tp[0]["channel"]
last_channel = sorted_tp[-1]["channel"]
credits[first_channel] = credits.get(first_channel, 0.0) + revenue * 0.4
credits[last_channel] = credits.get(last_channel, 0.0) + revenue * 0.4
middle_count = len(sorted_tp) - 2
middle_share = safe_divide(revenue * 0.2, middle_count)
for tp in sorted_tp[1:-1]:
channel = tp["channel"]
credits[channel] = credits.get(channel, 0.0) + middle_share
return credits
def run_model(model_name: str, journeys: List[Dict], half_life: float = 7.0) -> Dict[str, float]:
"""Dispatch to the appropriate attribution model."""
if model_name == "first-touch":
return first_touch_attribution(journeys)
elif model_name == "last-touch":
return last_touch_attribution(journeys)
elif model_name == "linear":
return linear_attribution(journeys)
elif model_name == "time-decay":
return time_decay_attribution(journeys, half_life)
elif model_name == "position-based":
return position_based_attribution(journeys)
else:
raise ValueError(f"Unknown model: {model_name}. Choose from: {', '.join(MODELS)}")
def compute_summary(journeys: List[Dict]) -> Dict[str, Any]:
"""Compute summary statistics about the journey data."""
total_journeys = len(journeys)
converted = sum(1 for j in journeys if j.get("converted", False))
total_revenue = sum(j.get("revenue", 0.0) for j in journeys if j.get("converted", False))
all_channels = set()
for j in journeys:
for tp in j.get("touchpoints", []):
all_channels.add(tp["channel"])
return {
"total_journeys": total_journeys,
"converted_journeys": converted,
"conversion_rate": round(safe_divide(converted, total_journeys) * 100, 2),
"total_revenue": round(total_revenue, 2),
"channels_observed": sorted(all_channels),
}
def format_text(results: Dict[str, Any]) -> str:
"""Format results as human-readable text."""
lines: List[str] = []
lines.append("=" * 70)
lines.append("MULTI-TOUCH ATTRIBUTION ANALYSIS")
lines.append("=" * 70)
summary = results["summary"]
lines.append("")
lines.append("SUMMARY")
lines.append(f" Total Journeys: {summary['total_journeys']}")
lines.append(f" Converted: {summary['converted_journeys']}")
lines.append(f" Conversion Rate: {summary['conversion_rate']}%")
lines.append(f" Total Revenue: ${summary['total_revenue']:,.2f}")
lines.append(f" Channels Observed: {', '.join(summary['channels_observed'])}")
for model_name, credits in results["models"].items():
lines.append("")
lines.append("-" * 70)
lines.append(f"MODEL: {model_name.upper()}")
lines.append("-" * 70)
if not credits:
lines.append(" No conversions to attribute.")
continue
total_credit = sum(credits.values())
sorted_channels = sorted(credits.items(), key=lambda x: x[1], reverse=True)
lines.append(f" {'Channel':<25} {'Revenue Credit':>15} {'Share':>10}")
lines.append(f" {'-'*25} {'-'*15} {'-'*10}")
for channel, credit in sorted_channels:
pct = safe_divide(credit, total_credit) * 100
lines.append(f" {channel:<25} ${credit:>13,.2f} {pct:>8.1f}%")
lines.append(f" {'TOTAL':<25} ${total_credit:>13,.2f} {'100.0%':>10}")
# Comparison table
if len(results["models"]) > 1:
lines.append("")
lines.append("=" * 70)
lines.append("CROSS-MODEL COMPARISON")
lines.append("=" * 70)
all_channels = set()
for credits in results["models"].values():
all_channels.update(credits.keys())
all_channels_sorted = sorted(all_channels)
model_names = list(results["models"].keys())
header = f" {'Channel':<20}"
for mn in model_names:
short = mn.replace("-", " ").title()
header += f" {short:>14}"
lines.append(header)
lines.append(f" {'-'*20}" + f" {'-'*14}" * len(model_names))
for ch in all_channels_sorted:
row = f" {ch:<20}"
for mn in model_names:
val = results["models"][mn].get(ch, 0.0)
row += f" ${val:>12,.2f}"
lines.append(row)
lines.append("")
return "\n".join(lines)
def main() -> None:
"""Main entry point for the attribution analyzer."""
parser = argparse.ArgumentParser(
description="Multi-touch attribution analyzer for marketing campaigns.",
epilog="Example: python attribution_analyzer.py data.json --model linear --format json",
)
parser.add_argument(
"input_file",
help="Path to JSON file containing journey/touchpoint data",
)
parser.add_argument(
"--model",
choices=MODELS,
default=None,
help="Run a specific attribution model (default: run all 5 models)",
)
parser.add_argument(
"--half-life",
type=float,
default=7.0,
help="Half-life in days for time-decay model (default: 7)",
)
parser.add_argument(
"--format",
choices=["json", "text"],
default="text",
dest="output_format",
help="Output format (default: text)",
)
args = parser.parse_args()
# Load input data
try:
with open(args.input_file, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {args.input_file}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {args.input_file}: {e}", file=sys.stderr)
sys.exit(1)
journeys = data.get("journeys", [])
if not journeys:
print("Error: No 'journeys' array found in input data.", file=sys.stderr)
sys.exit(1)
# Determine which models to run
models_to_run = [args.model] if args.model else MODELS
# Run models
model_results: Dict[str, Dict[str, float]] = {}
for model_name in models_to_run:
credits = run_model(model_name, journeys, args.half_life)
model_results[model_name] = {ch: round(v, 2) for ch, v in credits.items()}
# Build output
results: Dict[str, Any] = {
"summary": compute_summary(journeys),
"models": model_results,
}
if args.output_format == "json":
print(json.dumps(results, indent=2))
else:
print(format_text(results))
if __name__ == "__main__":
main()
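For a quick sanity check of the attribution output, the linear model's credit assignment can be reproduced standalone. This is a hedged sketch rather than the script's exact implementation: it assumes each journey carries a `conversion_value` field alongside `converted` and `touchpoints` (as the sample input data suggests), and splits that value evenly across the journey's touchpoint channels.

```python
from collections import defaultdict

def linear_attribution(journeys):
    """Split each converted journey's value evenly across its touchpoints."""
    credits = defaultdict(float)
    for j in journeys:
        if not j.get("converted"):
            continue  # non-converting journeys earn no credit
        touchpoints = j.get("touchpoints", [])
        if not touchpoints:
            continue
        share = j.get("conversion_value", 0.0) / len(touchpoints)
        for tp in touchpoints:
            credits[tp["channel"]] += share
    return dict(credits)

journeys = [
    {"converted": True, "conversion_value": 300.0, "touchpoints": [
        {"channel": "organic_search"}, {"channel": "email"}, {"channel": "paid_search"}]},
    {"converted": False, "touchpoints": [{"channel": "display"}]},
]
print(linear_attribution(journeys))
# each of the three touched channels receives 300 / 3 = 100.0; display gets nothing
```

Running a single model against the full script (`python attribution_analyzer.py data.json --model linear --format json`) should agree with this hand calculation for the same journeys.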
#!/usr/bin/env python3
"""
Campaign ROI Calculator - Comprehensive campaign ROI and performance metrics.
Calculates:
- ROI (Return on Investment)
- ROAS (Return on Ad Spend)
- CPA (Cost per Acquisition/Customer)
- CPL (Cost per Lead)
- CAC (Customer Acquisition Cost)
- CTR (Click-Through Rate)
- CVR (Conversion Rate - Leads to Customers)
Includes industry benchmarking and underperformance flagging.
Usage:
python campaign_roi_calculator.py campaign_data.json
python campaign_roi_calculator.py campaign_data.json --format json
"""
import argparse
import json
import sys
from typing import Any, Dict, List, Optional
# Industry benchmark ranges by channel
# Format: {metric: {channel: (low, target, high)}}
BENCHMARKS: Dict[str, Dict[str, tuple]] = {
"ctr": {
"email": (1.0, 2.5, 5.0),
"paid_search": (1.5, 3.5, 7.0),
"paid_social": (0.5, 1.2, 3.0),
"display": (0.05, 0.1, 0.5),
"organic_search": (1.5, 3.0, 8.0),
"organic_social": (0.5, 1.5, 4.0),
"referral": (1.0, 3.0, 6.0),
"direct": (2.0, 4.0, 8.0),
"default": (0.5, 2.0, 5.0),
},
"roas": {
"email": (30.0, 42.0, 60.0),
"paid_search": (2.0, 4.0, 8.0),
"paid_social": (1.5, 3.0, 6.0),
"display": (0.5, 1.5, 3.0),
"organic_search": (5.0, 10.0, 20.0),
"organic_social": (3.0, 6.0, 12.0),
"referral": (3.0, 5.0, 10.0),
"direct": (4.0, 8.0, 15.0),
"default": (2.0, 4.0, 8.0),
},
"cpa": {
"email": (5.0, 15.0, 40.0),
"paid_search": (20.0, 50.0, 150.0),
"paid_social": (15.0, 40.0, 100.0),
"display": (30.0, 75.0, 200.0),
"organic_search": (5.0, 20.0, 60.0),
"organic_social": (10.0, 30.0, 80.0),
"referral": (10.0, 25.0, 70.0),
"direct": (5.0, 15.0, 50.0),
"default": (15.0, 45.0, 120.0),
},
}
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def get_benchmark(metric: str, channel: str) -> tuple:
"""Get benchmark range for a metric and channel.
Returns:
Tuple of (low, target, high) for the given metric and channel.
"""
metric_benchmarks = BENCHMARKS.get(metric, {})
return metric_benchmarks.get(channel, metric_benchmarks.get("default", (0, 0, 0)))
def assess_performance(value: float, benchmark: tuple, higher_is_better: bool = True) -> str:
"""Assess a metric value against its benchmark range.
Args:
value: The metric value to assess.
benchmark: Tuple of (low, target, high).
higher_is_better: Whether higher values are better (True for CTR, ROAS; False for CPA).
Returns:
Performance assessment string.
"""
low, target, high = benchmark
if higher_is_better:
if value >= high:
return "excellent"
elif value >= target:
return "good"
elif value >= low:
return "below_target"
else:
return "underperforming"
else:
# For cost metrics, lower is better
if value <= low:
return "excellent"
elif value <= target:
return "good"
elif value <= high:
return "below_target"
else:
return "underperforming"
def calculate_campaign_metrics(campaign: Dict[str, Any]) -> Dict[str, Any]:
"""Calculate all ROI metrics for a single campaign.
Args:
campaign: Dict with keys: name, channel, spend, revenue, impressions, clicks, leads, customers.
Returns:
Dict with all calculated metrics, benchmarks, and assessments.
"""
name = campaign.get("name", "Unnamed Campaign")
channel = campaign.get("channel", "default")
spend = campaign.get("spend", 0.0)
revenue = campaign.get("revenue", 0.0)
impressions = campaign.get("impressions", 0)
clicks = campaign.get("clicks", 0)
leads = campaign.get("leads", 0)
customers = campaign.get("customers", 0)
# Core metrics
roi = safe_divide(revenue - spend, spend) * 100
roas = safe_divide(revenue, spend)
cpa = safe_divide(spend, customers) if customers > 0 else None
cpl = safe_divide(spend, leads) if leads > 0 else None
cac = safe_divide(spend, customers) if customers > 0 else None
ctr = safe_divide(clicks, impressions) * 100 if impressions > 0 else None
cvr = safe_divide(customers, leads) * 100 if leads > 0 else None
cpc = safe_divide(spend, clicks) if clicks > 0 else None
cpm = safe_divide(spend, impressions) * 1000 if impressions > 0 else None
lead_conversion_rate = safe_divide(leads, clicks) * 100 if clicks > 0 else None
# Profit
profit = revenue - spend
# Benchmark assessments
assessments: Dict[str, Any] = {}
flags: List[str] = []
if ctr is not None:
benchmark = get_benchmark("ctr", channel)
assessment = assess_performance(ctr, benchmark, higher_is_better=True)
assessments["ctr"] = {
"value": round(ctr, 2),
"benchmark_range": {"low": benchmark[0], "target": benchmark[1], "high": benchmark[2]},
"assessment": assessment,
}
if assessment == "underperforming":
flags.append(f"CTR ({ctr:.2f}%) is below industry low ({benchmark[0]}%) for {channel}")
    if spend > 0:  # assess ROAS whenever money was spent, even if revenue is zero
benchmark = get_benchmark("roas", channel)
assessment = assess_performance(roas, benchmark, higher_is_better=True)
assessments["roas"] = {
"value": round(roas, 2),
"benchmark_range": {"low": benchmark[0], "target": benchmark[1], "high": benchmark[2]},
"assessment": assessment,
}
if assessment == "underperforming":
flags.append(f"ROAS ({roas:.2f}x) is below industry low ({benchmark[0]}x) for {channel}")
if cpa is not None:
benchmark = get_benchmark("cpa", channel)
assessment = assess_performance(cpa, benchmark, higher_is_better=False)
assessments["cpa"] = {
"value": round(cpa, 2),
"benchmark_range": {"low": benchmark[0], "target": benchmark[1], "high": benchmark[2]},
"assessment": assessment,
}
if assessment == "underperforming":
flags.append(f"CPA (${cpa:.2f}) exceeds industry high (${benchmark[2]:.2f}) for {channel}")
if profit < 0:
flags.append(f"Campaign is unprofitable: ${profit:,.2f} net loss")
# Recommendations
recommendations: List[str] = []
if ctr is not None and assessments.get("ctr", {}).get("assessment") in ("below_target", "underperforming"):
recommendations.append("Improve ad creative and targeting to increase CTR")
if assessments.get("roas", {}).get("assessment") in ("below_target", "underperforming"):
recommendations.append("Review targeting and bid strategy to improve ROAS")
if assessments.get("cpa", {}).get("assessment") in ("below_target", "underperforming"):
recommendations.append("Optimize landing pages and conversion flow to reduce CPA")
if cvr is not None and cvr < 10:
recommendations.append("Lead-to-customer conversion is low; review sales process and lead quality")
if lead_conversion_rate is not None and lead_conversion_rate < 2:
recommendations.append("Click-to-lead rate is low; improve landing page relevance and form experience")
if profit > 0 and assessments.get("roas", {}).get("assessment") in ("good", "excellent"):
recommendations.append("Campaign performing well; consider scaling budget")
return {
"name": name,
"channel": channel,
"metrics": {
"spend": round(spend, 2),
"revenue": round(revenue, 2),
"profit": round(profit, 2),
"roi_pct": round(roi, 2),
"roas": round(roas, 2),
"cpa": round(cpa, 2) if cpa is not None else None,
"cpl": round(cpl, 2) if cpl is not None else None,
"cac": round(cac, 2) if cac is not None else None,
"ctr_pct": round(ctr, 2) if ctr is not None else None,
"cvr_pct": round(cvr, 2) if cvr is not None else None,
"cpc": round(cpc, 2) if cpc is not None else None,
"cpm": round(cpm, 2) if cpm is not None else None,
"lead_conversion_rate_pct": round(lead_conversion_rate, 2) if lead_conversion_rate is not None else None,
"impressions": impressions,
"clicks": clicks,
"leads": leads,
"customers": customers,
},
"assessments": assessments,
"flags": flags,
"recommendations": recommendations,
}
def calculate_portfolio_summary(campaign_results: List[Dict[str, Any]]) -> Dict[str, Any]:
"""Calculate aggregate metrics across all campaigns.
Args:
campaign_results: List of individual campaign result dicts.
Returns:
Portfolio-level summary with totals and weighted averages.
"""
total_spend = sum(c["metrics"]["spend"] for c in campaign_results)
total_revenue = sum(c["metrics"]["revenue"] for c in campaign_results)
total_impressions = sum(c["metrics"]["impressions"] for c in campaign_results)
total_clicks = sum(c["metrics"]["clicks"] for c in campaign_results)
total_leads = sum(c["metrics"]["leads"] for c in campaign_results)
total_customers = sum(c["metrics"]["customers"] for c in campaign_results)
total_profit = total_revenue - total_spend
underperforming = [c["name"] for c in campaign_results if c["flags"]]
top_performers = sorted(
campaign_results,
key=lambda c: c["metrics"]["roi_pct"],
reverse=True,
)
# Channel breakdown
channel_totals: Dict[str, Dict[str, float]] = {}
for c in campaign_results:
ch = c["channel"]
if ch not in channel_totals:
channel_totals[ch] = {"spend": 0, "revenue": 0, "leads": 0, "customers": 0}
channel_totals[ch]["spend"] += c["metrics"]["spend"]
channel_totals[ch]["revenue"] += c["metrics"]["revenue"]
channel_totals[ch]["leads"] += c["metrics"]["leads"]
channel_totals[ch]["customers"] += c["metrics"]["customers"]
channel_summary = {}
for ch, totals in channel_totals.items():
channel_summary[ch] = {
"spend": round(totals["spend"], 2),
"revenue": round(totals["revenue"], 2),
"roi_pct": round(safe_divide(totals["revenue"] - totals["spend"], totals["spend"]) * 100, 2),
"roas": round(safe_divide(totals["revenue"], totals["spend"]), 2),
"leads": int(totals["leads"]),
"customers": int(totals["customers"]),
}
return {
"total_campaigns": len(campaign_results),
"total_spend": round(total_spend, 2),
"total_revenue": round(total_revenue, 2),
"total_profit": round(total_profit, 2),
"portfolio_roi_pct": round(safe_divide(total_profit, total_spend) * 100, 2),
"portfolio_roas": round(safe_divide(total_revenue, total_spend), 2),
"total_impressions": total_impressions,
"total_clicks": total_clicks,
"total_leads": total_leads,
"total_customers": total_customers,
"blended_ctr_pct": round(safe_divide(total_clicks, total_impressions) * 100, 2),
"blended_cpl": round(safe_divide(total_spend, total_leads), 2) if total_leads > 0 else None,
"blended_cpa": round(safe_divide(total_spend, total_customers), 2) if total_customers > 0 else None,
"underperforming_campaigns": underperforming,
"top_performer": top_performers[0]["name"] if top_performers else None,
"channel_summary": channel_summary,
}
def format_text(results: Dict[str, Any]) -> str:
"""Format full results as human-readable text."""
lines: List[str] = []
lines.append("=" * 70)
lines.append("CAMPAIGN ROI ANALYSIS")
lines.append("=" * 70)
# Portfolio summary
summary = results["portfolio_summary"]
lines.append("")
lines.append("PORTFOLIO SUMMARY")
lines.append(f" Total Campaigns: {summary['total_campaigns']}")
lines.append(f" Total Spend: ${summary['total_spend']:>12,.2f}")
lines.append(f" Total Revenue: ${summary['total_revenue']:>12,.2f}")
lines.append(f" Total Profit: ${summary['total_profit']:>12,.2f}")
lines.append(f" Portfolio ROI: {summary['portfolio_roi_pct']}%")
lines.append(f" Portfolio ROAS: {summary['portfolio_roas']}x")
lines.append(f" Blended CTR: {summary['blended_ctr_pct']}%")
if summary["blended_cpl"] is not None:
lines.append(f" Blended CPL: ${summary['blended_cpl']:>12,.2f}")
if summary["blended_cpa"] is not None:
lines.append(f" Blended CPA: ${summary['blended_cpa']:>12,.2f}")
if summary["top_performer"]:
lines.append(f" Top Performer: {summary['top_performer']}")
if summary["underperforming_campaigns"]:
lines.append(f" Flagged: {', '.join(summary['underperforming_campaigns'])}")
# Channel summary
if summary["channel_summary"]:
lines.append("")
lines.append("-" * 70)
lines.append("CHANNEL SUMMARY")
lines.append(f" {'Channel':<20} {'Spend':>12} {'Revenue':>12} {'ROI':>10} {'ROAS':>8}")
lines.append(f" {'-'*20} {'-'*12} {'-'*12} {'-'*10} {'-'*8}")
for ch, cs in sorted(summary["channel_summary"].items()):
lines.append(
f" {ch:<20} ${cs['spend']:>10,.2f} ${cs['revenue']:>10,.2f} "
f"{cs['roi_pct']:>8.1f}% {cs['roas']:>6.2f}x"
)
# Individual campaigns
for campaign in results["campaigns"]:
lines.append("")
lines.append("-" * 70)
lines.append(f"CAMPAIGN: {campaign['name']}")
lines.append(f"Channel: {campaign['channel']}")
lines.append("-" * 70)
m = campaign["metrics"]
lines.append(f" {'Metric':<25} {'Value':>15}")
lines.append(f" {'-'*25} {'-'*15}")
lines.append(f" {'Spend':<25} ${m['spend']:>13,.2f}")
lines.append(f" {'Revenue':<25} ${m['revenue']:>13,.2f}")
lines.append(f" {'Profit':<25} ${m['profit']:>13,.2f}")
lines.append(f" {'ROI':<25} {m['roi_pct']:>13.2f}%")
lines.append(f" {'ROAS':<25} {m['roas']:>13.2f}x")
if m["cpa"] is not None:
lines.append(f" {'CPA':<25} ${m['cpa']:>13,.2f}")
if m["cpl"] is not None:
lines.append(f" {'CPL':<25} ${m['cpl']:>13,.2f}")
if m["cac"] is not None:
lines.append(f" {'CAC':<25} ${m['cac']:>13,.2f}")
if m["ctr_pct"] is not None:
lines.append(f" {'CTR':<25} {m['ctr_pct']:>13.2f}%")
if m["cpc"] is not None:
lines.append(f" {'CPC':<25} ${m['cpc']:>13,.2f}")
if m["cpm"] is not None:
lines.append(f" {'CPM':<25} ${m['cpm']:>13,.2f}")
if m["cvr_pct"] is not None:
lines.append(f" {'Lead-to-Customer CVR':<25} {m['cvr_pct']:>13.2f}%")
if m["lead_conversion_rate_pct"] is not None:
lines.append(f" {'Click-to-Lead Rate':<25} {m['lead_conversion_rate_pct']:>13.2f}%")
# Benchmark assessments
if campaign["assessments"]:
lines.append("")
lines.append(" BENCHMARK ASSESSMENT")
for metric_name, a in campaign["assessments"].items():
br = a["benchmark_range"]
status = a["assessment"].upper().replace("_", " ")
lines.append(
f" {metric_name.upper()}: {a['value']} "
f"[low={br['low']}, target={br['target']}, high={br['high']}] "
f"-> {status}"
)
# Flags
if campaign["flags"]:
lines.append("")
lines.append(" WARNING FLAGS")
for flag in campaign["flags"]:
lines.append(f" ! {flag}")
# Recommendations
if campaign["recommendations"]:
lines.append("")
lines.append(" RECOMMENDATIONS")
for i, rec in enumerate(campaign["recommendations"], 1):
lines.append(f" {i}. {rec}")
lines.append("")
return "\n".join(lines)
def main() -> None:
"""Main entry point for the campaign ROI calculator."""
parser = argparse.ArgumentParser(
description="Calculate campaign ROI, ROAS, CPA, CPL, CAC with industry benchmarking.",
epilog="Example: python campaign_roi_calculator.py campaigns.json --format json",
)
parser.add_argument(
"input_file",
help="Path to JSON file containing campaign data",
)
parser.add_argument(
"--format",
choices=["json", "text"],
default="text",
dest="output_format",
help="Output format (default: text)",
)
args = parser.parse_args()
# Load input data
try:
with open(args.input_file, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {args.input_file}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {args.input_file}: {e}", file=sys.stderr)
sys.exit(1)
campaigns = data.get("campaigns", [])
if not campaigns:
print("Error: No 'campaigns' array found in input data.", file=sys.stderr)
sys.exit(1)
# Calculate metrics for each campaign
campaign_results = [calculate_campaign_metrics(c) for c in campaigns]
# Calculate portfolio summary
portfolio_summary = calculate_portfolio_summary(campaign_results)
results = {
"portfolio_summary": portfolio_summary,
"campaigns": campaign_results,
}
if args.output_format == "json":
print(json.dumps(results, indent=2))
else:
print(format_text(results))
if __name__ == "__main__":
main()
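A minimal campaign record and the core arithmetic the calculator applies to it can be sketched as follows. The figures are hypothetical, but the formulas mirror `calculate_campaign_metrics` exactly:

```python
def safe_divide(numerator, denominator, default=0.0):
    """Mirror of the script's zero-safe division helper."""
    return numerator / denominator if denominator else default

# Hypothetical paid_search campaign, shaped like one entry of the
# "campaigns" array the script expects as input.
campaign = {"name": "Spring Sale", "channel": "paid_search",
            "spend": 5000.0, "revenue": 20000.0,
            "impressions": 60000, "clicks": 1200,
            "leads": 300, "customers": 40}

roi_pct = safe_divide(campaign["revenue"] - campaign["spend"], campaign["spend"]) * 100
roas = safe_divide(campaign["revenue"], campaign["spend"])
cpa = safe_divide(campaign["spend"], campaign["customers"])
ctr_pct = safe_divide(campaign["clicks"], campaign["impressions"]) * 100

print(roi_pct, roas, cpa, ctr_pct)  # 300.0 4.0 125.0 2.0
```

Against the paid_search benchmarks above, this campaign's ROAS of 4.0x sits at the target value and its CTR of 2.0% falls between low (1.5) and target (3.5), so it would be assessed "good" and "below_target" respectively.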
#!/usr/bin/env python3
"""
Funnel Analyzer - Conversion funnel analysis with bottleneck detection.
Analyzes marketing/sales funnels to identify:
- Stage-to-stage conversion rates and drop-off percentages
- Biggest bottleneck (largest absolute and relative drops)
- Overall funnel conversion rate
- Segment comparison when multiple segments are provided
Usage:
python funnel_analyzer.py funnel_data.json
python funnel_analyzer.py funnel_data.json --format json
"""
import argparse
import json
import sys
from typing import Any, Dict, List, Optional
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def analyze_funnel(stages: List[str], counts: List[int]) -> Dict[str, Any]:
"""Analyze a single funnel and return stage-by-stage metrics.
Args:
stages: Ordered list of funnel stage names (top to bottom).
counts: Corresponding counts at each stage.
Returns:
Dictionary with stage metrics, bottleneck info, and overall conversion.
"""
if len(stages) != len(counts):
raise ValueError("Number of stages must match number of counts.")
if not stages:
raise ValueError("Funnel must have at least one stage.")
stage_metrics: List[Dict[str, Any]] = []
max_dropoff_abs = 0
max_dropoff_rel = 0.0
bottleneck_abs: Optional[str] = None
bottleneck_rel: Optional[str] = None
for i, (stage, count) in enumerate(zip(stages, counts)):
metric: Dict[str, Any] = {
"stage": stage,
"count": count,
"cumulative_conversion": round(safe_divide(count, counts[0]) * 100, 2),
}
if i > 0:
prev_count = counts[i - 1]
dropoff = prev_count - count
conversion_rate = safe_divide(count, prev_count) * 100
dropoff_rate = 100 - conversion_rate
metric["from_previous"] = stages[i - 1]
metric["conversion_rate"] = round(conversion_rate, 2)
metric["dropoff_count"] = dropoff
metric["dropoff_rate"] = round(dropoff_rate, 2)
# Track biggest absolute drop-off
if dropoff > max_dropoff_abs:
max_dropoff_abs = dropoff
bottleneck_abs = f"{stages[i-1]} -> {stage}"
# Track biggest relative drop-off
if dropoff_rate > max_dropoff_rel:
max_dropoff_rel = dropoff_rate
bottleneck_rel = f"{stages[i-1]} -> {stage}"
else:
metric["conversion_rate"] = 100.0
metric["dropoff_count"] = 0
metric["dropoff_rate"] = 0.0
stage_metrics.append(metric)
overall_conversion = safe_divide(counts[-1], counts[0]) * 100
return {
"stage_metrics": stage_metrics,
"overall_conversion_rate": round(overall_conversion, 2),
"total_entries": counts[0],
"total_conversions": counts[-1],
"total_lost": counts[0] - counts[-1],
"bottleneck_absolute": {
"transition": bottleneck_abs,
"dropoff_count": max_dropoff_abs,
},
"bottleneck_relative": {
"transition": bottleneck_rel,
"dropoff_rate": round(max_dropoff_rel, 2),
},
}
def compare_segments(segments: Dict[str, Dict[str, Any]], stages: List[str]) -> Dict[str, Any]:
"""Compare funnel performance across segments.
Args:
segments: Dict mapping segment name to {"counts": [...]}.
stages: Shared stage names for all segments.
Returns:
Comparison data with per-segment analysis and relative rankings.
"""
segment_results: Dict[str, Dict[str, Any]] = {}
for seg_name, seg_data in segments.items():
counts = seg_data.get("counts", [])
if len(counts) != len(stages):
raise ValueError(
f"Segment '{seg_name}' has {len(counts)} counts but {len(stages)} stages."
)
segment_results[seg_name] = analyze_funnel(stages, counts)
# Rank segments by overall conversion rate
ranked = sorted(
segment_results.items(),
key=lambda x: x[1]["overall_conversion_rate"],
reverse=True,
)
rankings = [
{
"rank": i + 1,
"segment": name,
"overall_conversion_rate": result["overall_conversion_rate"],
"total_entries": result["total_entries"],
"total_conversions": result["total_conversions"],
}
for i, (name, result) in enumerate(ranked)
]
# Stage-by-stage comparison
stage_comparison: List[Dict[str, Any]] = []
for i, stage in enumerate(stages):
stage_data: Dict[str, Any] = {"stage": stage}
for seg_name in segments:
metrics = segment_results[seg_name]["stage_metrics"][i]
stage_data[seg_name] = {
"count": metrics["count"],
"conversion_rate": metrics["conversion_rate"],
}
stage_comparison.append(stage_data)
return {
"segment_results": segment_results,
"rankings": rankings,
"stage_comparison": stage_comparison,
}
def format_single_funnel_text(analysis: Dict[str, Any], title: str = "FUNNEL") -> str:
"""Format a single funnel analysis as human-readable text."""
lines: List[str] = []
lines.append(f" {title}")
lines.append(f" {'='*60}")
lines.append(f" Total Entries: {analysis['total_entries']:,}")
lines.append(f" Total Conversions: {analysis['total_conversions']:,}")
lines.append(f" Total Lost: {analysis['total_lost']:,}")
lines.append(f" Overall Conversion: {analysis['overall_conversion_rate']}%")
lines.append("")
lines.append(f" {'Stage':<20} {'Count':>10} {'Conv Rate':>12} {'Drop-off':>12} {'Cumulative':>12}")
lines.append(f" {'-'*20} {'-'*10} {'-'*12} {'-'*12} {'-'*12}")
for m in analysis["stage_metrics"]:
stage = m["stage"]
count = m["count"]
conv = f"{m['conversion_rate']:.1f}%"
drop = f"-{m['dropoff_count']:,} ({m['dropoff_rate']:.1f}%)" if m["dropoff_count"] > 0 else "-"
cumul = f"{m['cumulative_conversion']:.1f}%"
lines.append(f" {stage:<20} {count:>10,} {conv:>12} {drop:>12} {cumul:>12}")
lines.append("")
bn_abs = analysis["bottleneck_absolute"]
bn_rel = analysis["bottleneck_relative"]
lines.append(f" BOTTLENECK (Absolute): {bn_abs['transition']} (lost {bn_abs['dropoff_count']:,})")
lines.append(f" BOTTLENECK (Relative): {bn_rel['transition']} ({bn_rel['dropoff_rate']}% drop-off)")
return "\n".join(lines)
def format_text(results: Dict[str, Any]) -> str:
"""Format full results as human-readable text output."""
lines: List[str] = []
lines.append("=" * 70)
lines.append("FUNNEL CONVERSION ANALYSIS")
lines.append("=" * 70)
if "stage_comparison" in results:
# Multi-segment output
lines.append("")
lines.append("SEGMENT RANKINGS")
lines.append(f" {'Rank':>4} {'Segment':<25} {'Conversion':>12} {'Entries':>10} {'Conversions':>12}")
lines.append(f" {'-'*4} {'-'*25} {'-'*12} {'-'*10} {'-'*12}")
for r in results["rankings"]:
lines.append(
f" {r['rank']:>4} {r['segment']:<25} {r['overall_conversion_rate']:>11.2f}% "
f"{r['total_entries']:>10,} {r['total_conversions']:>12,}"
)
lines.append("")
for seg_name, seg_result in results["segment_results"].items():
lines.append("")
lines.append(format_single_funnel_text(seg_result, title=f"SEGMENT: {seg_name.upper()}"))
# Stage comparison table
lines.append("")
lines.append("-" * 70)
lines.append("STAGE-BY-STAGE COMPARISON")
lines.append("-" * 70)
seg_names = list(results["segment_results"].keys())
header = f" {'Stage':<20}"
for sn in seg_names:
header += f" {sn:>20}"
lines.append(header)
lines.append(f" {'-'*20}" + f" {'-'*20}" * len(seg_names))
for sc in results["stage_comparison"]:
row = f" {sc['stage']:<20}"
for sn in seg_names:
data = sc[sn]
row += f" {data['count']:>8,} ({data['conversion_rate']:>5.1f}%)"
lines.append(row)
else:
# Single funnel output
lines.append("")
lines.append(format_single_funnel_text(results))
lines.append("")
return "\n".join(lines)
def main() -> None:
"""Main entry point for the funnel analyzer."""
parser = argparse.ArgumentParser(
description="Analyze conversion funnels with bottleneck detection and segment comparison.",
epilog="Example: python funnel_analyzer.py funnel_data.json --format json",
)
parser.add_argument(
"input_file",
help="Path to JSON file containing funnel data",
)
parser.add_argument(
"--format",
choices=["json", "text"],
default="text",
dest="output_format",
help="Output format (default: text)",
)
args = parser.parse_args()
# Load input data
try:
with open(args.input_file, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {args.input_file}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {args.input_file}: {e}", file=sys.stderr)
sys.exit(1)
# Determine mode: single funnel vs. segment comparison
if "segments" in data:
# Multi-segment mode
stages = data.get("funnel", {}).get("stages", data.get("stages", []))
if not stages:
print("Error: 'stages' list required for segment comparison.", file=sys.stderr)
sys.exit(1)
segments = data["segments"]
if not segments:
print("Error: 'segments' dict is empty.", file=sys.stderr)
sys.exit(1)
results = compare_segments(segments, stages)
elif "funnel" in data:
# Single funnel mode
funnel = data["funnel"]
stages = funnel.get("stages", [])
counts = funnel.get("counts", [])
if not stages or not counts:
print("Error: 'funnel' must contain 'stages' and 'counts' arrays.", file=sys.stderr)
sys.exit(1)
        try:
            results = analyze_funnel(stages, counts)
        except ValueError as e:
            print(f"Error: {e}", file=sys.stderr)
            sys.exit(1)
else:
print("Error: Input must contain 'funnel' or 'segments' key.", file=sys.stderr)
sys.exit(1)
if args.output_format == "json":
print(json.dumps(results, indent=2))
else:
print(format_text(results))
if __name__ == "__main__":
main()
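The bottleneck logic is easy to check by hand on a small funnel. The counts below are hypothetical; note that the absolute and relative bottlenecks can land on different transitions, which is why `analyze_funnel` reports both:

```python
stages = ["visitors", "leads", "opportunities", "customers"]
counts = [10000, 4000, 1200, 300]

# Stage-to-stage conversion and drop-off, as analyze_funnel computes them.
dropoffs = []
for prev, cur, frm, to in zip(counts, counts[1:], stages, stages[1:]):
    conversion = cur / prev * 100
    dropoffs.append((f"{frm} -> {to}", prev - cur, 100 - conversion))

bottleneck_abs = max(dropoffs, key=lambda d: d[1])  # most users lost
bottleneck_rel = max(dropoffs, key=lambda d: d[2])  # steepest percentage drop

print(bottleneck_abs[0])  # visitors -> leads (6,000 lost)
print(bottleneck_rel[0])  # opportunities -> customers (75% drop-off)
print(counts[-1] / counts[0] * 100)  # overall conversion: 3.0%
```

Here the visitors-to-leads step sheds the most raw users, while the final close rate is the weakest percentage conversion; depending on which lever is cheaper to pull, either transition is a defensible optimization target.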
Install this Skill

Skills give your AI agent a consistent, structured approach to this task — better output than a one-off prompt.

npx skills add alirezarezvani/claude-skills --skill marketing-skill/campaign-analytics

Community skill by @alirezarezvani.
Details
- Category: Marketing
- License: MIT
- Author: @alirezarezvani
- Source: GitHub
- Source file: marketing-skill/campaign-analytics/SKILL.md