Tech Debt Tracker
Scan a codebase for technical debt, categorize by severity and effort, and produce a prioritized remediation roadmap with business impact estimates.
What this skill does
Transform hidden code problems into a clear action plan that balances quick fixes with long-term stability. You receive a prioritized roadmap showing exactly which improvements reduce maintenance costs and speed up development the most. Reach for this when planning cleanup sprints, estimating modernization efforts, or justifying engineering time to business stakeholders.
name: tech-debt-tracker
description: Scan codebases for technical debt, score severity, track trends, and generate prioritized remediation plans. Use when users mention tech debt, code quality, refactoring priority, debt scoring, cleanup sprints, or code health assessment. Also use for legacy code modernization planning and maintenance cost estimation.
Tech Debt Tracker
Tier: POWERFUL 🔥
Category: Engineering Process Automation
Expertise: Code Quality, Technical Debt Management, Software Engineering
Overview
Tech debt is one of the most insidious challenges in software development - it compounds over time, slowing down development velocity, increasing maintenance costs, and reducing code quality. This skill provides a comprehensive framework for identifying, analyzing, prioritizing, and tracking technical debt across codebases.
Tech debt isn’t just about messy code - it encompasses architectural shortcuts, missing tests, outdated dependencies, documentation gaps, and infrastructure compromises. Like financial debt, it accrues “interest” through increased development time, higher bug rates, and reduced team velocity.
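The "interest" metaphor can be made concrete with a quick back-of-envelope model. The rates below are illustrative assumptions, not measurements from any real team:

```python
def debt_cost(hours_per_sprint: float, interest_rate: float, sprints: int) -> float:
    """Cumulative hours lost to unaddressed debt, with friction compounding each sprint."""
    total = 0.0
    cost = hours_per_sprint
    for _ in range(sprints):
        total += cost
        cost *= 1 + interest_rate  # each sprint the friction grows a little
    return total

# 4 hours/sprint of friction, growing 5% per sprint, over a year of 2-week sprints
print(round(debt_cost(4, 0.05, 26), 1))  # ≈ 204.5 hours, roughly five unplanned weeks
```

Even modest compounding turns a minor annoyance into weeks of lost capacity, which is the core argument for paying debt down early.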
What This Skill Provides
This skill offers three interconnected tools that form a complete tech debt management system:
- Debt Scanner - Automatically identifies tech debt signals in your codebase
- Debt Prioritizer - Analyzes and prioritizes debt items using cost-of-delay frameworks
- Debt Dashboard - Tracks debt trends over time and provides executive reporting
Together, these tools enable engineering teams to make data-driven decisions about tech debt, balancing new feature development with maintenance work.
Technical Debt Classification Framework
→ See references/debt-frameworks.md for details
Implementation Roadmap
Phase 1: Foundation (Weeks 1-2)
- Set up debt scanning infrastructure
- Establish debt taxonomy and scoring criteria
- Scan initial codebase and create baseline inventory
- Train team on debt identification and reporting
Phase 2: Process Integration (Weeks 3-4)
- Integrate debt tracking into sprint planning
- Establish debt budgets and allocation rules
- Create stakeholder reporting templates
- Set up automated debt scanning in CI/CD
Phase 3: Optimization (Weeks 5-6)
- Refine scoring algorithms based on team feedback
- Implement trend analysis and predictive metrics
- Create specialized debt reduction initiatives
- Establish cross-team debt coordination processes
Phase 4: Maturity (Ongoing)
- Continuous improvement of detection algorithms
- Advanced analytics and prediction models
- Integration with planning and project management tools
- Organization-wide debt management best practices
Success Criteria
Quantitative Metrics:
- 25% reduction in debt interest rate within 6 months
- 15% improvement in development velocity
- 30% reduction in production defects
- 20% faster code review cycles
Qualitative Metrics:
- Improved developer satisfaction scores
- Reduced context switching during feature development
- Faster onboarding for new team members
- Better predictability in feature delivery timelines
Common Pitfalls and How to Avoid Them
1. Analysis Paralysis
Problem: Spending too much time analyzing debt instead of fixing it. Solution: Set time limits for analysis, use “good enough” scoring for most items.
2. Perfectionism
Problem: Trying to eliminate all debt instead of managing it. Solution: Focus on high-impact debt, accept that some debt is acceptable.
3. Ignoring Business Context
Problem: Prioritizing technical elegance over business value. Solution: Always tie debt work to business outcomes and customer impact.
4. Inconsistent Application
Problem: Some teams adopt practices while others ignore them. Solution: Make debt tracking part of standard development workflow.
5. Tool Over-Engineering
Problem: Building complex debt management systems that nobody uses. Solution: Start simple, iterate based on actual usage patterns.
Technical debt management is not just about writing better code - it’s about creating sustainable development practices that balance short-term delivery pressure with long-term system health. Use these tools and frameworks to make informed decisions about when and how to invest in debt reduction.
Tech Debt Tracker
A comprehensive technical debt management system that helps engineering teams identify, prioritize, and track technical debt across codebases. This skill provides three interconnected tools for a complete debt management workflow.
Overview
Technical debt is like financial debt - it compounds over time and reduces team velocity if not managed systematically. This skill provides:
- Automated Debt Detection: Scan codebases to identify various types of technical debt
- Intelligent Prioritization: Use proven frameworks to prioritize debt based on business impact
- Trend Analysis: Track debt evolution over time with executive-friendly dashboards
Tools
1. Debt Scanner (debt_scanner.py)
Scans codebases to automatically detect technical debt signals using AST parsing for Python and regex patterns for other languages.
Features:
- Detects 15+ types of technical debt (large functions, complexity, duplicates, security issues, etc.)
- Multi-language support (Python, JavaScript, Java, C#, Go, etc.)
- Configurable thresholds and rules
- Dual output: JSON for tools, human-readable for reports
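The AST-based detection used for Python can be illustrated with a minimal check for over-long functions. The threshold and output shape here are simplified assumptions, not the scanner's exact logic:

```python
import ast

def find_large_functions(source: str, max_length: int = 50) -> list[dict]:
    """Return a debt signal for every function whose body spans too many lines."""
    signals = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno is available on parsed nodes since Python 3.8
            length = (node.end_lineno or node.lineno) - node.lineno + 1
            if length > max_length:
                signals.append({
                    "type": "large_function",
                    "function": node.name,
                    "length": length,
                })
    return signals

code = "def tiny():\n    pass\n"
print(find_large_functions(code, max_length=1))
```

The real scanner layers many such checks and normalizes them into the debt-item JSON shown in the sample data.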
Usage:
# Basic scan
python scripts/debt_scanner.py /path/to/codebase
# With custom config and output
python scripts/debt_scanner.py /path/to/codebase --config config.json --output report.json
# Different output formats
python scripts/debt_scanner.py /path/to/codebase --format both
2. Debt Prioritizer (debt_prioritizer.py)

Takes debt inventory and creates prioritized backlog using proven prioritization frameworks.
Features:
- Multiple prioritization frameworks (Cost of Delay, WSJF, RICE)
- Business impact analysis with ROI calculations
- Sprint allocation recommendations
- Effort estimation with risk adjustment
- Executive and engineering reports
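As a sketch of how WSJF ranking works: it divides cost of delay by job size, so small fixes with large risk reduction float to the top. This is the generic SAFe-style formula; the prioritizer's actual scoring may weight inputs differently:

```python
def wsjf(business_value: int, time_criticality: int, risk_reduction: int, job_size: int) -> float:
    """Weighted Shortest Job First: cost of delay divided by effort."""
    cost_of_delay = business_value + time_criticality + risk_reduction
    return cost_of_delay / job_size

items = [
    ("hardcoded API key", wsjf(8, 9, 10, 2)),        # small fix, huge risk reduction
    ("large create_user function", wsjf(3, 2, 4, 8)),  # bigger job, lower urgency
]
items.sort(key=lambda pair: pair[1], reverse=True)
print(items[0][0])  # the security fix ranks first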
Usage:
# Basic prioritization
python scripts/debt_prioritizer.py debt_inventory.json
# Custom framework and team size
python scripts/debt_prioritizer.py inventory.json --framework wsjf --team-size 8
# Sprint capacity planning
python scripts/debt_prioritizer.py inventory.json --sprint-capacity 80 --output backlog.json
3. Debt Dashboard (debt_dashboard.py)

Analyzes historical debt data to provide trend analysis, health scoring, and executive reporting.
Features:
- Health score trending over time
- Debt velocity analysis (accumulation vs resolution)
- Executive summary with business impact
- Forecasting based on current trends
- Strategic recommendations
Usage:
# Single directory of scans
python scripts/debt_dashboard.py --input-dir ./debt_scans/
# Multiple specific files
python scripts/debt_dashboard.py scan1.json scan2.json scan3.json
# Custom analysis period
python scripts/debt_dashboard.py data.json --period quarterly --team-size 6
Quick Start
1. Scan Your Codebase
# Scan your project
python scripts/debt_scanner.py ~/my-project --output initial_scan.json
# Review the results
python scripts/debt_scanner.py ~/my-project --format text
2. Prioritize Your Debt
# Create prioritized backlog
python scripts/debt_prioritizer.py initial_scan.json --output backlog.json
# View sprint recommendations
python scripts/debt_prioritizer.py initial_scan.json --format text
3. Track Over Time
# After multiple scans, analyze trends
python scripts/debt_dashboard.py scan1.json scan2.json scan3.json --output dashboard.json
# Generate executive report
python scripts/debt_dashboard.py --input-dir ./scans/ --format text
Configuration
Scanner Configuration
Create config.json to customize detection rules:
{
"max_function_length": 50,
"max_complexity": 10,
"max_nesting_depth": 4,
"ignore_patterns": ["*.test.js", "build/", "node_modules/"],
"file_extensions": {
"python": [".py"],
"javascript": [".js", ".jsx", ".ts", ".tsx"]
}
}
Team Configuration
Adjust tools for your team size and sprint capacity:
# 8-person team with 2-week sprints
python scripts/debt_prioritizer.py inventory.json --team-size 8 --sprint-capacity 160
Sample Data
The assets/ directory contains sample data for testing:
- sample_codebase/: Example codebase with various debt types
- sample_debt_inventory.json: Example debt inventory
- historical_debt_*.json: Sample historical data for trending
Try the tools on sample data:
# Test scanner
python scripts/debt_scanner.py assets/sample_codebase
# Test prioritizer
python scripts/debt_prioritizer.py assets/sample_debt_inventory.json
# Test dashboard
python scripts/debt_dashboard.py assets/historical_debt_*.json
Understanding the Output
Health Score (0-100)
- 85-100: Excellent - Minimal debt, sustainable practices
- 70-84: Good - Manageable debt level, some attention needed
- 55-69: Fair - Debt accumulating, requires focused effort
- 40-54: Poor - High debt level, impacts productivity
- 0-39: Critical - Immediate action required
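As a rough sketch of how such a score could be derived: start from 100 and subtract severity-weighted penalties plus a density term. The weights below are illustrative assumptions, not the tools' internal formula; in the bundled sample data, debt_density appears to be debt items per file:

```python
def health_score(total_items: int, files_scanned: int,
                 severity_counts: dict[str, int]) -> float:
    """Start from 100 and subtract weighted penalties (illustrative weights)."""
    weights = {"critical": 5, "high": 2, "medium": 1, "low": 0.25}
    penalty = sum(weights[s] * n for s, n in severity_counts.items())
    density = total_items / files_scanned  # debt items per file
    return max(0.0, 100.0 - penalty - 5 * density)

# Severity counts taken from the first sample scan (28 items across 25 files)
score = health_score(28, 25, {"critical": 2, "high": 7, "medium": 9, "low": 10})
print(round(score, 1))  # ≈ 58.9, landing in the "Fair" band
```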
Priority Levels
- Critical: Security issues, blocking problems (fix immediately)
- High: Significant impact on quality or velocity (next sprint)
- Medium: Moderate impact, plan for upcoming work (next quarter)
- Low: Minor issues, fix opportunistically (when convenient)
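A minimal mapping from scanner severity to these priority buckets might look like the following. The real prioritizer also weighs business impact and effort, so treat this one-to-one mapping as an assumed simplification:

```python
SEVERITY_TO_PRIORITY = {
    "critical": "critical",  # fix immediately
    "high": "high",          # next sprint
    "medium": "medium",      # next quarter
    "low": "low",            # opportunistic
}

def bucket(items: list[dict]) -> dict[str, list[str]]:
    """Group debt item ids by priority bucket, preserving scan order."""
    buckets: dict[str, list[str]] = {p: [] for p in ("critical", "high", "medium", "low")}
    for item in items:
        buckets[SEVERITY_TO_PRIORITY[item["severity"]]].append(item["id"])
    return buckets

sample = [
    {"id": "DEBT-0003", "severity": "critical"},
    {"id": "DEBT-0001", "severity": "high"},
]
print(bucket(sample)["critical"])  # ['DEBT-0003']
```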
Debt Categories
- Code Quality: Large functions, complexity, duplicates
- Architecture: Design issues, coupling problems
- Security: Vulnerabilities, hardcoded secrets
- Testing: Missing tests, poor coverage
- Documentation: Missing or outdated docs
- Dependencies: Outdated packages, license issues
Integration with Development Workflow
CI/CD Integration
Add debt scanning to your CI pipeline:
# In your CI script
python scripts/debt_scanner.py . --output ci_scan.json
# Compare with baseline, fail build if critical issues found
Sprint Planning
- Weekly: Run scanner to detect new debt
- Sprint Planning: Use prioritizer for debt story sizing
- Monthly: Generate dashboard for trend analysis
- Quarterly: Executive review with strategic recommendations
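The "compare with baseline, fail build" step from the CI snippet above can be sketched as follows. The JSON shape matches the sample scan output, but the comparison logic itself is an assumed starting point, not shipped behavior:

```python
def new_critical_items(baseline: dict, current: dict) -> list[str]:
    """Ids of critical debt items in the current scan that the baseline lacked."""
    def critical_ids(scan: dict) -> set[str]:
        return {i["id"] for i in scan["debt_items"] if i["severity"] == "critical"}
    return sorted(critical_ids(current) - critical_ids(baseline))

# In CI you would json.load baseline_scan.json and ci_scan.json instead
baseline = {"debt_items": [{"id": "DEBT-0003", "severity": "critical"}]}
current = {"debt_items": [{"id": "DEBT-0003", "severity": "critical"},
                          {"id": "DEBT-0018", "severity": "critical"}]}
introduced = new_critical_items(baseline, current)
print(introduced)  # ['DEBT-0018'] -> the CI script would exit non-zero here
```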
Code Review Integration
Use scanner output to focus code reviews:
# Scan PR branch
python scripts/debt_scanner.py . --output pr_scan.json
# Compare with main branch baseline
# Focus review on areas with new debt
Best Practices
Debt Management Strategy
- Prevention: Use scanner in CI to catch debt early
- Prioritization: Always use business impact for priority
- Allocation: Reserve 15-20% of sprint capacity for debt work
- Measurement: Track health score and velocity impact
- Communication: Use dashboard reports for stakeholders
Common Pitfalls to Avoid
- Analysis Paralysis: Don't spend too long on perfect prioritization
- Technical Focus Only: Always consider business impact
- Inconsistent Application: Ensure all teams use same approach
- Ignoring Trends: Pay attention to debt accumulation rate
- All-or-Nothing: Incremental debt reduction is better than none
Success Metrics
- Health Score Improvement: Target 5+ point quarterly improvement
- Velocity Impact: Keep debt velocity impact below 20%
- Team Satisfaction: Survey developers on code quality satisfaction
- Incident Reduction: Track correlation between debt and production issues
Advanced Usage
Custom Debt Types
Extend the scanner for organization-specific debt patterns:
- Add patterns to config.json
- Modify detection logic in scanner
- Update categorization in prioritizer
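For regex-scanned languages, a custom rule can be as small as a pattern plus metadata. The rule below is a hypothetical organization-specific example, and the rule shape is an assumed extension format — match it to how your scanner actually loads rules:

```python
import re

# Hypothetical rule: flag lingering references to legacy feature flags
CUSTOM_RULES = [
    {
        "type": "stale_feature_flag",
        "pattern": re.compile(r"feature_flag\(\s*['\"]legacy_"),
        "severity": "medium",
        "description": "Legacy feature flag still referenced",
    },
]

def apply_custom_rules(source: str, file_path: str) -> list[dict]:
    """Run every custom rule against each line, emitting debt items on match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule in CUSTOM_RULES:
            if rule["pattern"].search(line):
                findings.append({
                    "type": rule["type"],
                    "severity": rule["severity"],
                    "file_path": file_path,
                    "line_number": lineno,
                    "description": rule["description"],
                })
    return findings

print(apply_custom_rules("if feature_flag('legacy_checkout'):\n    pass", "src/cart.py"))
```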
Integration with External Tools
- Jira/GitHub: Import debt items as tickets
- SonarQube: Combine with static analysis metrics
- APM Tools: Correlate debt with performance metrics
- Chat Systems: Send debt alerts to team channels
Automated Reporting
Set up automated debt reporting:
#!/bin/bash
# Daily debt monitoring script
python scripts/debt_scanner.py . --output daily_scan.json
python scripts/debt_dashboard.py daily_scan.json --output daily_report.json
# Send report to stakeholders
Troubleshooting
Common Issues
Scanner not finding files: Check ignore_patterns in config
Prioritizer giving unexpected results: Verify business impact scoring
Dashboard shows flat trends: Need more historical data points
Performance Tips
- Use .gitignore patterns to exclude irrelevant files
- Limit scan depth for large monorepos
- Run dashboard analysis on subset for faster iteration
Getting Help
- Check the references/ directory for detailed documentation
- Review sample data and expected outputs
- Examine the tool source code for customization ideas
Contributing
This skill is designed to be customized for your organization's needs:
- Add Detection Rules: Extend scanner patterns for your tech stack
- Custom Prioritization: Modify scoring algorithms for your business context
- New Report Formats: Add output formats for your stakeholders
- Integration Hooks: Add connectors to your existing tools
The codebase is designed with extensibility in mind - each tool is modular and can be enhanced independently.
Remember: Technical debt management is a journey, not a destination. These tools help you make informed decisions about balancing new feature development with technical excellence. Start small, measure impact, and iterate based on what works for your team.
{
"scan_metadata": {
"directory": "/project/src",
"scan_date": "2024-01-15T09:00:00",
"scanner_version": "1.0.0"
},
"summary": {
"total_files_scanned": 25,
"total_lines_scanned": 12543,
"total_debt_items": 28,
"health_score": 68.5,
"debt_density": 1.12
},
"debt_items": [
{
"id": "DEBT-0001",
"type": "large_function",
"description": "create_user function in user_service.py is 89 lines long",
"file_path": "src/user_service.py",
"severity": "high",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0002",
"type": "duplicate_code",
"description": "Password validation logic duplicated in 3 locations",
"file_path": "src/user_service.py",
"severity": "medium",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0003",
"type": "security_risk",
"description": "Hardcoded API key in payment_processor.py",
"file_path": "src/payment_processor.py",
"severity": "critical",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0004",
"type": "high_complexity",
"description": "process_payment function has cyclomatic complexity of 24",
"file_path": "src/payment_processor.py",
"severity": "high",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0005",
"type": "missing_docstring",
"description": "PaymentProcessor class missing docstring",
"file_path": "src/payment_processor.py",
"severity": "low",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0006",
"type": "todo_comment",
"description": "TODO: Move this to configuration file",
"file_path": "src/user_service.py",
"severity": "low",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0007",
"type": "empty_catch_blocks",
"description": "Empty catch block in update_user method",
"file_path": "src/user_service.py",
"severity": "medium",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0008",
"type": "magic_numbers",
"description": "Magic number 1800 used for lock timeout",
"file_path": "src/user_service.py",
"severity": "low",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0009",
"type": "deep_nesting",
"description": "Deep nesting detected: 6 levels in preferences handling",
"file_path": "src/frontend.js",
"severity": "medium",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0010",
"type": "long_line",
"description": "Line too long: 156 characters",
"file_path": "src/frontend.js",
"severity": "low",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0011",
"type": "commented_code",
"description": "Dead code left in comments",
"file_path": "src/frontend.js",
"severity": "low",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0012",
"type": "global_variables",
"description": "Global variable userCache should be encapsulated",
"file_path": "src/frontend.js",
"severity": "medium",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0013",
"type": "synchronous_ajax",
"description": "Synchronous AJAX call blocks UI thread",
"file_path": "src/frontend.js",
"severity": "high",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0014",
"type": "hardcoded_values",
"description": "Tax rates hardcoded in payment processing logic",
"file_path": "src/payment_processor.py",
"severity": "medium",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0015",
"type": "no_error_handling",
"description": "API calls without proper error handling",
"file_path": "src/payment_processor.py",
"severity": "high",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0016",
"type": "inefficient_algorithm",
"description": "O(n) user search could be optimized with indexing",
"file_path": "src/user_service.py",
"severity": "medium",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0017",
"type": "memory_leak_risk",
"description": "Event listeners attached without cleanup",
"file_path": "src/frontend.js",
"severity": "medium",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0018",
"type": "sql_injection_risk",
"description": "Potential SQL injection in user query",
"file_path": "src/database.py",
"severity": "critical",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0019",
"type": "outdated_dependency",
"description": "jQuery version 2.1.4 has known security vulnerabilities",
"file_path": "package.json",
"severity": "high",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0020",
"type": "test_debt",
"description": "No unit tests for critical payment processing logic",
"file_path": "src/payment_processor.py",
"severity": "high",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0021",
"type": "large_class",
"description": "UserService class has 15 methods",
"file_path": "src/user_service.py",
"severity": "medium",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0022",
"type": "unused_imports",
"description": "Unused import: sys",
"file_path": "src/utils.py",
"severity": "low",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0023",
"type": "missing_type_hints",
"description": "Function get_user_score missing type hints",
"file_path": "src/user_service.py",
"severity": "low",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0024",
"type": "circular_dependency",
"description": "Circular import between user_service and auth_service",
"file_path": "src/user_service.py",
"severity": "high",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0025",
"type": "inconsistent_naming",
"description": "Variable name userID should be user_id",
"file_path": "src/auth.py",
"severity": "low",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0026",
"type": "broad_exception",
"description": "Catching generic Exception instead of specific types",
"file_path": "src/database.py",
"severity": "medium",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0027",
"type": "deprecated_api",
"description": "Using deprecated datetime.utcnow() method",
"file_path": "src/utils.py",
"severity": "low",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0028",
"type": "logging_issue",
"description": "Using print() instead of proper logging",
"file_path": "src/payment_processor.py",
"severity": "low",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
}
]
}
{
"scan_metadata": {
"directory": "/project/src",
"scan_date": "2024-02-01T14:30:00",
"scanner_version": "1.0.0"
},
"summary": {
"total_files_scanned": 27,
"total_lines_scanned": 13421,
"total_debt_items": 22,
"health_score": 74.2,
"debt_density": 0.81
},
"debt_items": [
{
"id": "DEBT-0001",
"type": "large_function",
"description": "create_user function in user_service.py is 89 lines long",
"file_path": "src/user_service.py",
"severity": "high",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0002",
"type": "duplicate_code",
"description": "Password validation logic duplicated in 3 locations",
"file_path": "src/user_service.py",
"severity": "medium",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0004",
"type": "high_complexity",
"description": "process_payment function has cyclomatic complexity of 24",
"file_path": "src/payment_processor.py",
"severity": "high",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0005",
"type": "missing_docstring",
"description": "PaymentProcessor class missing docstring",
"file_path": "src/payment_processor.py",
"severity": "low",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0007",
"type": "empty_catch_blocks",
"description": "Empty catch block in update_user method",
"file_path": "src/user_service.py",
"severity": "medium",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0009",
"type": "deep_nesting",
"description": "Deep nesting detected: 6 levels in preferences handling",
"file_path": "src/frontend.js",
"severity": "medium",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0010",
"type": "long_line",
"description": "Line too long: 156 characters",
"file_path": "src/frontend.js",
"severity": "low",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0011",
"type": "commented_code",
"description": "Dead code left in comments",
"file_path": "src/frontend.js",
"severity": "low",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0012",
"type": "global_variables",
"description": "Global variable userCache should be encapsulated",
"file_path": "src/frontend.js",
"severity": "medium",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0013",
"type": "synchronous_ajax",
"description": "Synchronous AJAX call blocks UI thread",
"file_path": "src/frontend.js",
"severity": "high",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0014",
"type": "hardcoded_values",
"description": "Tax rates hardcoded in payment processing logic",
"file_path": "src/payment_processor.py",
"severity": "medium",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0015",
"type": "no_error_handling",
"description": "API calls without proper error handling",
"file_path": "src/payment_processor.py",
"severity": "high",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0016",
"type": "inefficient_algorithm",
"description": "O(n) user search could be optimized with indexing",
"file_path": "src/user_service.py",
"severity": "medium",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0017",
"type": "memory_leak_risk",
"description": "Event listeners attached without cleanup",
"file_path": "src/frontend.js",
"severity": "medium",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0021",
"type": "large_class",
"description": "UserService class has 15 methods",
"file_path": "src/user_service.py",
"severity": "medium",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0024",
"type": "circular_dependency",
"description": "Circular import between user_service and auth_service",
"file_path": "src/user_service.py",
"severity": "high",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0026",
"type": "broad_exception",
"description": "Catching generic Exception instead of specific types",
"file_path": "src/database.py",
"severity": "medium",
"detected_date": "2024-01-15T09:00:00",
"status": "identified"
},
{
"id": "DEBT-0029",
"type": "missing_validation",
"description": "New API endpoint missing input validation",
"file_path": "src/api.py",
"severity": "high",
"detected_date": "2024-02-01T14:30:00",
"status": "identified"
},
{
"id": "DEBT-0030",
"type": "performance_issue",
"description": "N+1 query detected in user listing",
"file_path": "src/user_service.py",
"severity": "medium",
"detected_date": "2024-02-01T14:30:00",
"status": "identified"
},
{
"id": "DEBT-0031",
"type": "css_debt",
"description": "Inline styles should be moved to CSS files",
"file_path": "templates/user_profile.html",
"severity": "low",
"detected_date": "2024-02-01T14:30:00",
"status": "identified"
},
{
"id": "DEBT-0032",
"type": "accessibility_issue",
"description": "Missing alt text for images",
"file_path": "templates/dashboard.html",
"severity": "medium",
"detected_date": "2024-02-01T14:30:00",
"status": "identified"
},
{
"id": "DEBT-0033",
"type": "configuration_debt",
"description": "Environment-specific config hardcoded in application",
"file_path": "src/config.py",
"severity": "medium",
"detected_date": "2024-02-01T14:30:00",
"status": "identified"
}
]
}
[
{
"id": "DEBT-0001",
"type": "large_function",
"description": "create_user function in user_service.py is 89 lines long",
"file_path": "src/user_service.py",
"line_number": 13,
"severity": "high",
"metadata": {
"function_name": "create_user",
"length": 89,
"recommended_max": 50
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0002",
"type": "duplicate_code",
"description": "Password validation logic duplicated in 3 locations",
"file_path": "src/user_service.py",
"line_number": 45,
"severity": "medium",
"metadata": {
"duplicate_count": 3,
"other_files": ["src/auth.py", "src/frontend.js"]
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0003",
"type": "security_risk",
"description": "Hardcoded API key in payment_processor.py",
"file_path": "src/payment_processor.py",
"line_number": 10,
"severity": "critical",
"metadata": {
"security_issue": "hardcoded_credentials",
"exposure_risk": "high"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0004",
"type": "high_complexity",
"description": "process_payment function has cyclomatic complexity of 24",
"file_path": "src/payment_processor.py",
"line_number": 19,
"severity": "high",
"metadata": {
"function_name": "process_payment",
"complexity": 24,
"recommended_max": 10
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0005",
"type": "missing_docstring",
"description": "PaymentProcessor class missing docstring",
"file_path": "src/payment_processor.py",
"line_number": 8,
"severity": "low",
"metadata": {
"class_name": "PaymentProcessor"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0006",
"type": "todo_comment",
"description": "TODO: Move this to configuration file",
"file_path": "src/user_service.py",
"line_number": 8,
"severity": "low",
"metadata": {
"comment": "TODO: Move this to configuration file"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0007",
"type": "empty_catch_blocks",
"description": "Empty catch block in update_user method",
"file_path": "src/user_service.py",
"line_number": 156,
"severity": "medium",
"metadata": {
"method_name": "update_user",
"exception_type": "generic"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0008",
"type": "magic_numbers",
"description": "Magic number 1800 used for lock timeout",
"file_path": "src/user_service.py",
"line_number": 98,
"severity": "low",
"metadata": {
"value": 1800,
"context": "account_lockout_duration"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0009",
"type": "deep_nesting",
"description": "Deep nesting detected: 6 levels in preferences handling",
"file_path": "src/frontend.js",
"line_number": 32,
"severity": "medium",
"metadata": {
"nesting_level": 6,
"recommended_max": 4
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0010",
"type": "long_line",
"description": "Line too long: 156 characters",
"file_path": "src/frontend.js",
"line_number": 127,
"severity": "low",
"metadata": {
"length": 156,
"recommended_max": 120
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0011",
"type": "commented_code",
"description": "Dead code left in comments",
"file_path": "src/frontend.js",
"line_number": 285,
"severity": "low",
"metadata": {
"lines_of_commented_code": 8
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0012",
"type": "global_variables",
"description": "Global variable userCache should be encapsulated",
"file_path": "src/frontend.js",
"line_number": 7,
"severity": "medium",
"metadata": {
"variable_name": "userCache",
"scope": "global"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0013",
"type": "synchronous_ajax",
"description": "Synchronous AJAX call blocks UI thread",
"file_path": "src/frontend.js",
"line_number": 189,
"severity": "high",
"metadata": {
"method": "XMLHttpRequest",
"async": false
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0014",
"type": "hardcoded_values",
"description": "Tax rates hardcoded in payment processing logic",
"file_path": "src/payment_processor.py",
"line_number": 45,
"severity": "medium",
"metadata": {
"values": ["0.08", "0.085", "0.0625", "0.06"],
"context": "tax_calculation"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0015",
"type": "no_error_handling",
"description": "API calls without proper error handling",
"file_path": "src/payment_processor.py",
"line_number": 78,
"severity": "high",
"metadata": {
"api_endpoint": "stripe",
"error_scenarios": ["network_failure", "invalid_response"]
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0016",
"type": "inefficient_algorithm",
"description": "O(n) user search could be optimized with indexing",
"file_path": "src/user_service.py",
"line_number": 178,
"severity": "medium",
"metadata": {
"current_complexity": "O(n)",
"recommended_complexity": "O(log n)",
"method_name": "search_users"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0017",
"type": "memory_leak_risk",
"description": "Event listeners attached without cleanup",
"file_path": "src/frontend.js",
"line_number": 145,
"severity": "medium",
"metadata": {
"event_type": "click",
"cleanup_missing": true
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0018",
"type": "sql_injection_risk",
"description": "Potential SQL injection in user query",
"file_path": "src/database.py",
"line_number": 25,
"severity": "critical",
"metadata": {
"query_type": "dynamic",
"user_input": "unsanitized"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0019",
"type": "outdated_dependency",
"description": "jQuery version 2.1.4 has known security vulnerabilities",
"file_path": "package.json",
"line_number": 15,
"severity": "high",
"metadata": {
"package": "jquery",
"current_version": "2.1.4",
"latest_version": "3.6.4",
"vulnerabilities": ["CVE-2020-11022", "CVE-2020-11023"]
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
},
{
"id": "DEBT-0020",
"type": "test_debt",
"description": "No unit tests for critical payment processing logic",
"file_path": "src/payment_processor.py",
"line_number": 19,
"severity": "high",
"metadata": {
"coverage": 0,
"critical_paths": ["process_payment", "refund_payment"],
"risk_level": "high"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified"
}
]

{
"metadata": {
"generated_date": "2026-02-16T12:59:34.530390",
"analysis_period": "monthly",
"snapshots_analyzed": 2,
"date_range": {
"start": "2024-01-15T09:00:00",
"end": "2024-02-01T14:30:00"
},
"team_size": 5
},
"executive_summary": {
"overall_status": "excellent",
"health_score": 87.3,
"status_message": "Code quality is excellent with minimal technical debt.",
"key_insights": [
"Good progress on debt reduction"
],
"total_debt_items": 22,
"estimated_effort_hours": 193.5,
"high_priority_items": 6,
"velocity_impact_percent": 12.3
},
"current_health": {
"overall_score": 87.3,
"debt_density": 0.81,
"velocity_impact": 12.3,
"quality_score": 81.8,
"maintainability_score": 72.7,
"technical_risk_score": 38.2,
"date": "2024-02-01T14:30:00"
},
"trend_analysis": {
"overall_score": {
"metric_name": "overall_score",
"trend_direction": "improving",
"change_rate": 3.7,
"correlation_strength": 0.0,
"forecast_next_period": 91.0,
"confidence_interval": [
91.0,
91.0
]
},
"debt_density": {
"metric_name": "debt_density",
"trend_direction": "improving",
"change_rate": -0.31,
"correlation_strength": 0.0,
"forecast_next_period": 0.5,
"confidence_interval": [
0.5,
0.5
]
},
"velocity_impact": {
"metric_name": "velocity_impact",
"trend_direction": "improving",
"change_rate": -2.9,
"correlation_strength": 0.0,
"forecast_next_period": 9.4,
"confidence_interval": [
9.4,
9.4
]
},
"quality_score": {
"metric_name": "quality_score",
"trend_direction": "declining",
"change_rate": -3.9,
"correlation_strength": 0.0,
"forecast_next_period": 77.9,
"confidence_interval": [
77.9,
77.9
]
},
"technical_risk_score": {
"metric_name": "technical_risk_score",
"trend_direction": "improving",
"change_rate": -47.5,
"correlation_strength": 0.0,
"forecast_next_period": -9.3,
"confidence_interval": [
-9.3,
-9.3
]
}
},
"debt_velocity": [
{
"period": "2024-01-15 to 2024-02-01",
"new_debt_items": 0,
"resolved_debt_items": 6,
"net_change": -6,
"velocity_ratio": 10.0,
"effort_hours_added": 0,
"effort_hours_resolved": 77.0,
"net_effort_change": -77.0
}
],
"forecasts": {
"health_score_3_months": 98.4,
"health_score_6_months": 100,
"debt_count_3_months": 4,
"debt_count_6_months": 0,
"risk_score_3_months": 0
},
"recommendations": [
{
"priority": "medium",
"category": "focus_area",
"title": "Focus on Other Debt",
"description": "Other represents the largest debt category (16 items). Consider targeted initiatives.",
"impact": "medium",
"effort": "medium"
}
],
"visualizations": {
"health_timeline": [
{
"date": "2024-01-15",
"overall_score": 83.6,
"quality_score": 85.7,
"technical_risk": 85.7
},
{
"date": "2024-02-01",
"overall_score": 87.3,
"quality_score": 81.8,
"technical_risk": 38.2
}
],
"debt_accumulation": [
{
"date": "2024-01-15",
"total_debt": 28,
"high_priority": 9,
"security_debt": 5
},
{
"date": "2024-02-01",
"total_debt": 22,
"high_priority": 6,
"security_debt": 2
}
],
"category_distribution": [
{
"category": "code_quality",
"count": 5
},
{
"category": "other",
"count": 16
},
{
"category": "maintenance",
"count": 1
}
],
"debt_velocity": [
{
"period": "2024-01-15 to 2024-02-01",
"new_items": 0,
"resolved_items": 6,
"net_change": -6,
"velocity_ratio": 10.0
}
],
"effort_trend": [
{
"date": "2024-01-15",
"total_effort": 270.5
},
{
"date": "2024-02-01",
"total_effort": 193.5
}
]
},
"detailed_metrics": {
"debt_breakdown": {
"large_function": 1,
"duplicate_code": 1,
"high_complexity": 1,
"missing_docstring": 1,
"empty_catch_blocks": 1,
"deep_nesting": 1,
"long_line": 1,
"commented_code": 1,
"global_variables": 1,
"synchronous_ajax": 1,
"hardcoded_values": 1,
"no_error_handling": 1,
"inefficient_algorithm": 1,
"memory_leak_risk": 1,
"large_class": 1,
"circular_dependency": 1,
"broad_exception": 1,
"missing_validation": 1,
"performance_issue": 1,
"css_debt": 1,
"accessibility_issue": 1,
"configuration_debt": 1
},
"severity_breakdown": {
"high": 6,
"medium": 12,
"low": 4
},
"category_breakdown": {
"code_quality": 5,
"other": 16,
"maintenance": 1
},
"files_analyzed": 27,
"debt_density": 0.81,
"average_effort_per_item": 8.8
}
}

{
"metadata": {
"analysis_date": "2026-02-16T12:59:31.382843",
"framework_used": "cost_of_delay",
"team_size": 5,
"sprint_capacity_hours": 80,
"total_items_analyzed": 20
},
"prioritized_backlog": [
{
"id": "DEBT-0008",
"type": "magic_numbers",
"description": "Magic number 1800 used for lock timeout",
"file_path": "src/user_service.py",
"line_number": 98,
"severity": "low",
"metadata": {
"value": 1800,
"context": "account_lockout_duration"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 1,
"hours_estimate": 5.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 2,
"revenue_impact": 2,
"team_velocity_impact": 3,
"quality_impact": 3,
"security_impact": 2
},
"interest_rate": {
"daily_cost": 2.4,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 2.1,
"category": "other",
"impact_tags": [
"quick-win"
],
"priority_score": 4.8
},
{
"id": "DEBT-0010",
"type": "long_line",
"description": "Line too long: 156 characters",
"file_path": "src/frontend.js",
"line_number": 127,
"severity": "low",
"metadata": {
"length": 156,
"recommended_max": 120
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 1,
"hours_estimate": 0.375,
"risk_factor": 1.0,
"skill_level_required": "junior",
"confidence": 0.95
},
"business_impact": {
"customer_impact": 2,
"revenue_impact": 2,
"team_velocity_impact": 3,
"quality_impact": 3,
"security_impact": 2
},
"interest_rate": {
"daily_cost": 2.4,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 0.16,
"category": "code_quality",
"impact_tags": [
"quick-win"
],
"priority_score": 4.8
},
{
"id": "DEBT-0011",
"type": "commented_code",
"description": "Dead code left in comments",
"file_path": "src/frontend.js",
"line_number": 285,
"severity": "low",
"metadata": {
"lines_of_commented_code": 8
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 1,
"hours_estimate": 5.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 2,
"revenue_impact": 2,
"team_velocity_impact": 3,
"quality_impact": 3,
"security_impact": 2
},
"interest_rate": {
"daily_cost": 2.4,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 2.1,
"category": "maintenance",
"impact_tags": [
"quick-win"
],
"priority_score": 4.8
},
{
"id": "DEBT-0007",
"type": "empty_catch_blocks",
"description": "Empty catch block in update_user method",
"file_path": "src/user_service.py",
"line_number": 156,
"severity": "medium",
"metadata": {
"method_name": "update_user",
"exception_type": "generic"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 2,
"hours_estimate": 10.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 3,
"revenue_impact": 3,
"team_velocity_impact": 5,
"quality_impact": 5,
"security_impact": 3
},
"interest_rate": {
"daily_cost": 4.0,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 7.01,
"category": "other",
"impact_tags": [
"quick-win"
],
"priority_score": 4.72
},
{
"id": "DEBT-0009",
"type": "deep_nesting",
"description": "Deep nesting detected: 6 levels in preferences handling",
"file_path": "src/frontend.js",
"line_number": 32,
"severity": "medium",
"metadata": {
"nesting_level": 6,
"recommended_max": 4
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 2,
"hours_estimate": 10.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 3,
"revenue_impact": 3,
"team_velocity_impact": 5,
"quality_impact": 5,
"security_impact": 3
},
"interest_rate": {
"daily_cost": 4.0,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 7.01,
"category": "other",
"impact_tags": [
"quick-win"
],
"priority_score": 4.72
},
{
"id": "DEBT-0012",
"type": "global_variables",
"description": "Global variable userCache should be encapsulated",
"file_path": "src/frontend.js",
"line_number": 7,
"severity": "medium",
"metadata": {
"variable_name": "userCache",
"scope": "global"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 2,
"hours_estimate": 10.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 3,
"revenue_impact": 3,
"team_velocity_impact": 5,
"quality_impact": 5,
"security_impact": 3
},
"interest_rate": {
"daily_cost": 4.0,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 7.01,
"category": "other",
"impact_tags": [
"quick-win"
],
"priority_score": 4.72
},
{
"id": "DEBT-0014",
"type": "hardcoded_values",
"description": "Tax rates hardcoded in payment processing logic",
"file_path": "src/payment_processor.py",
"line_number": 45,
"severity": "medium",
"metadata": {
"values": [
"0.08",
"0.085",
"0.0625",
"0.06"
],
"context": "tax_calculation"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 2,
"hours_estimate": 10.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 3,
"revenue_impact": 3,
"team_velocity_impact": 5,
"quality_impact": 5,
"security_impact": 3
},
"interest_rate": {
"daily_cost": 4.0,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 7.01,
"category": "other",
"impact_tags": [
"quick-win"
],
"priority_score": 4.72
},
{
"id": "DEBT-0016",
"type": "inefficient_algorithm",
"description": "O(n) user search could be optimized with indexing",
"file_path": "src/user_service.py",
"line_number": 178,
"severity": "medium",
"metadata": {
"current_complexity": "O(n)",
"recommended_complexity": "O(log n)",
"method_name": "search_users"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 2,
"hours_estimate": 10.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 3,
"revenue_impact": 3,
"team_velocity_impact": 5,
"quality_impact": 5,
"security_impact": 3
},
"interest_rate": {
"daily_cost": 4.0,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 7.01,
"category": "other",
"impact_tags": [
"quick-win"
],
"priority_score": 4.72
},
{
"id": "DEBT-0017",
"type": "memory_leak_risk",
"description": "Event listeners attached without cleanup",
"file_path": "src/frontend.js",
"line_number": 145,
"severity": "medium",
"metadata": {
"event_type": "click",
"cleanup_missing": true
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 2,
"hours_estimate": 10.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 3,
"revenue_impact": 3,
"team_velocity_impact": 5,
"quality_impact": 5,
"security_impact": 3
},
"interest_rate": {
"daily_cost": 4.0,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 7.01,
"category": "other",
"impact_tags": [
"quick-win"
],
"priority_score": 4.72
},
{
"id": "DEBT-0001",
"type": "large_function",
"description": "create_user function in user_service.py is 89 lines long",
"file_path": "src/user_service.py",
"line_number": 13,
"severity": "high",
"metadata": {
"function_name": "create_user",
"length": 89,
"recommended_max": 50
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 2,
"hours_estimate": 15.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.7
},
"business_impact": {
"customer_impact": 4,
"revenue_impact": 6,
"team_velocity_impact": 10,
"quality_impact": 8,
"security_impact": 3
},
"interest_rate": {
"daily_cost": 7.4,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.03
},
"cost_of_delay": 19.48,
"category": "code_quality",
"impact_tags": [
"velocity-blocker",
"quality-risk",
"quick-win"
],
"priority_score": 4.26
},
{
"id": "DEBT-0005",
"type": "missing_docstring",
"description": "PaymentProcessor class missing docstring",
"file_path": "src/payment_processor.py",
"line_number": 8,
"severity": "low",
"metadata": {
"class_name": "PaymentProcessor"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 1,
"hours_estimate": 1.25,
"risk_factor": 1.0,
"skill_level_required": "junior",
"confidence": 0.9
},
"business_impact": {
"customer_impact": 1,
"revenue_impact": 1,
"team_velocity_impact": 2,
"quality_impact": 2,
"security_impact": 1
},
"interest_rate": {
"daily_cost": 1.6,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 0.35,
"category": "code_quality",
"impact_tags": [
"quick-win"
],
"priority_score": 4.1
},
{
"id": "DEBT-0013",
"type": "synchronous_ajax",
"description": "Synchronous AJAX call blocks UI thread",
"file_path": "src/frontend.js",
"line_number": 189,
"severity": "high",
"metadata": {
"method": "XMLHttpRequest",
"async": false
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 2,
"hours_estimate": 15.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 4,
"revenue_impact": 4,
"team_velocity_impact": 7,
"quality_impact": 7,
"security_impact": 4
},
"interest_rate": {
"daily_cost": 5.6,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 14.73,
"category": "other",
"impact_tags": [
"velocity-blocker",
"quality-risk",
"quick-win"
],
"priority_score": 3.73
},
{
"id": "DEBT-0015",
"type": "no_error_handling",
"description": "API calls without proper error handling",
"file_path": "src/payment_processor.py",
"line_number": 78,
"severity": "high",
"metadata": {
"api_endpoint": "stripe",
"error_scenarios": [
"network_failure",
"invalid_response"
]
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 2,
"hours_estimate": 15.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 4,
"revenue_impact": 4,
"team_velocity_impact": 7,
"quality_impact": 7,
"security_impact": 4
},
"interest_rate": {
"daily_cost": 5.6,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 14.73,
"category": "other",
"impact_tags": [
"velocity-blocker",
"quality-risk",
"quick-win"
],
"priority_score": 3.73
},
{
"id": "DEBT-0019",
"type": "outdated_dependency",
"description": "jQuery version 2.1.4 has known security vulnerabilities",
"file_path": "package.json",
"line_number": 15,
"severity": "high",
"metadata": {
"package": "jquery",
"current_version": "2.1.4",
"latest_version": "3.6.4",
"vulnerabilities": [
"CVE-2020-11022",
"CVE-2020-11023"
]
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 2,
"hours_estimate": 15.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 4,
"revenue_impact": 4,
"team_velocity_impact": 7,
"quality_impact": 7,
"security_impact": 4
},
"interest_rate": {
"daily_cost": 5.6,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 14.73,
"category": "other",
"impact_tags": [
"velocity-blocker",
"quality-risk",
"quick-win"
],
"priority_score": 3.73
},
{
"id": "DEBT-0018",
"type": "sql_injection_risk",
"description": "Potential SQL injection in user query",
"file_path": "src/database.py",
"line_number": 25,
"severity": "critical",
"metadata": {
"query_type": "dynamic",
"user_input": "unsanitized"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 3,
"hours_estimate": 20.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 5,
"revenue_impact": 5,
"team_velocity_impact": 9,
"quality_impact": 9,
"security_impact": 5
},
"interest_rate": {
"daily_cost": 7.2,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 25.26,
"category": "other",
"impact_tags": [
"velocity-blocker",
"quality-risk",
"quick-win"
],
"priority_score": 3.24
},
{
"id": "DEBT-0006",
"type": "todo_comment",
"description": "TODO: Move this to configuration file",
"file_path": "src/user_service.py",
"line_number": 8,
"severity": "low",
"metadata": {
"comment": "TODO: Move this to configuration file"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 1,
"hours_estimate": 0.75,
"risk_factor": 1.0,
"skill_level_required": "junior",
"confidence": 0.9
},
"business_impact": {
"customer_impact": 1,
"revenue_impact": 1,
"team_velocity_impact": 1,
"quality_impact": 1,
"security_impact": 1
},
"interest_rate": {
"daily_cost": 0.8,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.01
},
"cost_of_delay": 0.11,
"category": "maintenance",
"impact_tags": [
"quick-win"
],
"priority_score": 3.1
},
{
"id": "DEBT-0002",
"type": "duplicate_code",
"description": "Password validation logic duplicated in 3 locations",
"file_path": "src/user_service.py",
"line_number": 45,
"severity": "medium",
"metadata": {
"duplicate_count": 3,
"other_files": [
"src/auth.py",
"src/frontend.js"
]
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 2,
"hours_estimate": 15.0,
"risk_factor": 1.4,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 3,
"revenue_impact": 4,
"team_velocity_impact": 6,
"quality_impact": 6,
"security_impact": 2
},
"interest_rate": {
"daily_cost": 4.8,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.08
},
"cost_of_delay": 12.69,
"category": "code_quality",
"impact_tags": [
"quick-win"
],
"priority_score": 2.39
},
{
"id": "DEBT-0020",
"type": "test_debt",
"description": "No unit tests for critical payment processing logic",
"file_path": "src/payment_processor.py",
"line_number": 19,
"severity": "high",
"metadata": {
"coverage": 0,
"critical_paths": [
"process_payment",
"refund_payment"
],
"risk_level": "high"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 6,
"hours_estimate": 36.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 7,
"revenue_impact": 7,
"team_velocity_impact": 10,
"quality_impact": 10,
"security_impact": 4
},
"interest_rate": {
"daily_cost": 8.0,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.04
},
"cost_of_delay": 50.82,
"category": "testing",
"impact_tags": [
"customer-facing",
"revenue-impact",
"velocity-blocker",
"quality-risk",
"quick-win"
],
"priority_score": 1.94
},
{
"id": "DEBT-0004",
"type": "high_complexity",
"description": "process_payment function has cyclomatic complexity of 24",
"file_path": "src/payment_processor.py",
"line_number": 19,
"severity": "high",
"metadata": {
"function_name": "process_payment",
"complexity": 24,
"recommended_max": 10
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 5,
"hours_estimate": 30.0,
"risk_factor": 1.4,
"skill_level_required": "senior",
"confidence": 0.5
},
"business_impact": {
"customer_impact": 6,
"revenue_impact": 7,
"team_velocity_impact": 10,
"quality_impact": 10,
"security_impact": 4
},
"interest_rate": {
"daily_cost": 8.0,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.05
},
"cost_of_delay": 42.36,
"category": "code_quality",
"impact_tags": [
"revenue-impact",
"velocity-blocker",
"quality-risk",
"quick-win"
],
"priority_score": 1.65
},
{
"id": "DEBT-0003",
"type": "security_risk",
"description": "Hardcoded API key in payment_processor.py",
"file_path": "src/payment_processor.py",
"line_number": 10,
"severity": "critical",
"metadata": {
"security_issue": "hardcoded_credentials",
"exposure_risk": "high"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 7,
"hours_estimate": 44.0,
"risk_factor": 1.8,
"skill_level_required": "senior",
"confidence": 0.4
},
"business_impact": {
"customer_impact": 10,
"revenue_impact": 10,
"team_velocity_impact": 10,
"quality_impact": 10,
"security_impact": 10
},
"interest_rate": {
"daily_cost": 8.0,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 61.91,
"category": "security",
"impact_tags": [
"security-critical",
"customer-facing",
"revenue-impact",
"velocity-blocker",
"quality-risk",
"quick-win"
],
"priority_score": 1.01
}
],
"sprint_allocation": {
"total_debt_hours": 277.4,
"debt_capacity_per_sprint": 16.0,
"total_sprints_needed": 17,
"high_priority_items": 0,
"sprint_plan": [
{
"sprint_number": 1,
"items": [
{
"id": "DEBT-0008",
"type": "magic_numbers",
"description": "Magic number 1800 used for lock timeout",
"file_path": "src/user_service.py",
"line_number": 98,
"severity": "low",
"metadata": {
"value": 1800,
"context": "account_lockout_duration"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 1,
"hours_estimate": 5.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 2,
"revenue_impact": 2,
"team_velocity_impact": 3,
"quality_impact": 3,
"security_impact": 2
},
"interest_rate": {
"daily_cost": 2.4,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 2.1,
"category": "other",
"impact_tags": [
"quick-win"
],
"priority_score": 4.8
},
{
"id": "DEBT-0010",
"type": "long_line",
"description": "Line too long: 156 characters",
"file_path": "src/frontend.js",
"line_number": 127,
"severity": "low",
"metadata": {
"length": 156,
"recommended_max": 120
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 1,
"hours_estimate": 0.375,
"risk_factor": 1.0,
"skill_level_required": "junior",
"confidence": 0.95
},
"business_impact": {
"customer_impact": 2,
"revenue_impact": 2,
"team_velocity_impact": 3,
"quality_impact": 3,
"security_impact": 2
},
"interest_rate": {
"daily_cost": 2.4,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 0.16,
"category": "code_quality",
"impact_tags": [
"quick-win"
],
"priority_score": 4.8
},
{
"id": "DEBT-0011",
"type": "commented_code",
"description": "Dead code left in comments",
"file_path": "src/frontend.js",
"line_number": 285,
"severity": "low",
"metadata": {
"lines_of_commented_code": 8
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 1,
"hours_estimate": 5.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 2,
"revenue_impact": 2,
"team_velocity_impact": 3,
"quality_impact": 3,
"security_impact": 2
},
"interest_rate": {
"daily_cost": 2.4,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 2.1,
"category": "maintenance",
"impact_tags": [
"quick-win"
],
"priority_score": 4.8
}
],
"total_hours": 10.375,
"capacity_used": 0.6484375
},
{
"sprint_number": 2,
"items": [
{
"id": "DEBT-0007",
"type": "empty_catch_blocks",
"description": "Empty catch block in update_user method",
"file_path": "src/user_service.py",
"line_number": 156,
"severity": "medium",
"metadata": {
"method_name": "update_user",
"exception_type": "generic"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 2,
"hours_estimate": 10.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 3,
"revenue_impact": 3,
"team_velocity_impact": 5,
"quality_impact": 5,
"security_impact": 3
},
"interest_rate": {
"daily_cost": 4.0,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 7.01,
"category": "other",
"impact_tags": [
"quick-win"
],
"priority_score": 4.72
}
],
"total_hours": 10.0,
"capacity_used": 0.625
},
{
"sprint_number": 3,
"items": [
{
"id": "DEBT-0009",
"type": "deep_nesting",
"description": "Deep nesting detected: 6 levels in preferences handling",
"file_path": "src/frontend.js",
"line_number": 32,
"severity": "medium",
"metadata": {
"nesting_level": 6,
"recommended_max": 4
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 2,
"hours_estimate": 10.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 3,
"revenue_impact": 3,
"team_velocity_impact": 5,
"quality_impact": 5,
"security_impact": 3
},
"interest_rate": {
"daily_cost": 4.0,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 7.01,
"category": "other",
"impact_tags": [
"quick-win"
],
"priority_score": 4.72
}
],
"total_hours": 10.0,
"capacity_used": 0.625
},
{
"sprint_number": 4,
"items": [
{
"id": "DEBT-0012",
"type": "global_variables",
"description": "Global variable userCache should be encapsulated",
"file_path": "src/frontend.js",
"line_number": 7,
"severity": "medium",
"metadata": {
"variable_name": "userCache",
"scope": "global"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 2,
"hours_estimate": 10.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 3,
"revenue_impact": 3,
"team_velocity_impact": 5,
"quality_impact": 5,
"security_impact": 3
},
"interest_rate": {
"daily_cost": 4.0,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 7.01,
"category": "other",
"impact_tags": [
"quick-win"
],
"priority_score": 4.72
}
],
"total_hours": 10.0,
"capacity_used": 0.625
},
{
"sprint_number": 5,
"items": [
{
"id": "DEBT-0014",
"type": "hardcoded_values",
"description": "Tax rates hardcoded in payment processing logic",
"file_path": "src/payment_processor.py",
"line_number": 45,
"severity": "medium",
"metadata": {
"values": [
"0.08",
"0.085",
"0.0625",
"0.06"
],
"context": "tax_calculation"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 2,
"hours_estimate": 10.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 3,
"revenue_impact": 3,
"team_velocity_impact": 5,
"quality_impact": 5,
"security_impact": 3
},
"interest_rate": {
"daily_cost": 4.0,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 7.01,
"category": "other",
"impact_tags": [
"quick-win"
],
"priority_score": 4.72
}
],
"total_hours": 10.0,
"capacity_used": 0.625
},
{
"sprint_number": 6,
"items": [
{
"id": "DEBT-0016",
"type": "inefficient_algorithm",
"description": "O(n) user search could be optimized with indexing",
"file_path": "src/user_service.py",
"line_number": 178,
"severity": "medium",
"metadata": {
"current_complexity": "O(n)",
"recommended_complexity": "O(log n)",
"method_name": "search_users"
},
"detected_date": "2024-02-10T10:30:00",
"status": "identified",
"effort_estimate": {
"size_points": 2,
"hours_estimate": 10.0,
"risk_factor": 1.0,
"skill_level_required": "mid",
"confidence": 0.6
},
"business_impact": {
"customer_impact": 3,
"revenue_impact": 3,
"team_velocity_impact": 5,
"quality_impact": 5,
"security_impact": 3
},
"interest_rate": {
"daily_cost": 4.0,
"frequency_multiplier": 1.0,
"team_impact_multiplier": 1.0,
"compound_rate": 0.02
},
"cost_of_delay": 7.01,
"category": "other",
"impact_tags": [
"quick-win"
],
"priority_score": 4.72
}
],
"total_hours": 10.0,
"capacity_used": 0.625
}
],
"recommendations": [
"Allocate 16.0 hours per sprint to tech debt",
"Focus on 0 high-priority items first",
"Estimated 17 sprints to clear current backlog"
]
},
"insights": {
"category_distribution": {
"other": 11,
"code_quality": 5,
"maintenance": 2,
"testing": 1,
"security": 1
},
"total_effort_hours": 277.4,
"effort_by_category": {
"other": 130.0,
"code_quality": 61.6,
"maintenance": 5.8,
"testing": 36.0,
"security": 44.0
},
"priority_distribution": {
"medium": 17,
"low": 3
},
"high_risk_items_count": 1,
"quick_wins_count": 5,
"total_cost_of_delay": 303.6,
"average_daily_interest_rate": 4.69,
"top_categories_by_effort": [
[
"other",
130.0
],
[
"code_quality",
61.6
],
[
"security",
44.0
]
]
},
"charts_data": {
"priority_effort_scatter": [
{
"x": 5.0,
"y": 4.8,
"label": "Magic number 1800 used for lock timeout",
"category": "other",
"size": 2.1
},
{
"x": 0.375,
"y": 4.8,
"label": "Line too long: 156 characters",
"category": "code_quality",
"size": 0.16
},
{
"x": 5.0,
"y": 4.8,
"label": "Dead code left in comments",
"category": "maintenance",
"size": 2.1
},
{
"x": 10.0,
"y": 4.72,
"label": "Empty catch block in update_user method",
"category": "other",
"size": 7.01
},
{
"x": 10.0,
"y": 4.72,
"label": "Deep nesting detected: 6 levels in preferences han",
"category": "other",
"size": 7.01
},
{
"x": 10.0,
"y": 4.72,
"label": "Global variable userCache should be encapsulated",
"category": "other",
"size": 7.01
},
{
"x": 10.0,
"y": 4.72,
"label": "Tax rates hardcoded in payment processing logic",
"category": "other",
"size": 7.01
},
{
"x": 10.0,
"y": 4.72,
"label": "O(n) user search could be optimized with indexing",
"category": "other",
"size": 7.01
},
{
"x": 10.0,
"y": 4.72,
"label": "Event listeners attached without cleanup",
"category": "other",
"size": 7.01
},
{
"x": 15.0,
"y": 4.26,
"label": "create_user function in user_service.py is 89 line",
"category": "code_quality",
"size": 19.48
},
{
"x": 1.25,
"y": 4.1,
"label": "PaymentProcessor class missing docstring",
"category": "code_quality",
"size": 0.35
},
{
"x": 15.0,
"y": 3.73,
"label": "Synchronous AJAX call blocks UI thread",
"category": "other",
"size": 14.73
},
{
"x": 15.0,
"y": 3.73,
"label": "API calls without proper error handling",
"category": "other",
"size": 14.73
},
{
"x": 15.0,
"y": 3.73,
"label": "jQuery version 2.1.4 has known security vulnerabil",
"category": "other",
"size": 14.73
},
{
"x": 20.0,
"y": 3.24,
"label": "Potential SQL injection in user query",
"category": "other",
"size": 25.26
},
{
"x": 0.75,
"y": 3.1,
"label": "TODO: Move this to configuration file",
"category": "maintenance",
"size": 0.11
},
{
"x": 15.0,
"y": 2.39,
"label": "Password validation logic duplicated in 3 location",
"category": "code_quality",
"size": 12.69
},
{
"x": 36.0,
"y": 1.94,
"label": "No unit tests for critical payment processing logi",
"category": "testing",
"size": 50.82
},
{
"x": 30.0,
"y": 1.65,
"label": "process_payment function has cyclomatic complexity",
"category": "code_quality",
"size": 42.36
},
{
"x": 44.0,
"y": 1.01,
"label": "Hardcoded API key in payment_processor.py",
"category": "security",
"size": 61.91
}
],
"category_effort_distribution": [
{
"category": "other",
"effort": 130.0
},
{
"category": "code_quality",
"effort": 61.6
},
{
"category": "maintenance",
"effort": 5.8
},
{
"category": "testing",
"effort": 36.0
},
{
"category": "security",
"effort": 44.0
}
],
"priority_timeline": [
{
"item_rank": 1,
"description": "Magic number 1800 used for loc",
"effort": 5.0,
"cumulative_effort": 5.0,
"priority_score": 4.8
},
{
"item_rank": 2,
"description": "Line too long: 156 characters",
"effort": 0.375,
"cumulative_effort": 5.4,
"priority_score": 4.8
},
{
"item_rank": 3,
"description": "Dead code left in comments",
"effort": 5.0,
"cumulative_effort": 10.4,
"priority_score": 4.8
},
{
"item_rank": 4,
"description": "Empty catch block in update_us",
"effort": 10.0,
"cumulative_effort": 20.4,
"priority_score": 4.72
},
{
"item_rank": 5,
"description": "Deep nesting detected: 6 level",
"effort": 10.0,
"cumulative_effort": 30.4,
"priority_score": 4.72
},
{
"item_rank": 6,
"description": "Global variable userCache shou",
"effort": 10.0,
"cumulative_effort": 40.4,
"priority_score": 4.72
},
{
"item_rank": 7,
"description": "Tax rates hardcoded in payment",
"effort": 10.0,
"cumulative_effort": 50.4,
"priority_score": 4.72
},
{
"item_rank": 8,
"description": "O(n) user search could be opti",
"effort": 10.0,
"cumulative_effort": 60.4,
"priority_score": 4.72
},
{
"item_rank": 9,
"description": "Event listeners attached witho",
"effort": 10.0,
"cumulative_effort": 70.4,
"priority_score": 4.72
},
{
"item_rank": 10,
"description": "create_user function in user_s",
"effort": 15.0,
"cumulative_effort": 85.4,
"priority_score": 4.26
},
{
"item_rank": 11,
"description": "PaymentProcessor class missing",
"effort": 1.25,
"cumulative_effort": 86.6,
"priority_score": 4.1
},
{
"item_rank": 12,
"description": "Synchronous AJAX call blocks U",
"effort": 15.0,
"cumulative_effort": 101.6,
"priority_score": 3.73
},
{
"item_rank": 13,
"description": "API calls without proper error",
"effort": 15.0,
"cumulative_effort": 116.6,
"priority_score": 3.73
},
{
"item_rank": 14,
"description": "jQuery version 2.1.4 has known",
"effort": 15.0,
"cumulative_effort": 131.6,
"priority_score": 3.73
},
{
"item_rank": 15,
"description": "Potential SQL injection in use",
"effort": 20.0,
"cumulative_effort": 151.6,
"priority_score": 3.24
},
{
"item_rank": 16,
"description": "TODO: Move this to configurati",
"effort": 0.75,
"cumulative_effort": 152.4,
"priority_score": 3.1
},
{
"item_rank": 17,
"description": "Password validation logic dupl",
"effort": 15.0,
"cumulative_effort": 167.4,
"priority_score": 2.39
},
{
"item_rank": 18,
"description": "No unit tests for critical pay",
"effort": 36.0,
"cumulative_effort": 203.4,
"priority_score": 1.94
},
{
"item_rank": 19,
"description": "process_payment function has c",
"effort": 30.0,
"cumulative_effort": 233.4,
"priority_score": 1.65
},
{
"item_rank": 20,
"description": "Hardcoded API key in payment_p",
"effort": 44.0,
"cumulative_effort": 277.4,
"priority_score": 1.01
}
],
"interest_rate_trend": [
{
"item_index": 0,
"daily_cost": 2.4,
"category": "other"
},
{
"item_index": 1,
"daily_cost": 2.4,
"category": "code_quality"
},
{
"item_index": 2,
"daily_cost": 2.4,
"category": "maintenance"
},
{
"item_index": 3,
"daily_cost": 4.0,
"category": "other"
},
{
"item_index": 4,
"daily_cost": 4.0,
"category": "other"
},
{
"item_index": 5,
"daily_cost": 4.0,
"category": "other"
},
{
"item_index": 6,
"daily_cost": 4.0,
"category": "other"
},
{
"item_index": 7,
"daily_cost": 4.0,
"category": "other"
},
{
"item_index": 8,
"daily_cost": 4.0,
"category": "other"
},
{
"item_index": 9,
"daily_cost": 7.4,
"category": "code_quality"
},
{
"item_index": 10,
"daily_cost": 1.6,
"category": "code_quality"
},
{
"item_index": 11,
"daily_cost": 5.6,
"category": "other"
},
{
"item_index": 12,
"daily_cost": 5.6,
"category": "other"
},
{
"item_index": 13,
"daily_cost": 5.6,
"category": "other"
},
{
"item_index": 14,
"daily_cost": 7.2,
"category": "other"
},
{
"item_index": 15,
"daily_cost": 0.8,
"category": "maintenance"
},
{
"item_index": 16,
"daily_cost": 4.8,
"category": "code_quality"
},
{
"item_index": 17,
"daily_cost": 8.0,
"category": "testing"
},
{
"item_index": 18,
"daily_cost": 8.0,
"category": "code_quality"
},
{
"item_index": 19,
"daily_cost": 8.0,
"category": "security"
}
]
},
"recommendations": [
"Start with 5 quick wins to build momentum and demonstrate immediate value from tech debt reduction efforts.",
"Focus initial efforts on 'other' category debt, which represents the largest effort investment (130.0 hours)."
]
}
{
"scan_metadata": {
"directory": "assets/sample_codebase",
"scan_date": "2026-02-16T12:59:28.141103",
"scanner_version": "1.0.0",
"config": {
"max_function_length": 50,
"max_complexity": 10,
"max_nesting_depth": 4,
"max_file_size_lines": 500,
"min_duplicate_lines": 3,
"ignore_patterns": [
"*.pyc",
"__pycache__",
".git",
".svn",
"node_modules",
"build",
"dist",
"*.min.js",
"*.map"
],
"file_extensions": {
"python": [
".py"
],
"javascript": [
".js",
".jsx",
".ts",
".tsx"
],
"java": [
".java"
],
"csharp": [
".cs"
],
"cpp": [
".cpp",
".cc",
".cxx",
".c",
".h",
".hpp"
],
"ruby": [
".rb"
],
"php": [
".php"
],
"go": [
".go"
],
"rust": [
".rs"
],
"kotlin": [
".kt"
]
},
"comment_patterns": {
"todo": "(?i)(TODO|FIXME|HACK|XXX|BUG)[\\s:]*(.+)",
"commented_code": "^\\s*#.*[=(){}\\[\\];].*",
"magic_numbers": "\\b\\d{2,}\\b",
"long_strings": "[\"\\'](.{100,})[\"\\']"
},
"severity_weights": {
"critical": 10,
"high": 7,
"medium": 5,
"low": 2,
"info": 1
}
}
},
"summary": {
"total_files_scanned": 3,
"total_lines_scanned": 986,
"total_debt_items": 122,
"health_score": 0,
"debt_density": 40.67,
"priority_breakdown": {
"medium": 81,
"low": 41
},
"type_breakdown": {
"high_complexity": 3,
"large_function": 2,
"duplicate_code": 68,
"too_many_parameters": 2,
"empty_catch": 1,
"hardcoded_paths": 5,
"missing_docstring": 22,
"long_line": 2,
"todo_comment": 17
}
},
"debt_items": [
{
"id": "DEBT-0005",
"type": "high_complexity",
"description": "Function 'create_user' has high complexity: 26",
"file_path": "src/user_service.py",
"line_number": 24,
"severity": "high",
"metadata": {
"function_name": "create_user",
"complexity": 26
},
"detected_date": "2026-02-16T12:59:28.115457",
"status": "identified",
"priority_score": 9,
"priority": "medium"
},
{
"id": "DEBT-0004",
"type": "high_complexity",
"description": "Function 'process_payment' has high complexity: 36",
"file_path": "src/payment_processor.py",
"line_number": 20,
"severity": "high",
"metadata": {
"function_name": "process_payment",
"complexity": 36
},
"detected_date": "2026-02-16T12:59:28.125126",
"status": "identified",
"priority_score": 9,
"priority": "medium"
},
{
"id": "DEBT-0010",
"type": "high_complexity",
"description": "Function 'validate_credit_card' has high complexity: 16",
"file_path": "src/payment_processor.py",
"line_number": 244,
"severity": "high",
"metadata": {
"function_name": "validate_credit_card",
"complexity": 16
},
"detected_date": "2026-02-16T12:59:28.126081",
"status": "identified",
"priority_score": 9,
"priority": "medium"
},
{
"id": "DEBT-0003",
"type": "large_function",
"description": "Function 'create_user' is too long: 101 lines",
"file_path": "src/user_service.py",
"line_number": 24,
"severity": "medium",
"metadata": {
"function_name": "create_user",
"length": 101
},
"detected_date": "2026-02-16T12:59:28.114676",
"status": "identified",
"priority_score": 7,
"priority": "medium"
},
{
"id": "DEBT-0003",
"type": "large_function",
"description": "Function 'process_payment' is too long: 196 lines",
"file_path": "src/payment_processor.py",
"line_number": 20,
"severity": "medium",
"metadata": {
"function_name": "process_payment",
"length": 196
},
"detected_date": "2026-02-16T12:59:28.124441",
"status": "identified",
"priority_score": 7,
"priority": "medium"
},
{
"id": "DEBT-0055",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/user_service.py",
"severity": "medium",
"metadata": {
"line_number": 28,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140697",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0056",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/user_service.py",
"severity": "medium",
"metadata": {
"line_number": 138,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140705",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0057",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/user_service.py",
"severity": "medium",
"metadata": {
"line_number": 29,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140709",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0058",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/user_service.py",
"severity": "medium",
"metadata": {
"line_number": 139,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140712",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0059",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/user_service.py",
"severity": "medium",
"metadata": {
"line_number": 87,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140716",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0060",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/user_service.py",
"severity": "medium",
"metadata": {
"line_number": 88,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140718",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0061",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/user_service.py",
"severity": "medium",
"metadata": {
"line_number": 90,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140721",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0062",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/user_service.py",
"severity": "medium",
"metadata": {
"line_number": 91,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140723",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0063",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/user_service.py",
"severity": "medium",
"metadata": {
"line_number": 122,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140726",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0064",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/user_service.py",
"severity": "medium",
"metadata": {
"line_number": 123,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140729",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0065",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/user_service.py",
"severity": "medium",
"metadata": {
"line_number": 190,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140733",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0066",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/user_service.py",
"severity": "medium",
"metadata": {
"line_number": 191,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140735",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0067",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/user_service.py",
"severity": "medium",
"metadata": {
"line_number": 251,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140739",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0068",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/user_service.py",
"severity": "medium",
"metadata": {
"line_number": 252,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140741",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0069",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/user_service.py",
"severity": "medium",
"metadata": {
"line_number": 255,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140743",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0070",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/user_service.py",
"severity": "medium",
"metadata": {
"line_number": 256,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140745",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0071",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 28,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140751",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0072",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 29,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140754",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0073",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 31,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140756",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0074",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 32,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140758",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0075",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 34,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140761",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0076",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 35,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140763",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0077",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 37,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140766",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0078",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 38,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140768",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0079",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 83,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140771",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0080",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 84,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140774",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0081",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 102,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140777",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0082",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 145,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140779",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0083",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 114,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140782",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0084",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 156,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140784",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0085",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 115,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140786",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0086",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 157,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140788",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0087",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 116,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140790",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0088",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 158,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140793",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0089",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 117,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140795",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0090",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 159,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140797",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0091",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 119,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140800",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0092",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 120,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140802",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0093",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 121,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140804",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0094",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 162,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140806",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0095",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 122,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140808",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0096",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 163,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140813",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0097",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 161,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140816",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0098",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 203,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140818",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0099",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 213,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140822",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0100",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 214,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140824",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0101",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 223,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140827",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0102",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 224,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140829",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0103",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 235,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140832",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0104",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 236,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140834",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0105",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 265,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140837",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0106",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 266,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140839",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0107",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 306,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140842",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0108",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/payment_processor.py",
"severity": "medium",
"metadata": {
"line_number": 307,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140844",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0109",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 99,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140849",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0110",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 100,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140851",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0111",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 111,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140854",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0112",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 136,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140856",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0113",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 112,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140858",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0114",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 137,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140861",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0115",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 147,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140863",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0116",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 148,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140866",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0117",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 221,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140870",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0118",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 222,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140872",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0119",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 234,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140874",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0120",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 271,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140876",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0121",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 235,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140878",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0122",
"type": "duplicate_code",
"description": "Duplicate code block found in 2 files",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 272,
"duplicate_count": 2,
"other_files": []
},
"detected_date": "2026-02-16T12:59:28.140885",
"status": "identified",
"priority_score": 6,
"priority": "medium"
},
{
"id": "DEBT-0006",
"type": "too_many_parameters",
"description": "Function 'create_user' has too many parameters: 14",
"file_path": "src/user_service.py",
"line_number": 24,
"severity": "medium",
"metadata": {
"function_name": "create_user",
"parameter_count": 14
},
"detected_date": "2026-02-16T12:59:28.115465",
"status": "identified",
"priority_score": 5,
"priority": "medium"
},
{
"id": "DEBT-0025",
"type": "empty_catch",
"description": "Code smell detected: empty_catch",
"file_path": "src/user_service.py",
"severity": "medium",
"metadata": {
"line_number": 170,
"pattern": "except:\n pass\n "
},
"detected_date": "2026-02-16T12:59:28.120298",
"status": "identified",
"priority_score": 5,
"priority": "medium"
},
{
"id": "DEBT-0005",
"type": "too_many_parameters",
"description": "Function 'process_payment' has too many parameters: 12",
"file_path": "src/payment_processor.py",
"line_number": 20,
"severity": "medium",
"metadata": {
"function_name": "process_payment",
"parameter_count": 12
},
"detected_date": "2026-02-16T12:59:28.125130",
"status": "identified",
"priority_score": 5,
"priority": "medium"
},
{
"id": "DEBT-0050",
"type": "hardcoded_paths",
"description": "Code smell detected: hardcoded_paths",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 166,
"pattern": "'/users/'"
},
"detected_date": "2026-02-16T12:59:28.139558",
"status": "identified",
"priority_score": 5,
"priority": "medium"
},
{
"id": "DEBT-0051",
"type": "hardcoded_paths",
"description": "Code smell detected: hardcoded_paths",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 233,
"pattern": "'/users/'"
},
"detected_date": "2026-02-16T12:59:28.139584",
"status": "identified",
"priority_score": 5,
"priority": "medium"
},
{
"id": "DEBT-0052",
"type": "hardcoded_paths",
"description": "Code smell detected: hardcoded_paths",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 252,
"pattern": "'/users/'"
},
"detected_date": "2026-02-16T12:59:28.139595",
"status": "identified",
"priority_score": 5,
"priority": "medium"
},
{
"id": "DEBT-0053",
"type": "hardcoded_paths",
"description": "Code smell detected: hardcoded_paths",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 270,
"pattern": "'/users/'"
},
"detected_date": "2026-02-16T12:59:28.139606",
"status": "identified",
"priority_score": 5,
"priority": "medium"
},
{
"id": "DEBT-0054",
"type": "hardcoded_paths",
"description": "Code smell detected: hardcoded_paths",
"file_path": "src/frontend.js",
"severity": "medium",
"metadata": {
"line_number": 355,
"pattern": "'/auth/login'"
},
"detected_date": "2026-02-16T12:59:28.139636",
"status": "identified",
"priority_score": 5,
"priority": "medium"
},
{
"id": "DEBT-0001",
"type": "missing_docstring",
"description": "Class 'UserService' missing docstring",
"file_path": "src/user_service.py",
"line_number": 17,
"severity": "low",
"metadata": {
"class_name": "UserService"
},
"detected_date": "2026-02-16T12:59:28.114513",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0002",
"type": "missing_docstring",
"description": "Function '__init__' missing docstring",
"file_path": "src/user_service.py",
"line_number": 18,
"severity": "low",
"metadata": {
"function_name": "__init__"
},
"detected_date": "2026-02-16T12:59:28.114546",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0004",
"type": "missing_docstring",
"description": "Function 'create_user' missing docstring",
"file_path": "src/user_service.py",
"line_number": 24,
"severity": "low",
"metadata": {
"function_name": "create_user"
},
"detected_date": "2026-02-16T12:59:28.114684",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0007",
"type": "missing_docstring",
"description": "Function 'validate_email' missing docstring",
"file_path": "src/user_service.py",
"line_number": 126,
"severity": "low",
"metadata": {
"function_name": "validate_email"
},
"detected_date": "2026-02-16T12:59:28.116045",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0008",
"type": "missing_docstring",
"description": "Function 'authenticate_user' missing docstring",
"file_path": "src/user_service.py",
"line_number": 136,
"severity": "low",
"metadata": {
"function_name": "authenticate_user"
},
"detected_date": "2026-02-16T12:59:28.116159",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0009",
"type": "missing_docstring",
"description": "Function 'get_user' missing docstring",
"file_path": "src/user_service.py",
"line_number": 162,
"severity": "low",
"metadata": {
"function_name": "get_user"
},
"detected_date": "2026-02-16T12:59:28.116637",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0010",
"type": "missing_docstring",
"description": "Function 'update_user' missing docstring",
"file_path": "src/user_service.py",
"line_number": 166,
"severity": "low",
"metadata": {
"function_name": "update_user"
},
"detected_date": "2026-02-16T12:59:28.116694",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0011",
"type": "missing_docstring",
"description": "Function 'delete_user' missing docstring",
"file_path": "src/user_service.py",
"line_number": 194,
"severity": "low",
"metadata": {
"function_name": "delete_user"
},
"detected_date": "2026-02-16T12:59:28.117074",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0012",
"type": "missing_docstring",
"description": "Function 'search_users' missing docstring",
"file_path": "src/user_service.py",
"line_number": 199,
"severity": "low",
"metadata": {
"function_name": "search_users"
},
"detected_date": "2026-02-16T12:59:28.117131",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0013",
"type": "missing_docstring",
"description": "Function 'export_users' missing docstring",
"file_path": "src/user_service.py",
"line_number": 211,
"severity": "low",
"metadata": {
"function_name": "export_users"
},
"detected_date": "2026-02-16T12:59:28.117460",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0014",
"type": "missing_docstring",
"description": "Function 'import_users' missing docstring",
"file_path": "src/user_service.py",
"line_number": 215,
"severity": "low",
"metadata": {
"function_name": "import_users"
},
"detected_date": "2026-02-16T12:59:28.117523",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0015",
"type": "missing_docstring",
"description": "Function 'calculate_user_score' missing docstring",
"file_path": "src/user_service.py",
"line_number": 224,
"severity": "low",
"metadata": {
"function_name": "calculate_user_score"
},
"detected_date": "2026-02-16T12:59:28.117609",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0016",
"type": "missing_docstring",
"description": "Function 'get_user_service' missing docstring",
"file_path": "src/user_service.py",
"line_number": 256,
"severity": "low",
"metadata": {
"function_name": "get_user_service"
},
"detected_date": "2026-02-16T12:59:28.118051",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0017",
"type": "missing_docstring",
"description": "Function 'hash_password' missing docstring",
"file_path": "src/user_service.py",
"line_number": 261,
"severity": "low",
"metadata": {
"function_name": "hash_password"
},
"detected_date": "2026-02-16T12:59:28.118083",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0018",
"type": "missing_docstring",
"description": "Function 'validate_password' missing docstring",
"file_path": "src/user_service.py",
"line_number": 267,
"severity": "low",
"metadata": {
"function_name": "validate_password"
},
"detected_date": "2026-02-16T12:59:28.118172",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0001",
"type": "missing_docstring",
"description": "Class 'PaymentProcessor' missing docstring",
"file_path": "src/payment_processor.py",
"line_number": 12,
"severity": "low",
"metadata": {
"class_name": "PaymentProcessor"
},
"detected_date": "2026-02-16T12:59:28.124344",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0002",
"type": "missing_docstring",
"description": "Function '__init__' missing docstring",
"file_path": "src/payment_processor.py",
"line_number": 14,
"severity": "low",
"metadata": {
"function_name": "__init__"
},
"detected_date": "2026-02-16T12:59:28.124356",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0006",
"type": "missing_docstring",
"description": "Function 'send_payment_confirmation_email' missing docstring",
"file_path": "src/payment_processor.py",
"line_number": 217,
"severity": "low",
"metadata": {
"function_name": "send_payment_confirmation_email"
},
"detected_date": "2026-02-16T12:59:28.125733",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0007",
"type": "missing_docstring",
"description": "Function 'refund_payment' missing docstring",
"file_path": "src/payment_processor.py",
"line_number": 227,
"severity": "low",
"metadata": {
"function_name": "refund_payment"
},
"detected_date": "2026-02-16T12:59:28.125816",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0008",
"type": "missing_docstring",
"description": "Function 'get_transaction' missing docstring",
"file_path": "src/payment_processor.py",
"line_number": 239,
"severity": "low",
"metadata": {
"function_name": "get_transaction"
},
"detected_date": "2026-02-16T12:59:28.125889",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0009",
"type": "missing_docstring",
"description": "Function 'validate_credit_card' missing docstring",
"file_path": "src/payment_processor.py",
"line_number": 244,
"severity": "low",
"metadata": {
"function_name": "validate_credit_card"
},
"detected_date": "2026-02-16T12:59:28.125917",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0011",
"type": "missing_docstring",
"description": "Function 'get_payment_processor' missing docstring",
"file_path": "src/payment_processor.py",
"line_number": 311,
"severity": "low",
"metadata": {
"function_name": "get_payment_processor"
},
"detected_date": "2026-02-16T12:59:28.126436",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0044",
"type": "long_line",
"description": "Line too long: 140 characters",
"file_path": "src/frontend.js",
"severity": "low",
"metadata": {
"line_number": 161,
"length": 140
},
"detected_date": "2026-02-16T12:59:28.128066",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0045",
"type": "long_line",
"description": "Line too long: 122 characters",
"file_path": "src/frontend.js",
"severity": "low",
"metadata": {
"line_number": 180,
"length": 122
},
"detected_date": "2026-02-16T12:59:28.128072",
"status": "identified",
"priority_score": 2,
"priority": "low"
},
{
"id": "DEBT-0019",
"type": "todo_comment",
"description": "TODO/FIXME comment: TODO: Move this to configuration file",
"file_path": "src/user_service.py",
"severity": "low",
"metadata": {
"line_number": 12,
"comment": "TODO: Move this to configuration file"
},
"detected_date": "2026-02-16T12:59:28.118649",
"status": "identified",
"priority_score": 1,
"priority": "low"
},
{
"id": "DEBT-0020",
"type": "todo_comment",
"description": "TODO/FIXME comment: FIXME: This should be in environment variables",
"file_path": "src/user_service.py",
"severity": "low",
"metadata": {
"line_number": 14,
"comment": "FIXME: This should be in environment variables"
},
"detected_date": "2026-02-16T12:59:28.118681",
"status": "identified",
"priority_score": 1,
"priority": "low"
},
{
"id": "DEBT-0021",
"type": "todo_comment",
"description": "TODO/FIXME comment: HACK: Using dict for now, should be proper database connection",
"file_path": "src/user_service.py",
"severity": "low",
"metadata": {
"line_number": 21,
"comment": "HACK: Using dict for now, should be proper database connection"
},
"detected_date": "2026-02-16T12:59:28.118720",
"status": "identified",
"priority_score": 1,
"priority": "low"
},
{
"id": "DEBT-0022",
"type": "todo_comment",
"description": "TODO/FIXME comment: TODO: Implement proper user ID generation",
"file_path": "src/user_service.py",
"severity": "low",
"metadata": {
"line_number": 88,
"comment": "TODO: Implement proper user ID generation"
},
"detected_date": "2026-02-16T12:59:28.119140",
"status": "identified",
"priority_score": 1,
"priority": "low"
},
{
"id": "DEBT-0023",
"type": "todo_comment",
"description": "TODO/FIXME comment: XXX: This is terrible for production",
"file_path": "src/user_service.py",
"severity": "low",
"metadata": {
"line_number": 89,
"comment": "XXX: This is terrible for production"
},
"detected_date": "2026-02-16T12:59:28.119154",
"status": "identified",
"priority_score": 1,
"priority": "low"
},
{
"id": "DEBT-0024",
"type": "todo_comment",
"description": "TODO/FIXME comment: TODO: Implement soft delete instead",
"file_path": "src/user_service.py",
"severity": "low",
"metadata": {
"line_number": 196,
"comment": "TODO: Implement soft delete instead"
},
"detected_date": "2026-02-16T12:59:28.119807",
"status": "identified",
"priority_score": 1,
"priority": "low"
},
{
"id": "DEBT-0037",
"type": "todo_comment",
"description": "TODO/FIXME comment: TODO: These should come from environment or config",
"file_path": "src/payment_processor.py",
"severity": "low",
"metadata": {
"line_number": 15,
"comment": "TODO: These should come from environment or config"
},
"detected_date": "2026-02-16T12:59:28.126594",
"status": "identified",
"priority_score": 1,
"priority": "low"
},
{
"id": "DEBT-0038",
"type": "todo_comment",
"description": "TODO/FIXME comment: FIXME: This should query a discount service",
"file_path": "src/payment_processor.py",
"severity": "low",
"metadata": {
"line_number": 67,
"comment": "FIXME: This should query a discount service"
},
"detected_date": "2026-02-16T12:59:28.126782",
"status": "identified",
"priority_score": 1,
"priority": "low"
},
{
"id": "DEBT-0039",
"type": "todo_comment",
"description": "TODO/FIXME comment: HACK: Using print instead of actual email service",
"file_path": "src/payment_processor.py",
"severity": "low",
"metadata": {
"line_number": 219,
"comment": "HACK: Using print instead of actual email service"
},
"detected_date": "2026-02-16T12:59:28.127356",
"status": "identified",
"priority_score": 1,
"priority": "low"
},
{
"id": "DEBT-0040",
"type": "todo_comment",
"description": "TODO/FIXME comment: TODO: Implement actual email sending",
"file_path": "src/payment_processor.py",
"severity": "low",
"metadata": {
"line_number": 224,
"comment": "TODO: Implement actual email sending"
},
"detected_date": "2026-02-16T12:59:28.127385",
"status": "identified",
"priority_score": 1,
"priority": "low"
},
{
"id": "DEBT-0041",
"type": "todo_comment",
"description": "TODO/FIXME comment: TODO: Implement refund for different providers",
"file_path": "src/payment_processor.py",
"severity": "low",
"metadata": {
"line_number": 229,
"comment": "TODO: Implement refund for different providers"
},
"detected_date": "2026-02-16T12:59:28.127402",
"status": "identified",
"priority_score": 1,
"priority": "low"
},
{
"id": "DEBT-0042",
"type": "todo_comment",
"description": "TODO/FIXME comment: XXX: This doesn't actually process the refund",
"file_path": "src/payment_processor.py",
"severity": "low",
"metadata": {
"line_number": 236,
"comment": "XXX: This doesn't actually process the refund"
},
"detected_date": "2026-02-16T12:59:28.127434",
"status": "identified",
"priority_score": 1,
"priority": "low"
},
{
"id": "DEBT-0043",
"type": "todo_comment",
"description": "TODO/FIXME comment: FIXME: Implement actual transaction lookup",
"file_path": "src/payment_processor.py",
"severity": "low",
"metadata": {
"line_number": 241,
"comment": "FIXME: Implement actual transaction lookup"
},
"detected_date": "2026-02-16T12:59:28.127455",
"status": "identified",
"priority_score": 1,
"priority": "low"
},
{
"id": "DEBT-0046",
"type": "todo_comment",
"description": "TODO/FIXME comment: TODO: Move configuration to separate file",
"file_path": "src/frontend.js",
"severity": "low",
"metadata": {
"line_number": 3,
"comment": "TODO: Move configuration to separate file"
},
"detected_date": "2026-02-16T12:59:28.138142",
"status": "identified",
"priority_score": 1,
"priority": "low"
},
{
"id": "DEBT-0047",
"type": "todo_comment",
"description": "TODO/FIXME comment: FIXME: Should be in environment",
"file_path": "src/frontend.js",
"severity": "low",
"metadata": {
"line_number": 5,
"comment": "FIXME: Should be in environment"
},
"detected_date": "2026-02-16T12:59:28.138158",
"status": "identified",
"priority_score": 1,
"priority": "low"
},
{
"id": "DEBT-0048",
"type": "todo_comment",
"description": "TODO/FIXME comment: HACK: Polyfill for older browsers - should use proper build system",
"file_path": "src/frontend.js",
"severity": "low",
"metadata": {
"line_number": 12,
"comment": "HACK: Polyfill for older browsers - should use proper build system"
},
"detected_date": "2026-02-16T12:59:28.138174",
"status": "identified",
"priority_score": 1,
"priority": "low"
},
{
"id": "DEBT-0049",
"type": "todo_comment",
"description": "TODO/FIXME comment: XXX: This method is never used",
"file_path": "src/frontend.js",
"severity": "low",
"metadata": {
"line_number": 287,
"comment": "XXX: This method is never used"
},
"detected_date": "2026-02-16T12:59:28.139089",
"status": "identified",
"priority_score": 1,
"priority": "low"
}
],
"file_statistics": {
"src/user_service.py": {
"path": "src/user_service.py",
"lines": 276,
"size_kb": 9.275390625,
"language": "python",
"debt_count": 16
},
"src/payment_processor.py": {
"path": "src/payment_processor.py",
"lines": 315,
"size_kb": 13.041015625,
"language": "python",
"debt_count": 38
},
"src/frontend.js": {
"path": "src/frontend.js",
"lines": 395,
"size_kb": 14.5419921875,
"language": "javascript",
"debt_count": 14
}
},
"recommendations": [
"Extract duplicate code into reusable functions or modules. This reduces maintenance burden and potential for inconsistent changes.",
"High debt density detected. Consider establishing coding standards and regular code review processes to prevent debt accumulation."
]
}
Technical Debt Classification Taxonomy
Overview
This document provides a comprehensive taxonomy for classifying technical debt across different dimensions. Consistent classification is essential for tracking, prioritizing, and managing technical debt effectively across teams and projects.
Primary Categories
1. Code Debt
Definition: Issues at the code level that make software harder to understand, modify, or maintain.
Subcategories:
Structural Issues
- large_function: Functions exceeding recommended size limits
- high_complexity: High cyclomatic complexity (>10)
- deep_nesting: Excessive indentation levels (>4)
- long_parameter_list: Too many function parameters (>5)
- data_clumps: Related data that should be grouped together
Naming and Documentation
- poor_naming: Unclear or misleading variable/function names
- missing_docstring: Functions/classes without documentation
- magic_numbers: Hardcoded numeric values without explanation
- commented_code: Dead code left in comments
Duplication and Patterns
- duplicate_code: Identical or similar code blocks
- copy_paste_programming: Evidence of code duplication
- inconsistent_patterns: Mixed coding styles within codebase
Error Handling
- empty_catch_blocks: Exception handling without proper action
- generic_exceptions: Catching overly broad exception types
- missing_error_handling: No error handling for failure scenarios
Severity Indicators:
- Critical: Security vulnerabilities, syntax errors
- High: Functions >100 lines, complexity >20
- Medium: Functions 50-100 lines, complexity 10-20
- Low: Minor style issues, short functions with minor problems
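Several of the smells above (empty catch blocks, TODO/FIXME markers, magic numbers) can be caught with plain pattern matching. A minimal Python sketch with illustrative patterns, not a production ruleset:

```python
import re

# Illustrative patterns for a few code-level smells; a real scanner would use
# language-aware parsing, but simple regexes catch the obvious cases.
SMELL_PATTERNS = {
    "todo_comment": re.compile(r"#\s*(TODO|FIXME|HACK|XXX)[:\s].*"),
    "empty_catch": re.compile(r"except[^\n]*:\s*\n\s*pass\b"),
    "magic_number": re.compile(r"=\s*\d{3,}\b"),  # large bare numeric literals
}

def scan_source(source: str):
    """Return (smell_type, line_number, matched_text) tuples for one file."""
    findings = []
    for smell, pattern in SMELL_PATTERNS.items():
        for match in pattern.finditer(source):
            # Line number = newlines before the match start, plus one.
            line_number = source.count("\n", 0, match.start()) + 1
            findings.append((smell, line_number, match.group(0).strip()))
    return findings

sample = "retries = 1000  # TODO: Move this to configuration file\n"
print(scan_source(sample))
```

Each finding carries a line number and the matched text, which is exactly the metadata the sample scanner output above records for `todo_comment` and `empty_catch` items.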
2. Architecture Debt
Definition: High-level design decisions that limit system flexibility, scalability, or maintainability.
Subcategories:
Structural Issues
- monolithic_design: Components that should be separated
- circular_dependencies: Modules depending on each other cyclically
- god_object: Classes/modules with too many responsibilities
- inappropriate_intimacy: Excessive coupling between modules
Layer Violations
- abstraction_inversion: Lower-level modules depending on higher-level ones
- leaky_abstractions: Implementation details exposed through interfaces
- broken_hierarchy: Inheritance relationships that don't make sense
Scalability Issues
- performance_bottlenecks: Known architectural performance limitations
- resource_contention: Shared resources creating bottlenecks
- single_point_failure: Critical components without redundancy
Impact Assessment:
- High Impact: Affects system scalability, blocks major features
- Medium Impact: Makes changes more difficult, affects team productivity
- Low Impact: Minor architectural inconsistencies
3. Test Debt
Definition: Inadequate testing infrastructure, coverage, or quality that increases risk and slows development.
Subcategories:
Coverage Issues
- low_coverage: Test coverage below team standards (<80%)
- missing_unit_tests: No tests for critical business logic
- missing_integration_tests: No tests for component interactions
- missing_end_to_end_tests: No full system workflow validation
Test Quality
- flaky_tests: Tests that pass/fail inconsistently
- slow_tests: Test suite taking too long to execute
- brittle_tests: Tests that break with minor code changes
- unclear_test_intent: Tests without clear purpose or documentation
Infrastructure
- manual_testing_only: No automated testing processes
- missing_test_data: No proper test data management
- environment_dependencies: Tests requiring specific environments
Priority Matrix:
- Critical Path Coverage: High priority for business-critical features
- Regression Risk: High priority for frequently changed code
- Development Velocity: Medium priority for developer productivity
- Documentation Value: Low priority for test clarity improvements
4. Documentation Debt
Definition: Missing, outdated, or poor-quality documentation that hinders understanding and maintenance.
Subcategories:
API Documentation
- missing_api_docs: No documentation for public APIs
- outdated_api_docs: Documentation doesn't match implementation
- incomplete_examples: No usage examples for complex APIs
Code Documentation
- missing_comments: Complex algorithms without explanation
- outdated_comments: Comments contradicting current implementation
- redundant_comments: Comments that just restate the code
System Documentation
- missing_architecture_docs: No high-level system design documentation
- missing_deployment_docs: No deployment or operations guide
- missing_onboarding_docs: No guide for new team members
Freshness Assessment:
- Stale: Documentation >6 months out of date
- Outdated: Documentation 3-6 months out of date
- Current: Documentation <3 months out of date
5. Dependency Debt
Definition: Issues with external libraries, frameworks, and system dependencies.
Subcategories:
Version Management
- outdated_dependencies: Libraries with available updates
- vulnerable_dependencies: Dependencies with known security issues
- deprecated_dependencies: Dependencies no longer maintained
- version_conflicts: Incompatible dependency versions
License and Compliance
- license_violations: Dependencies with incompatible licenses
- license_unknown: Dependencies without clear licensing
- compliance_risk: Dependencies creating legal/regulatory risks
Usage Optimization
- unused_dependencies: Dependencies included but not used
- oversized_dependencies: Heavy libraries for simple functionality
- redundant_dependencies: Multiple libraries solving same problem
Risk Assessment:
- Security Risk: Known vulnerabilities, unmaintained dependencies
- Legal Risk: License conflicts, compliance issues
- Technical Risk: Breaking changes, deprecation notices
- Maintenance Risk: Outdated versions, unsupported libraries
6. Infrastructure Debt
Definition: Operations, deployment, and infrastructure-related technical debt.
Subcategories:
Deployment and CI/CD
- manual_deployment: No automated deployment processes
- missing_pipeline: No CI/CD pipeline automation
- brittle_deployments: Deployment process prone to failure
- environment_drift: Inconsistencies between environments
Monitoring and Observability
- missing_monitoring: No application/system monitoring
- inadequate_logging: Insufficient logging for troubleshooting
- missing_alerting: No alerts for critical system conditions
- poor_observability: Can't understand system behavior in production
Configuration Management
- hardcoded_config: Configuration embedded in code
- manual_configuration: No automated configuration management
- secrets_in_code: Sensitive information stored in code
- inconsistent_environments: Dev/staging/prod differences
Operational Impact:
- Availability: Affects system uptime and reliability
- Debuggability: Affects ability to troubleshoot issues
- Scalability: Affects ability to handle load increases
- Security: Affects system security posture
Severity Classification
Critical (Score: 9-10)
- Security vulnerabilities
- Production-breaking issues
- Legal/compliance violations
- Blocking issues for team productivity
High (Score: 7-8)
- Significant technical risk
- Major productivity impact
- Customer-visible quality issues
- Architecture limitations
Medium (Score: 4-6)
- Moderate productivity impact
- Code quality concerns
- Maintenance difficulties
- Minor security concerns
Low (Score: 1-3)
- Style and convention issues
- Documentation gaps
- Minor optimizations
- Cosmetic improvements
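The four bands map mechanically onto the 1-10 score, so the classifier can be a simple threshold lookup. A trivial helper reflecting the bands above:

```python
def severity_for_score(score: int) -> str:
    """Map a 1-10 debt score onto the severity bands defined above."""
    if not 1 <= score <= 10:
        raise ValueError(f"score must be 1-10, got {score}")
    if score >= 9:
        return "critical"
    if score >= 7:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

print(severity_for_score(5))  # medium (the 4-6 band)
```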
Impact Dimensions
Business Impact
- Customer Experience: User-facing quality and performance
- Revenue: Direct impact on business metrics
- Compliance: Regulatory and legal requirements
- Market Position: Competitive advantage considerations
Technical Impact
- Development Velocity: Speed of feature development
- Code Quality: Maintainability and reliability
- System Reliability: Uptime and performance
- Security Posture: Vulnerability and risk exposure
Team Impact
- Developer Productivity: Individual efficiency
- Team Morale: Job satisfaction and engagement
- Knowledge Sharing: Team collaboration and learning
- Onboarding Speed: New team member integration
Effort Estimation Guidelines
T-Shirt Sizing
- XS (1-4 hours): Simple fixes, documentation updates
- S (1-2 days): Minor refactoring, simple feature additions
- M (3-5 days): Moderate refactoring, component changes
- L (1-2 weeks): Major refactoring, architectural changes
- XL (3+ weeks): System-wide changes, major migrations
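When effort is first estimated in hours, the sizes above can be assigned automatically. A small helper, assuming an 8-hour day and a 5-day week (the guidelines imply but do not state this conversion):

```python
def tshirt_size(estimated_hours: float) -> str:
    """Translate an hour estimate into the t-shirt sizes above."""
    if estimated_hours <= 4:
        return "XS"
    if estimated_hours <= 16:   # 1-2 days at 8 hours/day
        return "S"
    if estimated_hours <= 40:   # 3-5 days
        return "M"
    if estimated_hours <= 80:   # 1-2 weeks
        return "L"
    return "XL"                 # 3+ weeks

print(tshirt_size(12))  # S
```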
Complexity Factors
- Technical Complexity: How difficult is the change technically?
- Business Risk: What's the risk if something goes wrong?
- Testing Requirements: How much testing is needed?
- Team Knowledge: Does the team understand this area well?
- Dependencies: How many other systems/teams are involved?
Usage Guidelines
When Classifying Debt
- Start with primary category (code, architecture, test, etc.)
- Identify specific subcategory for precise tracking
- Assess severity based on business and technical impact
- Estimate effort using t-shirt sizing
- Tag with relevant impact dimensions
Consistency Rules
- Use consistent terminology across teams
- Document custom categories for domain-specific debt
- Regular reviews to ensure classification accuracy
- Training for team members on taxonomy usage
Review and Updates
- Quarterly review of taxonomy relevance
- Add new categories as patterns emerge
- Remove unused categories to keep taxonomy lean
- Update severity and impact criteria based on experience
This taxonomy should be adapted to your organization's specific context, technology stack, and business priorities. The key is consistency in application across teams and over time.
tech-debt-tracker reference
Technical Debt Classification Framework
1. Code Debt
Code-level issues that make the codebase harder to understand, modify, and maintain.
Indicators:
- Long functions (>50 lines for complex logic, >20 for simple operations)
- Deep nesting (>4 levels of indentation)
- High cyclomatic complexity (>10)
- Duplicate code patterns (>3 similar blocks)
- Missing or inadequate error handling
- Poor variable/function naming
- Magic numbers and hardcoded values
- Commented-out code blocks
Impact:
- Increased debugging time
- Higher defect rates
- Slower feature development
- Knowledge silos (only the original author understands the code)
Detection Methods:
- AST parsing for structural analysis
- Pattern matching for common anti-patterns
- Complexity metrics calculation
- Duplicate code detection algorithms
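For Python sources, the structural indicators above (function length, parameter count) fall out of a single AST walk. A minimal sketch, with thresholds mirroring the limits listed earlier:

```python
import ast

# Thresholds mirror the indicators above; a real scanner would make them configurable.
MAX_FUNCTION_LINES = 50
MAX_PARAMETERS = 5

def structural_findings(source: str):
    """Flag long functions and long parameter lists via Python's ast module."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            params = (len(node.args.posonlyargs) + len(node.args.args)
                      + len(node.args.kwonlyargs))
            if params > MAX_PARAMETERS:
                findings.append(("too_many_parameters", node.name, params))
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                findings.append(("large_function", node.name, length))
    return findings

src = "def create_user(a, b, c, d, e, f, g):\n    return a\n"
print(structural_findings(src))  # [('too_many_parameters', 'create_user', 7)]
```

This is the same class of check that produced the `too_many_parameters` items in the sample scan output above.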
2. Architecture Debt
High-level design decisions that seemed reasonable at the time but now limit scalability or maintainability.
Indicators:
- Monolithic components that should be modular
- Circular dependencies between modules
- Violation of separation of concerns
- Inconsistent data flow patterns
- Over-engineering or under-engineering for current scale
- Tightly coupled components
- Missing abstraction layers
Impact:
- Difficult to scale individual components
- Cascading changes required for simple modifications
- Testing becomes complex and brittle
- Onboarding new team members takes longer
Detection Methods:
- Dependency analysis
- Module coupling metrics
- Component size analysis
- Interface consistency checks
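Circular dependencies are the easiest of these indicators to automate: build a module import graph and search it for back edges. A sketch over a hand-built adjacency dict (constructing the graph from real imports is left out; the module names are illustrative):

```python
def find_cycle(graph):
    """Return one import cycle as a list of modules, or None if acyclic."""
    visiting, visited = set(), set()

    def dfs(node, path):
        visiting.add(node)
        path.append(node)
        for dep in graph.get(node, []):
            if dep in visiting:
                # Back edge: the cycle is the path from dep onward, closed.
                return path[path.index(dep):] + [dep]
            if dep not in visited:
                if (cycle := dfs(dep, path)) is not None:
                    return cycle
        visiting.discard(node)
        visited.add(node)
        path.pop()
        return None

    for module in graph:
        if module not in visited:
            if (cycle := dfs(module, [])) is not None:
                return cycle
    return None

imports = {"orders": ["payments"], "payments": ["users"], "users": ["orders"]}
print(find_cycle(imports))  # ['orders', 'payments', 'users', 'orders']
```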
3. Test Debt
Inadequate or missing test coverage, poor test quality, and testing infrastructure issues.
Indicators:
- Low test coverage (<80% for critical paths)
- Missing unit tests for complex logic
- No integration tests for key workflows
- Flaky tests that pass/fail intermittently
- Slow test execution (>10 minutes for unit tests)
- Tests that don't test meaningful behavior
- Missing test data management strategy
Impact:
- Fear of refactoring ("don't touch it, it works")
- Regression bugs in production
- Slow feedback cycles during development
- Difficulty validating complex business logic
Detection Methods:
- Coverage report analysis
- Test execution time monitoring
- Test failure pattern analysis
- Test code quality assessment
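Coverage report analysis usually reduces to comparing per-file percentages against the team threshold. A sketch, assuming the percentages have already been extracted from a report (for example coverage.py's JSON output); the file names echo the sample scan above:

```python
def low_coverage_files(coverage_by_file: dict, threshold: float = 80.0):
    """Return (path, percent) pairs below the threshold, worst first."""
    offenders = [(path, pct) for path, pct in coverage_by_file.items()
                 if pct < threshold]
    return sorted(offenders, key=lambda item: item[1])

report = {
    "src/user_service.py": 42.0,
    "src/payment_processor.py": 91.5,
    "src/frontend.js": 67.3,
}
print(low_coverage_files(report))
```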
4. Documentation Debt
Missing, outdated, or poor-quality documentation that makes the system harder to understand and maintain.
Indicators:
- Missing API documentation
- Outdated README files
- No architectural decision records (ADRs)
- Missing code comments for complex algorithms
- No onboarding documentation for new team members
- Inconsistent documentation formats
- Documentation that contradicts actual implementation
Impact:
- Increased onboarding time for new team members
- Knowledge loss when team members leave
- Miscommunication between teams
- Repeated questions in team channels
Detection Methods:
- Documentation coverage analysis
- Freshness checking (last modified dates)
- Link validation
- Comment density analysis
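Freshness checking can be as simple as bucketing a document's age into the stale/outdated/current bands (using roughly 180 and 90 days for six and three months):

```python
import os
import time

def freshness_band(age_days: float) -> str:
    """Bucket a document's age into the taxonomy's freshness bands."""
    if age_days > 180:   # ~6 months
        return "stale"
    if age_days > 90:    # ~3 months
        return "outdated"
    return "current"

def doc_freshness(path: str) -> str:
    """Classify one file by its last-modified timestamp."""
    age_days = (time.time() - os.path.getmtime(path)) / 86400
    return freshness_band(age_days)

print(freshness_band(200))  # stale
```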
5. Dependency Debt
Issues related to external libraries, frameworks, and system dependencies.
Indicators:
- Outdated packages with known security vulnerabilities
- Dependencies with incompatible licenses
- Unused dependencies bloating the build
- Version conflicts between packages
- Deprecated APIs still in use
- Heavy dependencies for simple tasks
- Missing dependency pinning
Impact:
- Security vulnerabilities
- Build instability
- Longer build times
- Legal compliance issues
- Difficulty upgrading core frameworks
Detection Methods:
- Vulnerability scanning
- License compliance checking
- Usage analysis
- Version compatibility checking
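The "missing dependency pinning" indicator is mechanically checkable from a requirements listing. A sketch over requirements.txt-style input (a real check would also consult a vulnerability advisory database; the package lines are illustrative):

```python
import re

# A pinned entry is "name==version"; anything else (ranges, bare names) is flagged.
PINNED = re.compile(r"^[A-Za-z0-9_.-]+==[\w.]+")

def unpinned_requirements(requirements_text: str) -> list:
    """Return requirement lines that are not pinned to an exact version."""
    offenders = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        if not PINNED.match(line):
            offenders.append(line)
    return offenders

reqs = "requests==2.31.0\nflask>=2.0\n# comment\npyyaml\n"
print(unpinned_requirements(reqs))  # ['flask>=2.0', 'pyyaml']
```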
6. Infrastructure Debt
Operations and deployment-related technical debt.
Indicators:
- Manual deployment processes
- Missing monitoring and alerting
- Inadequate logging
- No disaster recovery plan
- Inconsistent environments (dev/staging/prod)
- Missing CI/CD pipelines
- Infrastructure as code gaps
Impact:
- Deployment risks and downtime
- Difficult troubleshooting
- Inconsistent behavior across environments
- Manual work that should be automated
Detection Methods:
- Infrastructure audit checklists
- Configuration drift detection
- Monitoring coverage analysis
- Deployment process documentation review
Severity Scoring Framework
Each piece of tech debt is scored on multiple dimensions to determine overall severity:
Impact Assessment (1-10 scale)
Development Velocity Impact
- 1-2: Negligible impact on development speed
- 3-4: Minor slowdown, workarounds available
- 5-6: Moderate impact, affects some features
- 7-8: Significant slowdown, affects most work
- 9-10: Critical blocker, prevents new development
Quality Impact
- 1-2: No impact on defect rates
- 3-4: Minor increase in minor bugs
- 5-6: Moderate increase in defects
- 7-8: Regular production issues
- 9-10: Critical reliability problems
Team Productivity Impact
- 1-2: No impact on team morale or efficiency
- 3-4: Occasional frustration
- 5-6: Regular complaints from developers
- 7-8: Team actively avoiding the area
- 9-10: Causing developer turnover
Business Impact
- 1-2: No customer-facing impact
- 3-4: Minor UX degradation
- 5-6: Moderate performance impact
- 7-8: Customer complaints or churn
- 9-10: Revenue-impacting issues
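As a sketch, the four impact dimensions above can be rolled up into one severity score; the equal weighting here is an illustrative assumption, not part of the framework (teams often weight business impact higher):

```python
def severity_score(velocity: int, quality: int, productivity: int, business: int) -> float:
    """Average the four 1-10 impact dimensions into a single severity score.

    Equal weighting is an illustrative assumption; adjust weights to taste.
    """
    for score in (velocity, quality, productivity, business):
        if not 1 <= score <= 10:
            raise ValueError(f"impact scores must be 1-10, got {score}")
    return (velocity + quality + productivity + business) / 4
```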
Effort Assessment
Size (Story Points or Hours)
- XS (1-4 hours): Simple refactor or documentation update
- S (1-2 days): Minor architectural change
- M (3-5 days): Moderate refactoring effort
- L (1-2 weeks): Major component restructuring
- XL (3+ weeks): System-wide architectural changes
Risk Level
- Low: Well-understood change with clear scope
- Medium: Some unknowns but manageable
- High: Significant unknowns, potential for scope creep
Skill Requirements
- Junior: Can be handled by any team member
- Mid: Requires experienced developer
- Senior: Needs architectural expertise
- Expert: Requires deep system knowledge
Interest Rate Calculation
Technical debt accrues "interest" - the additional cost of leaving it unfixed. This interest rate helps prioritize which debt to pay down first.
Interest Rate Formula
Interest Rate = (Impact Score × Frequency of Encounter) / Time Period
Where:
- Impact Score: Average severity score (1-10)
- Frequency of Encounter: How often developers interact with this code
- Time Period: Usually measured per sprint or month
Cost of Delay Calculation
Cost of Delay = Interest Rate × Time Until Fix × Team Size Multiplier
Example Calculation
Scenario: Legacy authentication module with poor error handling
- Impact Score: 7 (causes regular production issues)
- Frequency: 15 encounters per sprint (3 developers × 5 times each)
- Team Size: 8 developers
- Current sprint: 1, planned fix: sprint 4
Interest Rate = 7 × 15 = 105 points per sprint
Cost of Delay = 105 × 3 × 1.2 = 378 total cost points
(3 sprints until the fix, with a 1.2 team-size multiplier for the 8-developer team.)
This debt item should be prioritized over lower-cost items.
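The two formulas above translate directly into code. This sketch reproduces the worked example; the function names are illustrative:

```python
def interest_rate(impact_score: float, encounters_per_sprint: int) -> float:
    """Interest Rate = Impact Score x Frequency of Encounter (per sprint)."""
    return impact_score * encounters_per_sprint

def cost_of_delay(rate: float, sprints_until_fix: int,
                  team_size_multiplier: float = 1.0) -> float:
    """Cost of Delay = Interest Rate x Time Until Fix x Team Size Multiplier."""
    return rate * sprints_until_fix * team_size_multiplier

# Worked example: impact 7, 15 encounters/sprint, fix 3 sprints out, 1.2 multiplier
rate = interest_rate(7, 15)        # 105 points per sprint
cod = cost_of_delay(rate, 3, 1.2)  # 378 total cost points
```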
Debt Inventory Management
Data Structure
Each debt item is tracked with the following attributes:
{
"id": "DEBT-2024-001",
"title": "Legacy user authentication module",
"category": "code",
"subcategory": "error_handling",
"location": "src/auth/legacy_auth.py:45-120",
"description": "Authentication error handling uses generic exceptions",
"impact": {
"velocity": 7,
"quality": 8,
"productivity": 6,
"business": 5
},
"effort": {
"size": "M",
"risk": "medium",
"skill_required": "mid"
},
"interest_rate": 105,
"cost_of_delay": 378,
"priority": "high",
"created_date": "2024-01-15",
"last_updated": "2024-01-20",
"assigned_to": null,
"status": "identified",
"tags": ["security", "user-experience", "maintainability"]
}
Status Lifecycle
- Identified - Debt detected but not yet analyzed
- Analyzed - Impact and effort assessed
- Prioritized - Added to backlog with priority
- Planned - Assigned to specific sprint/release
- In Progress - Actively being worked on
- Review - Implementation complete, under review
- Done - Debt resolved and verified
- Won't Fix - Consciously decided not to address
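A minimal in-memory form of the debt record and its status lifecycle might look like the sketch below; the `DebtItem` fields and the snake_case status values are assumptions mirroring the JSON schema and lifecycle above:

```python
from dataclasses import dataclass, field

# Lifecycle states from the list above, as snake_case slugs (an assumption).
VALID_STATUSES = {"identified", "analyzed", "prioritized", "planned",
                  "in_progress", "review", "done", "wont_fix"}

@dataclass
class DebtItem:
    """Minimal in-memory form of the debt-item record shown above."""
    id: str
    title: str
    category: str
    impact: dict                        # velocity/quality/productivity/business
    effort: dict                        # size/risk/skill_required
    status: str = "identified"
    tags: list = field(default_factory=list)

def advance_status(item: DebtItem, new_status: str) -> DebtItem:
    """Move an item through the lifecycle, rejecting unknown states."""
    if new_status not in VALID_STATUSES:
        raise ValueError(f"unknown status: {new_status}")
    item.status = new_status
    return item

item = DebtItem(id="DEBT-2024-001", title="Legacy user authentication module",
                category="code",
                impact={"velocity": 7, "quality": 8, "productivity": 6, "business": 5},
                effort={"size": "M", "risk": "medium", "skill_required": "mid"})
advance_status(item, "analyzed")
```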
Prioritization Frameworks
1. Cost-of-Delay vs Effort Matrix
Plot debt items on a 2D matrix:
- X-axis: Effort (XS to XL)
- Y-axis: Cost of Delay (calculated value)
Priority Quadrants:
- High Cost, Low Effort: Immediate (quick wins)
- High Cost, High Effort: Planned (major initiatives)
- Low Cost, Low Effort: Opportunistic (during related work)
- Low Cost, High Effort: Backlog (consider for future)
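A rough classifier for the four quadrants. The 200-point cost-of-delay threshold and treating L/XL as "high effort" are illustrative assumptions; calibrate both to your own score distribution:

```python
def cod_effort_quadrant(cost_of_delay: float, effort_size: str,
                        cod_threshold: float = 200.0) -> str:
    """Place a debt item in one of the four priority quadrants above.

    The threshold and the L/XL effort cutoff are illustrative assumptions.
    """
    high_cost = cost_of_delay >= cod_threshold
    high_effort = effort_size in ("L", "XL")
    if high_cost and not high_effort:
        return "immediate"       # quick wins
    if high_cost and high_effort:
        return "planned"         # major initiatives
    if not high_cost and not high_effort:
        return "opportunistic"   # during related work
    return "backlog"             # consider for future
```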
2. Weighted Shortest Job First (WSJF)
WSJF Score = (Business Value + Time Criticality + Risk Reduction) / Effort
Where each component is scored 1-10:
- Business Value: Direct impact on customer value
- Time Criticality: How much value decreases over time
- Risk Reduction: How much risk is mitigated by fixing this debt
3. Technical Debt Quadrant
Based on Martin Fowler's framework:
Quadrant 1: Reckless & Deliberate
- "We don't have time for design"
- Highest priority for remediation
Quadrant 2: Prudent & Deliberate
- "We must ship now and deal with consequences"
- Schedule for near-term resolution
Quadrant 3: Reckless & Inadvertent
- "What's layering?"
- Focus on education and process improvement
Quadrant 4: Prudent & Inadvertent
- "Now we know how we should have done it"
- Normal part of learning, lowest priority
Refactoring Strategies
1. Strangler Fig Pattern
Gradually replace old system by building new functionality around it.
When to use:
- Large, monolithic systems
- High-risk changes to critical paths
- Long-term architectural migrations
Implementation:
- Identify boundaries for extraction
- Create abstraction layer
- Route new features to new implementation
- Gradually migrate existing features
- Remove old implementation
2. Branch by Abstraction
Create abstraction layer to allow parallel implementations.
When to use:
- Need to support old and new systems simultaneously
- High-risk changes with rollback requirements
- A/B testing infrastructure changes
Implementation:
- Create abstraction interface
- Implement abstraction for current system
- Replace direct calls with abstraction calls
- Implement new version behind same abstraction
- Switch implementations via configuration
- Remove old implementation
3. Feature Toggles
Use configuration flags to control code execution.
When to use:
- Gradual rollout of refactored components
- Risk mitigation during large changes
- Experimental refactoring approaches
Implementation:
- Identify decision points in code
- Add toggle checks at decision points
- Implement both old and new paths
- Test both paths thoroughly
- Gradually move toggle to new implementation
- Remove old path and toggle
4. Parallel Run
Run old and new implementations simultaneously to verify correctness.
When to use:
- Critical business logic changes
- Data processing pipeline changes
- Algorithm improvements
Implementation:
- Implement new version alongside old
- Run both versions with same inputs
- Compare outputs and log discrepancies
- Investigate and fix discrepancies
- Build confidence through parallel execution
- Switch to new implementation
- Remove old implementation
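The parallel-run steps above can be sketched as a small comparison harness. `parallel_run` and its signature are hypothetical, not from any particular library; in a real system the old result would still be the one returned to callers until confidence is built:

```python
import logging

def parallel_run(old_impl, new_impl, inputs, logger=None):
    """Run old and new implementations on the same inputs, log discrepancies.

    Returns (match_count, mismatches), where each mismatch is a
    (input, old_result, new_result) tuple for investigation.
    """
    logger = logger or logging.getLogger("parallel_run")
    matches, mismatches = 0, []
    for value in inputs:
        old_result = old_impl(value)
        new_result = new_impl(value)
        if old_result == new_result:
            matches += 1
        else:
            mismatches.append((value, old_result, new_result))
            logger.warning("discrepancy for %r: old=%r new=%r",
                           value, old_result, new_result)
    return matches, mismatches
```

Once the mismatch list stays empty over a representative sample, switching to the new implementation is low-risk.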
Sprint Allocation Recommendations
Debt-to-Feature Ratio
Maintain healthy balance between new features and debt reduction:
Team Velocity < 70% of capacity:
- 60% tech debt, 40% features
- Focus on removing major blockers
Team Velocity 70-85% of capacity:
- 30% tech debt, 70% features
- Balanced maintenance approach
Team Velocity > 85% of capacity:
- 15% tech debt, 85% features
- Opportunistic debt reduction only
Sprint Planning Integration
Story Point Allocation:
- Reserve 20% of sprint capacity for tech debt
- Prioritize debt items with highest interest rates
- Include "debt tax" in feature estimates when working in high-debt areas
Debt Budget Tracking:
- Track debt points completed per sprint
- Monitor debt interest rate trend
- Alert when debt accumulation exceeds team's paydown rate
Quarterly Planning
Debt Initiatives:
- Identify 1-2 major debt themes per quarter
- Allocate dedicated sprints for large-scale refactoring
- Plan debt work around major feature releases
Success Metrics:
- Debt interest rate reduction
- Developer velocity improvements
- Defect rate reduction
- Code review cycle time improvement
Stakeholder Reporting
Executive Dashboard
Key Metrics:
- Overall tech debt health score (0-100)
- Debt trend direction (improving/declining)
- Cost of delayed fixes (in development days)
- High-risk debt items count
Monthly Report Structure:
- Executive Summary (3 bullet points)
- Health Score Trend (6-month view)
- Top 3 Risk Items (business impact focus)
- Investment Recommendation (resource allocation)
- Success Stories (debt reduced last month)
Engineering Team Dashboard
Daily Metrics:
- New debt items identified
- Debt items resolved
- Interest rate by team/component
- Debt hotspots (most problematic areas)
Sprint Reviews:
- Debt points completed vs. planned
- Velocity impact from debt work
- Newly discovered debt during feature work
- Team sentiment on code quality
Product Manager Reports
Feature Impact Analysis:
- How debt affects feature development time
- Quality risk assessment for upcoming features
- Debt that blocks planned features
- Recommendations for feature sequence planning
Customer Impact Translation:
- Debt that affects performance
- Debt that increases bug rates
- Debt that limits feature flexibility
- Investment required to maintain current quality
Technical Debt Prioritization Framework
Introduction
Technical debt prioritization is a critical capability that separates high-performing engineering teams from those struggling with maintenance burden. This framework provides multiple approaches to systematically prioritize technical debt based on business value, risk, effort, and strategic alignment.
Core Principles
1. Business Value Alignment
Technical debt work must connect to business outcomes. Every debt item should have a clear story about how fixing it supports business goals.
2. Evidence-Based Decisions
Use data, not opinions, to drive prioritization. Measure impact, track trends, and validate assumptions with evidence.
3. Cost-Benefit Optimization
Balance the cost of fixing debt against the cost of leaving it unfixed. Sometimes living with debt is the right business decision.
4. Risk Management
Consider both the probability and impact of negative outcomes. High-probability, high-impact issues get priority.
5. Sustainable Pace
Debt work should be sustainable over time. Avoid boom-bust cycles of neglect followed by emergency remediation.
Prioritization Frameworks
Framework 1: Cost of Delay (CoD)
Best For: Teams with clear business metrics and well-understood customer impact.
Formula: Priority Score = (Business Value + Urgency + Risk Reduction) / Effort
Components:
Business Value (1-10 scale)
- Customer impact: How many users affected?
- Revenue impact: Direct effect on business metrics
- Strategic value: Alignment with business goals
- Competitive advantage: Market positioning benefits
Urgency (1-10 scale)
- Time sensitivity: How quickly does value decay?
- Dependency criticality: Does this block other work?
- Market timing: External deadlines or windows
- Regulatory pressure: Compliance requirements
Risk Reduction (1-10 scale)
- Security risk mitigation: Vulnerability reduction
- Reliability improvement: Stability gains
- Compliance risk: Regulatory issue prevention
- Technical risk: Architectural problem prevention
Effort Estimation
- Development time in story points or days
- Risk multiplier for uncertainty (1.0-2.0x)
- Skill requirements and availability
- Cross-team coordination needs
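The CoD formula reduces to a short helper, with the uncertainty multiplier applied to effort as described above; applied to the authentication-module example that follows, it yields the same 1.6 score (names are illustrative):

```python
def cod_priority(business_value: int, urgency: int, risk_reduction: int,
                 effort_points: float, risk_multiplier: float = 1.0) -> float:
    """Priority Score = (Business Value + Urgency + Risk Reduction) / Effort.

    risk_multiplier (1.0-2.0x) inflates effort for uncertainty, per the
    effort-estimation notes above.
    """
    return (business_value + urgency + risk_reduction) / (effort_points * risk_multiplier)
```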
Example Calculation:
Authentication module refactor:
- Business Value: 8 (affects all users, blocks SSO)
- Urgency: 7 (blocks Q2 enterprise features)
- Risk Reduction: 9 (high security risk)
- Total Numerator: 24
- Effort: 3 weeks = 15 story points
- CoD Score: 24/15 = 1.6
Framework 2: Weighted Shortest Job First (WSJF)
Best For: SAFe/Agile environments with portfolio-level planning.
Formula: WSJF = (Business Value + Time Criticality + Risk Reduction) / Job Size
Scoring Guidelines:
Business Value (1-20 scale)
- User/business value from fixing this debt
- Direct revenue or cost impact
- Strategic importance to business objectives
Time Criticality (1-20 scale)
- How user/business value declines over time
- Dependency on other work items
- Fixed deadlines or time-sensitive opportunities
Risk Reduction/Opportunity Enablement (1-20 scale)
- Risk mitigation value
- Future opportunities this enables
- Options this preserves or creates
Job Size (1-20 scale)
- Relative sizing compared to other debt items
- Include uncertainty and risk factors
- Consider dependencies and coordination overhead
WSJF Bands:
- Highest (WSJF > 10): Do immediately
- High (WSJF 5-10): Next quarter priority
- Medium (WSJF 2-5): Planned work
- Low (WSJF < 2): Backlog
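A sketch of the WSJF score and band mapping. How the boundary values (exactly 5 or exactly 2) are binned is an assumption, since the bands above leave them ambiguous:

```python
def wsjf_score(business_value: int, time_criticality: int,
               risk_reduction: int, job_size: int) -> float:
    """WSJF = (Business Value + Time Criticality + Risk Reduction) / Job Size."""
    return (business_value + time_criticality + risk_reduction) / job_size

def wsjf_band(score: float) -> str:
    """Map a WSJF score onto the bands above (boundary binning is an assumption)."""
    if score > 10:
        return "highest"   # do immediately
    if score >= 5:
        return "high"      # next quarter priority
    if score >= 2:
        return "medium"    # planned work
    return "low"           # backlog
```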
Framework 3: RICE (Reach, Impact, Confidence, Effort)
Best For: Product-focused teams with user-centric metrics.
Formula: RICE Score = (Reach × Impact × Confidence) / Effort
Components:
Reach (number or percentage)
- How many developers/users affected per period?
- Percentage of codebase impacted
- Number of features that would benefit
Impact (1-3 scale)
- 3 = Massive impact
- 2 = High impact
- 1 = Medium impact
- 0.5 = Low impact
- 0.25 = Minimal impact
Confidence (percentage)
- How confident are you in your estimates?
- Based on evidence, not gut feeling
- 100% = High confidence with data
- 80% = Medium confidence with some data
- 50% = Low confidence, mostly assumptions
Effort (story points or person-months)
- Total effort from all team members
- Include design, development, testing, deployment
- Account for coordination and communication overhead
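The components combine into a one-line helper, with confidence expressed as a fraction (0.8 for 80%); applied to the legacy-API example that follows, it gives the same 4.0 score:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort.

    reach: people (or percentage) affected per period
    impact: 0.25-3 scale above; confidence: fraction, e.g. 0.8 for 80%
    effort: story points or person-months
    """
    return (reach * impact * confidence) / effort
```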
Example:
Legacy API cleanup:
- Reach: 5 teams × 4 developers = 20 people per quarter
- Impact: 2 (high - significantly improves developer experience)
- Confidence: 80% (have done similar cleanups before)
- Effort: 8 story points
- RICE: (20 × 2 × 0.8) / 8 = 4.0
Framework 4: Technical Debt Quadrants
Best For: Teams needing to understand debt context and strategy.
Based on Martin Fowler's framework, categorize debt into quadrants:
Quadrant 1: Reckless & Deliberate
- "We don't have time for design"
- Strategy: Immediate remediation
- Priority: Highest - created knowingly with poor justification
Quadrant 2: Prudent & Deliberate
- "We must ship now and deal with consequences"
- Strategy: Planned remediation
- Priority: High - was right decision at time, now needs attention
Quadrant 3: Reckless & Inadvertent
- "What's layering?"
- Strategy: Education and process improvement
- Priority: Medium - focus on preventing more
Quadrant 4: Prudent & Inadvertent
- "Now we know how we should have done it"
- Strategy: Opportunistic improvement
- Priority: Low - normal part of learning
Framework 5: Risk-Impact Matrix
Best For: Risk-averse organizations or regulated environments.
Plot debt items on 2D matrix:
- X-axis: Likelihood of negative impact (1-5)
- Y-axis: Severity of negative impact (1-5)
Priority Quadrants:
- Critical (High likelihood, High impact): Immediate action
- Important (High likelihood, Low impact OR Low likelihood, High impact): Planned action
- Monitor (Medium likelihood, Medium impact): Watch and assess
- Accept (Low likelihood, Low impact): Document decision to accept
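One possible coding of the matrix. Treating scores of 4-5 as "high" and 3 as "medium", and mapping mixed medium/low cells to "accept", are assumptions the quadrant definitions above do not pin down:

```python
def risk_impact_priority(likelihood: int, severity: int) -> str:
    """Place a debt item in the risk-impact matrix above (both axes 1-5).

    The 4-5 = high / 3 = medium cutoffs are illustrative assumptions.
    """
    def level(score: int) -> str:
        return "high" if score >= 4 else "medium" if score == 3 else "low"

    lk, sv = level(likelihood), level(severity)
    if lk == "high" and sv == "high":
        return "critical"    # immediate action
    if "high" in (lk, sv):
        return "important"   # planned action
    if lk == "medium" and sv == "medium":
        return "monitor"     # watch and assess
    return "accept"          # document the decision to accept
```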
Impact Categories:
- Security: Data breaches, vulnerability exploitation
- Reliability: System outages, data corruption
- Performance: User experience degradation
- Compliance: Regulatory violations, audit findings
- Productivity: Team velocity reduction, developer frustration
Multi-Framework Approach
When to Use Multiple Frameworks
Portfolio-Level Planning:
- Use WSJF for quarterly planning
- Use CoD for sprint-level decisions
- Use Risk-Impact for security review
Team Maturity Progression:
- Start with simple Risk-Impact matrix
- Progress to RICE as metrics improve
- Advanced teams can use CoD effectively
Context-Dependent Selection:
- Regulated industries: Risk-Impact primary, WSJF secondary
- Product companies: RICE primary, CoD secondary
- Enterprise software: CoD primary, WSJF secondary
Combining Framework Results
Weighted Scoring:
Final Priority = 0.4 × CoD_Score + 0.3 × RICE_Score + 0.3 × Risk_Score
Tier-Based Approach:
- Security/compliance items (Risk-Impact)
- High business value items (RICE/CoD)
- Developer productivity items (WSJF)
- Technical excellence items (Quadrants)
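The weighted-scoring formula above, as a sketch; it assumes the three framework scores have first been normalized onto a common scale, otherwise the weights are meaningless:

```python
def combined_priority(cod: float, rice: float, risk: float,
                      weights: tuple = (0.4, 0.3, 0.3)) -> float:
    """Final Priority = 0.4 x CoD + 0.3 x RICE + 0.3 x Risk (weights above).

    Assumes all three inputs are pre-normalized onto the same scale.
    """
    w_cod, w_rice, w_risk = weights
    return w_cod * cod + w_rice * rice + w_risk * risk
```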
Implementation Guidelines
Setting Up Prioritization
Step 1: Choose Primary Framework
- Consider team maturity, organization culture, available data
- Start simple, evolve complexity over time
- Ensure framework aligns with business planning cycles
Step 2: Define Scoring Criteria
- Create rubrics for each scoring dimension
- Use organization-specific examples
- Train team on consistent application
Step 3: Establish Review Cadence
- Weekly: New urgent items
- Bi-weekly: Sprint planning integration
- Monthly: Portfolio review and reprioritization
- Quarterly: Framework effectiveness review
Step 4: Tool Integration
- Use existing project management tools
- Automate scoring where possible
- Create dashboards for stakeholder communication
Common Pitfalls
Analysis Paralysis
- Problem: Spending too much time on perfect prioritization
- Solution: Use "good enough" decisions, iterate quickly
Ignoring Business Context
- Problem: Purely technical prioritization
- Solution: Always include business stakeholder perspective
Inconsistent Application
- Problem: Different teams using different approaches
- Solution: Standardize framework, provide training
Over-Engineering the Process
- Problem: Complex frameworks nobody uses
- Solution: Start simple, add complexity only when needed
Neglecting Stakeholder Buy-In
- Problem: Engineering-only prioritization decisions
- Solution: Include product, business stakeholders in framework design
Measuring Framework Effectiveness
Leading Indicators:
- Framework adoption rate across teams
- Time to prioritization decision
- Stakeholder satisfaction with decisions
- Consistency of scoring across team members
Lagging Indicators:
- Debt reduction velocity
- Business outcome improvements
- Technical incident reduction
- Developer satisfaction improvements
Review Questions:
- Are we making better debt decisions than before?
- Do stakeholders trust our prioritization process?
- Are we delivering measurable business value from debt work?
- Is the framework sustainable for long-term use?
Stakeholder Communication
For Engineering Leaders
Monthly Dashboard:
- Debt portfolio health score
- Priority distribution by framework
- Progress on high-priority items
- Framework effectiveness metrics
Quarterly Business Review:
- Debt work business impact
- Framework ROI analysis
- Resource allocation recommendations
- Strategic debt initiative proposals
For Product Managers
Sprint Planning Input:
- Debt items affecting feature velocity
- User experience impact from debt
- Feature delivery risk from debt
- Opportunity cost of debt work vs features
Roadmap Integration:
- Debt work timing with feature releases
- Dependencies between debt work and features
- Resource allocation for debt vs features
- Customer impact communication
For Executive Leadership
Executive Summary:
- Overall technical health trend
- Business risk from technical debt
- Investment recommendations
- Competitive implications
Key Metrics:
- Debt-adjusted development velocity
- Technical incident trends
- Customer satisfaction correlations
- Team retention and satisfaction
This prioritization framework should be adapted to your organization's context, but the core principles of evidence-based, business-aligned, systematic prioritization should remain constant.
Stakeholder Communication Templates
Introduction
Effective communication about technical debt is crucial for securing resources, setting expectations, and maintaining stakeholder trust. This document provides templates and guidelines for communicating technical debt status, impact, and recommendations to different stakeholder groups.
Executive Summary Templates
Monthly Executive Report
Subject: Technical Health Report - [Month] [Year]
EXECUTIVE SUMMARY
Overall Status: [EXCELLENT/GOOD/FAIR/POOR] - Health Score: [X]/100
Key Message: [One sentence summary of current state and trend]
Immediate Actions Required: [Yes/No] - [Brief explanation if yes]
BUSINESS IMPACT
• Development Velocity: [X]% impact on feature delivery speed
• Quality Risk: [LOW/MEDIUM/HIGH] - [Brief explanation]
• Security Posture: [X] critical issues, [X] high-priority issues
• Customer Impact: [Direct customer-facing implications]
FINANCIAL IMPLICATIONS
• Current Cost: $[X]K monthly in reduced velocity
• Investment Needed: $[X]K for critical issues (next quarter)
• ROI Projection: [X]% velocity improvement, $[X]K annual savings
• Risk Cost: Up to $[X]K if critical issues materialize
STRATEGIC RECOMMENDATIONS
- [Priority 1]: [Action] - [Business justification] - [Timeline]
- [Priority 2]: [Action] - [Business justification] - [Timeline]
- [Priority 3]: [Action] - [Business justification] - [Timeline]
TREND ANALYSIS
• Health Score: [Previous] → [Current] ([Improving/Declining/Stable])
• Debt Items: [Previous] → [Current] ([Net change])
• High-Priority Issues: [Previous] → [Current]
NEXT STEPS
• This Quarter: [Key initiatives and expected outcomes]
• Resource Request: [Additional resources needed, if any]
• Dependencies: [External dependencies or blockers]
Quarterly Board-Level Report
Subject: Technical Debt & Engineering Health - Q[X] [Year]
KEY METRICS
| Metric | Current | Target | Trend |
|---|---|---|---|
| Health Score | [X]/100 | [X]/100 | [↑/↓/→] |
| Velocity Impact | [X]% | <[X]% | [↑/↓/→] |
| Critical Issues | [X] | 0 | [↑/↓/→] |
| Security Risk | [LOW/MED/HIGH] | LOW | [↑/↓/→] |
STRATEGIC CONTEXT
Technical debt represents deferred investment in our technology platform. Our current debt portfolio has [positive/negative/neutral] implications for:
• Growth Capacity: [Impact on ability to scale]
• Competitive Position: [Impact on market responsiveness]
• Risk Profile: [Impact on operational risk]
• Team Retention: [Impact on engineering talent]
INVESTMENT ANALYSIS
• Current Annual Cost: $[X]M in reduced productivity
• Proposed Investment: $[X]M over [timeframe]
• Expected ROI: [X]% productivity improvement, $[X]M NPV
• Risk Mitigation: $[X]M in avoided incident costs
RECOMMENDATIONS
- [Immediate]: [Strategic action with business rationale]
- [This Year]: [Medium-term initiative with expected outcomes]
- [Ongoing]: [Process or cultural change needed]
Product Management Templates
Sprint Planning Discussion
Subject: Tech Debt Impact on Sprint [X] Planning
SPRINT CAPACITY IMPACT
Affected User Stories:
• [Story 1]: [X] point increase due to [debt issue]
• [Story 2]: [X]% risk of scope reduction due to [debt issue]
• [Story 3]: Blocked by [debt issue] - requires [X] points of debt work first
Recommended Debt Work This Sprint:
• [Debt Item 1] ([X] points): Unblocks [Story Y], reduces future story complexity
• [Debt Item 2] ([X] points): Prevents [specific risk] in upcoming features
Trade-off Analysis:
• If we fix debt: [X] points for features, [benefits for future sprints]
• If we don't fix debt: [X] points for features, [accumulated costs and risks]
Recommendation: [Specific allocation suggestion with rationale]
Feature Impact Assessment
Subject: Technical Debt Impact Assessment - [Feature Name]
DEBT AFFECTING THIS FEATURE
| Debt Item | Impact | Effort to Fix | Recommendation |
|---|---|---|---|
| [Item 1] | [Description] | [X] points | Fix before/Work around/Accept |
| [Item 2] | [Description] | [X] points | Fix before/Work around/Accept |
DELIVERY IMPACT
• Timeline Risk: [LOW/MEDIUM/HIGH]
- Base estimate: [X] points
- Debt-adjusted estimate: [X] points ([X]% increase)
- Risk factors: [Specific risks and probabilities]
• Quality Risk: [LOW/MEDIUM/HIGH]
- [Specific quality concerns from debt]
- Mitigation strategies: [Options for reducing risk]
• Future Feature Impact:
- This feature will [add to/reduce/not affect] debt burden
- Related future features will be [easier/harder/unaffected]
RECOMMENDATIONS
- [Option 1]: [Approach with pros/cons]
- [Option 2]: [Alternative approach with trade-offs]
- Recommended: [Chosen approach with justification]
Engineering Team Templates
Team Health Check
Subject: Weekly Team Health Check - [Date]
DEBT BURDEN THIS WEEK
• New Debt Identified: [X] items ([categories])
• Debt Resolved: [X] items ([X] hours saved)
• Net Change: [Positive/Negative] [X] items
• Top Pain Points: [Developer-reported friction areas]
VELOCITY IMPACT
• Stories Affected by Debt: [X] of [Y] planned stories
• Estimated Overhead: [X] hours of extra work due to debt
• Blocked Work: [Any stories waiting on debt resolution]
TEAM SENTIMENT
• Frustration Level: [1-5 scale] ([trend])
• Confidence in Codebase: [1-5 scale] ([trend])
• Top Complaints: [Most common developer concerns]
ACTIONS THIS WEEK
• Debt Work Planned: [Specific items and assignees]
• Prevention Measures: [Process improvements or reviews]
• Escalations: [Issues needing management attention]
Architecture Decision Record (ADR) Template
Subject: ADR-[XXX]: [Decision Title] - Technical Debt Consideration
Status: [Proposed/Accepted/Deprecated]
Date: [YYYY-MM-DD]
Decision Makers: [Names]
CONTEXT
[Background and current situation]
TECHNICAL DEBT ANALYSIS
• Debt Created by This Decision:
- [Specific debt that will be introduced]
- [Estimated effort to resolve later: X points]
- [Interest rate: impact over time]
• Debt Resolved by This Decision:
- [Existing debt this addresses]
- [Estimated effort saved: X points]
- [Risk reduction achieved]
• Net Debt Impact: [Positive/Negative/Neutral]
DECISION
[What we decided to do]
RATIONALE
[Why we made this decision, including debt trade-offs]
DEBT MANAGEMENT PLAN
• Monitoring: [How we'll track the debt introduced]
• Timeline: [When we plan to address the debt]
• Success Criteria: [How we'll know it's time to pay down the debt]
CONSEQUENCES
[Expected outcomes, including debt implications]
Customer-Facing Templates
Release Notes - Quality Improvements
Subject: Platform Stability and Performance Improvements - Release [X.Y]
QUALITY IMPROVEMENTS
We've invested significant effort in improving the reliability and performance of our platform. While these changes aren't feature additions, they provide important benefits:
RELIABILITY ENHANCEMENTS
• Reduced Error Rates: [X]% fewer errors in [specific area]
• Improved Uptime: [X]% improvement in system availability
• Faster Recovery: [X]% faster recovery from service interruptions
PERFORMANCE IMPROVEMENTS
• Page Load Speed: [X]% faster loading for [specific features]
• API Response Time: [X]% improvement in response times
• Resource Usage: [X]% reduction in memory/CPU usage
SECURITY STRENGTHENING
• Vulnerability Resolution: Addressed [X] security findings
• Authentication Improvements: Enhanced login security and reliability
• Data Protection: Improved data encryption and access controls
WHAT THIS MEANS FOR YOU
• Better User Experience: Fewer interruptions, faster responses
• Increased Reliability: Less downtime, more predictable performance
• Enhanced Security: Your data is better protected
We continue to balance new feature development with platform investments to ensure a reliable, secure, and performant experience.
Service Incident Communication
Subject: Service Update - [Brief Description] - [Status]
INCIDENT SUMMARY
• Impact: [Description of customer impact]
• Duration: [Start time] - [End time / Ongoing]
• Root Cause: [High-level, customer-appropriate explanation]
• Resolution: [What was done to fix it]
TECHNICAL DEBT CONNECTION
This incident was [directly caused by / contributed to by / unrelated to] technical debt in our system. Specifically:
• Contributing Factors: [How debt played a role, if any]
• Prevention Measures: [Debt work planned to prevent recurrence]
• Timeline: [When preventive measures will be completed]
IMMEDIATE ACTIONS
- [Action 1 with timeline]
- [Action 2 with timeline]
- [Action 3 with timeline]
LONG-TERM IMPROVEMENTS
We're investing in [specific technical improvements] to prevent similar issues:
• Infrastructure: [Relevant infrastructure debt work]
• Monitoring: [Observability improvements planned]
• Process: [Development process improvements]
We apologize for the inconvenience and appreciate your patience as we continue to strengthen our platform.
Internal Communication Templates
Engineering All-Hands Presentation
Slide Template: Technical Debt State of the Union
SLIDE 1: Current State
- Health Score: [X]/100 [Trend arrow]
- Total Debt Items: [X] ([X]% of codebase)
- High Priority: [X] items requiring immediate attention
- Team Impact: [X]% velocity reduction
SLIDE 2: What We've Accomplished
- Resolved [X] debt items ([X] hours of future work saved)
- Improved health score by [X] points
- Key wins: [2-3 specific examples with business impact]
SLIDE 3: Current Focus Areas
- [Category 1]: [X] items, [business impact]
- [Category 2]: [X] items, [business impact]
- [Category 3]: [X] items, [business impact]
SLIDE 4: Success Stories
- [Specific example]: [Problem] → [Solution] → [Outcome]
- Metrics: [Before/after comparison]
- Team feedback: [Developer quotes]
SLIDE 5: Looking Forward
- Q[X] Goals: [Specific targets]
- Major Initiatives: [2-3 big-picture improvements]
- How You Can Help: [Specific asks of the team]
Retrospective Templates
Sprint Retrospective - Debt Focus
What Went Well:
• Debt work completed: [Specific items and impact]
• Process improvements: [What worked for debt management]
• Team collaboration: [Cross-functional debt work successes]
What Didn't Go Well:
• Debt work challenges: [Obstacles encountered]
• Scope creep: [Debt work that expanded beyond estimates]
• Communication gaps: [Information that wasn't shared effectively]
Action Items:
• Process: [Changes to how we handle debt work]
• Planning: [Improvements to debt estimation/prioritization]
• Prevention: [Changes to prevent new debt creation]
• Tools: [Tooling improvements needed]
Communication Best Practices
Do's and Don'ts
DO:
• Use business language, not technical jargon
• Quantify impact with specific metrics
• Provide clear timelines and expectations
• Acknowledge trade-offs and constraints
• Connect debt work to business outcomes
• Be proactive in communication
DON'T:
• Blame previous decisions or developers
• Use fear-based messaging exclusively
• Overwhelm stakeholders with technical details
• Make promises without clear plans
• Ignore the business context
• Assume stakeholders understand technical implications
Tailoring Messages
For Executives: Focus on business impact, ROI, and strategic implications
For Product: Focus on feature impact, timeline risks, and user experience
For Engineering: Focus on technical details, process improvements, and developer experience
For Customers: Focus on reliability, performance, and security benefits
Frequency Guidelines
Real-time: Critical security issues, production incidents
Weekly: Team health checks, sprint impacts
Monthly: Stakeholder updates, trend analysis
Quarterly: Strategic reviews, investment planning
As-needed: Major decisions, significant changes
These templates should be customized for your organization's communication style, stakeholder preferences, and business context.
#!/usr/bin/env python3
"""
Tech Debt Dashboard
Takes historical debt inventories (multiple scans over time) and generates trend analysis,
debt velocity (accruing vs paying down), health score, and executive summary.
Usage:
python debt_dashboard.py historical_data.json
python debt_dashboard.py data1.json data2.json data3.json
python debt_dashboard.py --input-dir ./debt_scans/ --output dashboard_report.json
python debt_dashboard.py historical_data.json --period quarterly --team-size 8
"""
import json
import argparse
import sys
import os
from collections import defaultdict, Counter
from datetime import datetime, timedelta
from pathlib import Path
from typing import Dict, List, Any, Optional, Tuple
from dataclasses import dataclass, asdict
from statistics import mean, median, stdev
import re
@dataclass
class HealthMetrics:
"""Health metrics for a specific time period."""
overall_score: float # 0-100
debt_density: float # debt items per file
velocity_impact: float # estimated velocity reduction %
quality_score: float # 0-100
maintainability_score: float # 0-100
technical_risk_score: float # 0-100
@dataclass
class TrendAnalysis:
"""Trend analysis for debt metrics over time."""
metric_name: str
trend_direction: str # "improving", "declining", "stable"
change_rate: float # rate of change per period
correlation_strength: float # -1 to 1
forecast_next_period: float
confidence_interval: Tuple[float, float]
@dataclass
class DebtVelocity:
"""Debt velocity tracking - how fast debt is being created vs resolved."""
period: str
new_debt_items: int
resolved_debt_items: int
net_change: int
velocity_ratio: float # resolved/new, >1 is good
effort_hours_added: float
effort_hours_resolved: float
net_effort_change: float
class DebtDashboard:
"""Main dashboard class for debt trend analysis and reporting."""
def __init__(self, team_size: int = 5):
self.team_size = team_size
self.historical_data = []
self.processed_snapshots = []
self.trend_analyses = {}
self.health_history = []
self.velocity_history = []
# Configuration for health scoring
self.health_weights = {
"debt_density": 0.25,
"complexity_score": 0.20,
"test_coverage_proxy": 0.15,
"documentation_proxy": 0.10,
"security_score": 0.15,
"maintainability": 0.15
}
# Thresholds for categorization
self.thresholds = {
"excellent": 85,
"good": 70,
"fair": 55,
"poor": 40
}
def load_historical_data(self, file_paths: List[str]) -> bool:
"""Load multiple debt inventory files for historical analysis."""
self.historical_data = []
for file_path in file_paths:
try:
with open(file_path, 'r', encoding='utf-8') as f:
data = json.load(f)
# Normalize data format
if isinstance(data, dict) and 'debt_items' in data:
# Scanner output format
snapshot = {
"file_path": file_path,
"scan_date": data.get("scan_metadata", {}).get("scan_date",
self._extract_date_from_filename(file_path)),
"debt_items": data["debt_items"],
"summary": data.get("summary", {}),
"file_statistics": data.get("file_statistics", {})
}
elif isinstance(data, dict) and 'prioritized_backlog' in data:
# Prioritizer output format
snapshot = {
"file_path": file_path,
"scan_date": data.get("metadata", {}).get("analysis_date",
self._extract_date_from_filename(file_path)),
"debt_items": data["prioritized_backlog"],
"summary": data.get("insights", {}),
"file_statistics": {}
}
elif isinstance(data, list):
# Raw debt items array
snapshot = {
"file_path": file_path,
"scan_date": self._extract_date_from_filename(file_path),
"debt_items": data,
"summary": {},
"file_statistics": {}
}
else:
raise ValueError(f"Unrecognized data format in {file_path}")
self.historical_data.append(snapshot)
except Exception as e:
print(f"Error loading {file_path}: {e}")
continue
if not self.historical_data:
print("No valid data files loaded.")
return False
# Sort by date
self.historical_data.sort(key=lambda x: x["scan_date"])
print(f"Loaded {len(self.historical_data)} historical snapshots")
return True
def load_from_directory(self, directory_path: str, pattern: str = "*.json") -> bool:
"""Load all JSON files from a directory."""
directory = Path(directory_path)
if not directory.exists():
print(f"Directory does not exist: {directory_path}")
return False
file_paths = []
for file_path in directory.glob(pattern):
if file_path.is_file():
file_paths.append(str(file_path))
if not file_paths:
print(f"No matching files found in {directory_path}")
return False
return self.load_historical_data(file_paths)
def _extract_date_from_filename(self, file_path: str) -> str:
"""Extract date from filename if possible, otherwise use current date."""
filename = Path(file_path).name
# Try to find date patterns in filename
date_patterns = [
r"(\d{4}-\d{2}-\d{2})", # YYYY-MM-DD
r"(\d{4}\d{2}\d{2})", # YYYYMMDD
r"(\d{2}-\d{2}-\d{4})", # MM-DD-YYYY
]
for pattern in date_patterns:
match = re.search(pattern, filename)
if match:
date_str = match.group(1)
try:
if len(date_str) == 8: # YYYYMMDD
date_str = f"{date_str[:4]}-{date_str[4:6]}-{date_str[6:]}"
datetime.strptime(date_str, "%Y-%m-%d")
return date_str + "T12:00:00"
except ValueError:
continue
# Fallback to file modification time
try:
mtime = os.path.getmtime(file_path)
return datetime.fromtimestamp(mtime).isoformat()
except OSError:
return datetime.now().isoformat()
def generate_dashboard(self, period: str = "monthly") -> Dict[str, Any]:
"""
Generate comprehensive debt dashboard.
Args:
period: Analysis period ("weekly", "monthly", "quarterly")
Returns:
Dictionary containing dashboard data and analysis
"""
print(f"Generating debt dashboard for {len(self.historical_data)} snapshots...")
print(f"Analysis period: {period}")
print("=" * 50)
# Step 1: Process historical snapshots
self._process_snapshots()
# Step 2: Calculate health metrics for each snapshot
self._calculate_health_metrics()
# Step 3: Analyze trends
self._analyze_trends(period)
# Step 4: Calculate debt velocity
self._calculate_debt_velocity(period)
# Step 5: Generate forecasts
forecasts = self._generate_forecasts()
# Step 6: Create executive summary
executive_summary = self._generate_executive_summary()
# Step 7: Generate recommendations
recommendations = self._generate_strategic_recommendations()
# Step 8: Create visualizations data
visualizations = self._generate_visualization_data()
dashboard_data = {
"metadata": {
"generated_date": datetime.now().isoformat(),
"analysis_period": period,
"snapshots_analyzed": len(self.historical_data),
"date_range": {
"start": self.historical_data[0]["scan_date"] if self.historical_data else None,
"end": self.historical_data[-1]["scan_date"] if self.historical_data else None
},
"team_size": self.team_size
},
"executive_summary": executive_summary,
"current_health": self.health_history[-1] if self.health_history else None,
"trend_analysis": {name: asdict(trend) for name, trend in self.trend_analyses.items()},
"debt_velocity": [asdict(v) for v in self.velocity_history],
"forecasts": forecasts,
"recommendations": recommendations,
"visualizations": visualizations,
"detailed_metrics": self._get_detailed_metrics()
}
return dashboard_data
def _process_snapshots(self):
"""Process raw snapshots into standardized format."""
self.processed_snapshots = []
for snapshot in self.historical_data:
processed = {
"date": snapshot["scan_date"],
"total_debt_items": len(snapshot["debt_items"]),
"debt_by_type": Counter(item.get("type", "unknown") for item in snapshot["debt_items"]),
"debt_by_severity": Counter(item.get("severity", "medium") for item in snapshot["debt_items"]),
"debt_by_category": Counter(self._categorize_debt_item(item) for item in snapshot["debt_items"]),
"total_files": snapshot["summary"].get("total_files_scanned",
len(snapshot["file_statistics"])),
"total_effort_estimate": self._calculate_total_effort(snapshot["debt_items"]),
"high_priority_count": len([item for item in snapshot["debt_items"]
if self._is_high_priority(item)]),
"security_debt_count": len([item for item in snapshot["debt_items"]
if self._is_security_related(item)]),
"raw_data": snapshot
}
self.processed_snapshots.append(processed)
def _categorize_debt_item(self, item: Dict[str, Any]) -> str:
"""Categorize debt item into high-level categories."""
debt_type = item.get("type", "unknown")
categories = {
"code_quality": ["large_function", "high_complexity", "duplicate_code",
"long_line", "missing_docstring"],
"architecture": ["architecture_debt", "large_file"],
"security": ["security_risk", "hardcoded_secrets", "sql_injection_risk"],
"testing": ["test_debt", "missing_tests", "low_coverage"],
"maintenance": ["todo_comment", "commented_code"],
"dependencies": ["dependency_debt", "outdated_packages"],
"infrastructure": ["deployment_debt", "monitoring_gaps"],
"documentation": ["missing_docstring", "outdated_docs"]
}
for category, types in categories.items():
if debt_type in types:
return category
return "other"
def _calculate_total_effort(self, debt_items: List[Dict[str, Any]]) -> float:
"""Calculate total estimated effort for debt items."""
total_effort = 0.0
for item in debt_items:
# Try to get effort from existing analysis
if "effort_estimate" in item:
total_effort += item["effort_estimate"].get("hours_estimate", 0)
else:
# Estimate based on debt type and severity
effort = self._estimate_item_effort(item)
total_effort += effort
return total_effort
def _estimate_item_effort(self, item: Dict[str, Any]) -> float:
"""Estimate effort for a debt item."""
debt_type = item.get("type", "unknown")
severity = item.get("severity", "medium")
base_efforts = {
"todo_comment": 2,
"missing_docstring": 2,
"long_line": 1,
"large_function": 8,
"high_complexity": 16,
"duplicate_code": 12,
"large_file": 32,
"syntax_error": 4,
"security_risk": 20,
"architecture_debt": 80,
"test_debt": 16
}
base_effort = base_efforts.get(debt_type, 8)
severity_multipliers = {
"low": 0.5,
"medium": 1.0,
"high": 1.5,
"critical": 2.0
}
return base_effort * severity_multipliers.get(severity, 1.0)
def _is_high_priority(self, item: Dict[str, Any]) -> bool:
"""Determine if debt item is high priority."""
severity = item.get("severity", "medium")
priority_score = item.get("priority_score", 0)
debt_type = item.get("type", "")
return (severity in ["high", "critical"] or
priority_score >= 7 or
debt_type in ["security_risk", "syntax_error", "architecture_debt"])
def _is_security_related(self, item: Dict[str, Any]) -> bool:
"""Determine if debt item is security-related."""
debt_type = item.get("type", "")
description = item.get("description", "").lower()
security_types = ["security_risk", "hardcoded_secrets", "sql_injection_risk"]
security_keywords = ["password", "token", "key", "secret", "auth", "security"]
return (debt_type in security_types or
any(keyword in description for keyword in security_keywords))
def _calculate_health_metrics(self):
"""Calculate health metrics for each snapshot."""
self.health_history = []
for snapshot in self.processed_snapshots:
# Debt density (lower is better)
debt_density = snapshot["total_debt_items"] / max(1, snapshot["total_files"])
debt_density_score = max(0, 100 - (debt_density * 20)) # Scale to 0-100
# Complexity score (based on high complexity debt)
complex_debt_ratio = (snapshot["debt_by_type"].get("high_complexity", 0) +
snapshot["debt_by_type"].get("large_function", 0)) / max(1, snapshot["total_debt_items"])
complexity_score = max(0, 100 - (complex_debt_ratio * 100))
# Test coverage proxy (based on test debt)
test_debt_ratio = snapshot["debt_by_category"].get("testing", 0) / max(1, snapshot["total_debt_items"])
test_coverage_proxy = max(0, 100 - (test_debt_ratio * 150))
# Documentation proxy (based on documentation debt)
doc_debt_ratio = snapshot["debt_by_category"].get("documentation", 0) / max(1, snapshot["total_debt_items"])
documentation_proxy = max(0, 100 - (doc_debt_ratio * 100))
# Security score (based on security debt)
security_debt_ratio = snapshot["security_debt_count"] / max(1, snapshot["total_debt_items"])
security_score = max(0, 100 - (security_debt_ratio * 200))
# Maintainability (based on architecture and code quality debt)
maint_debt_count = (snapshot["debt_by_category"].get("architecture", 0) +
snapshot["debt_by_category"].get("code_quality", 0))
maint_debt_ratio = maint_debt_count / max(1, snapshot["total_debt_items"])
maintainability = max(0, 100 - (maint_debt_ratio * 120))
# Calculate weighted overall score
weights = self.health_weights
overall_score = (
debt_density_score * weights["debt_density"] +
complexity_score * weights["complexity_score"] +
test_coverage_proxy * weights["test_coverage_proxy"] +
documentation_proxy * weights["documentation_proxy"] +
security_score * weights["security_score"] +
maintainability * weights["maintainability"]
)
# Velocity impact (estimated percentage reduction in team velocity)
high_impact_ratio = snapshot["high_priority_count"] / max(1, snapshot["total_debt_items"])
velocity_impact = min(50, high_impact_ratio * 30 + debt_density * 5)
# Technical risk (0-100, higher is more risky)
risk_factors = snapshot["security_debt_count"] + snapshot["debt_by_type"].get("architecture_debt", 0)
technical_risk = min(100, risk_factors * 10 + (100 - security_score))
health_metrics = HealthMetrics(
overall_score=round(overall_score, 1),
debt_density=round(debt_density, 2),
velocity_impact=round(velocity_impact, 1),
quality_score=round((complexity_score + maintainability) / 2, 1),
maintainability_score=round(maintainability, 1),
technical_risk_score=round(technical_risk, 1)
)
# Add timestamp
health_entry = asdict(health_metrics)
health_entry["date"] = snapshot["date"]
self.health_history.append(health_entry)
def _analyze_trends(self, period: str):
"""Analyze trends in various metrics."""
self.trend_analyses = {}
if len(self.health_history) < 2:
return
# Define metrics to analyze
metrics_to_analyze = [
"overall_score",
"debt_density",
"velocity_impact",
"quality_score",
"technical_risk_score"
]
for metric in metrics_to_analyze:
values = [entry[metric] for entry in self.health_history]
dates = [datetime.fromisoformat(entry["date"].replace('Z', '+00:00'))
for entry in self.health_history]
trend = self._calculate_trend(values, dates, metric)
self.trend_analyses[metric] = trend
def _calculate_trend(self, values: List[float], dates: List[datetime], metric_name: str) -> TrendAnalysis:
"""Calculate trend analysis for a specific metric."""
if len(values) < 2:
return TrendAnalysis(metric_name, "stable", 0.0, 0.0, values[-1], (values[-1], values[-1]))
# Calculate simple linear trend
n = len(values)
x = list(range(n)) # Time periods as numbers
# Linear regression
x_mean = mean(x)
y_mean = mean(values)
numerator = sum((x[i] - x_mean) * (values[i] - y_mean) for i in range(n))
denominator = sum((x[i] - x_mean) ** 2 for i in range(n))
if denominator == 0:
slope = 0
else:
slope = numerator / denominator
# Correlation strength
if n > 2 and len(set(values)) > 1:
try:
correlation = numerator / (
(sum((x[i] - x_mean) ** 2 for i in range(n)) *
sum((values[i] - y_mean) ** 2 for i in range(n))) ** 0.5
)
except ZeroDivisionError:
correlation = 0.0
else:
correlation = 0.0
# Determine trend direction
if abs(slope) < 0.1:
trend_direction = "stable"
elif slope > 0:
if metric_name in ["overall_score", "quality_score"]:
trend_direction = "improving" # Higher is better
else:
trend_direction = "declining" # Higher is worse
else:
if metric_name in ["overall_score", "quality_score"]:
trend_direction = "declining"
else:
trend_direction = "improving"
# Forecast next period
forecast = values[-1] + slope
# Confidence interval (simple approach)
if n > 2:
residuals = [values[i] - (y_mean + slope * (x[i] - x_mean)) for i in range(n)]
std_error = (sum(r**2 for r in residuals) / (n - 2)) ** 0.5
confidence_interval = (forecast - std_error, forecast + std_error)
else:
confidence_interval = (forecast, forecast)
return TrendAnalysis(
metric_name=metric_name,
trend_direction=trend_direction,
change_rate=round(slope, 3),
correlation_strength=round(correlation, 3),
forecast_next_period=round(forecast, 2),
confidence_interval=(round(confidence_interval[0], 2), round(confidence_interval[1], 2))
)
def _calculate_debt_velocity(self, period: str):
"""Calculate debt velocity between snapshots."""
self.velocity_history = []
if len(self.processed_snapshots) < 2:
return
for i in range(1, len(self.processed_snapshots)):
current = self.processed_snapshots[i]
previous = self.processed_snapshots[i-1]
current_effort = current["total_effort_estimate"]
previous_effort = previous["total_effort_estimate"]
# Simple approach: compare total counts and effort
debt_change = current["total_debt_items"] - previous["total_debt_items"]
effort_change = current_effort - previous_effort
# Estimate new vs resolved (rough approximation)
if debt_change >= 0:
new_debt_items = debt_change
resolved_debt_items = 0
else:
new_debt_items = 0
resolved_debt_items = abs(debt_change)
# Calculate velocity ratio
if new_debt_items > 0:
velocity_ratio = resolved_debt_items / new_debt_items
else:
velocity_ratio = float('inf') if resolved_debt_items > 0 else 1.0
velocity = DebtVelocity(
period=f"{previous['date'][:10]} to {current['date'][:10]}",
new_debt_items=new_debt_items,
resolved_debt_items=resolved_debt_items,
net_change=debt_change,
velocity_ratio=min(10.0, velocity_ratio), # Cap at 10 for display
effort_hours_added=max(0, effort_change),
effort_hours_resolved=max(0, -effort_change),
net_effort_change=effort_change
)
self.velocity_history.append(velocity)
def _generate_forecasts(self) -> Dict[str, Any]:
"""Generate forecasts based on trend analysis."""
if not self.trend_analyses:
return {}
forecasts = {}
# Overall health forecast
health_trend = self.trend_analyses.get("overall_score")
if health_trend:
current_score = self.health_history[-1]["overall_score"]
forecasts["health_score_3_months"] = max(0, min(100,
current_score + (health_trend.change_rate * 3)))
forecasts["health_score_6_months"] = max(0, min(100,
current_score + (health_trend.change_rate * 6)))
# Debt accumulation forecast
if self.velocity_history:
avg_net_change = mean([v.net_change for v in self.velocity_history[-3:]]) # Last 3 periods
current_debt = self.processed_snapshots[-1]["total_debt_items"]
forecasts["debt_count_3_months"] = max(0, current_debt + (avg_net_change * 3))
forecasts["debt_count_6_months"] = max(0, current_debt + (avg_net_change * 6))
# Risk forecast
risk_trend = self.trend_analyses.get("technical_risk_score")
if risk_trend:
current_risk = self.health_history[-1]["technical_risk_score"]
forecasts["risk_score_3_months"] = max(0, min(100,
current_risk + (risk_trend.change_rate * 3)))
return forecasts
def _generate_executive_summary(self) -> Dict[str, Any]:
"""Generate executive summary of debt status."""
if not self.health_history:
return {}
current_health = self.health_history[-1]
# Determine overall status
score = current_health["overall_score"]
if score >= self.thresholds["excellent"]:
status = "excellent"
status_message = "Code quality is excellent with minimal technical debt."
elif score >= self.thresholds["good"]:
status = "good"
status_message = "Code quality is good with manageable technical debt."
elif score >= self.thresholds["fair"]:
status = "fair"
status_message = "Code quality needs attention. Technical debt is accumulating."
else:
status = "poor"
status_message = "Critical: High levels of technical debt requiring immediate action."
# Key insights
insights = []
if len(self.health_history) > 1:
prev_health = self.health_history[-2]
score_change = current_health["overall_score"] - prev_health["overall_score"]
if score_change > 5:
insights.append("Health score improving significantly")
elif score_change < -5:
insights.append("Health score declining - attention needed")
if current_health["velocity_impact"] > 20:
insights.append("High velocity impact detected - development speed affected")
if current_health["technical_risk_score"] > 70:
insights.append("High technical risk - security and stability concerns")
# Debt velocity insight
if self.velocity_history:
recent_velocity = self.velocity_history[-1]
if recent_velocity.velocity_ratio < 0.5:
insights.append("Debt accumulating faster than resolution")
elif recent_velocity.velocity_ratio > 1.5:
insights.append("Good progress on debt reduction")
return {
"overall_status": status,
"health_score": current_health["overall_score"],
"status_message": status_message,
"key_insights": insights,
"total_debt_items": self.processed_snapshots[-1]["total_debt_items"] if self.processed_snapshots else 0,
"estimated_effort_hours": self.processed_snapshots[-1]["total_effort_estimate"] if self.processed_snapshots else 0,
"high_priority_items": self.processed_snapshots[-1]["high_priority_count"] if self.processed_snapshots else 0,
"velocity_impact_percent": current_health["velocity_impact"]
}
def _generate_strategic_recommendations(self) -> List[Dict[str, Any]]:
"""Generate strategic recommendations for debt management."""
recommendations = []
if not self.health_history:
return recommendations
current_health = self.health_history[-1]
current_snapshot = self.processed_snapshots[-1] if self.processed_snapshots else {}
# Health-based recommendations
if current_health["overall_score"] < 50:
recommendations.append({
"priority": "critical",
"category": "immediate_action",
"title": "Initiate Emergency Debt Reduction",
"description": "Current health score is critically low. Consider dedicating 50%+ of development capacity to debt reduction.",
"impact": "high",
"effort": "high"
})
# Velocity impact recommendations
if current_health["velocity_impact"] > 25:
recommendations.append({
"priority": "high",
"category": "productivity",
"title": "Address Velocity Blockers",
"description": f"Technical debt is reducing team velocity by {current_health['velocity_impact']:.1f}%. Focus on high-impact debt items first.",
"impact": "high",
"effort": "medium"
})
# Security recommendations
if current_health["technical_risk_score"] > 70:
recommendations.append({
"priority": "high",
"category": "security",
"title": "Security Debt Review Required",
"description": "High technical risk score indicates security vulnerabilities. Conduct immediate security debt audit.",
"impact": "high",
"effort": "medium"
})
# Trend-based recommendations
health_trend = self.trend_analyses.get("overall_score")
if health_trend and health_trend.trend_direction == "declining":
recommendations.append({
"priority": "medium",
"category": "process",
"title": "Implement Debt Prevention Measures",
"description": "Health score is declining over time. Establish coding standards, automated quality gates, and regular debt reviews.",
"impact": "medium",
"effort": "medium"
})
# Category-specific recommendations
if current_snapshot:
debt_by_category = current_snapshot["debt_by_category"]
top_category = debt_by_category.most_common(1)[0] if debt_by_category else None
if top_category and top_category[1] > 10:
category, count = top_category
recommendations.append({
"priority": "medium",
"category": "focus_area",
"title": f"Focus on {category.replace('_', ' ').title()} Debt",
"description": f"{category.replace('_', ' ').title()} represents the largest debt category ({count} items). Consider targeted initiatives.",
"impact": "medium",
"effort": "medium"
})
# Velocity-based recommendations
if self.velocity_history:
recent_velocities = self.velocity_history[-3:] if len(self.velocity_history) >= 3 else self.velocity_history
avg_velocity_ratio = mean([v.velocity_ratio for v in recent_velocities])
if avg_velocity_ratio < 0.8:
recommendations.append({
"priority": "medium",
"category": "capacity",
"title": "Increase Debt Resolution Capacity",
"description": "Debt is accumulating faster than resolution. Consider increasing debt budget or improving resolution efficiency.",
"impact": "medium",
"effort": "low"
})
return recommendations
def _generate_visualization_data(self) -> Dict[str, Any]:
"""Generate data for dashboard visualizations."""
visualizations = {}
# Health score timeline
visualizations["health_timeline"] = [
{
"date": entry["date"][:10], # Date only
"overall_score": entry["overall_score"],
"quality_score": entry["quality_score"],
"technical_risk": entry["technical_risk_score"]
}
for entry in self.health_history
]
# Debt accumulation trend
visualizations["debt_accumulation"] = [
{
"date": snapshot["date"][:10],
"total_debt": snapshot["total_debt_items"],
"high_priority": snapshot["high_priority_count"],
"security_debt": snapshot["security_debt_count"]
}
for snapshot in self.processed_snapshots
]
# Category distribution (latest snapshot)
if self.processed_snapshots:
latest_categories = self.processed_snapshots[-1]["debt_by_category"]
visualizations["category_distribution"] = [
{"category": category, "count": count}
for category, count in latest_categories.items()
]
# Velocity chart
visualizations["debt_velocity"] = [
{
"period": velocity.period,
"new_items": velocity.new_debt_items,
"resolved_items": velocity.resolved_debt_items,
"net_change": velocity.net_change,
"velocity_ratio": velocity.velocity_ratio
}
for velocity in self.velocity_history
]
# Effort estimation trend
visualizations["effort_trend"] = [
{
"date": snapshot["date"][:10],
"total_effort": snapshot["total_effort_estimate"]
}
for snapshot in self.processed_snapshots
]
return visualizations
def _get_detailed_metrics(self) -> Dict[str, Any]:
"""Get detailed metrics for the current state."""
if not self.processed_snapshots:
return {}
current = self.processed_snapshots[-1]
return {
"debt_breakdown": dict(current["debt_by_type"]),
"severity_breakdown": dict(current["debt_by_severity"]),
"category_breakdown": dict(current["debt_by_category"]),
"files_analyzed": current["total_files"],
"debt_density": current["total_debt_items"] / max(1, current["total_files"]),
"average_effort_per_item": current["total_effort_estimate"] / max(1, current["total_debt_items"])
}
def format_dashboard_report(dashboard_data: Dict[str, Any]) -> str:
"""Format dashboard data into human-readable report."""
output = []
# Header
output.append("=" * 60)
output.append("TECHNICAL DEBT DASHBOARD")
output.append("=" * 60)
metadata = dashboard_data["metadata"]
output.append(f"Generated: {metadata['generated_date'][:19]}")
output.append(f"Analysis Period: {metadata['analysis_period']}")
output.append(f"Snapshots Analyzed: {metadata['snapshots_analyzed']}")
if metadata["date_range"]["start"]:
output.append(f"Date Range: {metadata['date_range']['start'][:10]} to {metadata['date_range']['end'][:10]}")
output.append("")
# Executive Summary
exec_summary = dashboard_data["executive_summary"]
output.append("EXECUTIVE SUMMARY")
output.append("-" * 30)
output.append(f"Overall Status: {exec_summary['overall_status'].upper()}")
output.append(f"Health Score: {exec_summary['health_score']:.1f}/100")
output.append(f"Status: {exec_summary['status_message']}")
output.append("")
output.append("Key Metrics:")
output.append(f" • Total Debt Items: {exec_summary['total_debt_items']}")
output.append(f" • High Priority Items: {exec_summary['high_priority_items']}")
output.append(f" • Estimated Effort: {exec_summary['estimated_effort_hours']:.1f} hours")
output.append(f" • Velocity Impact: {exec_summary['velocity_impact_percent']:.1f}%")
output.append("")
if exec_summary["key_insights"]:
output.append("Key Insights:")
for insight in exec_summary["key_insights"]:
output.append(f" • {insight}")
output.append("")
# Current Health
if dashboard_data["current_health"]:
health = dashboard_data["current_health"]
output.append("CURRENT HEALTH METRICS")
output.append("-" * 30)
output.append(f"Overall Score: {health['overall_score']:.1f}/100")
output.append(f"Quality Score: {health['quality_score']:.1f}/100")
output.append(f"Maintainability: {health['maintainability_score']:.1f}/100")
output.append(f"Technical Risk: {health['technical_risk_score']:.1f}/100")
output.append(f"Debt Density: {health['debt_density']:.2f} items/file")
output.append("")
# Trend Analysis
trends = dashboard_data["trend_analysis"]
if trends:
output.append("TREND ANALYSIS")
output.append("-" * 30)
for metric, trend in trends.items():
direction_symbol = {
"improving": "↑",
"declining": "↓",
"stable": "→"
}.get(trend["trend_direction"], "→")
output.append(f"{metric.replace('_', ' ').title()}: {direction_symbol} {trend['trend_direction']}")
output.append(f" Change Rate: {trend['change_rate']:.3f} per period")
output.append(f" Forecast: {trend['forecast_next_period']:.1f}")
output.append("")
# Top Recommendations
recommendations = dashboard_data["recommendations"]
if recommendations:
output.append("TOP RECOMMENDATIONS")
output.append("-" * 30)
for i, rec in enumerate(recommendations[:5], 1):
output.append(f"{i}. [{rec['priority'].upper()}] {rec['title']}")
output.append(f" {rec['description']}")
output.append(f" Impact: {rec['impact']}, Effort: {rec['effort']}")
output.append("")
return "\n".join(output)
def main():
"""Main entry point for the debt dashboard."""
parser = argparse.ArgumentParser(description="Generate technical debt dashboard")
parser.add_argument("files", nargs="*", help="Debt inventory files")
parser.add_argument("--input-dir", help="Directory containing debt inventory files")
parser.add_argument("--output", help="Output file path")
parser.add_argument("--format", choices=["json", "text", "both"],
default="both", help="Output format")
parser.add_argument("--period", choices=["weekly", "monthly", "quarterly"],
default="monthly", help="Analysis period")
parser.add_argument("--team-size", type=int, default=5, help="Team size")
args = parser.parse_args()
# Initialize dashboard
dashboard = DebtDashboard(args.team_size)
# Load data
if args.input_dir:
success = dashboard.load_from_directory(args.input_dir)
elif args.files:
success = dashboard.load_historical_data(args.files)
else:
print("Error: Must specify either files or --input-dir")
sys.exit(1)
if not success:
sys.exit(1)
# Generate dashboard
try:
dashboard_data = dashboard.generate_dashboard(args.period)
except Exception as e:
print(f"Dashboard generation failed: {e}")
sys.exit(1)
# Output results
if args.format in ["json", "both"]:
json_output = json.dumps(dashboard_data, indent=2, default=str)
if args.output:
output_path = args.output if args.output.endswith('.json') else f"{args.output}.json"
with open(output_path, 'w') as f:
f.write(json_output)
print(f"JSON dashboard written to: {output_path}")
else:
print("JSON DASHBOARD:")
print("=" * 50)
print(json_output)
if args.format in ["text", "both"]:
text_output = format_dashboard_report(dashboard_data)
if args.output:
output_path = args.output if args.output.endswith('.txt') else f"{args.output}.txt"
with open(output_path, 'w') as f:
f.write(text_output)
print(f"Text dashboard written to: {output_path}")
else:
print("\nTEXT DASHBOARD:")
print("=" * 50)
print(text_output)
if __name__ == "__main__":
main()
#!/usr/bin/env python3
"""
Tech Debt Prioritizer
Takes a debt inventory (from scanner or manual JSON) and calculates interest rate,
effort estimates, and produces a prioritized backlog with recommended sprint allocation.
Uses cost-of-delay vs effort scoring and various prioritization frameworks.
Usage:
python debt_prioritizer.py debt_inventory.json
python debt_prioritizer.py debt_inventory.json --output prioritized_backlog.json
python debt_prioritizer.py debt_inventory.json --team-size 6 --sprint-capacity 80
python debt_prioritizer.py debt_inventory.json --framework wsjf --output results.json
"""
import json
import argparse
import sys
import math
from collections import defaultdict, Counter
from datetime import datetime, timedelta
from typing import Dict, List, Any, Optional, Tuple
from dataclasses import dataclass, asdict
@dataclass
class EffortEstimate:
"""Represents effort estimation for a debt item."""
size_points: int
hours_estimate: float
risk_factor: float # 1.0 = low risk, 1.5 = medium, 2.0+ = high
skill_level_required: str # junior, mid, senior, expert
confidence: float # 0.0-1.0
@dataclass
class BusinessImpact:
"""Represents business impact assessment for a debt item."""
customer_impact: int # 1-10 scale
revenue_impact: int # 1-10 scale
team_velocity_impact: int # 1-10 scale
quality_impact: int # 1-10 scale
security_impact: int # 1-10 scale
@dataclass
class InterestRate:
"""Represents the interest rate calculation for technical debt."""
daily_cost: float # cost per day if left unfixed
frequency_multiplier: float # how often this code is touched
team_impact_multiplier: float # how many developers affected
compound_rate: float # how quickly this debt makes other debt worse
class DebtPrioritizer:
"""Main class for prioritizing technical debt items."""
def __init__(self, team_size: int = 5, sprint_capacity_hours: int = 80):
self.team_size = team_size
self.sprint_capacity_hours = sprint_capacity_hours
self.debt_items = []
self.prioritized_items = []
# Prioritization framework weights
self.framework_weights = {
"cost_of_delay": {
"business_value": 0.3,
"urgency": 0.3,
"risk_reduction": 0.2,
"team_productivity": 0.2
},
"wsjf": {
"business_value": 0.25,
"time_criticality": 0.25,
"risk_reduction": 0.25,
"effort": 0.25
},
"rice": {
"reach": 0.25,
"impact": 0.25,
"confidence": 0.25,
"effort": 0.25
}
}
def load_debt_inventory(self, file_path: str) -> bool:
"""Load debt inventory from JSON file."""
try:
with open(file_path, 'r', encoding='utf-8') as f:
data = json.load(f)
# Handle different input formats
if isinstance(data, dict) and 'debt_items' in data:
self.debt_items = data['debt_items']
elif isinstance(data, list):
self.debt_items = data
else:
raise ValueError("Invalid debt inventory format")
print(f"Loaded {len(self.debt_items)} debt items from {file_path}")
return True
except Exception as e:
print(f"Error loading debt inventory: {e}")
return False
def analyze_and_prioritize(self, framework: str = "cost_of_delay") -> Dict[str, Any]:
"""
Analyze debt items and create prioritized backlog.
Args:
framework: Prioritization framework to use
Returns:
Dictionary containing prioritized backlog and analysis
"""
print(f"Analyzing {len(self.debt_items)} debt items...")
print(f"Using {framework} prioritization framework")
print("=" * 50)
# Step 1: Enrich debt items with estimates
enriched_items = []
for item in self.debt_items:
enriched_item = self._enrich_debt_item(item)
enriched_items.append(enriched_item)
# Step 2: Calculate prioritization scores
for item in enriched_items:
if framework == "cost_of_delay":
item["priority_score"] = self._calculate_cost_of_delay_score(item)
elif framework == "wsjf":
item["priority_score"] = self._calculate_wsjf_score(item)
elif framework == "rice":
item["priority_score"] = self._calculate_rice_score(item)
else:
raise ValueError(f"Unknown prioritization framework: {framework}")
# Step 3: Sort by priority score
self.prioritized_items = sorted(enriched_items,
key=lambda x: x["priority_score"],
reverse=True)
# Step 4: Generate sprint allocation recommendations
sprint_allocation = self._generate_sprint_allocation()
# Step 5: Generate insights and recommendations
insights = self._generate_insights()
# Step 6: Create visualization data
charts_data = self._generate_charts_data()
return {
"metadata": {
"analysis_date": datetime.now().isoformat(),
"framework_used": framework,
"team_size": self.team_size,
"sprint_capacity_hours": self.sprint_capacity_hours,
"total_items_analyzed": len(self.debt_items)
},
"prioritized_backlog": self.prioritized_items,
"sprint_allocation": sprint_allocation,
"insights": insights,
"charts_data": charts_data,
"recommendations": self._generate_recommendations()
}
def _enrich_debt_item(self, item: Dict[str, Any]) -> Dict[str, Any]:
"""Enrich debt item with detailed estimates and impact analysis."""
enriched = item.copy()
# Generate effort estimate
effort = self._estimate_effort(item)
enriched["effort_estimate"] = asdict(effort)
# Generate business impact assessment
business_impact = self._assess_business_impact(item)
enriched["business_impact"] = asdict(business_impact)
# Calculate interest rate
interest_rate = self._calculate_interest_rate(item, business_impact)
enriched["interest_rate"] = asdict(interest_rate)
# Calculate cost of delay
enriched["cost_of_delay"] = self._calculate_cost_of_delay(interest_rate, effort)
        # Assign categories and tags (pass the enriched copy so the effort-based
        # "quick-win" / "major-initiative" tags can see the estimate computed above)
        enriched["category"] = self._categorize_debt_item(item)
        enriched["impact_tags"] = self._generate_impact_tags(enriched, business_impact)
return enriched
def _estimate_effort(self, item: Dict[str, Any]) -> EffortEstimate:
"""Estimate effort required to fix debt item."""
debt_type = item.get("type", "unknown")
severity = item.get("severity", "medium")
# Base effort estimation by debt type
base_efforts = {
"todo_comment": (1, 2),
"missing_docstring": (1, 4),
"long_line": (0.5, 1),
"large_function": (4, 16),
"high_complexity": (8, 32),
"duplicate_code": (6, 24),
"large_file": (16, 64),
"syntax_error": (2, 8),
"security_risk": (4, 40),
"architecture_debt": (40, 160),
"test_debt": (8, 40),
"dependency_debt": (4, 24)
}
min_hours, max_hours = base_efforts.get(debt_type, (4, 16))
# Adjust by severity
severity_multipliers = {
"low": 0.5,
"medium": 1.0,
"high": 1.5,
"critical": 2.0
}
multiplier = severity_multipliers.get(severity, 1.0)
hours_estimate = (min_hours + max_hours) / 2 * multiplier
# Convert to story points (assuming 6 hours per point)
size_points = max(1, round(hours_estimate / 6))
# Determine risk factor
risk_factor = 1.0
if debt_type in ["architecture_debt", "security_risk", "large_file"]:
risk_factor = 1.8
elif debt_type in ["high_complexity", "duplicate_code"]:
risk_factor = 1.4
elif debt_type in ["syntax_error", "dependency_debt"]:
risk_factor = 1.2
# Determine skill level required
skill_requirements = {
"architecture_debt": "expert",
"security_risk": "senior",
"high_complexity": "senior",
"large_function": "mid",
"duplicate_code": "mid",
"dependency_debt": "mid",
"test_debt": "mid",
"todo_comment": "junior",
"missing_docstring": "junior",
"long_line": "junior"
}
skill_level = skill_requirements.get(debt_type, "mid")
# Confidence based on debt type clarity
confidence_levels = {
"todo_comment": 0.9,
"missing_docstring": 0.9,
"long_line": 0.95,
"syntax_error": 0.8,
"large_function": 0.7,
"duplicate_code": 0.6,
"high_complexity": 0.5,
"architecture_debt": 0.3,
"security_risk": 0.4
}
confidence = confidence_levels.get(debt_type, 0.6)
return EffortEstimate(
size_points=size_points,
hours_estimate=hours_estimate,
risk_factor=risk_factor,
skill_level_required=skill_level,
confidence=confidence
)
def _assess_business_impact(self, item: Dict[str, Any]) -> BusinessImpact:
"""Assess business impact of debt item."""
debt_type = item.get("type", "unknown")
severity = item.get("severity", "medium")
# Base impact scores by debt type (1-10 scale)
impact_profiles = {
"security_risk": (9, 8, 7, 9, 10), # customer, revenue, velocity, quality, security
"architecture_debt": (6, 7, 9, 8, 4),
"large_function": (3, 4, 7, 6, 2),
"high_complexity": (4, 5, 8, 7, 3),
"duplicate_code": (3, 4, 6, 6, 2),
"syntax_error": (7, 6, 8, 9, 3),
"test_debt": (5, 5, 7, 8, 3),
"dependency_debt": (6, 5, 6, 7, 7),
"todo_comment": (1, 1, 2, 2, 1),
"missing_docstring": (2, 2, 4, 3, 1)
}
base_impacts = impact_profiles.get(debt_type, (3, 3, 5, 5, 3))
# Adjust by severity
severity_adjustments = {
"low": 0.6,
"medium": 1.0,
"high": 1.4,
"critical": 1.8
}
adjustment = severity_adjustments.get(severity, 1.0)
# Apply adjustment and cap at 10
adjusted_impacts = [min(10, max(1, round(impact * adjustment)))
for impact in base_impacts]
return BusinessImpact(
customer_impact=adjusted_impacts[0],
revenue_impact=adjusted_impacts[1],
team_velocity_impact=adjusted_impacts[2],
quality_impact=adjusted_impacts[3],
security_impact=adjusted_impacts[4]
)
def _calculate_interest_rate(self, item: Dict[str, Any],
business_impact: BusinessImpact) -> InterestRate:
"""Calculate interest rate for technical debt."""
# Base daily cost calculation
velocity_impact = business_impact.team_velocity_impact
quality_impact = business_impact.quality_impact
# Daily cost in "developer hours lost"
daily_cost = (velocity_impact * 0.5) + (quality_impact * 0.3)
# Frequency multiplier based on code location and type
file_path = item.get("file_path", "")
debt_type = item.get("type", "unknown")
# Estimate frequency based on file path patterns
frequency_multiplier = 1.0
if any(pattern in file_path.lower() for pattern in ["main", "core", "auth", "api"]):
frequency_multiplier = 2.0
elif any(pattern in file_path.lower() for pattern in ["util", "helper", "common"]):
frequency_multiplier = 1.5
elif any(pattern in file_path.lower() for pattern in ["test", "spec", "config"]):
frequency_multiplier = 0.5
# Team impact multiplier
team_impact_multiplier = min(self.team_size, 8) / 5.0 # Normalize around team of 5
# Compound rate - how this debt creates more debt
compound_rates = {
"architecture_debt": 0.1, # Creates 10% more debt monthly
"duplicate_code": 0.08,
"high_complexity": 0.05,
"large_function": 0.03,
"test_debt": 0.04,
"security_risk": 0.02, # Doesn't compound much, but high initial impact
"todo_comment": 0.01
}
compound_rate = compound_rates.get(debt_type, 0.02)
return InterestRate(
daily_cost=daily_cost,
frequency_multiplier=frequency_multiplier,
team_impact_multiplier=team_impact_multiplier,
compound_rate=compound_rate
)
def _calculate_cost_of_delay(self, interest_rate: InterestRate,
effort: EffortEstimate) -> float:
"""Calculate total cost of delay if debt is not fixed."""
# Estimate delay in days (assuming debt gets fixed eventually)
estimated_delay_days = effort.hours_estimate / (self.sprint_capacity_hours / 14) # 2-week sprints
# Calculate cumulative cost
daily_cost = (interest_rate.daily_cost *
interest_rate.frequency_multiplier *
interest_rate.team_impact_multiplier)
# Add compound interest effect
compound_effect = (1 + interest_rate.compound_rate) ** (estimated_delay_days / 30)
total_cost = daily_cost * estimated_delay_days * compound_effect
return round(total_cost, 2)
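    # Worked example of the formula above (illustrative numbers): with
    # daily_cost = 4.2, frequency_multiplier = 2.0 and team_impact_multiplier = 1.2,
    # the effective daily cost is 10.08 "developer hours lost". An 80-hour fix at
    # 80 sprint-capacity hours implies ~14 delay days, and at a 5% monthly
    # compound rate the total is 10.08 * 14 * 1.05 ** (14 / 30) ≈ 144.4 hours.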
def _categorize_debt_item(self, item: Dict[str, Any]) -> str:
"""Categorize debt item into high-level categories."""
debt_type = item.get("type", "unknown")
categories = {
"code_quality": ["large_function", "high_complexity", "duplicate_code",
"long_line", "missing_docstring"],
"architecture": ["architecture_debt", "large_file"],
"security": ["security_risk", "hardcoded_secrets"],
"testing": ["test_debt", "missing_tests"],
"maintenance": ["todo_comment", "commented_code"],
"dependencies": ["dependency_debt", "outdated_packages"],
"infrastructure": ["deployment_debt", "monitoring_gaps"],
"documentation": ["missing_docstring", "outdated_docs"]
}
for category, types in categories.items():
if debt_type in types:
return category
return "other"
def _generate_impact_tags(self, item: Dict[str, Any],
business_impact: BusinessImpact) -> List[str]:
"""Generate impact tags for debt item."""
tags = []
if business_impact.security_impact >= 7:
tags.append("security-critical")
if business_impact.customer_impact >= 7:
tags.append("customer-facing")
if business_impact.revenue_impact >= 7:
tags.append("revenue-impact")
if business_impact.team_velocity_impact >= 7:
tags.append("velocity-blocker")
if business_impact.quality_impact >= 7:
tags.append("quality-risk")
# Add effort-based tags
effort_hours = item.get("effort_estimate", {}).get("hours_estimate", 0)
if effort_hours <= 4:
tags.append("quick-win")
elif effort_hours >= 40:
tags.append("major-initiative")
return tags
def _calculate_cost_of_delay_score(self, item: Dict[str, Any]) -> float:
"""Calculate priority score using cost-of-delay framework."""
business_impact = item["business_impact"]
effort = item["effort_estimate"]
# Business value (weighted average of impacts)
business_value = (
business_impact["customer_impact"] * 0.3 +
business_impact["revenue_impact"] * 0.3 +
business_impact["quality_impact"] * 0.2 +
business_impact["team_velocity_impact"] * 0.2
)
        # Urgency (how quickly value decays): scale up the daily cost, then clamp to 1-10
        urgency = min(10, max(1, item["interest_rate"]["daily_cost"] * 10))
# Risk reduction
risk_reduction = business_impact["security_impact"] * 0.6 + business_impact["quality_impact"] * 0.4
# Team productivity impact
team_productivity = business_impact["team_velocity_impact"]
# Combine with weights
weights = self.framework_weights["cost_of_delay"]
numerator = (
business_value * weights["business_value"] +
urgency * weights["urgency"] +
risk_reduction * weights["risk_reduction"] +
team_productivity * weights["team_productivity"]
)
# Divide by effort (adjusted for risk)
effort_adjusted = effort["hours_estimate"] * effort["risk_factor"]
denominator = max(1, effort_adjusted / 8) # Normalize to story points
return round(numerator / denominator, 2)
def _calculate_wsjf_score(self, item: Dict[str, Any]) -> float:
"""Calculate priority score using Weighted Shortest Job First (WSJF)."""
business_impact = item["business_impact"]
effort = item["effort_estimate"]
# Business value
business_value = (
business_impact["customer_impact"] * 0.4 +
business_impact["revenue_impact"] * 0.6
)
# Time criticality
time_criticality = item["cost_of_delay"] / 10 # Normalize
time_criticality = min(10, max(1, time_criticality))
# Risk reduction
risk_reduction = (
business_impact["security_impact"] * 0.5 +
business_impact["quality_impact"] * 0.5
)
# Job size (effort)
job_size = effort["size_points"]
# WSJF calculation
numerator = business_value + time_criticality + risk_reduction
denominator = max(1, job_size)
return round(numerator / denominator, 2)
def _calculate_rice_score(self, item: Dict[str, Any]) -> float:
"""Calculate priority score using RICE framework."""
business_impact = item["business_impact"]
effort = item["effort_estimate"]
# Reach (how many developers/users affected)
reach = min(10, self.team_size * business_impact["team_velocity_impact"] / 5)
# Impact
impact = (
business_impact["customer_impact"] * 0.3 +
business_impact["revenue_impact"] * 0.3 +
business_impact["quality_impact"] * 0.4
)
# Confidence
confidence = effort["confidence"] * 10
# Effort
effort_score = effort["size_points"]
# RICE calculation
rice_score = (reach * impact * confidence) / max(1, effort_score)
return round(rice_score, 2)
def _generate_sprint_allocation(self) -> Dict[str, Any]:
"""Generate sprint allocation recommendations."""
# Calculate total effort needed
total_effort_hours = sum(item["effort_estimate"]["hours_estimate"]
for item in self.prioritized_items)
# Assume 20% of sprint capacity goes to tech debt
debt_capacity_per_sprint = self.sprint_capacity_hours * 0.2
# Allocate items to sprints
sprints = []
current_sprint = {"sprint_number": 1, "items": [], "total_hours": 0, "capacity_used": 0}
for item in self.prioritized_items:
item_effort = item["effort_estimate"]["hours_estimate"]
if current_sprint["total_hours"] + item_effort <= debt_capacity_per_sprint:
current_sprint["items"].append(item)
current_sprint["total_hours"] += item_effort
current_sprint["capacity_used"] = current_sprint["total_hours"] / debt_capacity_per_sprint
else:
# Start new sprint
sprints.append(current_sprint)
current_sprint = {
"sprint_number": len(sprints) + 1,
"items": [item],
"total_hours": item_effort,
"capacity_used": item_effort / debt_capacity_per_sprint
}
# Add the last sprint
if current_sprint["items"]:
sprints.append(current_sprint)
# Calculate summary statistics
total_sprints_needed = len(sprints)
high_priority_items = len([item for item in self.prioritized_items
if item.get("priority", "medium") in ["high", "critical"]])
return {
"total_debt_hours": round(total_effort_hours, 1),
"debt_capacity_per_sprint": debt_capacity_per_sprint,
"total_sprints_needed": total_sprints_needed,
"high_priority_items": high_priority_items,
"sprint_plan": sprints[:6], # Show first 6 sprints
"recommendations": [
f"Allocate {debt_capacity_per_sprint} hours per sprint to tech debt",
f"Focus on {high_priority_items} high-priority items first",
f"Estimated {total_sprints_needed} sprints to clear current backlog"
]
}
def _generate_insights(self) -> Dict[str, Any]:
"""Generate insights from the prioritized debt analysis."""
# Category distribution
categories = Counter(item["category"] for item in self.prioritized_items)
# Effort distribution
total_effort = sum(item["effort_estimate"]["hours_estimate"]
for item in self.prioritized_items)
effort_by_category = defaultdict(float)
for item in self.prioritized_items:
effort_by_category[item["category"]] += item["effort_estimate"]["hours_estimate"]
# Priority distribution
priorities = Counter()
for item in self.prioritized_items:
score = item["priority_score"]
if score >= 8:
priorities["critical"] += 1
elif score >= 5:
priorities["high"] += 1
elif score >= 2:
priorities["medium"] += 1
else:
priorities["low"] += 1
# Risk analysis
high_risk_items = [item for item in self.prioritized_items
if item["effort_estimate"]["risk_factor"] >= 1.5]
# Quick wins identification
quick_wins = [item for item in self.prioritized_items
if (item["effort_estimate"]["hours_estimate"] <= 8 and
item["priority_score"] >= 3)]
# Cost analysis
total_cost_of_delay = sum(item["cost_of_delay"] for item in self.prioritized_items)
        avg_interest_rate = (sum(item["interest_rate"]["daily_cost"]
                                 for item in self.prioritized_items)
                             / max(1, len(self.prioritized_items)))  # guard against an empty inventory
return {
"category_distribution": dict(categories),
"total_effort_hours": round(total_effort, 1),
"effort_by_category": {k: round(v, 1) for k, v in effort_by_category.items()},
"priority_distribution": dict(priorities),
"high_risk_items_count": len(high_risk_items),
"quick_wins_count": len(quick_wins),
"total_cost_of_delay": round(total_cost_of_delay, 1),
"average_daily_interest_rate": round(avg_interest_rate, 2),
"top_categories_by_effort": sorted(effort_by_category.items(),
key=lambda x: x[1], reverse=True)[:3]
}
def _generate_charts_data(self) -> Dict[str, Any]:
"""Generate data for charts and visualizations."""
# Priority vs Effort scatter plot data
scatter_data = []
for item in self.prioritized_items:
scatter_data.append({
"x": item["effort_estimate"]["hours_estimate"],
"y": item["priority_score"],
"label": item.get("description", "")[:50],
"category": item["category"],
"size": item["cost_of_delay"]
})
# Category effort distribution (pie chart)
effort_by_category = defaultdict(float)
for item in self.prioritized_items:
effort_by_category[item["category"]] += item["effort_estimate"]["hours_estimate"]
pie_data = [{"category": k, "effort": round(v, 1)}
for k, v in effort_by_category.items()]
# Priority timeline (bar chart)
timeline_data = []
cumulative_effort = 0
for i, item in enumerate(self.prioritized_items[:20]): # Top 20 items
cumulative_effort += item["effort_estimate"]["hours_estimate"]
timeline_data.append({
"item_rank": i + 1,
"description": item.get("description", "")[:30],
"effort": item["effort_estimate"]["hours_estimate"],
"cumulative_effort": round(cumulative_effort, 1),
"priority_score": item["priority_score"]
})
# Interest rate trend (line chart data structure)
interest_trend_data = []
for i, item in enumerate(self.prioritized_items):
interest_trend_data.append({
"item_index": i,
"daily_cost": item["interest_rate"]["daily_cost"],
"category": item["category"]
})
return {
"priority_effort_scatter": scatter_data,
"category_effort_distribution": pie_data,
"priority_timeline": timeline_data,
"interest_rate_trend": interest_trend_data[:50] # Limit for performance
}
def _generate_recommendations(self) -> List[str]:
"""Generate actionable recommendations based on analysis."""
recommendations = []
insights = self._generate_insights()
# Quick wins recommendation
if insights["quick_wins_count"] > 0:
recommendations.append(
f"Start with {insights['quick_wins_count']} quick wins to build momentum "
"and demonstrate immediate value from tech debt reduction efforts."
)
# High-risk items
if insights["high_risk_items_count"] > 5:
recommendations.append(
f"Plan careful execution for {insights['high_risk_items_count']} high-risk items. "
"Consider pair programming, extra testing, and incremental approaches."
)
        # Category focus (guard against an empty inventory)
        if insights["top_categories_by_effort"]:
            top_category, top_effort = insights["top_categories_by_effort"][0]
            recommendations.append(
                f"Focus initial efforts on '{top_category}' category debt, which represents "
                f"the largest effort investment ({top_effort:.1f} hours)."
            )
# Cost of delay urgency
if insights["average_daily_interest_rate"] > 5:
recommendations.append(
f"High average daily interest rate ({insights['average_daily_interest_rate']:.1f}) "
"suggests urgent action needed. Consider increasing tech debt budget allocation."
)
# Sprint planning
sprints_needed = len(self.prioritized_items) / 10 # Rough estimate
if sprints_needed > 12:
recommendations.append(
"Large debt backlog detected. Consider dedicating entire sprints to debt reduction "
"rather than trying to fit debt work around features."
)
        # Team capacity: sprint_capacity_hours * 0.2 is the per-sprint debt budget,
        # and sprints are two weeks, so convert sprints to calendar weeks
        total_effort = insights["total_effort_hours"]
        sprints_for_backlog = total_effort / (self.sprint_capacity_hours * 0.2)
        weeks_needed = sprints_for_backlog * 2
        if weeks_needed > 26:  # half a year
            recommendations.append(
                f"With current capacity allocation, the debt backlog will take {weeks_needed:.0f} weeks. "
                "Consider increasing tech debt budget or focusing on highest-impact items only."
            )
return recommendations
def format_prioritized_report(analysis_result: Dict[str, Any]) -> str:
"""Format the prioritization analysis in human-readable format."""
output = []
# Header
output.append("=" * 60)
output.append("TECHNICAL DEBT PRIORITIZATION REPORT")
output.append("=" * 60)
metadata = analysis_result["metadata"]
output.append(f"Analysis Date: {metadata['analysis_date']}")
output.append(f"Framework: {metadata['framework_used'].upper()}")
output.append(f"Team Size: {metadata['team_size']}")
output.append(f"Sprint Capacity: {metadata['sprint_capacity_hours']} hours")
output.append("")
# Executive Summary
insights = analysis_result["insights"]
output.append("EXECUTIVE SUMMARY")
output.append("-" * 30)
output.append(f"Total Debt Items: {metadata['total_items_analyzed']}")
output.append(f"Total Effort Required: {insights['total_effort_hours']} hours")
output.append(f"Total Cost of Delay: ${insights['total_cost_of_delay']:,.0f}")
output.append(f"Quick Wins Available: {insights['quick_wins_count']}")
output.append(f"High-Risk Items: {insights['high_risk_items_count']}")
output.append("")
# Sprint Plan
sprint_plan = analysis_result["sprint_allocation"]
output.append("SPRINT ALLOCATION PLAN")
output.append("-" * 30)
output.append(f"Sprints Needed: {sprint_plan['total_sprints_needed']}")
output.append(f"Hours per Sprint: {sprint_plan['debt_capacity_per_sprint']}")
output.append("")
for sprint in sprint_plan["sprint_plan"][:3]: # Show first 3 sprints
output.append(f"Sprint {sprint['sprint_number']} ({sprint['capacity_used']:.0%} capacity):")
for item in sprint["items"][:3]: # Top 3 items per sprint
output.append(f" • {item['description'][:50]}...")
output.append(f" Effort: {item['effort_estimate']['hours_estimate']:.1f}h, "
f"Priority: {item['priority_score']}")
output.append("")
# Top Priority Items
output.append("TOP 10 PRIORITY ITEMS")
output.append("-" * 30)
for i, item in enumerate(analysis_result["prioritized_backlog"][:10], 1):
output.append(f"{i}. [{item['priority_score']:.1f}] {item['description']}")
output.append(f" Category: {item['category']}, "
f"Effort: {item['effort_estimate']['hours_estimate']:.1f}h, "
f"Cost of Delay: ${item['cost_of_delay']:.0f}")
if item["impact_tags"]:
output.append(f" Tags: {', '.join(item['impact_tags'])}")
output.append("")
# Recommendations
output.append("RECOMMENDATIONS")
output.append("-" * 30)
for i, rec in enumerate(analysis_result["recommendations"], 1):
output.append(f"{i}. {rec}")
output.append("")
return "\n".join(output)
def main():
"""Main entry point for the debt prioritizer."""
parser = argparse.ArgumentParser(description="Prioritize technical debt backlog")
parser.add_argument("inventory_file", help="Path to debt inventory JSON file")
parser.add_argument("--output", help="Output file path")
parser.add_argument("--format", choices=["json", "text", "both"],
default="both", help="Output format")
parser.add_argument("--framework", choices=["cost_of_delay", "wsjf", "rice"],
default="cost_of_delay", help="Prioritization framework")
parser.add_argument("--team-size", type=int, default=5, help="Team size")
parser.add_argument("--sprint-capacity", type=int, default=80,
help="Sprint capacity in hours")
args = parser.parse_args()
# Initialize prioritizer
prioritizer = DebtPrioritizer(args.team_size, args.sprint_capacity)
# Load inventory
if not prioritizer.load_debt_inventory(args.inventory_file):
sys.exit(1)
# Analyze and prioritize
try:
analysis_result = prioritizer.analyze_and_prioritize(args.framework)
except Exception as e:
print(f"Analysis failed: {e}")
sys.exit(1)
# Output results
if args.format in ["json", "both"]:
json_output = json.dumps(analysis_result, indent=2, default=str)
if args.output:
output_path = args.output if args.output.endswith('.json') else f"{args.output}.json"
with open(output_path, 'w') as f:
f.write(json_output)
print(f"JSON report written to: {output_path}")
else:
print("JSON REPORT:")
print("=" * 50)
print(json_output)
if args.format in ["text", "both"]:
text_output = format_prioritized_report(analysis_result)
if args.output:
output_path = args.output if args.output.endswith('.txt') else f"{args.output}.txt"
with open(output_path, 'w') as f:
f.write(text_output)
print(f"Text report written to: {output_path}")
else:
print("\nTEXT REPORT:")
print("=" * 50)
print(text_output)
if __name__ == "__main__":
    main()


#!/usr/bin/env python3
"""
Tech Debt Scanner
Scans a codebase directory for tech debt signals using AST parsing (Python) and
regex patterns (any language). Detects various forms of technical debt and generates
both JSON inventory and human-readable reports.
Usage:
python debt_scanner.py /path/to/codebase
python debt_scanner.py /path/to/codebase --config config.json
python debt_scanner.py /path/to/codebase --output report.json --format both
"""
import ast
import json
import argparse
import os
import re
import sys
from collections import defaultdict, Counter
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Any, Optional, Set, Tuple
class DebtScanner:
"""Main scanner class for detecting technical debt in codebases."""
def __init__(self, config: Optional[Dict[str, Any]] = None):
self.config = self._load_default_config()
if config:
self.config.update(config)
self.debt_items = []
self.stats = defaultdict(int)
self.file_stats = {}
# Compile regex patterns for performance
self._compile_patterns()
def _load_default_config(self) -> Dict[str, Any]:
"""Load default configuration for debt detection."""
return {
"max_function_length": 50,
"max_complexity": 10,
"max_nesting_depth": 4,
"max_file_size_lines": 500,
"min_duplicate_lines": 3,
"ignore_patterns": [
"*.pyc", "__pycache__", ".git", ".svn", "node_modules",
"build", "dist", "*.min.js", "*.map"
],
"file_extensions": {
"python": [".py"],
"javascript": [".js", ".jsx", ".ts", ".tsx"],
"java": [".java"],
"csharp": [".cs"],
"cpp": [".cpp", ".cc", ".cxx", ".c", ".h", ".hpp"],
"ruby": [".rb"],
"php": [".php"],
"go": [".go"],
"rust": [".rs"],
"kotlin": [".kt"]
},
"comment_patterns": {
"todo": r"(?i)(TODO|FIXME|HACK|XXX|BUG)[\s:]*(.+)",
"commented_code": r"^\s*#.*[=(){}\[\];].*",
"magic_numbers": r"\b\d{2,}\b",
"long_strings": r'["\'](.{100,})["\']'
},
"severity_weights": {
"critical": 10,
"high": 7,
"medium": 5,
"low": 2,
"info": 1
}
}
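    # A --config file may override any subset of the keys above; the override
    # replaces the whole value for that key (dicts and lists are not merged), e.g.:
    #   {"max_file_size_lines": 800, "ignore_patterns": ["*.pyc", ".git", "vendor"]}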
def _compile_patterns(self):
"""Compile regex patterns for better performance."""
self.comment_regexes = {}
for name, pattern in self.config["comment_patterns"].items():
self.comment_regexes[name] = re.compile(pattern)
# Common code smells patterns
self.smell_patterns = {
"empty_catch": re.compile(r"except[^:]*:\s*pass\s*$", re.MULTILINE),
"print_debug": re.compile(r"print\s*\([^)]*debug[^)]*\)", re.IGNORECASE),
"hardcoded_paths": re.compile(r'["\'][/\\][^"\']*[/\\][^"\']*["\']'),
"sql_injection_risk": re.compile(r'["\'].*%s.*["\'].*execute', re.IGNORECASE),
}
def scan_directory(self, directory: str) -> Dict[str, Any]:
"""
Scan a directory for tech debt.
Args:
directory: Path to the directory to scan
Returns:
Dictionary containing debt inventory and statistics
"""
directory_path = Path(directory)
if not directory_path.exists():
raise ValueError(f"Directory does not exist: {directory}")
print(f"Scanning directory: {directory}")
print("=" * 50)
# Reset state
self.debt_items = []
self.stats = defaultdict(int)
self.file_stats = {}
# Walk through directory
for root, dirs, files in os.walk(directory):
# Filter out ignored directories
dirs[:] = [d for d in dirs if not self._should_ignore(d)]
for file in files:
if self._should_ignore(file):
continue
file_path = os.path.join(root, file)
relative_path = os.path.relpath(file_path, directory)
try:
self._scan_file(file_path, relative_path)
except Exception as e:
print(f"Error scanning {relative_path}: {e}")
self.stats["scan_errors"] += 1
# Post-process results
self._detect_duplicates(directory)
self._calculate_priorities()
return self._generate_report(directory)
def _should_ignore(self, name: str) -> bool:
"""Check if file/directory should be ignored."""
for pattern in self.config["ignore_patterns"]:
if "*" in pattern:
if re.match(pattern.replace("*", ".*"), name):
return True
elif pattern in name:
return True
return False
def _scan_file(self, file_path: str, relative_path: str):
"""Scan a single file for tech debt."""
try:
with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
content = f.read()
lines = content.splitlines()
except Exception as e:
print(f"Cannot read {relative_path}: {e}")
return
file_ext = Path(file_path).suffix.lower()
file_info = {
"path": relative_path,
"lines": len(lines),
"size_kb": os.path.getsize(file_path) / 1024,
"language": self._detect_language(file_ext),
"debt_count": 0
}
self.stats["files_scanned"] += 1
self.stats["total_lines"] += len(lines)
# File size debt
if len(lines) > self.config["max_file_size_lines"]:
self._add_debt_item(
"large_file",
f"File is too large: {len(lines)} lines",
relative_path,
"medium",
{"lines": len(lines), "recommended_max": self.config["max_file_size_lines"]}
)
file_info["debt_count"] += 1
# Language-specific analysis
if file_info["language"] == "python" and file_ext == ".py":
self._scan_python_file(relative_path, content, lines)
else:
self._scan_generic_file(relative_path, content, lines, file_info["language"])
# Common patterns for all languages
self._scan_common_patterns(relative_path, content, lines)
self.file_stats[relative_path] = file_info
def _detect_language(self, file_ext: str) -> str:
"""Detect programming language from file extension."""
for lang, extensions in self.config["file_extensions"].items():
if file_ext in extensions:
return lang
return "unknown"
def _scan_python_file(self, file_path: str, content: str, lines: List[str]):
"""Scan Python files using AST parsing."""
try:
tree = ast.parse(content)
analyzer = PythonASTAnalyzer(self.config)
debt_items = analyzer.analyze(tree, file_path, lines)
self.debt_items.extend(debt_items)
self.stats["python_files"] += 1
except SyntaxError as e:
self._add_debt_item(
"syntax_error",
f"Python syntax error: {e}",
file_path,
"high",
{"line": e.lineno, "error": str(e)}
)
def _scan_generic_file(self, file_path: str, content: str, lines: List[str], language: str):
"""Scan non-Python files using pattern matching."""
# Detect long lines
for i, line in enumerate(lines):
if len(line) > 120:
self._add_debt_item(
"long_line",
f"Line too long: {len(line)} characters",
file_path,
"low",
{"line_number": i + 1, "length": len(line)}
)
        # Detect deep nesting (approximate, for brace-delimited languages).
        # Track a running brace balance instead of re-scanning the whole file
        # prefix on every line, and report once per file to avoid flooding the
        # inventory. (Python files never reach this method; they go through the
        # AST analyzer instead.)
        if language in ["javascript", "java", "csharp", "cpp", "go", "rust",
                        "kotlin", "php"]:
            brace_level = 0
            for i, line in enumerate(lines):
                brace_level += line.count('{') - line.count('}')
                if brace_level > self.config["max_nesting_depth"]:
                    self._add_debt_item(
                        "deep_nesting",
                        f"Deep nesting detected: {brace_level} levels",
                        file_path,
                        "medium",
                        {"line_number": i + 1, "nesting_level": brace_level}
                    )
                    break
def _scan_common_patterns(self, file_path: str, content: str, lines: List[str]):
"""Scan for common patterns across all file types."""
        # TODO/FIXME comments (the other comment_patterns are compiled, but only
        # the "todo" pattern is reported as a debt item)
        todo_regex = self.comment_regexes["todo"]
        for i, line in enumerate(lines):
            match = todo_regex.search(line)
            if match:
                self._add_debt_item(
                    "todo_comment",
                    f"TODO/FIXME comment: {match.group(0)}",
                    file_path,
                    "low",
                    {"line_number": i + 1, "comment": match.group(0).strip()}
                )
# Code smells
for smell_name, pattern in self.smell_patterns.items():
matches = pattern.finditer(content)
for match in matches:
line_num = content[:match.start()].count('\n') + 1
self._add_debt_item(
smell_name,
f"Code smell detected: {smell_name}",
file_path,
"medium",
{"line_number": line_num, "pattern": match.group(0)[:100]}
)
def _detect_duplicates(self, directory: str):
"""Detect duplicate code blocks across files."""
# Simple duplicate detection based on exact line matches
line_hashes = defaultdict(list)
for file_path, file_info in self.file_stats.items():
try:
full_path = os.path.join(directory, file_path)
with open(full_path, 'r', encoding='utf-8', errors='ignore') as f:
lines = f.readlines()
                for i in range(len(lines) - self.config["min_duplicate_lines"] + 1):
                    block = ''.join(lines[i:i + self.config["min_duplicate_lines"]])
                    stripped = block.strip()
                    if len(stripped) > 50:  # only consider substantial blocks
                        line_hashes[hash(stripped)].append((file_path, i + 1, block))
except Exception:
continue
# Report duplicates
for block_hash, occurrences in line_hashes.items():
if len(occurrences) > 1:
for file_path, line_num, block in occurrences:
self._add_debt_item(
"duplicate_code",
                        f"Duplicate code block found in {len(occurrences)} locations",
file_path,
"medium",
{
"line_number": line_num,
"duplicate_count": len(occurrences),
"other_files": [f[0] for f in occurrences if f[0] != file_path]
}
)
def _calculate_priorities(self):
"""Calculate priority scores for debt items."""
severity_weights = self.config["severity_weights"]
for item in self.debt_items:
base_score = severity_weights.get(item["severity"], 1)
# Adjust based on debt type
type_multipliers = {
"syntax_error": 2.0,
"security_risk": 1.8,
"large_function": 1.5,
"high_complexity": 1.4,
"duplicate_code": 1.3,
"todo_comment": 0.5
}
multiplier = type_multipliers.get(item["type"], 1.0)
item["priority_score"] = int(base_score * multiplier)
# Set priority category
if item["priority_score"] >= 15:
item["priority"] = "critical"
elif item["priority_score"] >= 10:
item["priority"] = "high"
elif item["priority_score"] >= 5:
item["priority"] = "medium"
else:
item["priority"] = "low"
def _add_debt_item(self, debt_type: str, description: str, file_path: str,
severity: str, metadata: Dict[str, Any]):
"""Add a debt item to the inventory."""
item = {
"id": f"DEBT-{len(self.debt_items) + 1:04d}",
"type": debt_type,
"description": description,
"file_path": file_path,
"severity": severity,
"metadata": metadata,
"detected_date": datetime.now().isoformat(),
"status": "identified"
}
self.debt_items.append(item)
self.stats[f"debt_{debt_type}"] += 1
self.stats["total_debt_items"] += 1
if file_path in self.file_stats:
self.file_stats[file_path]["debt_count"] += 1
def _generate_report(self, directory: str) -> Dict[str, Any]:
"""Generate the final debt report."""
# Sort debt items by priority score
self.debt_items.sort(key=lambda x: x.get("priority_score", 0), reverse=True)
# Calculate summary statistics
priority_counts = Counter(item["priority"] for item in self.debt_items)
type_counts = Counter(item["type"] for item in self.debt_items)
# Calculate health score (0-100, higher is better)
        total_files = self.stats.get("files_scanned", 0) or 1  # avoid division by zero on empty scans
        debt_density = len(self.debt_items) / total_files
health_score = max(0, 100 - (debt_density * 10))
report = {
"scan_metadata": {
"directory": directory,
"scan_date": datetime.now().isoformat(),
"scanner_version": "1.0.0",
"config": self.config
},
"summary": {
"total_files_scanned": self.stats.get("files_scanned", 0),
"total_lines_scanned": self.stats.get("total_lines", 0),
"total_debt_items": len(self.debt_items),
"health_score": round(health_score, 1),
"debt_density": round(debt_density, 2),
"priority_breakdown": dict(priority_counts),
"type_breakdown": dict(type_counts)
},
"debt_items": self.debt_items,
"file_statistics": self.file_stats,
"recommendations": self._generate_recommendations()
}
return report
def _generate_recommendations(self) -> List[str]:
"""Generate actionable recommendations based on findings."""
recommendations = []
# Priority-based recommendations
high_priority_count = len([item for item in self.debt_items
if item.get("priority") in ["critical", "high"]])
if high_priority_count > 10:
recommendations.append(
f"Address {high_priority_count} high-priority debt items immediately - "
"they pose significant risk to code quality and maintainability."
)
# Type-specific recommendations
type_counts = Counter(item["type"] for item in self.debt_items)
if type_counts.get("large_function", 0) > 5:
recommendations.append(
"Consider refactoring large functions into smaller, more focused units. "
"This will improve readability and testability."
)
if type_counts.get("duplicate_code", 0) > 3:
recommendations.append(
"Extract duplicate code into reusable functions or modules. "
"This reduces maintenance burden and potential for inconsistent changes."
)
if type_counts.get("todo_comment", 0) > 20:
recommendations.append(
"Review and address TODO/FIXME comments. Consider creating proper "
"tickets for substantial work items."
)
# General recommendations
        total_files = self.stats.get("files_scanned", 0) or 1  # avoid division by zero
if len(self.debt_items) / total_files > 2:
recommendations.append(
"High debt density detected. Consider establishing coding standards "
"and regular code review processes to prevent debt accumulation."
)
if not recommendations:
recommendations.append("Code quality looks good! Continue current practices.")
return recommendations
class PythonASTAnalyzer(ast.NodeVisitor):
"""AST analyzer for Python-specific debt detection."""
def __init__(self, config: Dict[str, Any]):
self.config = config
self.debt_items = []
self.current_file = ""
self.lines = []
self.function_stack = []
def analyze(self, tree: ast.AST, file_path: str, lines: List[str]) -> List[Dict[str, Any]]:
"""Analyze Python AST for tech debt."""
self.debt_items = []
self.current_file = file_path
self.lines = lines
self.function_stack = []
self.visit(tree)
return self.debt_items
def visit_FunctionDef(self, node: ast.FunctionDef):
"""Analyze function definitions."""
self.function_stack.append(node.name)
# Calculate function length
func_length = node.end_lineno - node.lineno + 1
if func_length > self.config["max_function_length"]:
self._add_debt(
"large_function",
f"Function '{node.name}' is too long: {func_length} lines",
node.lineno,
"medium",
{"function_name": node.name, "length": func_length}
)
# Check for missing docstring
if not ast.get_docstring(node):
self._add_debt(
"missing_docstring",
f"Function '{node.name}' missing docstring",
node.lineno,
"low",
{"function_name": node.name}
)
# Calculate cyclomatic complexity
complexity = self._calculate_complexity(node)
if complexity > self.config["max_complexity"]:
self._add_debt(
"high_complexity",
f"Function '{node.name}' has high complexity: {complexity}",
node.lineno,
"high",
{"function_name": node.name, "complexity": complexity}
)
        # Check parameter count (positional-only, regular, and keyword-only)
        args = node.args
        param_count = len(args.posonlyargs) + len(args.args) + len(args.kwonlyargs)
if param_count > 5:
self._add_debt(
"too_many_parameters",
f"Function '{node.name}' has too many parameters: {param_count}",
node.lineno,
"medium",
{"function_name": node.name, "parameter_count": param_count}
)
        self.generic_visit(node)
        self.function_stack.pop()

    # Async functions get the same checks
    visit_AsyncFunctionDef = visit_FunctionDef
def visit_ClassDef(self, node: ast.ClassDef):
"""Analyze class definitions."""
# Check for missing docstring
if not ast.get_docstring(node):
self._add_debt(
"missing_docstring",
f"Class '{node.name}' missing docstring",
node.lineno,
"low",
{"class_name": node.name}
)
# Check for too many methods
        methods = [n for n in node.body if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
if len(methods) > 20:
self._add_debt(
"large_class",
f"Class '{node.name}' has too many methods: {len(methods)}",
node.lineno,
"medium",
{"class_name": node.name, "method_count": len(methods)}
)
self.generic_visit(node)
def _calculate_complexity(self, node: ast.FunctionDef) -> int:
"""Calculate cyclomatic complexity of a function."""
complexity = 1 # Base complexity
for child in ast.walk(node):
if isinstance(child, (ast.If, ast.While, ast.For, ast.AsyncFor)):
complexity += 1
elif isinstance(child, ast.ExceptHandler):
complexity += 1
elif isinstance(child, ast.BoolOp):
complexity += len(child.values) - 1
return complexity
def _add_debt(self, debt_type: str, description: str, line_number: int,
severity: str, metadata: Dict[str, Any]):
"""Add a debt item to the collection."""
item = {
"id": f"DEBT-{len(self.debt_items) + 1:04d}",
"type": debt_type,
"description": description,
"file_path": self.current_file,
"line_number": line_number,
"severity": severity,
"metadata": metadata,
"detected_date": datetime.now().isoformat(),
"status": "identified"
}
self.debt_items.append(item)
def format_human_readable_report(report: Dict[str, Any]) -> str:
"""Format the report in human-readable format."""
output = []
# Header
output.append("=" * 60)
output.append("TECHNICAL DEBT SCAN REPORT")
output.append("=" * 60)
output.append(f"Directory: {report['scan_metadata']['directory']}")
output.append(f"Scan Date: {report['scan_metadata']['scan_date']}")
output.append(f"Scanner Version: {report['scan_metadata']['scanner_version']}")
output.append("")
# Summary
summary = report["summary"]
output.append("SUMMARY")
output.append("-" * 30)
output.append(f"Files Scanned: {summary['total_files_scanned']}")
output.append(f"Lines Scanned: {summary['total_lines_scanned']:,}")
output.append(f"Total Debt Items: {summary['total_debt_items']}")
output.append(f"Health Score: {summary['health_score']}/100")
output.append(f"Debt Density: {summary['debt_density']} items/file")
output.append("")
# Priority breakdown
output.append("PRIORITY BREAKDOWN")
output.append("-" * 30)
for priority, count in summary["priority_breakdown"].items():
output.append(f"{priority.capitalize()}: {count}")
output.append("")
# Top debt items
output.append("TOP DEBT ITEMS")
output.append("-" * 30)
top_items = report["debt_items"][:10]
for i, item in enumerate(top_items, 1):
output.append(f"{i}. [{item['priority'].upper()}] {item['description']}")
output.append(f" File: {item['file_path']}")
if 'line_number' in item:
output.append(f" Line: {item['line_number']}")
output.append("")
# Recommendations
output.append("RECOMMENDATIONS")
output.append("-" * 30)
for i, rec in enumerate(report["recommendations"], 1):
output.append(f"{i}. {rec}")
output.append("")
return "\n".join(output)
def main():
"""Main entry point for the debt scanner."""
parser = argparse.ArgumentParser(description="Scan codebase for technical debt")
parser.add_argument("directory", help="Directory to scan")
parser.add_argument("--config", help="Configuration file (JSON)")
parser.add_argument("--output", help="Output file path")
parser.add_argument("--format", choices=["json", "text", "both"],
default="both", help="Output format")
args = parser.parse_args()
# Load configuration
config = None
if args.config:
try:
with open(args.config, 'r') as f:
config = json.load(f)
except Exception as e:
print(f"Error loading config: {e}")
sys.exit(1)
# Run scan
scanner = DebtScanner(config)
try:
report = scanner.scan_directory(args.directory)
except Exception as e:
print(f"Scan failed: {e}")
sys.exit(1)
# Output results
if args.format in ["json", "both"]:
json_output = json.dumps(report, indent=2, default=str)
if args.output:
output_path = args.output if args.output.endswith('.json') else f"{args.output}.json"
with open(output_path, 'w') as f:
f.write(json_output)
print(f"JSON report written to: {output_path}")
else:
print("\nJSON REPORT:")
print("=" * 50)
print(json_output)
if args.format in ["text", "both"]:
text_output = format_human_readable_report(report)
if args.output:
output_path = args.output if args.output.endswith('.txt') else f"{args.output}.txt"
with open(output_path, 'w') as f:
f.write(text_output)
print(f"Text report written to: {output_path}")
else:
print("\nTEXT REPORT:")
print("=" * 50)
print(text_output)
if __name__ == "__main__":
    main()
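For reference, the cyclomatic-complexity heuristic used by `PythonASTAnalyzer` can be exercised on its own. This is a minimal standalone sketch that mirrors `_calculate_complexity` (the `sample` function is purely illustrative):

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Mirror of PythonASTAnalyzer._calculate_complexity for a single function."""
    func = ast.parse(source).body[0]  # assumes the source defines exactly one function
    complexity = 1  # base path
    for child in ast.walk(func):
        if isinstance(child, (ast.If, ast.While, ast.For, ast.AsyncFor)):
            complexity += 1
        elif isinstance(child, ast.ExceptHandler):
            complexity += 1
        elif isinstance(child, ast.BoolOp):
            complexity += len(child.values) - 1  # each extra and/or adds a branch
    return complexity

sample = """
def check(x, y):
    if x and y:              # +1 (if) +1 (and)
        for i in range(x):   # +1
            if i > y:        # +1
                return i
    return -1
"""
print(cyclomatic_complexity(sample))  # → 5
```

With the default `max_complexity` threshold, a function like `check` would stay well under the limit; the scanner only flags functions whose branch count exceeds the configured ceiling.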