Release Manager
Manage software releases end to end — release checklists, versioning strategy, changelog generation, rollback plans, and stakeholder communication.
What this skill does
Manage software launches end-to-end by automating planning, version numbers, and team communication. You will produce ready-to-use checklists, update notes, and recovery plans that ensure every launch is reliable and well-documented. Reach for this whenever you are ready to release new features or deliver fixes to your users.
name: "release-manager"
description: "Release Manager"
Release Manager
Tier: POWERFUL
Category: Engineering
Domain: Software Release Management & DevOps
Overview
The Release Manager skill provides comprehensive tools and knowledge for managing software releases end-to-end. From parsing conventional commits to generating changelogs, determining version bumps, and orchestrating release processes, this skill ensures reliable, predictable, and well-documented software releases.
Core Capabilities
- Automated Changelog Generation from git history using conventional commits
- Semantic Version Bumping based on commit analysis and breaking changes
- Release Readiness Assessment with comprehensive checklists and validation
- Release Planning & Coordination with stakeholder communication templates
- Rollback Planning with automated recovery procedures
- Hotfix Management for emergency releases
- Feature Flag Integration for progressive rollouts
Key Components
Scripts
- changelog_generator.py - Parses git logs and generates structured changelogs
- version_bumper.py - Determines correct version bumps from conventional commits
- release_planner.py - Assesses release readiness and generates coordination plans
Documentation
- Comprehensive release management methodology
- Conventional commits specification and examples
- Release workflow comparisons (Git Flow, Trunk-based, GitHub Flow)
- Hotfix procedures and emergency response protocols
Release Management Methodology
Semantic Versioning (SemVer)
Semantic Versioning follows the MAJOR.MINOR.PATCH format where:
- MAJOR version when you make incompatible API changes
- MINOR version when you add functionality in a backwards compatible manner
- PATCH version when you make backwards compatible bug fixes
Pre-release Versions
Pre-release versions are denoted by appending a hyphen and identifiers:
- 1.0.0-alpha.1 - Alpha releases for early testing
- 1.0.0-beta.2 - Beta releases for wider testing
- 1.0.0-rc.1 - Release candidates for final validation
Version Precedence
Version precedence is determined by comparing each identifier:
1.0.0-alpha < 1.0.0-alpha.1 < 1.0.0-alpha.beta < 1.0.0-beta < 1.0.0-beta.2 < 1.0.0-beta.11 < 1.0.0-rc.1 < 1.0.0
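These precedence rules can be sketched in Python. This is a minimal illustration of the SemVer comparison algorithm, not one of the skill's scripts, and it ignores build metadata:

```python
def parse(version):
    """Split '1.0.0-alpha.1' into ((1, 0, 0), ['alpha', 1])."""
    core, _, pre = version.partition("-")
    nums = tuple(int(x) for x in core.split("."))
    ids = [int(p) if p.isdigit() else p for p in pre.split(".")] if pre else None
    return nums, ids

def precedes(a, b):
    """True if version a has lower precedence than version b."""
    (core_a, pre_a), (core_b, pre_b) = parse(a), parse(b)
    if core_a != core_b:
        return core_a < core_b
    if pre_a is None or pre_b is None:
        # A pre-release has lower precedence than the normal version
        return pre_b is None and pre_a is not None
    for x, y in zip(pre_a, pre_b):
        if x == y:
            continue
        if isinstance(x, int) and isinstance(y, int):
            return x < y
        if isinstance(x, int) != isinstance(y, int):
            return isinstance(x, int)   # numeric identifiers sort below alphanumeric
        return x < y                    # alphanumeric identifiers compare in ASCII order
    return len(pre_a) < len(pre_b)      # fewer identifiers sorts lower when all match

chain = ["1.0.0-alpha", "1.0.0-alpha.1", "1.0.0-alpha.beta",
         "1.0.0-beta", "1.0.0-beta.2", "1.0.0-beta.11",
         "1.0.0-rc.1", "1.0.0"]
assert all(precedes(a, b) for a, b in zip(chain, chain[1:]))
```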
Conventional Commits
Conventional Commits provide a structured format for commit messages that enables automated tooling:
Format
<type>[optional scope]: <description>
[optional body]
[optional footer(s)]
Types
- feat: A new feature (correlates with MINOR version bump)
- fix: A bug fix (correlates with PATCH version bump)
- docs: Documentation only changes
- style: Changes that do not affect the meaning of the code
- refactor: A code change that neither fixes a bug nor adds a feature
- perf: A code change that improves performance
- test: Adding missing tests or correcting existing tests
- chore: Changes to the build process or auxiliary tools
- ci: Changes to CI configuration files and scripts
- build: Changes that affect the build system or external dependencies
- ! after the type, or a BREAKING CHANGE: footer: Introduces a breaking change (correlates with MAJOR version bump)
Examples
feat(user-auth): add OAuth2 integration
fix(api): resolve race condition in user creation
docs(readme): update installation instructions
feat!: remove deprecated payment API
BREAKING CHANGE: The legacy payment API has been removed
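A subject line in this format can be parsed with a single regular expression. The sketch below illustrates the idea; it is not the actual changelog_generator.py implementation:

```python
import re

CONVENTIONAL = re.compile(
    r"^(?P<type>feat|fix|docs|style|refactor|perf|test|chore|ci|build)"
    r"(?:\((?P<scope>[^)]+)\))?"
    r"(?P<bang>!)?: (?P<description>.+)$"
)

def parse_subject(subject, body=""):
    """Return commit metadata as a dict, or None if the subject is not conventional."""
    m = CONVENTIONAL.match(subject)
    if m is None:
        return None
    info = m.groupdict()
    # Breaking changes are flagged by "!" after the type or a footer in the body
    info["breaking"] = info.pop("bang") is not None or "BREAKING CHANGE:" in body
    return info

info = parse_subject("feat(user-auth): add OAuth2 integration")
# info["type"] == "feat", info["scope"] == "user-auth", info["breaking"] is False
```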
Automated Changelog Generation
Changelogs are automatically generated from conventional commits, organized by:
Structure
# Changelog
## [Unreleased]
### Added
### Changed
### Deprecated
### Removed
### Fixed
### Security
## [1.2.0] - 2024-01-15
### Added
- OAuth2 authentication support (#123)
- User preference dashboard (#145)
### Fixed
- Race condition in user creation (#134)
- Memory leak in image processing (#156)
### Breaking Changes
- Removed legacy payment API
Grouping Rules
- Added for new features (feat)
- Fixed for bug fixes (fix)
- Changed for changes in existing functionality
- Deprecated for soon-to-be removed features
- Removed for now removed features
- Security for vulnerability fixes
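The grouping rules above amount to a mapping from commit type to changelog section. A sketch, using an assumed mapping that follows the Keep a Changelog sections (the real generator may differ in details):

```python
# Assumed type-to-section mapping for illustration
SECTION_FOR_TYPE = {
    "feat": "Added",
    "fix": "Fixed",
    "refactor": "Changed",
    "perf": "Changed",
    "security": "Security",
}

def group_commits(commits):
    """commits: iterable of {'type': ..., 'description': ...} dicts."""
    sections = {}
    for commit in commits:
        section = SECTION_FOR_TYPE.get(commit["type"])
        if section:  # docs/style/test/chore etc. are omitted from the changelog
            sections.setdefault(section, []).append(commit["description"])
    return sections
```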
Metadata Extraction
- Link to pull requests and issues: (#123)
- Breaking changes highlighted prominently
- Scope-based grouping: auth:, api:, ui:
- Co-authored-by for contributor recognition
Version Bump Strategies
Version bumps are determined by analyzing commits since the last release:
Automatic Detection Rules
- MAJOR: Any commit with BREAKING CHANGE or ! after the type
- MINOR: Any feat type commits without breaking changes
- PATCH: fix, perf, security type commits
- NO BUMP: docs, style, test, chore, ci, build only
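Expressed as a small decision function (a sketch of the rules above, not version_bumper.py itself):

```python
def decide_bump(commits):
    """commits: list of (type, is_breaking) pairs since the last release.
    Returns 'major', 'minor', 'patch', or None when no release is needed."""
    if any(breaking for _, breaking in commits):
        return "major"
    types = {commit_type for commit_type, _ in commits}
    if "feat" in types:
        return "minor"
    if types & {"fix", "perf", "security"}:
        return "patch"
    return None  # only docs/style/test/chore/ci/build commits
```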
Pre-release Handling
# Alpha: 1.0.0-alpha.1 → 1.0.0-alpha.2
# Beta: 1.0.0-alpha.5 → 1.0.0-beta.1
# RC: 1.0.0-beta.3 → 1.0.0-rc.1
# Release: 1.0.0-rc.2 → 1.0.0
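The promotions above follow a simple rule: moving to a new stage resets the counter to 1, while staying in the same stage increments it. A hypothetical helper (not part of the skill's scripts):

```python
def next_prerelease(version, stage):
    """Promote or increment a pre-release, e.g. ('1.0.0-alpha.5', 'beta') -> '1.0.0-beta.1'."""
    core, _, pre = version.partition("-")
    if pre.startswith(stage + "."):
        counter = int(pre.rsplit(".", 1)[1]) + 1  # same stage: bump the counter
    else:
        counter = 1                               # new stage: start at .1
    return f"{core}-{stage}.{counter}"

def finalize(version):
    """Drop the pre-release suffix for the final release: '1.0.0-rc.2' -> '1.0.0'."""
    return version.partition("-")[0]
```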
Multi-package Considerations
For monorepos with multiple packages:
- Analyze commits affecting each package independently
- Support scoped version bumps: @scope/package@version
- Generate coordinated release plans across packages
Release Branch Workflows
Git Flow
main (production) ← release/1.2.0 ← develop ← feature/login
← hotfix/critical-fix
Advantages:
- Clear separation of concerns
- Stable main branch
- Parallel feature development
- Structured release process
Process:
- Create release branch from develop: git checkout -b release/1.2.0 develop
- Finalize release (version bump, changelog)
- Merge to main and develop
- Tag release: git tag v1.2.0
- Deploy from main
Trunk-based Development
main ← feature/login (short-lived)
← feature/payment (short-lived)
← hotfix/critical-fix
Advantages:
- Simplified workflow
- Faster integration
- Reduced merge conflicts
- Continuous integration friendly
Process:
- Short-lived feature branches (1-3 days)
- Frequent commits to main
- Feature flags for incomplete features
- Automated testing gates
- Deploy from main with feature toggles
GitHub Flow
main ← feature/login
← hotfix/critical-fix
Advantages:
- Simple and lightweight
- Fast deployment cycle
- Good for web applications
- Minimal overhead
Process:
- Create feature branch from main
- Regular commits and pushes
- Open pull request when ready
- Deploy from feature branch for testing
- Merge to main and deploy
Feature Flag Integration
Feature flags enable safe, progressive rollouts:
Types of Feature Flags
- Release flags: Control feature visibility in production
- Experiment flags: A/B testing and gradual rollouts
- Operational flags: Circuit breakers and performance toggles
- Permission flags: Role-based feature access
Implementation Strategy
# Progressive rollout example
if feature_flag("new_payment_flow", user_id):
    return new_payment_processor.process(payment)
else:
    return legacy_payment_processor.process(payment)
Release Coordination
- Deploy code with feature behind flag (disabled)
- Gradually enable for percentage of users
- Monitor metrics and error rates
- Full rollout or quick rollback based on data
- Remove flag in subsequent release
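A deterministic percentage rollout can be sketched by hashing each user into a stable bucket. This is an illustrative helper, not the skill's API; production systems typically delegate this to a flag service:

```python
import hashlib

def feature_flag(flag_name, user_id, rollout_percent=10):
    """Stable percentage rollout: the same user always lands in the same bucket."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # 0..99, roughly uniform across users
    return bucket < rollout_percent
```

Because the bucket is derived from the user and flag name alone, raising rollout_percent only ever adds users to the new path; no one flips back and forth between implementations as the rollout progresses.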
Release Readiness Checklists
Pre-Release Validation
- All planned features implemented and tested
- Breaking changes documented with migration guide
- API documentation updated
- Database migrations tested
- Security review completed for sensitive changes
- Performance testing passed thresholds
- Internationalization strings updated
- Third-party integrations validated
Quality Gates
- Unit test coverage ≥ 85%
- Integration tests passing
- End-to-end tests passing
- Static analysis clean
- Security scan passed
- Dependency audit clean
- Load testing completed
Documentation Requirements
- CHANGELOG.md updated
- README.md reflects new features
- API documentation generated
- Migration guide written for breaking changes
- Deployment notes prepared
- Rollback procedure documented
Stakeholder Approvals
- Product Manager sign-off
- Engineering Lead approval
- QA validation complete
- Security team clearance
- Legal review (if applicable)
- Compliance check (if regulated)
Deployment Coordination
Communication Plan
Internal Stakeholders:
- Engineering team: Technical changes and rollback procedures
- Product team: Feature descriptions and user impact
- Support team: Known issues and troubleshooting guides
- Sales team: Customer-facing changes and talking points
External Communication:
- Release notes for users
- API changelog for developers
- Migration guide for breaking changes
- Downtime notifications if applicable
Deployment Sequence
- Pre-deployment (T-24h): Final validation, freeze code
- Database migrations (T-2h): Run and validate schema changes
- Blue-green deployment (T-0): Switch traffic gradually
- Post-deployment (T+1h): Monitor metrics and logs
- Rollback window (T+4h): Decision point for rollback
Monitoring & Validation
- Application health checks
- Error rate monitoring
- Performance metrics tracking
- User experience monitoring
- Business metrics validation
- Third-party service integration health
Hotfix Procedures
Hotfixes address critical production issues requiring immediate deployment:
Severity Classification
P0 - Critical: Complete system outage, data loss, security breach
- SLA: Fix within 2 hours
- Process: Emergency deployment, all hands on deck
- Approval: Engineering Lead + On-call Manager
P1 - High: Major feature broken, significant user impact
- SLA: Fix within 24 hours
- Process: Expedited review and deployment
- Approval: Engineering Lead + Product Manager
P2 - Medium: Minor feature issues, limited user impact
- SLA: Fix in next release cycle
- Process: Normal review process
- Approval: Standard PR review
Emergency Response Process
- Incident declaration: Page on-call team
- Assessment: Determine severity and impact
- Hotfix branch: Create from last stable release
- Minimal fix: Address root cause only
- Expedited testing: Automated tests + manual validation
- Emergency deployment: Deploy to production
- Post-incident: Root cause analysis and prevention
Rollback Planning
Every release must have a tested rollback plan:
Rollback Triggers
- Error rate spike: >2x baseline within 30 minutes
- Performance degradation: >50% latency increase
- Feature failures: Core functionality broken
- Security incident: Vulnerability exploited
- Data corruption: Database integrity compromised
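The first two triggers are straightforward to encode in a monitoring check. The thresholds come from the list above; the function shape is illustrative:

```python
def should_rollback(error_rate, baseline_error_rate, p95_latency, baseline_p95):
    """Return a reason string if a rollback trigger fired, else None."""
    if error_rate > 2 * baseline_error_rate:
        return "error rate spike (>2x baseline)"
    if p95_latency > 1.5 * baseline_p95:
        return "performance degradation (>50% latency increase)"
    return None
```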
Rollback Types
Code Rollback:
- Revert to previous Docker image
- Database-compatible code changes only
- Feature flag disable preferred over code rollback
Database Rollback:
- Only for non-destructive migrations
- Data backup required before migration
- Forward-only migrations preferred (add columns, not drop)
Infrastructure Rollback:
- Blue-green deployment switch
- Load balancer configuration revert
- DNS changes (longer propagation time)
Automated Rollback
# Example rollback automation (sketch: error_rate, alert_oncall, and the other
# helpers are placeholders for your monitoring and deployment tooling)
def monitor_deployment():
    if error_rate() > THRESHOLD:
        alert_oncall("Error rate spike detected")
        if auto_rollback_enabled():
            execute_rollback()
Release Metrics & Analytics
Key Performance Indicators
- Lead Time: From commit to production
- Deployment Frequency: Releases per week/month
- Mean Time to Recovery: From incident to resolution
- Change Failure Rate: Percentage of releases causing incidents
Quality Metrics
- Rollback Rate: Percentage of releases rolled back
- Hotfix Rate: Hotfixes per regular release
- Bug Escape Rate: Production bugs per release
- Time to Detection: How quickly issues are identified
Process Metrics
- Review Time: Time spent in code review
- Testing Time: Automated + manual testing duration
- Approval Cycle: Time from PR to merge
- Release Preparation: Time spent on release activities
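The four headline KPIs above can be computed from a simple release log. The record shape here is an assumption made for illustration:

```python
from datetime import timedelta

def release_kpis(releases, period_days=30):
    """releases: list of dicts with 'lead_time' (timedelta), 'caused_incident' (bool),
    and 'recovery_time' (timedelta, present when the release caused an incident)."""
    count = len(releases)
    incidents = [r for r in releases if r["caused_incident"]]
    return {
        "deployment_frequency_per_week": count / (period_days / 7),
        "mean_lead_time": sum((r["lead_time"] for r in releases), timedelta()) / count,
        "change_failure_rate": len(incidents) / count,
        "mean_time_to_recovery": (
            sum((r["recovery_time"] for r in incidents), timedelta()) / len(incidents)
            if incidents else None
        ),
    }
```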
Tool Integration
Version Control Systems
- Git: Primary VCS with conventional commit parsing
- GitHub/GitLab: Pull request automation and CI/CD
- Bitbucket: Pipeline integration and deployment gates
CI/CD Platforms
- Jenkins: Pipeline orchestration and deployment automation
- GitHub Actions: Workflow automation and release publishing
- GitLab CI: Integrated pipelines with environment management
- CircleCI: Container-based builds and deployments
Monitoring & Alerting
- DataDog: Application performance monitoring
- New Relic: Error tracking and performance insights
- Sentry: Error aggregation and release tracking
- PagerDuty: Incident response and escalation
Communication Platforms
- Slack: Release notifications and coordination
- Microsoft Teams: Stakeholder communication
- Email: External customer notifications
- Status Pages: Public incident communication
Best Practices
Release Planning
- Regular cadence: Establish predictable release schedule
- Feature freeze: Lock changes 48h before release
- Risk assessment: Evaluate changes for potential impact
- Stakeholder alignment: Ensure all teams are prepared
Quality Assurance
- Automated testing: Comprehensive test coverage
- Staging environment: Production-like testing environment
- Canary releases: Gradual rollout to subset of users
- Monitoring: Proactive issue detection
Communication
- Clear timelines: Communicate schedules early
- Regular updates: Status reports during release process
- Issue transparency: Honest communication about problems
- Post-mortems: Learn from incidents and improve
Automation
- Reduce manual steps: Automate repetitive tasks
- Consistent process: Same steps every time
- Audit trails: Log all release activities
- Self-service: Enable teams to deploy safely
Common Anti-patterns
Process Anti-patterns
- Manual deployments: Error-prone and inconsistent
- Last-minute changes: Risk introduction without proper testing
- Skipping testing: Deploying without validation
- Poor communication: Stakeholders unaware of changes
Technical Anti-patterns
- Monolithic releases: Large, infrequent releases with high risk
- Coupled deployments: Services that must be deployed together
- No rollback plan: Unable to quickly recover from issues
- Environment drift: Production differs from staging
Cultural Anti-patterns
- Blame culture: Fear of making changes or reporting issues
- Hero culture: Relying on individuals instead of process
- Perfectionism: Delaying releases for minor improvements
- Risk aversion: Avoiding necessary changes due to fear
Getting Started
- Assessment: Evaluate current release process and pain points
- Tool setup: Configure scripts for your repository
- Process definition: Choose appropriate workflow for your team
- Automation: Implement CI/CD pipelines and quality gates
- Training: Educate team on new processes and tools
- Monitoring: Set up metrics and alerting for releases
- Iteration: Continuously improve based on feedback and metrics
The Release Manager skill transforms chaotic deployments into predictable, reliable releases that build confidence across your entire organization.
Release Manager
A comprehensive release management toolkit for automating changelog generation, version bumping, and release planning based on conventional commits and industry best practices.
Overview
The Release Manager skill provides three powerful Python scripts and comprehensive documentation for managing software releases:
- changelog_generator.py - Generate structured changelogs from git history
- version_bumper.py - Determine correct semantic version bumps
- release_planner.py - Assess release readiness and generate coordination plans
Quick Start
Prerequisites
- Python 3.7+
- Git repository with conventional commit messages
- No external dependencies required (uses only Python standard library)
Basic Usage
# Generate changelog from recent commits
git log --oneline --since="1 month ago" | python changelog_generator.py
# Determine version bump from commits since last tag
git log --oneline $(git describe --tags --abbrev=0)..HEAD | python version_bumper.py -c "1.2.3"
# Assess release readiness
python release_planner.py --input assets/sample_release_plan.json

Scripts Reference
changelog_generator.py
Parses conventional commits and generates structured changelogs in multiple formats.
Input Options:
- Git log text (oneline or full format)
- JSON array of commits
- Stdin or file input
Output Formats:
- Markdown (Keep a Changelog format)
- JSON structured data
- Both with release statistics
# From git log (recommended)
git log --oneline --since="last release" | python changelog_generator.py \
--version "2.1.0" \
--date "2024-01-15" \
--base-url "https://github.com/yourorg/yourrepo"
# From JSON file
python changelog_generator.py \
--input assets/sample_commits.json \
--input-format json \
--format both \
--summary
# With custom output
git log --format="%h %s" v1.0.0..HEAD | python changelog_generator.py \
--version "1.1.0" \
--output CHANGELOG_DRAFT.md

Features:
- Parses conventional commit types (feat, fix, docs, etc.)
- Groups commits by changelog categories (Added, Fixed, Changed, etc.)
- Extracts issue references (#123, fixes #456)
- Identifies breaking changes
- Links to commits and PRs
- Generates release summary statistics
version_bumper.py
Analyzes commits to determine semantic version bumps according to conventional commits.
Bump Rules:
- MAJOR: Breaking changes (feat!: or BREAKING CHANGE:)
- MINOR: New features (feat:)
- PATCH: Bug fixes (fix:, perf:, security:)
- NONE: Documentation, tests, chores only
# Basic version bump determination
git log --oneline v1.2.3..HEAD | python version_bumper.py --current-version "1.2.3"
# With pre-release version
python version_bumper.py \
--current-version "1.2.3" \
--prerelease alpha \
--input assets/sample_commits.json \
--input-format json
# Include bump commands and file updates
git log --oneline $(git describe --tags --abbrev=0)..HEAD | \
python version_bumper.py \
--current-version "$(git describe --tags --abbrev=0)" \
--include-commands \
--include-files \
--analysis

Features:
- Supports pre-release versions (alpha, beta, rc)
- Generates bump commands for npm, Python, Rust, Git
- Provides file update snippets
- Detailed commit analysis and categorization
- Custom rules for specific commit types
- JSON and text output formats
release_planner.py
Assesses release readiness and generates comprehensive release coordination plans.
Input: JSON release plan with features, quality gates, and stakeholders
# Assess release readiness
python release_planner.py --input assets/sample_release_plan.json
# Generate full release package
python release_planner.py \
--input release_plan.json \
--output-format markdown \
--include-checklist \
--include-communication \
--include-rollback \
--output release_report.md

Features:
- Feature readiness assessment with approval tracking
- Quality gate validation and reporting
- Stakeholder communication planning
- Rollback procedure generation
- Risk analysis and timeline assessment
- Customizable test coverage thresholds
- Multiple output formats (text, JSON, Markdown)
File Structure
release-manager/
├── SKILL.md # Comprehensive methodology guide
├── README.md # This file
├── changelog_generator.py # Changelog generation script
├── version_bumper.py # Version bump determination
├── release_planner.py # Release readiness assessment
├── references/ # Reference documentation
│ ├── conventional-commits-guide.md # Conventional commits specification
│ ├── release-workflow-comparison.md # Git Flow vs GitHub Flow vs Trunk-based
│ └── hotfix-procedures.md # Emergency release procedures
├── assets/ # Sample data for testing
│ ├── sample_git_log.txt # Sample git log output
│ ├── sample_git_log_full.txt # Detailed git log format
│ ├── sample_commits.json # JSON commit data
│ └── sample_release_plan.json # Release plan template
└── expected_outputs/ # Example script outputs
├── changelog_example.md # Expected changelog format
├── version_bump_example.txt # Version bump output
    └── release_readiness_example.txt # Release assessment report

Integration Examples
CI/CD Pipeline Integration
# .github/workflows/release.yml
name: Automated Release
on:
  push:
    branches: [main]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0  # Need full history
      - name: Determine version bump
        id: version
        run: |
          CURRENT=$(git describe --tags --abbrev=0)
          git log --oneline $CURRENT..HEAD | \
            python scripts/version_bumper.py -c $CURRENT --output-format json > bump.json
          echo "current_version=$CURRENT" >> $GITHUB_OUTPUT
          echo "new_version=$(jq -r '.recommended_version' bump.json)" >> $GITHUB_OUTPUT
      - name: Generate changelog
        run: |
          git log --oneline ${{ steps.version.outputs.current_version }}..HEAD | \
            python scripts/changelog_generator.py \
              --version "${{ steps.version.outputs.new_version }}" \
              --base-url "https://github.com/${{ github.repository }}" \
              --output CHANGELOG_ENTRY.md
      - name: Create release
        uses: actions/create-release@v1
        with:
          tag_name: v${{ steps.version.outputs.new_version }}
          release_name: Release ${{ steps.version.outputs.new_version }}
          body_path: CHANGELOG_ENTRY.md

Git Hooks Integration
#!/bin/bash
# .git/hooks/commit-msg
# Validate conventional commit format (Git passes the message file as $1;
# this belongs in the commit-msg hook, which receives that argument)
commit_msg_file=$1
commit_msg=$(cat "$commit_msg_file")
# Simple validation (more sophisticated validation available in commitlint)
if ! echo "$commit_msg" | grep -qE "^(feat|fix|docs|style|refactor|test|chore|perf|ci|build)(\(.+\))?(!)?:"; then
    echo "❌ Commit message doesn't follow conventional commits format"
    echo "Expected: type(scope): description"
    echo "Examples:"
    echo "  feat(auth): add OAuth2 integration"
    echo "  fix(api): resolve race condition"
    echo "  docs: update installation guide"
    exit 1
fi
echo "✅ Commit message format is valid"

Release Planning Automation
#!/usr/bin/env python3
# generate_release_plan.py - Automatically generate release plans from project management tools
import json
import requests

def generate_release_plan_from_github(repo, milestone):
    """Generate release plan from GitHub milestone and PRs."""
    # Fetch milestone details
    milestone_url = f"https://api.github.com/repos/{repo}/milestones/{milestone}"
    milestone_data = requests.get(milestone_url).json()
    # Fetch associated issues/PRs
    issues_url = f"https://api.github.com/repos/{repo}/issues?milestone={milestone}&state=all"
    issues = requests.get(issues_url).json()
    release_plan = {
        "release_name": milestone_data["title"],
        "version": "TBD",  # Fill in manually or extract from milestone
        "target_date": milestone_data["due_on"],
        "features": []
    }
    for issue in issues:
        if issue.get("pull_request"):  # It's a PR
            body = issue.get("body") or ""  # body can be null in the API response
            feature = {
                "id": f"GH-{issue['number']}",
                "title": issue["title"],
                "description": body[:200] + "..." if len(body) > 200 else body,
                "type": "feature",  # Could be parsed from labels
                "assignee": issue["assignee"]["login"] if issue["assignee"] else "",
                "status": "ready" if issue["state"] == "closed" else "in_progress",
                "pull_request_url": issue["pull_request"]["html_url"],
                "issue_url": issue["html_url"],
                "risk_level": "medium",  # Could be parsed from labels
                "qa_approved": "qa-approved" in [label["name"] for label in issue["labels"]],
                "pm_approved": "pm-approved" in [label["name"] for label in issue["labels"]]
            }
            release_plan["features"].append(feature)
    return release_plan

# Usage
if __name__ == "__main__":
    plan = generate_release_plan_from_github("yourorg/yourrepo", "5")
    with open("release_plan.json", "w") as f:
        json.dump(plan, f, indent=2)
    print("Generated release_plan.json")
    print("Run: python release_planner.py --input release_plan.json")

Advanced Usage
Custom Commit Type Rules
# Define custom rules for version bumping
python version_bumper.py \
--current-version "1.2.3" \
--custom-rules '{"security": "patch", "breaking": "major"}' \
--ignore-types "docs,style,test"

Multi-repository Release Coordination
#!/bin/bash
# multi_repo_release.sh - Coordinate releases across multiple repositories
repos=("frontend" "backend" "mobile" "docs")
base_version="2.1.0"
for repo in "${repos[@]}"; do
    echo "Processing $repo..."
    cd "$repo"
    # Generate changelog for this repo
    git log --oneline --since="1 month ago" | \
        python ../scripts/changelog_generator.py \
        --version "$base_version" \
        --output "CHANGELOG_$repo.md"
    # Determine version bump
    git log --oneline $(git describe --tags --abbrev=0)..HEAD | \
        python ../scripts/version_bumper.py \
        --current-version "$(git describe --tags --abbrev=0)" > "VERSION_$repo.txt"
    cd ..
done
echo "Generated changelogs and version recommendations for all repositories"

Integration with Slack/Teams
#!/usr/bin/env python3
# notify_release_status.py
import json
import os
import requests
import subprocess

SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]  # set via CI secrets

def send_slack_notification(webhook_url, message):
    payload = {"text": message}
    requests.post(webhook_url, json=payload)

def get_release_status():
    """Get current release status from release planner."""
    result = subprocess.run(
        ["python", "release_planner.py", "--input", "release_plan.json", "--output-format", "json"],
        capture_output=True, text=True
    )
    return json.loads(result.stdout)

# Usage in CI/CD
status = get_release_status()
if status["assessment"]["overall_status"] == "blocked":
    message = f"🚫 Release {status['version']} is BLOCKED\n"
    message += f"Issues: {', '.join(status['assessment']['blocking_issues'])}"
    send_slack_notification(SLACK_WEBHOOK_URL, message)
elif status["assessment"]["overall_status"] == "ready":
    message = f"✅ Release {status['version']} is READY for deployment!"
    send_slack_notification(SLACK_WEBHOOK_URL, message)

Best Practices
Commit Message Guidelines
- Use conventional commits consistently across your team
- Be specific in commit descriptions: "fix: resolve race condition in user creation" vs "fix: bug"
- Reference issues when applicable: "Closes #123" or "Fixes #456"
- Mark breaking changes clearly with ! or a BREAKING CHANGE: footer
- Keep first line under 50 characters when possible
Release Planning
- Plan releases early with clear feature lists and target dates
- Set quality gates and stick to them (test coverage, security scans, etc.)
- Track approvals from all relevant stakeholders
- Document rollback procedures before deployment
- Communicate clearly with both internal teams and external users
Version Management
- Follow semantic versioning strictly for predictable releases
- Use pre-release versions for beta testing and gradual rollouts
- Tag releases consistently with proper version numbers
- Maintain backwards compatibility when possible to avoid major version bumps
- Document breaking changes thoroughly with migration guides
Troubleshooting
Common Issues
"No valid commits found"
- Ensure git log contains commit messages
- Check that commits follow conventional format
- Verify input format (git-log vs json)
"Invalid version format"
- Use semantic versioning: 1.2.3, not 1.2 or v1.2.3.beta
- Pre-release format: 1.2.3-alpha.1
"Missing required approvals"
- Check feature risk levels in release plan
- High/critical risk features require additional approvals
- Update approval status in JSON file
Debug Mode
All scripts support verbose output for debugging:
# Add debug logging
python changelog_generator.py --input sample.txt --debug
# Validate input data
python -c "import json; print(json.load(open('release_plan.json')))"
# Test with sample data first
python release_planner.py --input assets/sample_release_plan.json

Contributing
When extending these scripts:
- Maintain backwards compatibility for existing command-line interfaces
- Add comprehensive tests for new features
- Update documentation including this README and SKILL.md
- Follow Python standards (PEP 8, type hints where helpful)
- Use only standard library to avoid dependencies
License
This skill is part of the claude-skills repository and follows the same license terms.
For detailed methodology and background information, see SKILL.md. For specific workflow guidance, see the references directory. For testing the scripts, use the sample data in the assets directory.
[
{
"hash": "a1b2c3d",
"author": "Sarah Johnson <[email protected]>",
"date": "2024-01-15T14:30:22Z",
"message": "feat(auth): add OAuth2 integration with Google and GitHub\n\nImplement OAuth2 authentication flow supporting Google and GitHub providers.\nUsers can now sign in using their existing social media accounts, improving\nuser experience and reducing password fatigue.\n\n- Add OAuth2 client configuration\n- Implement authorization code flow\n- Add user profile mapping from providers\n- Include comprehensive error handling\n\nCloses #123\nResolves #145"
},
{
"hash": "e4f5g6h",
"author": "Mike Chen <[email protected]>",
"date": "2024-01-15T13:45:18Z",
"message": "fix(api): resolve race condition in user creation endpoint\n\nFixed a race condition that occurred when multiple requests attempted\nto create users with the same email address simultaneously. This was\ncausing duplicate user records in some edge cases.\n\n- Added database unique constraint on email field\n- Implemented proper error handling for constraint violations\n- Added retry logic with exponential backoff\n\nFixes #234"
},
{
"hash": "i7j8k9l",
"author": "Emily Davis <[email protected]>",
"date": "2024-01-15T12:20:45Z",
"message": "docs(readme): update installation and deployment instructions\n\nUpdated README with comprehensive installation guide including:\n- Docker setup instructions\n- Environment variable configuration\n- Database migration steps\n- Troubleshooting common issues"
},
{
"hash": "m1n2o3p",
"author": "David Wilson <[email protected]>",
"date": "2024-01-15T11:15:30Z",
"message": "feat(ui)!: redesign dashboard with new component library\n\nComplete redesign of the user dashboard using our new component library.\nThis provides better accessibility, improved mobile responsiveness, and\na more modern user interface.\n\nBREAKING CHANGE: The dashboard API endpoints have changed structure.\nFrontend clients must update to use the new /v2/dashboard endpoints.\nThe legacy /v1/dashboard endpoints will be removed in version 3.0.0.\n\n- Implement new Card, Grid, and Chart components\n- Add responsive breakpoints for mobile devices\n- Improve accessibility with proper ARIA labels\n- Add dark mode support\n\nCloses #345, #367, #389"
},
{
"hash": "q4r5s6t",
"author": "Lisa Rodriguez <[email protected]>",
"date": "2024-01-15T10:45:12Z",
"message": "fix(db): optimize slow query in user search functionality\n\nOptimized the user search query that was causing performance issues\non databases with large user counts. Query time reduced from 2.5s to 150ms.\n\n- Added composite index on (email, username, created_at)\n- Refactored query to use more efficient JOIN structure\n- Added query result caching for common search patterns\n\nFixes #456"
},
{
"hash": "u7v8w9x",
"author": "Tom Anderson <[email protected]>",
"date": "2024-01-15T09:30:55Z",
"message": "chore(deps): upgrade React to version 18.2.0\n\nUpgrade React and related dependencies to latest stable versions.\nThis includes performance improvements and new concurrent features.\n\n- React: 17.0.2 → 18.2.0\n- React-DOM: 17.0.2 → 18.2.0\n- React-Router: 6.8.0 → 6.8.1\n- Updated all peer dependencies"
},
{
"hash": "y1z2a3b",
"author": "Jennifer Kim <[email protected]>",
"date": "2024-01-15T08:15:33Z",
"message": "test(auth): add comprehensive tests for OAuth flow\n\nAdded unit and integration tests for the OAuth2 authentication system\nto ensure reliability and prevent regressions.\n\n- Unit tests for OAuth client configuration\n- Integration tests for complete auth flow\n- Mock providers for testing without external dependencies\n- Error scenario testing\n\nTest coverage increased from 72% to 89% for auth module."
},
{
"hash": "c4d5e6f",
"author": "Alex Thompson <[email protected]>",
"date": "2024-01-15T07:45:20Z",
"message": "perf(image): implement WebP compression reducing size by 40%\n\nReplaced PNG compression with WebP format for uploaded images.\nThis reduces average image file sizes by 40% while maintaining\nvisual quality, improving page load times and reducing bandwidth costs.\n\n- Add WebP encoding support\n- Implement fallback to PNG for older browsers\n- Add quality settings configuration\n- Update image serving endpoints\n\nPerformance improvement: Page load time reduced by 25% on average."
},
{
"hash": "g7h8i9j",
"author": "Rachel Green <[email protected]>",
"date": "2024-01-14T16:20:10Z",
"message": "feat(payment): add Stripe payment processor integration\n\nIntegrate Stripe as a payment processor to support credit card payments.\nThis enables users to purchase premium features and subscriptions.\n\n- Add Stripe SDK integration\n- Implement payment intent flow\n- Add webhook handling for payment status updates\n- Include comprehensive error handling and logging\n- Add payment method management for users\n\nCloses #567\nCo-authored-by: Payment Team <[email protected]>"
},
{
"hash": "k1l2m3n",
"author": "Chris Martinez <[email protected]>",
"date": "2024-01-14T15:30:45Z",
"message": "fix(ui): resolve mobile navigation menu overflow issue\n\nFixed navigation menu overflow on mobile devices where long menu items\nwere being cut off and causing horizontal scrolling issues.\n\n- Implement responsive text wrapping\n- Add horizontal scrolling for overflowing content\n- Improve touch targets for better mobile usability\n- Fix z-index conflicts with dropdown menus\n\nFixes #678\nTested on iOS Safari, Chrome Mobile, and Firefox Mobile"
},
{
"hash": "o4p5q6r",
"author": "Anna Kowalski <[email protected]>",
"date": "2024-01-14T14:20:15Z",
"message": "refactor(api): extract validation logic into reusable middleware\n\nExtracted common validation logic from individual API endpoints into\nreusable middleware functions to reduce code duplication and improve\nmaintainability.\n\n- Create validation middleware for common patterns\n- Refactor user, product, and order endpoints\n- Add comprehensive error messages\n- Improve validation performance by 30%"
},
{
"hash": "s7t8u9v",
"author": "Kevin Park <[email protected]>",
"date": "2024-01-14T13:10:30Z",
"message": "feat(search): implement fuzzy search with Elasticsearch\n\nImplemented fuzzy search functionality using Elasticsearch to provide\nbetter search results for users with typos or partial matches.\n\n- Integrate Elasticsearch cluster\n- Add fuzzy matching with configurable distance\n- Implement search result ranking algorithm\n- Add search analytics and logging\n\nSearch accuracy improved by 35% in user testing.\nCloses #789"
},
{
"hash": "w1x2y3z",
"author": "Security Team <[email protected]>",
"date": "2024-01-14T12:45:22Z",
"message": "fix(security): patch SQL injection vulnerability in reports\n\nPatched SQL injection vulnerability in the reports generation endpoint\nthat could allow unauthorized access to sensitive data.\n\n- Implement parameterized queries for all report filters\n- Add input sanitization and validation\n- Update security audit logging\n- Add automated security tests\n\nSeverity: HIGH - CVE-2024-0001\nReported by: External security researcher"
}
]

a1b2c3d feat(auth): add OAuth2 integration with Google and GitHub
e4f5g6h fix(api): resolve race condition in user creation endpoint
i7j8k9l docs(readme): update installation and deployment instructions
m1n2o3p feat(ui)!: redesign dashboard with new component library
q4r5s6t fix(db): optimize slow query in user search functionality
u7v8w9x chore(deps): upgrade React to version 18.2.0
y1z2a3b test(auth): add comprehensive tests for OAuth flow
c4d5e6f perf(image): implement WebP compression reducing size by 40%
g7h8i9j feat(payment): add Stripe payment processor integration
k1l2m3n fix(ui): resolve mobile navigation menu overflow issue
o4p5q6r refactor(api): extract validation logic into reusable middleware
s7t8u9v feat(search): implement fuzzy search with Elasticsearch
w1x2y3z fix(security): patch SQL injection vulnerability in reports
a4b5c6d build(ci): add automated security scanning to deployment pipeline
e7f8g9h feat(notification): add email and SMS notification system
i1j2k3l fix(payment): handle expired credit cards gracefully
m4n5o6p docs(api): generate OpenAPI specification for all endpoints
q7r8s9t chore(cleanup): remove deprecated user preference API endpoints
u1v2w3x feat(admin)!: redesign admin panel with role-based permissions
y4z5a6b fix(db): resolve deadlock issues in concurrent transactions
c7d8e9f perf(cache): implement Redis caching for frequent database queries
g1h2i3j feat(mobile): add biometric authentication support
k4l5m6n fix(api): validate input parameters to prevent XSS attacks
o7p8q9r style(ui): update color palette and typography consistency
s1t2u3v feat(analytics): integrate Google Analytics 4 tracking
w4x5y6z fix(memory): resolve memory leak in image processing service
a7b8c9d ci(github): add automated testing for all pull requests
e1f2g3h feat(export): add CSV and PDF export functionality for reports
i4j5k6l fix(ui): resolve accessibility issues with screen readers
m7n8o9p refactor(auth): consolidate authentication logic into single service

commit a1b2c3d4e5f6789012345678901234567890abcd
Author: Sarah Johnson <[email protected]>
Date: Mon Jan 15 14:30:22 2024 +0000
feat(auth): add OAuth2 integration with Google and GitHub
Implement OAuth2 authentication flow supporting Google and GitHub providers.
Users can now sign in using their existing social media accounts, improving
user experience and reducing password fatigue.
- Add OAuth2 client configuration
- Implement authorization code flow
- Add user profile mapping from providers
- Include comprehensive error handling
Closes #123
Resolves #145
commit e4f5g6h7i8j9012345678901234567890123abcdef
Author: Mike Chen <[email protected]>
Date: Mon Jan 15 13:45:18 2024 +0000
fix(api): resolve race condition in user creation endpoint
Fixed a race condition that occurred when multiple requests attempted
to create users with the same email address simultaneously. This was
causing duplicate user records in some edge cases.
- Added database unique constraint on email field
- Implemented proper error handling for constraint violations
- Added retry logic with exponential backoff
Fixes #234
commit i7j8k9l0m1n2345678901234567890123456789abcd
Author: Emily Davis <[email protected]>
Date: Mon Jan 15 12:20:45 2024 +0000
docs(readme): update installation and deployment instructions
Updated README with comprehensive installation guide including:
- Docker setup instructions
- Environment variable configuration
- Database migration steps
- Troubleshooting common issues
commit m1n2o3p4q5r6789012345678901234567890abcdefg
Author: David Wilson <[email protected]>
Date: Mon Jan 15 11:15:30 2024 +0000
feat(ui)!: redesign dashboard with new component library
Complete redesign of the user dashboard using our new component library.
This provides better accessibility, improved mobile responsiveness, and
a more modern user interface.
BREAKING CHANGE: The dashboard API endpoints have changed structure.
Frontend clients must update to use the new /v2/dashboard endpoints.
The legacy /v1/dashboard endpoints will be removed in version 3.0.0.
- Implement new Card, Grid, and Chart components
- Add responsive breakpoints for mobile devices
- Improve accessibility with proper ARIA labels
- Add dark mode support
Closes #345, #367, #389
commit q4r5s6t7u8v9012345678901234567890123456abcd
Author: Lisa Rodriguez <[email protected]>
Date: Mon Jan 15 10:45:12 2024 +0000
fix(db): optimize slow query in user search functionality
Optimized the user search query that was causing performance issues
on databases with large user counts. Query time reduced from 2.5s to 150ms.
- Added composite index on (email, username, created_at)
- Refactored query to use more efficient JOIN structure
- Added query result caching for common search patterns
Fixes #456
commit u7v8w9x0y1z2345678901234567890123456789abcde
Author: Tom Anderson <[email protected]>
Date: Mon Jan 15 09:30:55 2024 +0000
chore(deps): upgrade React to version 18.2.0
Upgrade React and related dependencies to latest stable versions.
This includes performance improvements and new concurrent features.
- React: 17.0.2 → 18.2.0
- React-DOM: 17.0.2 → 18.2.0
- React-Router: 6.8.0 → 6.8.1
- Updated all peer dependencies
commit y1z2a3b4c5d6789012345678901234567890abcdefg
Author: Jennifer Kim <[email protected]>
Date: Mon Jan 15 08:15:33 2024 +0000
test(auth): add comprehensive tests for OAuth flow
Added unit and integration tests for the OAuth2 authentication system
to ensure reliability and prevent regressions.
- Unit tests for OAuth client configuration
- Integration tests for complete auth flow
- Mock providers for testing without external dependencies
- Error scenario testing
Test coverage increased from 72% to 89% for auth module.
commit c4d5e6f7g8h9012345678901234567890123456abcd
Author: Alex Thompson <[email protected]>
Date: Mon Jan 15 07:45:20 2024 +0000
perf(image): implement WebP compression reducing size by 40%
Replaced PNG compression with WebP format for uploaded images.
This reduces average image file sizes by 40% while maintaining
visual quality, improving page load times and reducing bandwidth costs.
- Add WebP encoding support
- Implement fallback to PNG for older browsers
- Add quality settings configuration
- Update image serving endpoints
Performance improvement: Page load time reduced by 25% on average.
commit g7h8i9j0k1l2345678901234567890123456789abcde
Author: Rachel Green <[email protected]>
Date: Sun Jan 14 16:20:10 2024 +0000
feat(payment): add Stripe payment processor integration
Integrate Stripe as a payment processor to support credit card payments.
This enables users to purchase premium features and subscriptions.
- Add Stripe SDK integration
- Implement payment intent flow
- Add webhook handling for payment status updates
- Include comprehensive error handling and logging
- Add payment method management for users
Closes #567
Co-authored-by: Payment Team <[email protected]>
commit k1l2m3n4o5p6789012345678901234567890abcdefg
Author: Chris Martinez <[email protected]>
Date: Sun Jan 14 15:30:45 2024 +0000
fix(ui): resolve mobile navigation menu overflow issue
Fixed navigation menu overflow on mobile devices where long menu items
were being cut off and causing horizontal scrolling issues.
- Implement responsive text wrapping
- Add horizontal scrolling for overflowing content
- Improve touch targets for better mobile usability
- Fix z-index conflicts with dropdown menus
Fixes #678
Tested on iOS Safari, Chrome Mobile, and Firefox Mobile

{
"release_name": "Winter 2024 Release",
"version": "2.3.0",
"target_date": "2024-02-15T10:00:00Z",
"features": [
{
"id": "AUTH-123",
"title": "OAuth2 Integration",
"description": "Add support for Google and GitHub OAuth2 authentication",
"type": "feature",
"assignee": "[email protected]",
"status": "ready",
"pull_request_url": "https://github.com/ourapp/backend/pull/234",
"issue_url": "https://github.com/ourapp/backend/issues/123",
"risk_level": "medium",
"test_coverage_required": 85.0,
"test_coverage_actual": 89.5,
"requires_migration": false,
"breaking_changes": [],
"dependencies": ["AUTH-124"],
"qa_approved": true,
"security_approved": true,
"pm_approved": true
},
{
"id": "UI-345",
"title": "Dashboard Redesign",
"description": "Complete redesign of user dashboard with new component library",
"type": "breaking_change",
"assignee": "[email protected]",
"status": "ready",
"pull_request_url": "https://github.com/ourapp/frontend/pull/456",
"issue_url": "https://github.com/ourapp/frontend/issues/345",
"risk_level": "high",
"test_coverage_required": 90.0,
"test_coverage_actual": 92.3,
"requires_migration": true,
"migration_complexity": "moderate",
"breaking_changes": [
"Dashboard API endpoints changed from /v1/dashboard to /v2/dashboard",
"Dashboard widget configuration format updated"
],
"dependencies": [],
"qa_approved": true,
"security_approved": true,
"pm_approved": true
},
{
"id": "PAY-567",
"title": "Stripe Payment Integration",
"description": "Add Stripe as payment processor for premium features",
"type": "feature",
"assignee": "[email protected]",
"status": "ready",
"pull_request_url": "https://github.com/ourapp/backend/pull/678",
"issue_url": "https://github.com/ourapp/backend/issues/567",
"risk_level": "high",
"test_coverage_required": 95.0,
"test_coverage_actual": 97.2,
"requires_migration": true,
"migration_complexity": "complex",
"breaking_changes": [],
"dependencies": ["SEC-890"],
"qa_approved": true,
"security_approved": true,
"pm_approved": true
},
{
"id": "SEARCH-789",
"title": "Elasticsearch Fuzzy Search",
"description": "Implement fuzzy search functionality with Elasticsearch",
"type": "feature",
"assignee": "[email protected]",
"status": "in_progress",
"pull_request_url": "https://github.com/ourapp/backend/pull/890",
"issue_url": "https://github.com/ourapp/backend/issues/789",
"risk_level": "medium",
"test_coverage_required": 80.0,
"test_coverage_actual": 76.5,
"requires_migration": true,
"migration_complexity": "moderate",
"breaking_changes": [],
"dependencies": ["INFRA-234"],
"qa_approved": false,
"security_approved": true,
"pm_approved": true
},
{
"id": "MOBILE-456",
"title": "Biometric Authentication",
"description": "Add fingerprint and face ID support for mobile apps",
"type": "feature",
"assignee": "[email protected]",
"status": "blocked",
"pull_request_url": null,
"issue_url": "https://github.com/ourapp/mobile/issues/456",
"risk_level": "medium",
"test_coverage_required": 85.0,
"test_coverage_actual": null,
"requires_migration": false,
"breaking_changes": [],
"dependencies": ["AUTH-123"],
"qa_approved": false,
"security_approved": false,
"pm_approved": true
},
{
"id": "PERF-678",
"title": "Redis Caching Implementation",
"description": "Implement Redis caching for frequently accessed data",
"type": "performance",
"assignee": "[email protected]",
"status": "ready",
"pull_request_url": "https://github.com/ourapp/backend/pull/901",
"issue_url": "https://github.com/ourapp/backend/issues/678",
"risk_level": "low",
"test_coverage_required": 75.0,
"test_coverage_actual": 82.1,
"requires_migration": false,
"breaking_changes": [],
"dependencies": [],
"qa_approved": true,
"security_approved": false,
"pm_approved": true
}
],
"quality_gates": [
{
"name": "Unit Test Coverage",
"required": true,
"status": "ready",
"details": "Overall test coverage above 85% threshold",
"threshold": 85.0,
"actual_value": 87.3
},
{
"name": "Integration Tests",
"required": true,
"status": "ready",
"details": "All integration tests passing"
},
{
"name": "Security Scan",
"required": true,
"status": "pending",
"details": "Waiting for security team review of payment integration"
},
{
"name": "Performance Testing",
"required": true,
"status": "ready",
"details": "Load testing shows 99th percentile response time under 500ms"
},
{
"name": "Documentation Review",
"required": true,
"status": "pending",
"details": "API documentation needs update for dashboard changes"
},
{
"name": "Dependency Audit",
"required": true,
"status": "ready",
"details": "No high or critical vulnerabilities found"
}
],
"stakeholders": [
{
"name": "Engineering Team",
"role": "developer",
"contact": "[email protected]",
"notification_type": "slack",
"critical_path": true
},
{
"name": "Product Team",
"role": "pm",
"contact": "[email protected]",
"notification_type": "email",
"critical_path": true
},
{
"name": "QA Team",
"role": "qa",
"contact": "[email protected]",
"notification_type": "slack",
"critical_path": true
},
{
"name": "Security Team",
"role": "security",
"contact": "[email protected]",
"notification_type": "email",
"critical_path": false
},
{
"name": "Customer Support",
"role": "support",
"contact": "[email protected]",
"notification_type": "email",
"critical_path": false
},
{
"name": "Sales Team",
"role": "sales",
"contact": "[email protected]",
"notification_type": "email",
"critical_path": false
},
{
"name": "Beta Users",
"role": "customer",
"contact": "[email protected]",
"notification_type": "email",
"critical_path": false
}
],
"rollback_steps": [
{
"order": 1,
"description": "Alert incident response team and stakeholders",
"estimated_time": "2 minutes",
"risk_level": "low",
"verification": "Confirm team is aware and responding via Slack"
},
{
"order": 2,
"description": "Switch load balancer to previous version",
"command": "kubectl patch service app --patch '{\"spec\": {\"selector\": {\"version\": \"v2.2.1\"}}}'",
"estimated_time": "30 seconds",
"risk_level": "low",
"verification": "Check traffic routing to previous version via monitoring dashboard"
},
{
"order": 3,
"description": "Disable new feature flags",
"command": "curl -X POST https://api.example.com/feature-flags/oauth2/disable",
"estimated_time": "1 minute",
"risk_level": "low",
"verification": "Verify feature flags are disabled in admin panel"
},
{
"order": 4,
"description": "Roll back database migrations",
"command": "python manage.py migrate app 0042",
"estimated_time": "10 minutes",
"risk_level": "high",
"verification": "Verify database schema and run data integrity checks"
},
{
"order": 5,
"description": "Clear Redis cache",
"command": "redis-cli FLUSHALL",
"estimated_time": "30 seconds",
"risk_level": "medium",
"verification": "Confirm cache is cleared and application rebuilds cache properly"
},
{
"order": 6,
"description": "Verify application health",
"estimated_time": "5 minutes",
"risk_level": "low",
"verification": "Check health endpoints, error rates, and core user workflows"
},
{
"order": 7,
"description": "Update status page and notify users",
"estimated_time": "5 minutes",
"risk_level": "low",
"verification": "Confirm status page updated and notifications sent"
}
]
}

#!/usr/bin/env python3
"""
Changelog Generator
Parses git log output in conventional commits format and generates structured changelogs
in multiple formats (Markdown, Keep a Changelog). Groups commits by type, extracts scope,
links to PRs/issues, and highlights breaking changes.
Input: git log text (piped from git log) or JSON array of commits
Output: formatted CHANGELOG.md section + release summary stats
"""
import argparse
import json
import re
import sys
from collections import defaultdict, Counter
from datetime import datetime
from typing import Dict, List, Optional, Tuple, Union
class ConventionalCommit:
"""Represents a parsed conventional commit."""
def __init__(self, raw_message: str, commit_hash: str = "", author: str = "",
date: str = "", merge_info: Optional[str] = None):
self.raw_message = raw_message
self.commit_hash = commit_hash
self.author = author
self.date = date
self.merge_info = merge_info
# Parse the commit message
self.type = ""
self.scope = ""
self.description = ""
self.body = ""
self.footers = []
self.is_breaking = False
self.breaking_change_description = ""
self._parse_commit_message()
def _parse_commit_message(self):
"""Parse conventional commit format."""
lines = self.raw_message.split('\n')
header = lines[0] if lines else ""
# Parse header: type(scope): description
header_pattern = r'^(\w+)(\([^)]+\))?(!)?:\s*(.+)$'
match = re.match(header_pattern, header)
if match:
self.type = match.group(1).lower()
scope_match = match.group(2)
self.scope = scope_match[1:-1] if scope_match else "" # Remove parentheses
self.is_breaking = bool(match.group(3)) # ! indicates breaking change
self.description = match.group(4).strip()
else:
# Fallback for non-conventional commits
self.type = "chore"
self.description = header
# Parse body and footers
if len(lines) > 1:
body_lines = []
footer_lines = []
in_footer = False
for line in lines[1:]:
if not line.strip():
continue
                # Breaking-change footer ("BREAKING CHANGE: description")
                if line.startswith('BREAKING CHANGE:') or line.startswith('BREAKING-CHANGE:'):
                    self.is_breaking = True
                    self.breaking_change_description = line.split(':', 1)[1].strip()
                    in_footer = True
                    footer_lines.append(line)
                # Other footers: "Token: value" or "Token #123" (e.g. "Fixes #456",
                # "Co-authored-by: ..."). The original [A-Z-]+ pattern missed
                # mixed-case tokens and the space in "BREAKING CHANGE".
                elif re.match(r'^[A-Za-z][A-Za-z-]*:\s+\S|^[A-Za-z][A-Za-z-]*\s+#\d+$', line):
                    in_footer = True
                    footer_lines.append(line)
else:
if in_footer:
# Continuation of footer
footer_lines.append(line)
else:
body_lines.append(line)
self.body = '\n'.join(body_lines).strip()
self.footers = footer_lines
def extract_issue_references(self) -> List[str]:
"""Extract issue/PR references like #123, fixes #456, etc."""
text = f"{self.description} {self.body} {' '.join(self.footers)}"
# Common patterns for issue references
patterns = [
r'#(\d+)', # Simple #123
r'(?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)\s+#(\d+)', # closes #123
r'(?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)\s+(\w+/\w+)?#(\d+)' # fixes repo#123
]
references = []
for pattern in patterns:
matches = re.findall(pattern, text, re.IGNORECASE)
for match in matches:
if isinstance(match, tuple):
# Handle tuple results from more complex patterns
ref = match[-1] if match[-1] else match[0]
else:
ref = match
if ref and ref not in references:
references.append(ref)
return references
def get_changelog_category(self) -> str:
"""Map commit type to changelog category."""
category_map = {
'feat': 'Added',
'add': 'Added',
'fix': 'Fixed',
'bugfix': 'Fixed',
'security': 'Security',
            'perf': 'Changed',  # Performance improvements go to Changed
'refactor': 'Changed',
'style': 'Changed',
'docs': 'Changed',
'test': None, # Tests don't appear in user-facing changelog
'ci': None,
'build': None,
'chore': None,
'revert': 'Fixed',
'remove': 'Removed',
'deprecate': 'Deprecated'
}
return category_map.get(self.type, 'Changed')
class ChangelogGenerator:
"""Main changelog generator class."""
def __init__(self):
self.commits: List[ConventionalCommit] = []
self.version = "Unreleased"
self.date = datetime.now().strftime("%Y-%m-%d")
self.base_url = ""
def parse_git_log_output(self, git_log_text: str):
"""Parse git log output into ConventionalCommit objects."""
# Try to detect format based on patterns in the text
lines = git_log_text.strip().split('\n')
if not lines or not lines[0]:
return
# Format 1: Simple oneline format (hash message)
oneline_pattern = r'^([a-f0-9]{7,40})\s+(.+)$'
# Format 2: Full format with metadata
full_pattern = r'^commit\s+([a-f0-9]+)'
current_commit = None
commit_buffer = []
for line in lines:
            # NOTE: do not strip the line here - in the full git-log format,
            # commit message lines are indented, and that leading whitespace is
            # what distinguishes them from metadata lines below.
            if not line.strip():
                continue
# Check if this is a new commit (oneline format)
oneline_match = re.match(oneline_pattern, line)
if oneline_match:
# Process previous commit
if current_commit:
self.commits.append(current_commit)
# Start new commit
commit_hash = oneline_match.group(1)
message = oneline_match.group(2)
current_commit = ConventionalCommit(message, commit_hash)
continue
# Check if this is a new commit (full format)
full_match = re.match(full_pattern, line)
if full_match:
# Process previous commit
if current_commit:
commit_message = '\n'.join(commit_buffer).strip()
if commit_message:
current_commit = ConventionalCommit(commit_message, current_commit.commit_hash,
current_commit.author, current_commit.date)
self.commits.append(current_commit)
# Start new commit
commit_hash = full_match.group(1)
current_commit = ConventionalCommit("", commit_hash)
commit_buffer = []
continue
# Parse metadata lines in full format
if current_commit and not current_commit.raw_message:
if line.startswith('Author:'):
current_commit.author = line[7:].strip()
elif line.startswith('Date:'):
current_commit.date = line[5:].strip()
elif line.startswith('Merge:'):
current_commit.merge_info = line[6:].strip()
elif line.startswith(' '):
# Commit message line (indented)
commit_buffer.append(line[4:]) # Remove 4-space indent
# Process final commit
if current_commit:
if commit_buffer:
commit_message = '\n'.join(commit_buffer).strip()
current_commit = ConventionalCommit(commit_message, current_commit.commit_hash,
current_commit.author, current_commit.date)
self.commits.append(current_commit)
def parse_json_commits(self, json_data: Union[str, List[Dict]]):
"""Parse commits from JSON format."""
if isinstance(json_data, str):
data = json.loads(json_data)
else:
data = json_data
for commit_data in data:
commit = ConventionalCommit(
raw_message=commit_data.get('message', ''),
commit_hash=commit_data.get('hash', ''),
author=commit_data.get('author', ''),
date=commit_data.get('date', '')
)
self.commits.append(commit)
def group_commits_by_category(self) -> Dict[str, List[ConventionalCommit]]:
"""Group commits by changelog category."""
categories = defaultdict(list)
for commit in self.commits:
category = commit.get_changelog_category()
if category: # Skip None categories (internal changes)
categories[category].append(commit)
return dict(categories)
def generate_markdown_changelog(self, include_unreleased: bool = True) -> str:
"""Generate Keep a Changelog format markdown."""
grouped_commits = self.group_commits_by_category()
if not grouped_commits:
return "No notable changes.\n"
# Start with header
changelog = []
if include_unreleased and self.version == "Unreleased":
changelog.append(f"## [{self.version}]")
else:
changelog.append(f"## [{self.version}] - {self.date}")
changelog.append("")
# Order categories logically
category_order = ['Added', 'Changed', 'Deprecated', 'Removed', 'Fixed', 'Security']
# Separate breaking changes
breaking_changes = [commit for commit in self.commits if commit.is_breaking]
# Add breaking changes section first if any exist
if breaking_changes:
changelog.append("### Breaking Changes")
for commit in breaking_changes:
line = self._format_commit_line(commit, show_breaking=True)
changelog.append(f"- {line}")
changelog.append("")
# Add regular categories
for category in category_order:
if category not in grouped_commits:
continue
changelog.append(f"### {category}")
# Group by scope for better organization
scoped_commits = defaultdict(list)
for commit in grouped_commits[category]:
scope = commit.scope if commit.scope else "general"
scoped_commits[scope].append(commit)
# Sort scopes, with 'general' last
scopes = sorted(scoped_commits.keys())
if "general" in scopes:
scopes.remove("general")
scopes.append("general")
for scope in scopes:
if len(scoped_commits) > 1 and scope != "general":
changelog.append(f"#### {scope.title()}")
for commit in scoped_commits[scope]:
line = self._format_commit_line(commit)
changelog.append(f"- {line}")
changelog.append("")
return '\n'.join(changelog)
def _format_commit_line(self, commit: ConventionalCommit, show_breaking: bool = False) -> str:
"""Format a single commit line for the changelog."""
        # Use the description verbatim; str.capitalize() would lowercase
        # proper nouns such as "OAuth2" or "GitHub"
        line = commit.description
        # Prefix the scope so entries read "auth: ...", "ui: ..." as in the
        # generated changelog
        if commit.scope:
            line = f"{commit.scope}: {line}"
# Add issue references
issue_refs = commit.extract_issue_references()
if issue_refs:
refs_str = ', '.join(f"#{ref}" for ref in issue_refs)
line += f" ({refs_str})"
# Add commit hash if available
if commit.commit_hash:
short_hash = commit.commit_hash[:7]
line += f" [{short_hash}]"
if self.base_url:
line += f"({self.base_url}/commit/{commit.commit_hash})"
# Add breaking change indicator
if show_breaking and commit.breaking_change_description:
line += f" - {commit.breaking_change_description}"
elif commit.is_breaking and not show_breaking:
line += " ⚠️ BREAKING"
return line
def generate_release_summary(self) -> Dict:
"""Generate summary statistics for the release."""
if not self.commits:
return {
'version': self.version,
'date': self.date,
'total_commits': 0,
'by_type': {},
'by_author': {},
'breaking_changes': 0,
'notable_changes': 0
}
# Count by type
type_counts = Counter(commit.type for commit in self.commits)
# Count by author
author_counts = Counter(commit.author for commit in self.commits if commit.author)
# Count breaking changes
breaking_count = sum(1 for commit in self.commits if commit.is_breaking)
# Count notable changes (excluding chore, ci, build, test)
notable_types = {'feat', 'fix', 'security', 'perf', 'refactor', 'remove', 'deprecate'}
notable_count = sum(1 for commit in self.commits if commit.type in notable_types)
return {
'version': self.version,
'date': self.date,
'total_commits': len(self.commits),
'by_type': dict(type_counts.most_common()),
'by_author': dict(author_counts.most_common(10)), # Top 10 contributors
'breaking_changes': breaking_count,
'notable_changes': notable_count,
'scopes': list(set(commit.scope for commit in self.commits if commit.scope)),
'issue_references': len(set().union(*(commit.extract_issue_references() for commit in self.commits)))
}
def generate_json_output(self) -> str:
"""Generate JSON representation of the changelog data."""
grouped_commits = self.group_commits_by_category()
# Convert commits to serializable format
json_data = {
'version': self.version,
'date': self.date,
'summary': self.generate_release_summary(),
'categories': {}
}
for category, commits in grouped_commits.items():
json_data['categories'][category] = []
for commit in commits:
commit_data = {
'type': commit.type,
'scope': commit.scope,
'description': commit.description,
'hash': commit.commit_hash,
'author': commit.author,
'date': commit.date,
'breaking': commit.is_breaking,
'breaking_description': commit.breaking_change_description,
'issue_references': commit.extract_issue_references()
}
json_data['categories'][category].append(commit_data)
return json.dumps(json_data, indent=2)
def main():
"""Main entry point with CLI argument parsing."""
parser = argparse.ArgumentParser(description="Generate changelog from conventional commits")
parser.add_argument('--input', '-i', type=str, help='Input file (default: stdin)')
parser.add_argument('--format', '-f', choices=['markdown', 'json', 'both'],
default='markdown', help='Output format')
parser.add_argument('--version', '-v', type=str, default='Unreleased',
help='Version for this release')
parser.add_argument('--date', '-d', type=str,
default=datetime.now().strftime("%Y-%m-%d"),
help='Release date (YYYY-MM-DD format)')
parser.add_argument('--base-url', '-u', type=str, default='',
help='Base URL for commit links')
parser.add_argument('--input-format', choices=['git-log', 'json'],
default='git-log', help='Input format')
parser.add_argument('--output', '-o', type=str, help='Output file (default: stdout)')
parser.add_argument('--summary', '-s', action='store_true',
help='Include release summary statistics')
args = parser.parse_args()
# Read input
if args.input:
with open(args.input, 'r', encoding='utf-8') as f:
input_data = f.read()
else:
input_data = sys.stdin.read()
if not input_data.strip():
print("No input data provided", file=sys.stderr)
sys.exit(1)
# Initialize generator
generator = ChangelogGenerator()
generator.version = args.version
generator.date = args.date
generator.base_url = args.base_url
# Parse input
try:
if args.input_format == 'json':
generator.parse_json_commits(input_data)
else:
generator.parse_git_log_output(input_data)
except Exception as e:
print(f"Error parsing input: {e}", file=sys.stderr)
sys.exit(1)
if not generator.commits:
print("No valid commits found in input", file=sys.stderr)
sys.exit(1)
# Generate output
output_lines = []
if args.format in ['markdown', 'both']:
changelog_md = generator.generate_markdown_changelog()
if args.format == 'both':
output_lines.append("# Markdown Changelog\n")
output_lines.append(changelog_md)
if args.format in ['json', 'both']:
changelog_json = generator.generate_json_output()
if args.format == 'both':
output_lines.append("\n# JSON Output\n")
output_lines.append(changelog_json)
if args.summary:
summary = generator.generate_release_summary()
output_lines.append(f"\n# Release Summary")
output_lines.append(f"- **Version:** {summary['version']}")
output_lines.append(f"- **Total Commits:** {summary['total_commits']}")
output_lines.append(f"- **Notable Changes:** {summary['notable_changes']}")
output_lines.append(f"- **Breaking Changes:** {summary['breaking_changes']}")
output_lines.append(f"- **Issue References:** {summary['issue_references']}")
if summary['by_type']:
output_lines.append("- **By Type:**")
for commit_type, count in summary['by_type'].items():
output_lines.append(f" - {commit_type}: {count}")
# Write output
final_output = '\n'.join(output_lines)
if args.output:
with open(args.output, 'w', encoding='utf-8') as f:
f.write(final_output)
else:
print(final_output)
if __name__ == '__main__':
    main()

Expected Changelog Output
[2.3.0] - 2024-01-15
Breaking Changes
- ui: redesign dashboard with new component library - The dashboard API endpoints have changed structure. Frontend clients must update to use the new /v2/dashboard endpoints. The legacy /v1/dashboard endpoints will be removed in version 3.0.0. (#345, #367, #389) [m1n2o3p]
Added
- auth: add OAuth2 integration with Google and GitHub (#123, #145) [a1b2c3d]
- payment: add Stripe payment processor integration (#567) [g7h8i9j]
- search: implement fuzzy search with Elasticsearch (#789) [s7t8u9v]
Fixed
- api: resolve race condition in user creation endpoint (#234) [e4f5g6h]
- db: optimize slow query in user search functionality (#456) [q4r5s6t]
- ui: resolve mobile navigation menu overflow issue (#678) [k1l2m3n]
- security: patch SQL injection vulnerability in reports [w1x2y3z] ⚠️ BREAKING
Changed
- image: implement WebP compression reducing size by 40% [c4d5e6f]
- api: extract validation logic into reusable middleware [o4p5q6r]
- readme: update installation and deployment instructions [i7j8k9l]
Release Summary
- Version: 2.3.0
- Total Commits: 13
- Notable Changes: 9
- Breaking Changes: 2
- Issue References: 8
- By Type:
- feat: 4
- fix: 4
- perf: 1
- refactor: 1
- docs: 1
- test: 1
- chore: 1
Release Readiness Report
========================
Release: Winter 2024 Release v2.3.0
Status: AT_RISK
Readiness Score: 73.3%
WARNINGS:
⚠️ Feature 'Elasticsearch Fuzzy Search' (SEARCH-789) still in progress
⚠️ Feature 'Elasticsearch Fuzzy Search' has low test coverage: 76.5% < 80.0%
⚠️ Required quality gate 'Security Scan' is pending
⚠️ Required quality gate 'Documentation Review' is pending
BLOCKING ISSUES:
❌ Feature 'Biometric Authentication' (MOBILE-456) is blocked
❌ Feature 'Biometric Authentication' missing approvals: QA approval, Security approval
RECOMMENDATIONS:
💡 Obtain required approvals for pending features
💡 Improve test coverage for features below threshold
💡 Complete pending quality gate validations
FEATURE SUMMARY:
Total: 6 | Ready: 3 | Blocked: 1
Breaking Changes: 1 | Missing Approvals: 1
QUALITY GATES:
Total: 6 | Passed: 3 | Failed: 0
Version Bumper Output
Current Version: 2.2.5
Recommended Version: 3.0.0
With v prefix: v3.0.0
Bump Type: major
Commit Analysis:
- Total commits: 13
- Breaking changes: 2
- New features: 4
- Bug fixes: 4
- Ignored commits: 3
Breaking Changes:
- feat(ui): redesign dashboard with new component library
- fix(security): patch SQL injection vulnerability in reports
Bump Commands:
npm:
npm version 3.0.0 --no-git-tag-version
python:
# Update version in setup.py, __init__.py, or pyproject.toml
# pyproject.toml: version = "3.0.0"
rust:
# Update Cargo.toml
# version = "3.0.0"
git:
git tag -a v3.0.0 -m 'Release v3.0.0'
git push origin v3.0.0
docker:
docker build -t myapp:3.0.0 .
docker tag myapp:3.0.0 myapp:latest
Conventional Commits Guide
Overview
Conventional Commits is a specification for adding human- and machine-readable meaning to commit messages. It defines a simple set of rules for creating an explicit commit history, which makes it easier to build automated tools for version management, changelog generation, and release planning.
Basic Format
<type>[optional scope]: <description>
[optional body]
[optional footer(s)]
Commit Types
Primary Types
- feat: A new feature for the user (correlates with MINOR in semantic versioning)
- fix: A bug fix for the user (correlates with PATCH in semantic versioning)
Secondary Types
- build: Changes that affect the build system or external dependencies (webpack, npm, etc.)
- ci: Changes to CI configuration files and scripts (Travis, Circle, BrowserStack, SauceLabs)
- docs: Documentation only changes
- perf: A code change that improves performance
- refactor: A code change that neither fixes a bug nor adds a feature
- style: Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc.)
- test: Adding missing tests or correcting existing tests
- chore: Other changes that don't modify src or test files
- revert: Reverts a previous commit
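A commit header in this format can be recognized with a short regular expression. The sketch below is illustrative (it is not the parser shipped in changelog_generator.py) and hard-codes the type names listed above:

```python
import re

# <type>[optional scope][!]: <description>
HEADER_RE = re.compile(
    r'^(?P<type>feat|fix|build|ci|docs|perf|refactor|style|test|chore|revert)'
    r'(?:\((?P<scope>[^)]+)\))?'
    r'(?P<breaking>!)?'
    r': (?P<description>.+)$'
)

def parse_header(header):
    """Parse a conventional commit header into its parts, or return None."""
    m = HEADER_RE.match(header)
    if not m:
        return None
    parts = m.groupdict()
    parts['breaking'] = parts['breaking'] == '!'  # normalize '!' marker to a bool
    return parts
```

Note that this only covers the header line; BREAKING CHANGE footers (described below) must be detected separately in the commit body.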
Breaking Changes
Any commit can introduce a breaking change by:
- Adding ! after the type: feat!: remove deprecated API
- Including BREAKING CHANGE: in the footer
Scopes
Scopes provide additional contextual information about the change. A scope should be a noun describing a section of the codebase:
- auth - Authentication and authorization
- api - API changes
- ui - User interface
- db - Database-related changes
- config - Configuration changes
- deps - Dependency updates
Examples
Simple Feature
feat(auth): add OAuth2 integration
Integrate OAuth2 authentication with Google and GitHub providers.
Users can now log in using their existing social media accounts.
Bug Fix
fix(api): resolve race condition in user creation
When multiple requests tried to create users with the same email
simultaneously, duplicate records were sometimes created. Added
proper database constraints and error handling.
Fixes #234
Breaking Change with !
feat(api)!: remove deprecated /v1/users endpoint
The deprecated /v1/users endpoint has been removed. All clients
should migrate to /v2/users which provides better performance
and additional features.
BREAKING CHANGE: /v1/users endpoint removed, use /v2/users instead
Breaking Change with Footer
feat(auth): implement new authentication flow
Add support for multi-factor authentication and improved session
management. This change requires all users to re-authenticate.
BREAKING CHANGE: Authentication tokens issued before this release
are no longer valid. Users must log in again.
Performance Improvement
perf(image): optimize image compression algorithm
Replaced PNG compression with WebP format, reducing image sizes
by 40% on average while maintaining visual quality.
Closes #456
Dependency Update
build(deps): upgrade React to version 18.2.0
Updates React and related packages to latest stable versions.
Includes performance improvements and new concurrent features.
Documentation
docs(readme): add deployment instructions
Added comprehensive deployment guide including Docker setup,
environment variables configuration, and troubleshooting tips.
Revert
revert: feat(payment): add cryptocurrency support
This reverts commit 667ecc1654a317a13331b17617d973392f415f02.
Reverting due to security concerns identified in code review.
The feature will be re-implemented with proper security measures.
Multi-paragraph Body
For complex changes, use multiple paragraphs in the body:
feat(search): implement advanced search functionality
Add support for complex search queries including:
- Boolean operators (AND, OR, NOT)
- Field-specific searches (title:, author:, date:)
- Fuzzy matching with configurable threshold
- Search result highlighting
The search index has been restructured to support these new
features while maintaining backward compatibility with existing
simple search queries.
Performance testing shows less than 10ms impact on search
response times even with complex queries.
Closes #789, #823, #901
Footers
Issue References
Fixes #123
Closes #234, #345
Resolves #456
Breaking Changes
BREAKING CHANGE: The `authenticate` function now requires a second
parameter for the authentication method. Update all calls from
`authenticate(token)` to `authenticate(token, 'bearer')`.
Co-authors
Co-authored-by: Jane Doe <[email protected]>
Co-authored-by: John Smith <[email protected]>
Reviewed By
Reviewed-by: Senior Developer <[email protected]>
Acked-by: Tech Lead <[email protected]>
Automation Benefits
Using conventional commits enables:
Automatic Version Bumping
- fix commits trigger a PATCH version bump (1.0.0 → 1.0.1)
- feat commits trigger a MINOR version bump (1.0.0 → 1.1.0)
- BREAKING CHANGE triggers a MAJOR version bump (1.0.0 → 2.0.0)
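These rules reduce to a small pure function. A minimal sketch (an illustration of the rules, not the bundled version_bumper.py):

```python
def bump_type(commits):
    """Pick the SemVer bump from parsed commits, each a dict with 'type' and 'breaking'."""
    if any(c.get('breaking') for c in commits):
        return 'major'
    if any(c['type'] == 'feat' for c in commits):
        return 'minor'
    if any(c['type'] == 'fix' for c in commits):
        return 'patch'
    return 'none'

def apply_bump(version, bump):
    """Apply a bump to a 'X.Y.Z' version string, resetting lower components."""
    major, minor, patch = (int(p) for p in version.split('.'))
    if bump == 'major':
        return f"{major + 1}.0.0"
    if bump == 'minor':
        return f"{major}.{minor + 1}.0"
    if bump == 'patch':
        return f"{major}.{minor}.{patch + 1}"
    return version
```

This is how a history containing a breaking change takes 2.2.5 to 3.0.0, as in the version bumper output shown earlier.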
Changelog Generation
## [1.2.0] - 2024-01-15
### Added
- OAuth2 integration (auth)
- Advanced search functionality (search)
### Fixed
- Race condition in user creation (api)
- Memory leak in image processing (image)
### Breaking Changes
- Authentication tokens issued before this release are no longer valid
Release Notes
Generate user-friendly release notes automatically from commit history, filtering out internal changes and highlighting user-facing improvements.
Best Practices
Writing Good Descriptions
- Use imperative mood: "add feature" not "added feature"
- Start with lowercase letter
- No period at the end
- Limit to 50 characters when possible
- Be specific and descriptive
Good Examples
feat(auth): add password reset functionality
fix(ui): resolve mobile navigation menu overflow
perf(db): optimize user query with proper indexing
Bad Examples
feat: stuff
fix: bug
update: changes
Body Guidelines
- Separate subject from body with blank line
- Wrap body at 72 characters
- Use body to explain what and why, not how
- Reference issues and PRs when relevant
Scope Guidelines
- Use consistent scope naming across the team
- Keep scopes short and meaningful
- Document your team's scope conventions
- Consider using scopes that match your codebase structure
Tools and Integration
Git Hooks
Use tools like commitizen or husky to enforce conventional commit format:
# Install commitizen
npm install -g commitizen cz-conventional-changelog
# Configure
echo '{ "path": "cz-conventional-changelog" }' > ~/.czrc
# Use
git cz
Automated Validation
Add commit message validation to prevent non-conventional commits:
// commitlint.config.js
module.exports = {
extends: ['@commitlint/config-conventional'],
rules: {
'type-enum': [
2, 'always',
['feat', 'fix', 'docs', 'style', 'refactor', 'perf', 'test', 'build', 'ci', 'chore', 'revert']
],
'subject-case': [2, 'always', 'lower-case'],
'subject-max-length': [2, 'always', 50]
}
};
CI/CD Integration
Integrate with release automation tools:
- semantic-release: Automated version management and package publishing
- standard-version: Generate changelog and tag releases
- release-please: Google's release automation tool
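For teams without a Node toolchain, the commitlint rules above can also be mirrored in a plain Python commit-msg hook. This is a hedged sketch of the same checks, not a drop-in replacement for commitlint:

```python
import re
import sys

TYPES = ('feat', 'fix', 'docs', 'style', 'refactor', 'perf',
         'test', 'build', 'ci', 'chore', 'revert')
PATTERN = re.compile(r'^(%s)(\([^)]+\))?!?: (.+)$' % '|'.join(TYPES))

def check_message(subject, max_length=50):
    """Return a list of rule violations for a commit subject line."""
    errors = []
    m = PATTERN.match(subject)
    if m is None:
        errors.append("header must look like '<type>(scope): description'")
        return errors
    description = m.group(3)
    if description[0].isupper():
        errors.append("description must start with a lowercase letter")
    if description.endswith('.'):
        errors.append("description must not end with a period")
    if len(description) > max_length:
        errors.append(f"description exceeds {max_length} characters")
    return errors

if __name__ == '__main__' and len(sys.argv) > 1:
    # Git invokes commit-msg hooks with the message file as the first argument
    with open(sys.argv[1], encoding='utf-8') as f:
        first_line = f.readline().strip()
    problems = check_message(first_line)
    for p in problems:
        print(p, file=sys.stderr)
    sys.exit(1 if problems else 0)
```

Saved as .git/hooks/commit-msg and made executable, it rejects non-conforming commits locally before CI ever sees them.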
Common Mistakes
Mixing Multiple Changes
# Bad: Multiple unrelated changes
feat: add login page and fix CSS bug and update dependencies
# Good: Separate commits
feat(auth): add login page
fix(ui): resolve CSS styling issue
build(deps): update React to version 18
Vague Descriptions
# Bad: Not descriptive
fix: bug in code
feat: new stuff
# Good: Specific and clear
fix(api): resolve null pointer exception in user validation
feat(search): implement fuzzy matching algorithm
Missing Breaking Change Indicators
# Bad: Breaking change not marked
feat(api): update user authentication
# Good: Properly marked breaking change
feat(api)!: update user authentication
BREAKING CHANGE: All API clients must now include authentication
headers in every request. Anonymous access is no longer supported.
Team Guidelines
Establishing Conventions
- Define scope vocabulary: Create a list of approved scopes for your project
- Document examples: Provide team-specific examples of good commits
- Set up tooling: Use linters and hooks to enforce standards
- Review process: Include commit message quality in code reviews
- Training: Ensure all team members understand the format
Scope Examples by Project Type
Web Application:
auth, ui, api, db, config, deploy
Library/SDK:
core, utils, docs, examples, tests
Mobile App:
ios, android, shared, ui, network, storage
By following conventional commits consistently, your team will have a clear, searchable commit history that enables powerful automation and improves the overall development workflow.
Hotfix Procedures
Overview
Hotfixes are emergency releases designed to address critical production issues that cannot wait for the regular release cycle. This document outlines classification, procedures, and best practices for managing hotfixes across different development workflows.
Severity Classification
P0 - Critical (Production Down)
Definition: Complete system outage, data corruption, or security breach affecting all users.
Examples:
- Server crashes preventing any user access
- Database corruption causing data loss
- Security vulnerability being actively exploited
- Payment system completely non-functional
- Authentication system failure preventing all logins
Response Requirements:
- Timeline: Fix deployed within 2 hours
- Approval: Engineering Lead + On-call Manager (verbal approval acceptable)
- Process: Emergency deployment bypassing normal gates
- Communication: Immediate notification to all stakeholders
- Documentation: Post-incident review required within 24 hours
Escalation:
- Page on-call engineer immediately
- Escalate to Engineering Lead within 15 minutes
- Notify CEO/CTO if resolution exceeds 4 hours
P1 - High (Major Feature Broken)
Definition: Critical functionality broken affecting significant portion of users.
Examples:
- Core user workflow completely broken
- Payment processing failures affecting >50% of transactions
- Search functionality returning no results
- Mobile app crashes on startup
- API returning 500 errors for main endpoints
Response Requirements:
- Timeline: Fix deployed within 24 hours
- Approval: Engineering Lead + Product Manager
- Process: Expedited review and testing
- Communication: Stakeholder notification within 1 hour
- Documentation: Root cause analysis within 48 hours
Escalation:
- Notify on-call engineer within 30 minutes
- Escalate to Engineering Lead within 2 hours
- Daily updates to Product/Business stakeholders
P2 - Medium (Minor Feature Issues)
Definition: Non-critical functionality issues with limited user impact.
Examples:
- Cosmetic UI issues affecting user experience
- Non-essential features not working properly
- Performance degradation not affecting core workflows
- Minor API inconsistencies
- Reporting/analytics data inaccuracies
Response Requirements:
- Timeline: Include in next regular release
- Approval: Standard pull request review process
- Process: Normal development and testing cycle
- Communication: Include in regular release notes
- Documentation: Standard issue tracking
Escalation:
- Create ticket in normal backlog
- No special escalation required
- Include in release planning discussions
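To keep paging decisions consistent, the classification matrix can be encoded as data. The thresholds below are illustrative assumptions ("significant portion" is read here as ≥50% of users), not a substitute for human judgment:

```python
# Response requirements per severity, taken from the matrix above
SEVERITY = {
    'P0': {'deadline_hours': 2, 'approval': 'Engineering Lead + On-call Manager'},
    'P1': {'deadline_hours': 24, 'approval': 'Engineering Lead + Product Manager'},
    'P2': {'deadline_hours': None, 'approval': 'Standard pull request review'},
}

def classify(outage, users_affected_pct):
    """Map impact to a severity level (thresholds are illustrative)."""
    if outage:
        return 'P0'
    if users_affected_pct >= 50:
        return 'P1'
    return 'P2'
```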
Hotfix Workflows by Development Model
Git Flow Hotfix Process
Branch Structure
main (v1.2.3) ← hotfix/security-patch → main (v1.2.4)
              → develop
Step-by-Step Process
Create Hotfix Branch
git checkout main
git pull origin main
git checkout -b hotfix/security-patch
Implement Fix
- Make minimal changes addressing only the specific issue
- Include tests to prevent regression
- Update version number (patch increment)
# Fix the issue
git add .
git commit -m "fix: resolve SQL injection vulnerability"
# Version bump
echo "1.2.4" > VERSION
git add VERSION
git commit -m "chore: bump version to 1.2.4"
Test Fix
- Run automated test suite
- Manual testing of affected functionality
- Security review if applicable
# Run tests
npm test
python -m pytest
# Security scan
npm audit
bandit -r src/
Deploy to Staging
# Deploy hotfix branch to staging
git push origin hotfix/security-patch
# Trigger staging deployment via CI/CD
Merge to Production
# Merge to main
git checkout main
git merge --no-ff hotfix/security-patch
git tag -a v1.2.4 -m "Hotfix: Security vulnerability patch"
git push origin main --tags
# Merge back to develop
git checkout develop
git merge --no-ff hotfix/security-patch
git push origin develop
# Clean up
git branch -d hotfix/security-patch
git push origin --delete hotfix/security-patch
GitHub Flow Hotfix Process
Branch Structure
main ← hotfix/critical-fix → main (immediate deploy)
Step-by-Step Process
Create Fix Branch
git checkout main
git pull origin main
git checkout -b hotfix/payment-gateway-fix
Implement and Test
# Make the fix
git add .
git commit -m "fix(payment): resolve gateway timeout issue"
git push origin hotfix/payment-gateway-fix
Create Emergency PR
# Use GitHub CLI or web interface
gh pr create --title "HOTFIX: Payment gateway timeout" \
  --body "Critical fix for payment processing failures" \
  --reviewer engineering-team \
  --label hotfix
Deploy Branch for Testing
# Deploy branch to staging for validation
./deploy.sh hotfix/payment-gateway-fix staging
# Quick smoke tests
Emergency Merge and Deploy
# After approval, merge and deploy
gh pr merge --squash
# Automatic deployment to production via CI/CD
Trunk-based Hotfix Process
Direct Commit Approach
# For small fixes, commit directly to main
git checkout main
git pull origin main
# Make fix
git add .
git commit -m "fix: resolve memory leak in user session handling"
git push origin main
# Automatic deployment triggers
Feature Flag Rollback
# For feature-related issues, disable via feature flag
curl -X POST api/feature-flags/new-search/disable
# Verify issue resolved
# Plan proper fix for next deployment
Emergency Response Procedures
Incident Declaration Process
Detection and Assessment (0-5 minutes)
- Monitor alerts or user reports identify issue
- Assess severity using classification matrix
- Determine if hotfix is required
Team Assembly (5-10 minutes)
- Page appropriate on-call engineer
- Assemble incident response team
- Establish communication channel (Slack, Teams)
Initial Response (10-30 minutes)
- Create incident ticket/document
- Begin investigating root cause
- Implement immediate mitigations if possible
Hotfix Development (30 minutes - 2 hours)
- Create hotfix branch
- Implement minimal fix
- Test fix in isolation
Deployment (15-30 minutes)
- Deploy to staging for validation
- Deploy to production
- Monitor for successful resolution
Verification (15-30 minutes)
- Confirm issue is resolved
- Monitor system stability
- Update stakeholders
Communication Templates
P0 Initial Alert
🚨 CRITICAL INCIDENT - Production Down
Status: Investigating
Impact: Complete service outage
Affected Users: All users
Started: 2024-01-15 14:30 UTC
Incident Commander: @john.doe
Current Actions:
- Investigating root cause
- Preparing emergency fix
- Will update every 15 minutes
Status Page: https://status.ourapp.com
Incident Channel: #incident-2024-001
P0 Resolution Notice
✅ RESOLVED - Production Restored
Status: Resolved
Resolution Time: 1h 23m
Root Cause: Database connection pool exhaustion
Fix: Increased connection limits and restarted services
Timeline:
14:30 UTC - Issue detected
14:45 UTC - Root cause identified
15:20 UTC - Fix deployed
15:35 UTC - Full functionality restored
Post-incident review scheduled for tomorrow 10:00 AM.
Thank you for your patience.
P1 Status Update
⚠️ Issue Update - Payment Processing
Status: Fix deployed, monitoring
Impact: Payment failures reduced from 45% to <2%
ETA: Complete resolution within 2 hours
Actions taken:
- Deployed hotfix to address timeout issues
- Increased monitoring on payment gateway
- Contacting affected customers
Next update in 30 minutes or when resolved.
Rollback Procedures
When to Rollback
- Fix doesn't resolve the issue
- Fix introduces new problems
- System stability is compromised
- Data corruption is detected
Rollback Process
Immediate Assessment (2-5 minutes)
# Check system health
curl -f https://api.ourapp.com/health
# Review error logs
kubectl logs deployment/app --tail=100
# Check key metrics
Rollback Execution (5-15 minutes)
# Git-based rollback
git checkout main
git revert HEAD
git push origin main
# Or container-based rollback
kubectl rollout undo deployment/app
# Or switch the load balancer back to the previous target group
aws elbv2 modify-listener --listener-arn <listener-arn> \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/previous-version
Verification (5-10 minutes)
# Confirm rollback successful
# Check system health endpoints
# Verify core functionality working
# Monitor error rates and performance
Communication
🔄 ROLLBACK COMPLETE
The hotfix has been rolled back due to [reason]. System is now stable on previous version. We are investigating the issue and will provide updates.
Testing Strategies for Hotfixes
Pre-deployment Testing
Automated Testing
# Run full test suite
npm test
pytest tests/
go test ./...
# Security scanning
npm audit --audit-level high
bandit -r src/
gosec ./...
# Integration tests
./run_integration_tests.sh
# Load testing (if performance-related)
artillery quick --count 100 --num 10 https://staging.ourapp.com
Manual Testing Checklist
- Core user workflow functions correctly
- Authentication and authorization working
- Payment processing (if applicable)
- Data integrity maintained
- No new error logs or exceptions
- Performance within acceptable range
- Mobile app functionality (if applicable)
- Third-party integrations working
Staging Validation
# Deploy to staging
./deploy.sh hotfix/critical-fix staging
# Run smoke tests
curl -f https://staging.ourapp.com/api/health
./smoke_tests.sh
# Manual verification of specific issue
# Document test results
Post-deployment Monitoring
Immediate Monitoring (First 30 minutes)
- Error rate and count
- Response time and latency
- CPU and memory usage
- Database connection counts
- Key business metrics
Extended Monitoring (First 24 hours)
- User activity patterns
- Feature usage statistics
- Customer support tickets
- Performance trends
- Security log analysis
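Post-deployment health checks can also be driven programmatically. A minimal poll-until-healthy helper, assuming a JSON /health endpoint that returns {"status": "ok"} (the fetch parameter exists so the function can be tested without a network):

```python
import json
import time
import urllib.request

def wait_until_healthy(url, attempts=10, delay_s=3.0, fetch=urllib.request.urlopen):
    """Poll a health endpoint until it reports ok, or give up after `attempts` tries."""
    for _ in range(attempts):
        try:
            with fetch(url, timeout=5) as resp:
                body = json.loads(resp.read())
                if body.get('status') == 'ok':
                    return True
        except OSError:
            pass  # transient network error; retry after the delay
        time.sleep(delay_s)
    return False
```

A deploy script can call this right after rollout and trigger an automatic rollback when it returns False.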
Monitoring Scripts
#!/bin/bash
# monitor_hotfix.sh - Post-deployment monitoring
echo "=== Hotfix Deployment Monitoring ==="
echo "Deployment time: $(date)"
echo
# Check application health
echo "--- Application Health ---"
curl -s https://api.ourapp.com/health | jq '.'
# Check error rates
echo "--- Error Rates (last 30min) ---"
curl -s "https://api.datadog.com/api/v1/query?query=sum:application.errors{*}" \
-H "DD-API-KEY: $DATADOG_API_KEY" | jq '.series[0].pointlist[-1][1]'
# Check response times
echo "--- Response Times ---"
curl -s "https://api.datadog.com/api/v1/query?query=avg:application.response_time{*}" \
-H "DD-API-KEY: $DATADOG_API_KEY" | jq '.series[0].pointlist[-1][1]'
# Check database connections
echo "--- Database Status ---"
psql -h db.ourapp.com -U readonly -c "SELECT count(*) as active_connections FROM pg_stat_activity;"
echo "=== Monitoring Complete ==="
Documentation and Learning
Incident Documentation Template
# Incident Report: [Brief Description]
## Summary
- **Incident ID:** INC-2024-001
- **Severity:** P0/P1/P2
- **Start Time:** 2024-01-15 14:30 UTC
- **End Time:** 2024-01-15 15:45 UTC
- **Duration:** 1h 15m
- **Impact:** [Description of user/business impact]
## Root Cause
[Detailed explanation of what went wrong and why]
## Timeline
| Time | Event |
|------|-------|
| 14:30 | Issue detected via monitoring alert |
| 14:35 | Incident team assembled |
| 14:45 | Root cause identified |
| 15:00 | Fix developed and tested |
| 15:20 | Fix deployed to production |
| 15:45 | Issue confirmed resolved |
## Resolution
[What was done to fix the issue]
## Lessons Learned
### What went well
- Quick detection through monitoring
- Effective team coordination
- Minimal user impact
### What could be improved
- Earlier detection possible with better alerting
- Testing could have caught this issue
- Communication could be more proactive
## Action Items
- [ ] Improve monitoring for [specific area]
- [ ] Add automated test for [specific scenario]
- [ ] Update documentation for [specific process]
- [ ] Training on [specific topic] for team
## Prevention Measures
[How we'll prevent this from happening again]
Post-Incident Review Process
Schedule Review (within 24-48 hours)
- Involve all key participants
- Book 60-90 minute session
- Prepare incident timeline
Blameless Analysis
- Focus on systems and processes, not individuals
- Understand contributing factors
- Identify improvement opportunities
Action Plan
- Concrete, assignable tasks
- Realistic timelines
- Clear success criteria
Follow-up
- Track action item completion
- Share learnings with broader team
- Update procedures based on insights
Knowledge Sharing
Runbook Updates
After each hotfix, update relevant runbooks:
- Add new troubleshooting steps
- Update contact information
- Refine escalation procedures
- Document new tools or processes
Team Training
- Share incident learnings in team meetings
- Conduct tabletop exercises for common scenarios
- Update onboarding materials with hotfix procedures
- Create decision trees for severity classification
Automation Improvements
- Add alerts for new failure modes
- Automate manual steps where possible
- Improve deployment and rollback processes
- Enhance monitoring and observability
Common Pitfalls and Best Practices
Common Pitfalls
❌ Over-engineering the fix
- Making broad changes instead of minimal targeted fix
- Adding features while fixing bugs
- Refactoring unrelated code
❌ Insufficient testing
- Skipping automated tests due to time pressure
- Not testing the exact scenario that caused the issue
- Deploying without staging validation
❌ Poor communication
- Not notifying stakeholders promptly
- Unclear or infrequent status updates
- Forgetting to announce resolution
❌ Inadequate monitoring
- Not watching system health after deployment
- Missing secondary effects of the fix
- Failing to verify the issue is actually resolved
Best Practices
✅ Keep fixes minimal and focused
- Address only the specific issue
- Avoid scope creep or improvements
- Save refactoring for regular releases
✅ Maintain clear communication
- Set up dedicated incident channel
- Provide regular status updates
- Use clear, non-technical language for business stakeholders
✅ Test thoroughly but efficiently
- Focus testing on affected functionality
- Use automated tests where possible
- Validate in staging before production
✅ Document everything
- Maintain timeline of events
- Record decisions and rationale
- Share lessons learned with team
✅ Plan for rollback
- Always have a rollback plan ready
- Test rollback procedure in advance
- Monitor closely after deployment
By following these procedures and continuously improving based on experience, teams can handle production emergencies effectively while minimizing impact and learning from each incident.
Release Workflow Comparison
Overview
This document compares the three most popular branching and release workflows: Git Flow, GitHub Flow, and Trunk-based Development. Each approach has distinct advantages and trade-offs depending on your team size, deployment frequency, and risk tolerance.
Git Flow
Structure
main (production)
↑
release/1.2.0 ← develop (integration) ← feature/user-auth
↑ ← feature/payment-api
hotfix/critical-fix
Branch Types
- main: Production-ready code, tagged releases
- develop: Integration branch for next release
- feature/*: Individual features, merged to develop
- release/X.Y.Z: Release preparation, branched from develop
- hotfix/*: Critical fixes, branched from main
Typical Flow
- Create feature branch from develop: git checkout -b feature/login develop
- Work on feature, commit changes
- Merge feature to develop when complete
- When ready for release, create release branch: git checkout -b release/1.2.0 develop
- Finalize release (version bump, changelog, bug fixes)
- Merge release branch to both main and develop
- Tag release: git tag v1.2.0
- Deploy from main branch
Advantages
- Clear separation between production and development code
- Stable main branch always represents production state
- Parallel development of features without interference
- Structured release process with dedicated release branches
- Hotfix support without disrupting development work
- Good for scheduled releases and traditional release cycles
Disadvantages
- Complex workflow with many branch types
- Merge overhead from multiple integration points
- Delayed feedback from long-lived feature branches
- Integration conflicts when merging large features
- Slower deployment due to process overhead
- Not ideal for continuous deployment
Best For
- Large teams (10+ developers)
- Products with scheduled release cycles
- Enterprise software with formal testing phases
- Projects requiring stable release branches
- Teams comfortable with complex Git workflows
Example Commands
# Start new feature
git checkout develop
git checkout -b feature/user-authentication
# Finish feature
git checkout develop
git merge --no-ff feature/user-authentication
git branch -d feature/user-authentication
# Start release
git checkout develop
git checkout -b release/1.2.0
# Version bump and changelog updates
git commit -am "Bump version to 1.2.0"
# Finish release
git checkout main
git merge --no-ff release/1.2.0
git tag -a v1.2.0 -m "Release version 1.2.0"
git checkout develop
git merge --no-ff release/1.2.0
git branch -d release/1.2.0
# Hotfix
git checkout main
git checkout -b hotfix/security-patch
# Fix the issue
git commit -am "Fix security vulnerability"
git checkout main
git merge --no-ff hotfix/security-patch
git tag -a v1.2.1 -m "Hotfix version 1.2.1"
git checkout develop
git merge --no-ff hotfix/security-patch
GitHub Flow
Structure
main ← feature/user-auth
← feature/payment-api
← hotfix/critical-fix
Branch Types
- main: Production-ready code, deployed automatically
- feature/*: All changes, regardless of size or type
Typical Flow
- Create feature branch from main: git checkout -b feature/login main
- Work on feature with regular commits and pushes
- Open pull request when ready for feedback
- Deploy feature branch to staging for testing
- Merge to main when approved and tested
- Deploy main to production automatically
- Delete feature branch
Advantages
- Simple workflow with only two branch types
- Fast deployment with minimal process overhead
- Continuous integration with frequent merges to main
- Early feedback through pull request reviews
- Deploy from branches allows testing before merge
- Good for continuous deployment
Disadvantages
- Main can be unstable if testing is insufficient
- No release branches for coordinating multiple features
- Limited hotfix process requires careful coordination
- Requires strong testing and CI/CD infrastructure
- Not suitable for scheduled releases
- Can be chaotic with many simultaneous features
Best For
- Small to medium teams (2-10 developers)
- Web applications with continuous deployment
- Products with rapid iteration cycles
- Teams with strong testing and CI/CD practices
- Projects where main is always deployable
Example Commands
# Start new feature
git checkout main
git pull origin main
git checkout -b feature/user-authentication
# Regular work
git add .
git commit -m "feat(auth): add login form validation"
git push origin feature/user-authentication
# Deploy branch for testing
# (Usually done through CI/CD)
./deploy.sh feature/user-authentication staging
# Merge when ready
git checkout main
git merge feature/user-authentication
git push origin main
git branch -d feature/user-authentication
# Automatic deployment to production
# (Triggered by push to main)
Trunk-based Development
Structure
main ← short-feature-branch (1-3 days max)
← another-short-branch
← direct-commitsBranch Types
- main: The single source of truth, always deployable
- Short-lived branches: Optional, for changes taking >1 day
Typical Flow
- Commit directly to main for small changes
- Create short-lived branch for larger changes (max 2-3 days)
- Merge to main frequently (multiple times per day)
- Use feature flags to hide incomplete features
- Deploy main to production multiple times per day
- Release by enabling feature flags, not code deployment
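Hiding incomplete work and releasing by flag both depend on a deterministic rollout check. A minimal sketch of a percentage rollout (the in-memory store and flag names are illustrative; real systems use a flag service):

```python
import hashlib

class FeatureFlags:
    """In-memory flag store with deterministic percentage rollouts."""

    def __init__(self):
        self._rollout = {}  # flag name -> percentage of users (0-100)

    def set_rollout(self, flag, percent):
        self._rollout[flag] = percent

    def enabled(self, flag, user_id):
        percent = self._rollout.get(flag, 0)
        # Hash flag+user so each user lands in a stable bucket per flag
        digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100
        return bucket < percent
```

Because the bucket is a hash of flag and user, a user's experience is stable as the percentage ramps from 10% to 100%, and "releasing" is just raising the number.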
Advantages
- Simplest workflow with minimal branching
- Fastest integration with continuous merges
- Reduced merge conflicts from short-lived branches
- Always deployable main through feature flags
- Fastest feedback loop with immediate integration
- Excellent for CI/CD and DevOps practices
Disadvantages
- Requires discipline to keep main stable
- Needs feature flags for incomplete features
- Limited code review for direct commits
- Can be destabilizing without proper testing
- Requires advanced CI/CD infrastructure
- Not suitable for teams uncomfortable with frequent changes
Best For
- Expert teams with strong DevOps culture
- Products requiring very fast iteration
- Microservices architectures
- Teams practicing continuous deployment
- Organizations with mature testing practices
Example Commands
# Small change - direct to main
git checkout main
git pull origin main
# Make changes
git add .
git commit -m "fix(ui): resolve button alignment issue"
git push origin main
# Larger change - short branch
git checkout main
git pull origin main
git checkout -b payment-integration
# Work for 1-2 days maximum
git add .
git commit -m "feat(payment): add Stripe integration"
git push origin payment-integration
# Immediate merge
git checkout main
git merge payment-integration
git push origin main
git branch -d payment-integration
# Feature flag usage
if (featureFlags.enabled('stripe_payments', userId)) {
return renderStripePayment();
} else {
return renderLegacyPayment();
}
Feature Comparison Matrix
| Aspect | Git Flow | GitHub Flow | Trunk-based |
|---|---|---|---|
| Complexity | High | Medium | Low |
| Learning Curve | Steep | Moderate | Gentle |
| Deployment Frequency | Weekly/Monthly | Daily | Multiple/day |
| Branch Lifetime | Weeks/Months | Days/Weeks | Hours/Days |
| Main Stability | Very High | High | High* |
| Release Coordination | Excellent | Limited | Feature Flags |
| Hotfix Support | Built-in | Manual | Direct |
| Merge Conflicts | High | Medium | Low |
| Team Size | 10+ | 3-10 | Any |
| CI/CD Requirements | Medium | High | Very High |
*With proper feature flags and testing
Release Strategies by Workflow
Git Flow Releases
# Scheduled release every 2 weeks
git checkout develop
git checkout -b release/2.3.0
# Version management
echo "2.3.0" > VERSION
npm version 2.3.0 --no-git-tag-version
# For Python projects, update the version in pyproject.toml or setup.py (e.g. with a tool such as bump2version)
# Changelog generation
git log v2.2.0..HEAD --pretty=format:"- %s" > CHANGELOG_DRAFT.md
# Testing and bug fixes in release branch
git commit -am "fix: resolve issue found in release testing"
# Finalize release
git checkout main
git merge --no-ff release/2.3.0
git tag -a v2.3.0 -m "Release 2.3.0"
# Merge release fixes back into develop, then delete the release branch
git checkout develop
git merge --no-ff release/2.3.0
git branch -d release/2.3.0
# Deploy tagged version
docker build -t app:2.3.0 .
kubectl set image deployment/app app=app:2.3.0
GitHub Flow Releases
# Deploy every merge to main
git checkout main
git merge feature/new-payment-method
# Automatic deployment via CI/CD
# .github/workflows/deploy.yml triggers on push to main
# Tag releases for tracking (optional)
git tag -a v2.3.$(date +%Y%m%d%H%M) -m "Production deployment"
# Rollback if needed
git revert HEAD
git push origin main # Triggers automatic rollback deployment
Trunk-based Releases
# Continuous deployment with feature flags
git checkout main
git add feature_flags.json
git commit -m "feat: enable new payment method for 10% of users"
git push origin main
# Gradual rollout
curl -X POST api/feature-flags/payment-v2/rollout/25 # 25% of users
# Monitor metrics...
curl -X POST api/feature-flags/payment-v2/rollout/50 # 50% of users
# Monitor metrics...
curl -X POST api/feature-flags/payment-v2/rollout/100 # Full rollout
# Remove flag after successful rollout
git rm old_payment_code.js
git commit -m "cleanup: remove legacy payment code"
Choosing the Right Workflow
Decision Matrix
Choose Git Flow if:
- ✅ Team size > 10 developers
- ✅ Scheduled release cycles (weekly/monthly)
- ✅ Multiple versions supported simultaneously
- ✅ Formal testing and QA processes
- ✅ Complex enterprise software
- ❌ Need rapid deployment
- ❌ Small team or startup
Choose GitHub Flow if:
- ✅ Team size 3-10 developers
- ✅ Web applications or APIs
- ✅ Strong CI/CD and testing
- ✅ Daily or continuous deployment
- ✅ Simple release requirements
- ❌ Complex release coordination needed
- ❌ Multiple release branches required
Choose Trunk-based Development if:
- ✅ Expert development team
- ✅ Mature DevOps practices
- ✅ Microservices architecture
- ✅ Feature flag infrastructure
- ✅ Multiple deployments per day
- ✅ Strong automated testing
- ❌ Teams made up mostly of junior developers
- ❌ Complex integration requirements
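The decision matrix above can be roughly encoded as a helper function. The thresholds below mirror the checklists but are judgment calls, not hard rules:

```python
def recommend_workflow(team_size: int, deploys_per_day: float,
                       has_feature_flags: bool,
                       supports_multiple_versions: bool) -> str:
    """Rough, illustrative encoding of the workflow decision matrix."""
    # Git Flow suits large teams on scheduled cycles, or anyone maintaining
    # multiple released versions at once.
    if supports_multiple_versions or (team_size > 10 and deploys_per_day < 1):
        return "git-flow"
    # Trunk-based needs both very frequent deploys and flag infrastructure.
    if deploys_per_day >= 3 and has_feature_flags:
        return "trunk-based"
    # GitHub Flow is the reasonable default for small web teams.
    return "github-flow"

print(recommend_workflow(team_size=6, deploys_per_day=1,
                         has_feature_flags=False,
                         supports_multiple_versions=False))  # prints: github-flow
```

Treat the output as a starting point for discussion; the surrounding factors (testing maturity, culture, CI/CD speed) matter as much as the inputs modeled here.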
Migration Strategies
From Git Flow to GitHub Flow
- Simplify branching: Eliminate develop branch, work directly with main
- Increase deployment frequency: Move from scheduled to continuous releases
- Strengthen testing: Improve automated test coverage and CI/CD
- Reduce branch lifetime: Limit feature branches to 1-2 weeks maximum
- Train team: Educate on simpler workflow and increased responsibility
From GitHub Flow to Trunk-based
- Implement feature flags: Add feature toggle infrastructure
- Improve CI/CD: Ensure all tests run in <10 minutes
- Increase commit frequency: Encourage multiple commits per day
- Reduce branch usage: Start committing small changes directly to main
- Monitor stability: Ensure main remains deployable at all times
From Trunk-based to Git Flow
- Add structure: Introduce develop and release branches
- Reduce deployment frequency: Move to scheduled release cycles
- Extend branch lifetime: Allow longer feature development cycles
- Formalize process: Add approval gates and testing phases
- Coordinate releases: Plan features for specific release versions
Anti-patterns to Avoid
Git Flow Anti-patterns
- Long-lived feature branches (>2 weeks)
- Skipping release branches for small releases
- Direct commits to main bypassing develop
- Forgetting to merge back to develop after hotfixes
- Complex merge conflicts from delayed integration
GitHub Flow Anti-patterns
- Unstable main branch due to insufficient testing
- Long-lived feature branches defeating the purpose
- Skipping pull request reviews for speed
- Direct production deployment without staging validation
- No rollback plan when deployments fail
Trunk-based Anti-patterns
- Committing broken code to main branch
- Feature branches lasting weeks defeating the philosophy
- No feature flags for incomplete features
- Insufficient automated testing leading to instability
- Poor CI/CD pipeline causing deployment delays
Conclusion
The choice of release workflow significantly impacts your team's productivity, code quality, and deployment reliability. Consider your team size, technical maturity, deployment requirements, and organizational culture when making this decision.
Start conservative (Git Flow) and evolve toward more agile approaches (GitHub Flow, Trunk-based) as your team's skills and infrastructure mature. The key is consistency within your team and alignment with your organization's goals and constraints.
Remember: The best workflow is the one your team can execute consistently and reliably.
release_planner.py
#!/usr/bin/env python3
"""
Release Planner
Takes a list of features/PRs/tickets planned for release and assesses release readiness.
Checks for required approvals, test coverage thresholds, breaking change documentation,
dependency updates, migration steps needed. Generates release checklist, communication
plan, and rollback procedures.
Input: release plan JSON (features, PRs, target date)
Output: release readiness report + checklist + rollback runbook + announcement draft
"""
import argparse
import json
import re
import sys
from datetime import datetime, timedelta
from typing import Dict, List, Optional, Any, Union
from dataclasses import dataclass, asdict
from enum import Enum
class RiskLevel(Enum):
"""Risk levels for release components."""
LOW = "low"
MEDIUM = "medium"
HIGH = "high"
CRITICAL = "critical"
class ComponentStatus(Enum):
"""Status of release components."""
PENDING = "pending"
IN_PROGRESS = "in_progress"
READY = "ready"
BLOCKED = "blocked"
FAILED = "failed"
@dataclass
class Feature:
"""Represents a feature in the release."""
id: str
title: str
description: str
type: str # feature, bugfix, security, breaking_change, etc.
assignee: str
status: ComponentStatus
pull_request_url: Optional[str] = None
issue_url: Optional[str] = None
risk_level: RiskLevel = RiskLevel.MEDIUM
test_coverage_required: float = 80.0
test_coverage_actual: Optional[float] = None
requires_migration: bool = False
migration_complexity: str = "simple" # simple, moderate, complex
breaking_changes: Optional[List[str]] = None
dependencies: Optional[List[str]] = None
qa_approved: bool = False
security_approved: bool = False
pm_approved: bool = False
def __post_init__(self):
if self.breaking_changes is None:
self.breaking_changes = []
if self.dependencies is None:
self.dependencies = []
@dataclass
class QualityGate:
"""Quality gate requirements."""
name: str
required: bool
status: ComponentStatus
details: Optional[str] = None
threshold: Optional[float] = None
actual_value: Optional[float] = None
@dataclass
class Stakeholder:
"""Stakeholder for release communication."""
name: str
role: str
contact: str
notification_type: str # email, slack, teams
critical_path: bool = False
@dataclass
class RollbackStep:
"""Individual rollback step."""
order: int
description: str
command: Optional[str] = None
estimated_time: str = "5 minutes"
risk_level: RiskLevel = RiskLevel.LOW
verification: str = ""
class ReleasePlanner:
"""Main release planning and assessment logic."""
def __init__(self):
self.release_name: str = ""
self.version: str = ""
self.target_date: Optional[datetime] = None
self.features: List[Feature] = []
self.quality_gates: List[QualityGate] = []
self.stakeholders: List[Stakeholder] = []
self.rollback_steps: List[RollbackStep] = []
# Configuration
self.min_test_coverage = 80.0
self.required_approvals = ['pm_approved', 'qa_approved']
self.high_risk_approval_requirements = ['pm_approved', 'qa_approved', 'security_approved']
def load_release_plan(self, plan_data: Union[str, Dict]):
"""Load release plan from JSON."""
if isinstance(plan_data, str):
data = json.loads(plan_data)
else:
data = plan_data
self.release_name = data.get('release_name', 'Unnamed Release')
self.version = data.get('version', '1.0.0')
if 'target_date' in data:
self.target_date = datetime.fromisoformat(data['target_date'].replace('Z', '+00:00'))
# Load features
self.features = []
for feature_data in data.get('features', []):
try:
status = ComponentStatus(feature_data.get('status', 'pending'))
risk_level = RiskLevel(feature_data.get('risk_level', 'medium'))
feature = Feature(
id=feature_data['id'],
title=feature_data['title'],
description=feature_data.get('description', ''),
type=feature_data.get('type', 'feature'),
assignee=feature_data.get('assignee', ''),
status=status,
pull_request_url=feature_data.get('pull_request_url'),
issue_url=feature_data.get('issue_url'),
risk_level=risk_level,
test_coverage_required=feature_data.get('test_coverage_required', 80.0),
test_coverage_actual=feature_data.get('test_coverage_actual'),
requires_migration=feature_data.get('requires_migration', False),
migration_complexity=feature_data.get('migration_complexity', 'simple'),
breaking_changes=feature_data.get('breaking_changes', []),
dependencies=feature_data.get('dependencies', []),
qa_approved=feature_data.get('qa_approved', False),
security_approved=feature_data.get('security_approved', False),
pm_approved=feature_data.get('pm_approved', False)
)
self.features.append(feature)
except Exception as e:
print(f"Warning: Error parsing feature {feature_data.get('id', 'unknown')}: {e}",
file=sys.stderr)
# Load quality gates
self.quality_gates = []
for gate_data in data.get('quality_gates', []):
try:
status = ComponentStatus(gate_data.get('status', 'pending'))
gate = QualityGate(
name=gate_data['name'],
required=gate_data.get('required', True),
status=status,
details=gate_data.get('details'),
threshold=gate_data.get('threshold'),
actual_value=gate_data.get('actual_value')
)
self.quality_gates.append(gate)
except Exception as e:
print(f"Warning: Error parsing quality gate {gate_data.get('name', 'unknown')}: {e}",
file=sys.stderr)
# Load stakeholders
self.stakeholders = []
for stakeholder_data in data.get('stakeholders', []):
stakeholder = Stakeholder(
name=stakeholder_data['name'],
role=stakeholder_data['role'],
contact=stakeholder_data['contact'],
notification_type=stakeholder_data.get('notification_type', 'email'),
critical_path=stakeholder_data.get('critical_path', False)
)
self.stakeholders.append(stakeholder)
# Load or generate default quality gates if none provided
if not self.quality_gates:
self._generate_default_quality_gates()
# Load or generate default rollback steps
if 'rollback_steps' in data:
self.rollback_steps = []
for step_data in data['rollback_steps']:
risk_level = RiskLevel(step_data.get('risk_level', 'low'))
step = RollbackStep(
order=step_data['order'],
description=step_data['description'],
command=step_data.get('command'),
estimated_time=step_data.get('estimated_time', '5 minutes'),
risk_level=risk_level,
verification=step_data.get('verification', '')
)
self.rollback_steps.append(step)
else:
self._generate_default_rollback_steps()
def _generate_default_quality_gates(self):
"""Generate default quality gates."""
default_gates = [
{
'name': 'Unit Test Coverage',
'required': True,
'threshold': self.min_test_coverage,
'details': f'Minimum {self.min_test_coverage}% code coverage required'
},
{
'name': 'Integration Tests',
'required': True,
'details': 'All integration tests must pass'
},
{
'name': 'Security Scan',
'required': True,
'details': 'No high or critical security vulnerabilities'
},
{
'name': 'Performance Testing',
'required': True,
'details': 'Performance metrics within acceptable thresholds'
},
{
'name': 'Documentation Review',
'required': True,
'details': 'API docs and user docs updated for new features'
},
{
'name': 'Dependency Audit',
'required': True,
'details': 'All dependencies scanned for vulnerabilities'
}
]
self.quality_gates = []
for gate_data in default_gates:
gate = QualityGate(
name=gate_data['name'],
required=gate_data['required'],
status=ComponentStatus.PENDING,
details=gate_data['details'],
threshold=gate_data.get('threshold')
)
self.quality_gates.append(gate)
def _generate_default_rollback_steps(self):
"""Generate default rollback procedure."""
default_steps = [
{
'order': 1,
'description': 'Alert on-call team and stakeholders',
'estimated_time': '2 minutes',
'verification': 'Confirm team is aware and responding'
},
{
'order': 2,
'description': 'Switch load balancer to previous version',
'command': 'kubectl patch service app --patch \'{"spec": {"selector": {"version": "previous"}}}\'',
'estimated_time': '30 seconds',
'verification': 'Check that traffic is routing to old version'
},
{
'order': 3,
'description': 'Verify application health after rollback',
'estimated_time': '5 minutes',
'verification': 'Check error rates, response times, and health endpoints'
},
{
'order': 4,
'description': 'Roll back database migrations if needed',
'command': 'python manage.py migrate app 0001',
'estimated_time': '10 minutes',
'risk_level': 'high',
'verification': 'Verify data integrity and application functionality'
},
{
'order': 5,
'description': 'Update monitoring dashboards and alerts',
'estimated_time': '5 minutes',
'verification': 'Confirm metrics reflect rollback state'
},
{
'order': 6,
'description': 'Notify stakeholders of successful rollback',
'estimated_time': '5 minutes',
'verification': 'All stakeholders acknowledge rollback completion'
}
]
self.rollback_steps = []
for step_data in default_steps:
risk_level = RiskLevel(step_data.get('risk_level', 'low'))
step = RollbackStep(
order=step_data['order'],
description=step_data['description'],
command=step_data.get('command'),
estimated_time=step_data.get('estimated_time', '5 minutes'),
risk_level=risk_level,
verification=step_data.get('verification', '')
)
self.rollback_steps.append(step)
def assess_release_readiness(self) -> Dict:
"""Assess overall release readiness."""
assessment = {
'overall_status': 'ready',
'readiness_score': 0.0,
'blocking_issues': [],
'warnings': [],
'recommendations': [],
'feature_summary': {},
'quality_gate_summary': {},
'timeline_assessment': {}
}
total_score = 0
max_score = 0
# Assess features
feature_stats = {
'total': len(self.features),
'ready': 0,
'blocked': 0,
'in_progress': 0,
'pending': 0,
'high_risk': 0,
'breaking_changes': 0,
'missing_approvals': 0,
'low_test_coverage': 0
}
for feature in self.features:
max_score += 10 # Each feature worth 10 points
if feature.status == ComponentStatus.READY:
feature_stats['ready'] += 1
total_score += 10
elif feature.status == ComponentStatus.BLOCKED:
feature_stats['blocked'] += 1
assessment['blocking_issues'].append(
f"Feature '{feature.title}' ({feature.id}) is blocked"
)
elif feature.status == ComponentStatus.IN_PROGRESS:
feature_stats['in_progress'] += 1
total_score += 5 # Partial credit
assessment['warnings'].append(
f"Feature '{feature.title}' ({feature.id}) still in progress"
)
else:
feature_stats['pending'] += 1
assessment['warnings'].append(
f"Feature '{feature.title}' ({feature.id}) is pending"
)
# Check risk level
if feature.risk_level in [RiskLevel.HIGH, RiskLevel.CRITICAL]:
feature_stats['high_risk'] += 1
# Check breaking changes
if feature.breaking_changes:
feature_stats['breaking_changes'] += 1
# Check approvals
missing_approvals = self._check_feature_approvals(feature)
if missing_approvals:
feature_stats['missing_approvals'] += 1
assessment['blocking_issues'].append(
f"Feature '{feature.title}' missing approvals: {', '.join(missing_approvals)}"
)
# Check test coverage
if (feature.test_coverage_actual is not None and
feature.test_coverage_actual < feature.test_coverage_required):
feature_stats['low_test_coverage'] += 1
assessment['warnings'].append(
f"Feature '{feature.title}' has low test coverage: "
f"{feature.test_coverage_actual}% < {feature.test_coverage_required}%"
)
assessment['feature_summary'] = feature_stats
# Assess quality gates
gate_stats = {
'total': len(self.quality_gates),
'passed': 0,
'failed': 0,
'pending': 0,
'required_failed': 0
}
for gate in self.quality_gates:
max_score += 5 # Each gate worth 5 points
if gate.status == ComponentStatus.READY:
gate_stats['passed'] += 1
total_score += 5
elif gate.status == ComponentStatus.FAILED:
gate_stats['failed'] += 1
if gate.required:
gate_stats['required_failed'] += 1
assessment['blocking_issues'].append(
f"Required quality gate '{gate.name}' failed"
)
else:
gate_stats['pending'] += 1
if gate.required:
assessment['warnings'].append(
f"Required quality gate '{gate.name}' is pending"
)
assessment['quality_gate_summary'] = gate_stats
# Timeline assessment
if self.target_date:
# Handle timezone-aware datetime comparison
now = datetime.now(self.target_date.tzinfo) if self.target_date.tzinfo else datetime.now()
days_until_release = (self.target_date - now).days
assessment['timeline_assessment'] = {
'target_date': self.target_date.isoformat(),
'days_remaining': days_until_release,
'timeline_status': 'on_track' if days_until_release > 0 else 'overdue'
}
if days_until_release < 0:
assessment['blocking_issues'].append(f"Release is {abs(days_until_release)} days overdue")
elif days_until_release < 3 and feature_stats['blocked'] > 0:
assessment['blocking_issues'].append("Not enough time to resolve blocked features")
# Calculate overall readiness score
if max_score > 0:
assessment['readiness_score'] = (total_score / max_score) * 100
# Determine overall status
if assessment['blocking_issues']:
assessment['overall_status'] = 'blocked'
elif assessment['warnings']:
assessment['overall_status'] = 'at_risk'
else:
assessment['overall_status'] = 'ready'
# Generate recommendations
if feature_stats['missing_approvals'] > 0:
assessment['recommendations'].append("Obtain required approvals for pending features")
if feature_stats['low_test_coverage'] > 0:
assessment['recommendations'].append("Improve test coverage for features below threshold")
if gate_stats['pending'] > 0:
assessment['recommendations'].append("Complete pending quality gate validations")
if feature_stats['high_risk'] > 0:
assessment['recommendations'].append("Review high-risk features for additional validation")
return assessment
def _check_feature_approvals(self, feature: Feature) -> List[str]:
"""Check which approvals are missing for a feature."""
missing = []
# Determine required approvals based on risk level
required = self.required_approvals.copy()
if feature.risk_level in [RiskLevel.HIGH, RiskLevel.CRITICAL]:
required = self.high_risk_approval_requirements.copy()
if 'pm_approved' in required and not feature.pm_approved:
missing.append('PM approval')
if 'qa_approved' in required and not feature.qa_approved:
missing.append('QA approval')
if 'security_approved' in required and not feature.security_approved:
missing.append('Security approval')
return missing
def generate_release_checklist(self) -> List[Dict]:
"""Generate comprehensive release checklist."""
checklist = []
# Pre-release validation
checklist.extend([
{
'category': 'Pre-Release Validation',
'item': 'All features implemented and tested',
'status': 'ready' if all(f.status == ComponentStatus.READY for f in self.features) else 'pending',
'details': f"{len([f for f in self.features if f.status == ComponentStatus.READY])}/{len(self.features)} features ready"
},
{
'category': 'Pre-Release Validation',
'item': 'Breaking changes documented',
'status': 'ready' if self._check_breaking_change_docs() else 'pending',
'details': f"{len([f for f in self.features if f.breaking_changes])} features have breaking changes"
},
{
'category': 'Pre-Release Validation',
'item': 'Migration scripts tested',
'status': 'ready' if self._check_migrations() else 'pending',
'details': f"{len([f for f in self.features if f.requires_migration])} features require migrations"
}
])
# Quality gates
for gate in self.quality_gates:
checklist.append({
'category': 'Quality Gates',
'item': gate.name,
'status': gate.status.value,
'details': gate.details,
'required': gate.required
})
# Approvals
approval_items = [
('Product Manager sign-off', self._check_pm_approvals()),
('QA validation complete', self._check_qa_approvals()),
('Security team clearance', self._check_security_approvals())
]
for item, status in approval_items:
checklist.append({
'category': 'Approvals',
'item': item,
'status': 'ready' if status else 'pending'
})
# Documentation
doc_items = [
'CHANGELOG.md updated',
'API documentation updated',
'User documentation updated',
'Migration guide written',
'Rollback procedure documented'
]
for item in doc_items:
checklist.append({
'category': 'Documentation',
'item': item,
'status': 'pending' # Would need integration with docs system to check
})
# Deployment preparation
deployment_items = [
'Database migrations prepared',
'Environment variables configured',
'Monitoring alerts updated',
'Rollback plan tested',
'Stakeholders notified'
]
for item in deployment_items:
checklist.append({
'category': 'Deployment',
'item': item,
'status': 'pending'
})
return checklist
def _check_breaking_change_docs(self) -> bool:
    """Check if breaking changes are properly documented."""
    # Features declared as breaking changes must actually list what breaks.
    return all(f.breaking_changes for f in self.features if f.type == 'breaking_change')
def _check_migrations(self) -> bool:
"""Check migration readiness."""
features_with_migrations = [f for f in self.features if f.requires_migration]
return all(f.status == ComponentStatus.READY for f in features_with_migrations)
def _check_pm_approvals(self) -> bool:
"""Check PM approvals."""
return all(f.pm_approved for f in self.features if f.risk_level != RiskLevel.LOW)
def _check_qa_approvals(self) -> bool:
"""Check QA approvals."""
return all(f.qa_approved for f in self.features)
def _check_security_approvals(self) -> bool:
"""Check security approvals."""
high_risk_features = [f for f in self.features if f.risk_level in [RiskLevel.HIGH, RiskLevel.CRITICAL]]
return all(f.security_approved for f in high_risk_features)
def generate_communication_plan(self) -> Dict:
"""Generate stakeholder communication plan."""
plan = {
'internal_notifications': [],
'external_notifications': [],
'timeline': [],
'channels': {},
'templates': {}
}
# Group stakeholders by type
internal_stakeholders = [s for s in self.stakeholders if s.role in
['developer', 'qa', 'pm', 'devops', 'security']]
external_stakeholders = [s for s in self.stakeholders if s.role in
['customer', 'partner', 'support']]
# Internal notifications
for stakeholder in internal_stakeholders:
plan['internal_notifications'].append({
'recipient': stakeholder.name,
'role': stakeholder.role,
'method': stakeholder.notification_type,
'content_type': 'technical_details',
'timing': 'T-24h and T-0'
})
# External notifications
for stakeholder in external_stakeholders:
plan['external_notifications'].append({
'recipient': stakeholder.name,
'role': stakeholder.role,
'method': stakeholder.notification_type,
'content_type': 'user_facing_changes',
'timing': 'T-48h and T+1h'
})
# Communication timeline
if self.target_date:
timeline_items = [
(timedelta(days=-2), 'Send pre-release notification to external stakeholders'),
(timedelta(days=-1), 'Send deployment notification to internal teams'),
(timedelta(hours=-2), 'Final go/no-go decision'),
(timedelta(hours=0), 'Begin deployment'),
(timedelta(hours=1), 'Post-deployment status update'),
(timedelta(hours=24), 'Post-release summary')
]
for delta, description in timeline_items:
notification_time = self.target_date + delta
plan['timeline'].append({
'time': notification_time.isoformat(),
'description': description,
'recipients': 'all' if 'all' in description.lower() else 'internal'
})
# Communication channels
channels = {}
for stakeholder in self.stakeholders:
if stakeholder.notification_type not in channels:
channels[stakeholder.notification_type] = []
channels[stakeholder.notification_type].append(stakeholder.contact)
plan['channels'] = channels
# Message templates
plan['templates'] = self._generate_message_templates()
return plan
def _generate_message_templates(self) -> Dict:
"""Generate message templates for different audiences."""
breaking_changes = [f for f in self.features if f.breaking_changes]
new_features = [f for f in self.features if f.type == 'feature']
bug_fixes = [f for f in self.features if f.type == 'bugfix']
templates = {
'internal_pre_release': {
'subject': f'Release {self.version} - Pre-deployment Notification',
'body': f"""Team,
We are preparing to deploy {self.release_name} version {self.version} on {self.target_date.strftime('%Y-%m-%d %H:%M UTC') if self.target_date else 'TBD'}.
Key Changes:
- {len(new_features)} new features
- {len(bug_fixes)} bug fixes
- {len(breaking_changes)} breaking changes
Please review the release notes and prepare for any needed support activities.
Rollback plan: Available in release documentation
On-call: Please be available during deployment window
Best regards,
Release Team"""
},
'external_user_notification': {
'subject': f'Product Update - Version {self.version} Now Available',
'body': f"""Dear Users,
We're excited to announce version {self.version} of {self.release_name} is now available!
What's New:
{chr(10).join(f"- {f.title}" for f in new_features[:5])}
Bug Fixes:
{chr(10).join(f"- {f.title}" for f in bug_fixes[:3])}
{'Important: This release includes breaking changes. Please review the migration guide.' if breaking_changes else ''}
For full release notes and migration instructions, visit our documentation.
Thank you for using our product!
The Development Team"""
},
'rollback_notification': {
'subject': f'URGENT: Release {self.version} Rollback Initiated',
'body': f"""ATTENTION: Release rollback in progress.
Release: {self.version}
Reason: [TO BE FILLED]
Rollback initiated: {datetime.utcnow().strftime('%Y-%m-%d %H:%M UTC')}
Estimated completion: [TO BE FILLED]
Current status: Rolling back to previous stable version
Impact: [TO BE FILLED]
We will provide updates every 15 minutes until rollback is complete.
Incident Commander: [TO BE FILLED]
Status page: [TO BE FILLED]"""
}
}
return templates
def generate_rollback_runbook(self) -> Dict:
"""Generate detailed rollback runbook."""
runbook = {
'overview': {
'purpose': f'Emergency rollback procedure for {self.release_name} v{self.version}',
'triggers': [
'Error rate spike (>2x baseline for >15 minutes)',
'Critical functionality failure',
'Security incident',
'Data corruption detected',
'Performance degradation (>50% latency increase)',
'Manual decision by incident commander'
],
'decision_makers': ['On-call Engineer', 'Engineering Lead', 'Incident Commander'],
'estimated_total_time': self._calculate_rollback_time()
},
'prerequisites': [
'Confirm rollback is necessary (check with incident commander)',
'Notify stakeholders of rollback decision',
'Ensure database backups are available',
'Verify monitoring systems are operational',
'Have communication channels ready'
],
'steps': [],
'verification': {
'health_checks': [
'Application responds to health endpoint',
'Database connectivity confirmed',
'Authentication system functional',
'Core user workflows working',
'Error rates back to baseline',
'Performance metrics within normal range'
],
'rollback_confirmation': [
'Previous version fully deployed',
'Database in consistent state',
'All services communicating properly',
'Monitoring shows stable metrics',
'Sample user workflows tested'
]
},
'post_rollback': [
'Update status page with resolution',
'Notify all stakeholders of successful rollback',
'Schedule post-incident review',
'Document issues encountered during rollback',
'Plan investigation of root cause',
'Determine timeline for next release attempt'
],
'emergency_contacts': []
}
# Convert rollback steps to detailed format
for step in sorted(self.rollback_steps, key=lambda x: x.order):
step_data = {
'order': step.order,
'title': step.description,
'estimated_time': step.estimated_time,
'risk_level': step.risk_level.value,
'instructions': step.description,
'command': step.command,
'verification': step.verification,
'rollback_possible': step.risk_level != RiskLevel.CRITICAL
}
runbook['steps'].append(step_data)
# Add emergency contacts
critical_stakeholders = [s for s in self.stakeholders if s.critical_path]
for stakeholder in critical_stakeholders:
runbook['emergency_contacts'].append({
'name': stakeholder.name,
'role': stakeholder.role,
'contact': stakeholder.contact,
'method': stakeholder.notification_type
})
return runbook
def _calculate_rollback_time(self) -> str:
"""Calculate estimated total rollback time."""
total_minutes = 0
for step in self.rollback_steps:
# Parse time estimates like "5 minutes", "30 seconds", "1 hour"
time_str = step.estimated_time.lower()
if 'minute' in time_str:
minutes = int(re.search(r'(\d+)', time_str).group(1))
total_minutes += minutes
elif 'hour' in time_str:
hours = int(re.search(r'(\d+)', time_str).group(1))
total_minutes += hours * 60
elif 'second' in time_str:
# Round up seconds to minutes
total_minutes += 1
if total_minutes < 60:
return f"{total_minutes} minutes"
else:
hours = total_minutes // 60
minutes = total_minutes % 60
return f"{hours}h {minutes}m"
def main():
"""Main CLI entry point."""
parser = argparse.ArgumentParser(description="Assess release readiness and generate release plans")
parser.add_argument('--input', '-i', required=True,
help='Release plan JSON file')
parser.add_argument('--output-format', '-f',
choices=['json', 'markdown', 'text'],
default='text', help='Output format')
parser.add_argument('--output', '-o', type=str,
help='Output file (default: stdout)')
parser.add_argument('--include-checklist', action='store_true',
help='Include release checklist in output')
parser.add_argument('--include-communication', action='store_true',
help='Include communication plan')
parser.add_argument('--include-rollback', action='store_true',
help='Include rollback runbook')
parser.add_argument('--min-coverage', type=float, default=80.0,
help='Minimum test coverage threshold')
args = parser.parse_args()
# Load release plan
try:
with open(args.input, 'r', encoding='utf-8') as f:
plan_data = f.read()
except Exception as e:
print(f"Error reading input file: {e}", file=sys.stderr)
sys.exit(1)
# Initialize planner
planner = ReleasePlanner()
planner.min_test_coverage = args.min_coverage
try:
planner.load_release_plan(plan_data)
except Exception as e:
print(f"Error loading release plan: {e}", file=sys.stderr)
sys.exit(1)
# Generate assessment
assessment = planner.assess_release_readiness()
# Generate optional components
checklist = planner.generate_release_checklist() if args.include_checklist else None
communication = planner.generate_communication_plan() if args.include_communication else None
rollback = planner.generate_rollback_runbook() if args.include_rollback else None
# Generate output
if args.output_format == 'json':
output_data = {
'assessment': assessment,
'checklist': checklist,
'communication_plan': communication,
'rollback_runbook': rollback
}
output_text = json.dumps(output_data, indent=2, default=str)
elif args.output_format == 'markdown':
output_lines = [
f"# Release Readiness Report - {planner.release_name} v{planner.version}",
"",
f"**Overall Status:** {assessment['overall_status'].upper()}",
f"**Readiness Score:** {assessment['readiness_score']:.1f}%",
""
]
if assessment['blocking_issues']:
output_lines.extend([
"## 🚫 Blocking Issues",
""
])
for issue in assessment['blocking_issues']:
output_lines.append(f"- {issue}")
output_lines.append("")
if assessment['warnings']:
output_lines.extend([
"## ⚠️ Warnings",
""
])
for warning in assessment['warnings']:
output_lines.append(f"- {warning}")
output_lines.append("")
# Feature summary
fs = assessment['feature_summary']
output_lines.extend([
"## Features Summary",
"",
f"- **Total:** {fs['total']}",
f"- **Ready:** {fs['ready']}",
f"- **In Progress:** {fs['in_progress']}",
f"- **Blocked:** {fs['blocked']}",
f"- **Breaking Changes:** {fs['breaking_changes']}",
""
])
if checklist:
output_lines.extend([
"## Release Checklist",
""
])
current_category = ""
for item in checklist:
if item['category'] != current_category:
current_category = item['category']
output_lines.append(f"### {current_category}")
output_lines.append("")
status_icon = "✅" if item['status'] == 'ready' else "❌" if item['status'] == 'failed' else "⏳"
output_lines.append(f"- {status_icon} {item['item']}")
output_lines.append("")
output_text = '\n'.join(output_lines)
else: # text format
output_lines = [
f"Release Readiness Report",
f"========================",
f"Release: {planner.release_name} v{planner.version}",
f"Status: {assessment['overall_status'].upper()}",
f"Readiness Score: {assessment['readiness_score']:.1f}%",
""
]
if assessment['blocking_issues']:
output_lines.extend(["BLOCKING ISSUES:", ""])
for issue in assessment['blocking_issues']:
output_lines.append(f" ❌ {issue}")
output_lines.append("")
if assessment['warnings']:
output_lines.extend(["WARNINGS:", ""])
for warning in assessment['warnings']:
output_lines.append(f" ⚠️ {warning}")
output_lines.append("")
if assessment['recommendations']:
output_lines.extend(["RECOMMENDATIONS:", ""])
for rec in assessment['recommendations']:
output_lines.append(f" 💡 {rec}")
output_lines.append("")
# Summary stats
fs = assessment['feature_summary']
gs = assessment['quality_gate_summary']
output_lines.extend([
f"FEATURE SUMMARY:",
f" Total: {fs['total']} | Ready: {fs['ready']} | Blocked: {fs['blocked']}",
f" Breaking Changes: {fs['breaking_changes']} | Missing Approvals: {fs['missing_approvals']}",
"",
f"QUALITY GATES:",
f" Total: {gs['total']} | Passed: {gs['passed']} | Failed: {gs['failed']}",
""
])
output_text = '\n'.join(output_lines)
# Write output
if args.output:
with open(args.output, 'w', encoding='utf-8') as f:
f.write(output_text)
else:
print(output_text)
if __name__ == '__main__':
main()

#!/usr/bin/env python3
"""
Version Bumper
Analyzes commits since last tag to determine the correct version bump (major/minor/patch)
based on conventional commits. Handles pre-release versions (alpha, beta, rc) and generates
version bump commands for various package files.
Input: current version + commit list JSON or git log
Output: recommended new version + bump commands + updated file snippets
"""
import argparse
import json
import re
import sys
from typing import Dict, List, Optional, Tuple, Union
from enum import Enum
from dataclasses import dataclass
class BumpType(Enum):
"""Version bump types."""
NONE = "none"
PATCH = "patch"
MINOR = "minor"
MAJOR = "major"
class PreReleaseType(Enum):
"""Pre-release types."""
ALPHA = "alpha"
BETA = "beta"
RC = "rc"
@dataclass
class Version:
"""Semantic version representation."""
major: int
minor: int
patch: int
prerelease_type: Optional[PreReleaseType] = None
prerelease_number: Optional[int] = None
@classmethod
def parse(cls, version_str: str) -> 'Version':
"""Parse version string into Version object."""
# Remove 'v' prefix if present
clean_version = version_str.lstrip('v')
# Pattern for semantic versioning with optional pre-release
pattern = r'^(\d+)\.(\d+)\.(\d+)(?:-(\w+)\.?(\d+)?)?$'
match = re.match(pattern, clean_version)
if not match:
raise ValueError(f"Invalid version format: {version_str}")
major, minor, patch = int(match.group(1)), int(match.group(2)), int(match.group(3))
prerelease_type = None
prerelease_number = None
if match.group(4): # Pre-release identifier
prerelease_str = match.group(4).lower()
try:
prerelease_type = PreReleaseType(prerelease_str)
except ValueError:
# Handle variations like 'alpha1' -> 'alpha'
if prerelease_str.startswith('alpha'):
prerelease_type = PreReleaseType.ALPHA
elif prerelease_str.startswith('beta'):
prerelease_type = PreReleaseType.BETA
elif prerelease_str.startswith('rc'):
prerelease_type = PreReleaseType.RC
else:
raise ValueError(f"Unknown pre-release type: {prerelease_str}")
if match.group(5):
prerelease_number = int(match.group(5))
else:
# Extract number from combined string like 'alpha1'
number_match = re.search(r'(\d+)$', prerelease_str)
if number_match:
prerelease_number = int(number_match.group(1))
else:
prerelease_number = 1 # Default to 1
return cls(major, minor, patch, prerelease_type, prerelease_number)
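The parsing rules above can be summarized in a short standalone sketch (illustrative only, not the `Version` class itself; `parse_semver` is a hypothetical helper that returns a plain tuple instead of a dataclass, using the same regex and the same fallback for fused forms like `alpha1`):

```python
import re

# Illustrative sketch of the parsing rules above (not the Version class).
# Accepts "1.2.3", "v1.2.3", "1.2.3-rc.1", and fused forms like "1.0.0-alpha1".
def parse_semver(version_str):
    clean = version_str.lstrip('v')
    m = re.match(r'^(\d+)\.(\d+)\.(\d+)(?:-(\w+)\.?(\d+)?)?$', clean)
    if m is None:
        raise ValueError(f"Invalid version format: {version_str}")
    major, minor, patch = int(m.group(1)), int(m.group(2)), int(m.group(3))
    pre = m.group(4)
    pre_num = int(m.group(5)) if m.group(5) else None
    if pre and pre_num is None:
        # Fused form like "alpha1": pull the trailing digits out of the word.
        tail = re.search(r'(\d+)$', pre)
        if tail:
            pre_num = int(tail.group(1))
            pre = pre[:tail.start()]
    return (major, minor, patch, pre, pre_num)
```

For example, `parse_semver("1.0.0-alpha1")` normalizes the fused pre-release to `("alpha", 1)`, matching the class's behavior.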
def to_string(self, include_v_prefix: bool = False) -> str:
"""Convert version to string representation."""
base = f"{self.major}.{self.minor}.{self.patch}"
if self.prerelease_type:
if self.prerelease_number is not None:
base += f"-{self.prerelease_type.value}.{self.prerelease_number}"
else:
base += f"-{self.prerelease_type.value}"
return f"v{base}" if include_v_prefix else base
def bump(self, bump_type: BumpType, prerelease_type: Optional[PreReleaseType] = None) -> 'Version':
"""Create new version with specified bump."""
if bump_type == BumpType.NONE:
return Version(self.major, self.minor, self.patch, self.prerelease_type, self.prerelease_number)
new_major = self.major
new_minor = self.minor
new_patch = self.patch
new_prerelease_type = None
new_prerelease_number = None
# Handle pre-release versions
if prerelease_type:
if bump_type == BumpType.MAJOR:
new_major += 1
new_minor = 0
new_patch = 0
elif bump_type == BumpType.MINOR:
new_minor += 1
new_patch = 0
elif bump_type == BumpType.PATCH:
new_patch += 1
new_prerelease_type = prerelease_type
new_prerelease_number = 1
# Handle promotion of an existing pre-release to stable
elif self.prerelease_type:
# prerelease_type is None in this branch (a requested pre-release is
# handled above), so keep the version numbers and drop the pre-release
# tag, e.g. 1.2.0-rc.2 -> 1.2.0.
pass
# Handle stable version bumps
else:
if bump_type == BumpType.MAJOR:
new_major += 1
new_minor = 0
new_patch = 0
elif bump_type == BumpType.MINOR:
new_minor += 1
new_patch = 0
elif bump_type == BumpType.PATCH:
new_patch += 1
return Version(new_major, new_minor, new_patch, new_prerelease_type, new_prerelease_number)
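The stable-release arithmetic in the final branch can be sketched on its own (pre-release handling omitted for brevity; `bump_stable` is a hypothetical helper, not part of the script):

```python
# Illustrative sketch of the stable-release bump rules above:
# major resets minor and patch, minor resets patch, patch increments.
def bump_stable(version, bump_type):
    major, minor, patch = (int(p) for p in version.split('.'))
    if bump_type == 'major':
        major, minor, patch = major + 1, 0, 0
    elif bump_type == 'minor':
        minor, patch = minor + 1, 0
    elif bump_type == 'patch':
        patch += 1
    return f"{major}.{minor}.{patch}"
```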
@dataclass
class ConventionalCommit:
"""Represents a parsed conventional commit for version analysis."""
type: str
scope: str
description: str
is_breaking: bool
breaking_description: str
hash: str = ""
author: str = ""
date: str = ""
@classmethod
def parse_message(cls, message: str, commit_hash: str = "",
author: str = "", date: str = "") -> 'ConventionalCommit':
"""Parse conventional commit message."""
lines = message.split('\n')
header = lines[0] if lines else ""
# Parse header: type(scope): description
header_pattern = r'^(\w+)(\([^)]+\))?(!)?:\s*(.+)$'
match = re.match(header_pattern, header)
commit_type = "chore"
scope = ""
description = header
is_breaking = False
breaking_description = ""
if match:
commit_type = match.group(1).lower()
scope_match = match.group(2)
scope = scope_match[1:-1] if scope_match else ""
is_breaking = bool(match.group(3)) # ! indicates breaking change
description = match.group(4).strip()
# Check for breaking change in body/footers
if len(lines) > 1:
body_text = '\n'.join(lines[1:])
if 'BREAKING CHANGE:' in body_text:
is_breaking = True
breaking_match = re.search(r'BREAKING CHANGE:\s*(.+)', body_text)
if breaking_match:
breaking_description = breaking_match.group(1).strip()
return cls(commit_type, scope, description, is_breaking, breaking_description,
commit_hash, author, date)
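The parser recognizes two breaking-change signals: the `!` marker in the header and a `BREAKING CHANGE:` footer in the body. A minimal standalone check (illustrative; `is_breaking` is not part of the script, but it uses the same header regex):

```python
import re

# Illustrative: the two breaking-change signals the parser above recognizes.
def is_breaking(message):
    header = message.split('\n')[0]
    m = re.match(r'^(\w+)(\([^)]+\))?(!)?:\s*(.+)$', header)
    bang = bool(m and m.group(3))           # "feat(api)!: ..." form
    footer = 'BREAKING CHANGE:' in message  # footer form
    return bang or footer
```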
class VersionBumper:
"""Main version bumping logic."""
def __init__(self):
self.current_version: Optional[Version] = None
self.commits: List[ConventionalCommit] = []
self.custom_rules: Dict[str, BumpType] = {}
self.ignore_types: List[str] = ['test', 'ci', 'build', 'chore', 'docs', 'style']
def set_current_version(self, version_str: str):
"""Set the current version."""
self.current_version = Version.parse(version_str)
def add_custom_rule(self, commit_type: str, bump_type: BumpType):
"""Add custom rule for commit type to bump type mapping."""
self.custom_rules[commit_type] = bump_type
def parse_commits_from_json(self, json_data: Union[str, List[Dict]]):
"""Parse commits from JSON format."""
if isinstance(json_data, str):
data = json.loads(json_data)
else:
data = json_data
self.commits = []
for commit_data in data:
commit = ConventionalCommit.parse_message(
message=commit_data.get('message', ''),
commit_hash=commit_data.get('hash', ''),
author=commit_data.get('author', ''),
date=commit_data.get('date', '')
)
self.commits.append(commit)
def parse_commits_from_git_log(self, git_log_text: str):
"""Parse commits from git log output."""
lines = git_log_text.strip().split('\n')
if not lines or not lines[0]:
return
# Simple oneline format (hash message)
oneline_pattern = r'^([a-f0-9]{7,40})\s+(.+)$'
self.commits = []
for line in lines:
line = line.strip()
if not line:
continue
match = re.match(oneline_pattern, line)
if match:
commit_hash = match.group(1)
message = match.group(2)
commit = ConventionalCommit.parse_message(message, commit_hash)
self.commits.append(commit)
def determine_bump_type(self) -> BumpType:
"""Determine version bump type based on commits."""
if not self.commits:
return BumpType.NONE
has_breaking = False
has_feature = False
has_fix = False
for commit in self.commits:
# Check for breaking changes
if commit.is_breaking:
has_breaking = True
continue
# Apply custom rules first
if commit.type in self.custom_rules:
bump_type = self.custom_rules[commit.type]
if bump_type == BumpType.MAJOR:
has_breaking = True
elif bump_type == BumpType.MINOR:
has_feature = True
elif bump_type == BumpType.PATCH:
has_fix = True
continue
# Standard rules
if commit.type in ['feat', 'add']:
has_feature = True
elif commit.type in ['fix', 'security', 'perf', 'bugfix']:
has_fix = True
# Ignore types in ignore_types list
# Determine bump type by priority
if has_breaking:
return BumpType.MAJOR
elif has_feature:
return BumpType.MINOR
elif has_fix:
return BumpType.PATCH
else:
return BumpType.NONE
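The decision logic above reduces to classifying each commit and keeping the highest-priority result: breaking beats feature, feature beats fix, anything else is no bump. A standalone sketch (hypothetical helper names, using the same type lists as the script):

```python
# Illustrative sketch of determine_bump_type's priority rule.
def classify(commit_type, is_breaking):
    if is_breaking:
        return 'major'
    if commit_type in ('feat', 'add'):
        return 'minor'
    if commit_type in ('fix', 'security', 'perf', 'bugfix'):
        return 'patch'
    return 'none'  # docs, chore, ci, etc. do not trigger a release

def aggregate_bump(commits):
    # commits: iterable of (type, is_breaking) pairs
    order = ['none', 'patch', 'minor', 'major']
    levels = [classify(t, b) for t, b in commits]
    return max(levels, key=order.index, default='none')
```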
def recommend_version(self, prerelease_type: Optional[PreReleaseType] = None) -> Version:
"""Recommend new version based on commits."""
if not self.current_version:
raise ValueError("Current version not set")
bump_type = self.determine_bump_type()
return self.current_version.bump(bump_type, prerelease_type)
def generate_bump_commands(self, new_version: Version) -> Dict[str, List[str]]:
"""Generate version bump commands for different package managers."""
version_str = new_version.to_string()
version_with_v = new_version.to_string(include_v_prefix=True)
commands = {
'npm': [
f"npm version {version_str} --no-git-tag-version",
f"# Or manually edit package.json version field to '{version_str}'"
],
'python': [
f"# Update version in setup.py, __init__.py, or pyproject.toml",
f"# setup.py: version='{version_str}'",
f"# pyproject.toml: version = '{version_str}'",
f"# __init__.py: __version__ = '{version_str}'"
],
'rust': [
f"# Update Cargo.toml",
f"# [package]",
f"# version = '{version_str}'"
],
'git': [
f"git tag -a {version_with_v} -m 'Release {version_with_v}'",
f"git push origin {version_with_v}"
],
'docker': [
f"docker build -t myapp:{version_str} .",
f"docker tag myapp:{version_str} myapp:latest"
]
}
return commands
def generate_file_updates(self, new_version: Version) -> Dict[str, str]:
"""Generate file update snippets for common package files."""
version_str = new_version.to_string()
updates = {}
# package.json
updates['package.json'] = json.dumps({
"name": "your-package",
"version": version_str,
"description": "Your package description",
"main": "index.js"
}, indent=2)
# pyproject.toml
updates['pyproject.toml'] = f'''[build-system]
requires = ["setuptools>=61.0", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "your-package"
version = "{version_str}"
description = "Your package description"
authors = [
{{name = "Your Name", email = "[email protected]"}},
]
'''
# setup.py
updates['setup.py'] = f'''from setuptools import setup, find_packages
setup(
name="your-package",
version="{version_str}",
description="Your package description",
packages=find_packages(),
python_requires=">=3.8",
)
'''
# Cargo.toml
updates['Cargo.toml'] = f'''[package]
name = "your-package"
version = "{version_str}"
edition = "2021"
description = "Your package description"
'''
# __init__.py
updates['__init__.py'] = f'''"""Your package."""
__version__ = "{version_str}"
__author__ = "Your Name"
__email__ = "[email protected]"
'''
return updates
def analyze_commits(self) -> Dict:
"""Provide detailed analysis of commits for version bumping."""
if not self.commits:
return {
'total_commits': 0,
'by_type': {},
'breaking_changes': [],
'features': [],
'fixes': [],
'ignored': []
}
analysis = {
'total_commits': len(self.commits),
'by_type': {},
'breaking_changes': [],
'features': [],
'fixes': [],
'ignored': []
}
type_counts = {}
for commit in self.commits:
type_counts[commit.type] = type_counts.get(commit.type, 0) + 1
if commit.is_breaking:
analysis['breaking_changes'].append({
'type': commit.type,
'scope': commit.scope,
'description': commit.description,
'breaking_description': commit.breaking_description,
'hash': commit.hash
})
elif commit.type in ['feat', 'add']:
analysis['features'].append({
'scope': commit.scope,
'description': commit.description,
'hash': commit.hash
})
elif commit.type in ['fix', 'security', 'perf', 'bugfix']:
analysis['fixes'].append({
'scope': commit.scope,
'description': commit.description,
'hash': commit.hash
})
elif commit.type in self.ignore_types:
analysis['ignored'].append({
'type': commit.type,
'scope': commit.scope,
'description': commit.description,
'hash': commit.hash
})
analysis['by_type'] = type_counts
return analysis
def main():
"""Main CLI entry point."""
parser = argparse.ArgumentParser(description="Determine version bump based on conventional commits")
parser.add_argument('--current-version', '-c', required=True,
help='Current version (e.g., 1.2.3, v1.2.3)')
parser.add_argument('--input', '-i', type=str,
help='Input file with commits (default: stdin)')
parser.add_argument('--input-format', choices=['git-log', 'json'],
default='git-log', help='Input format')
parser.add_argument('--prerelease', '-p',
choices=['alpha', 'beta', 'rc'],
help='Generate pre-release version')
parser.add_argument('--output-format', '-f',
choices=['text', 'json', 'commands'],
default='text', help='Output format')
parser.add_argument('--output', '-o', type=str,
help='Output file (default: stdout)')
parser.add_argument('--include-commands', action='store_true',
help='Include bump commands in output')
parser.add_argument('--include-files', action='store_true',
help='Include file update snippets')
parser.add_argument('--custom-rules', type=str,
help='JSON string with custom type->bump rules')
parser.add_argument('--ignore-types', type=str,
help='Comma-separated list of types to ignore')
parser.add_argument('--analysis', '-a', action='store_true',
help='Include detailed commit analysis')
args = parser.parse_args()
# Read input
if args.input:
with open(args.input, 'r', encoding='utf-8') as f:
input_data = f.read()
else:
input_data = sys.stdin.read()
if not input_data.strip():
print("No input data provided", file=sys.stderr)
sys.exit(1)
# Initialize version bumper
bumper = VersionBumper()
try:
bumper.set_current_version(args.current_version)
except ValueError as e:
print(f"Invalid current version: {e}", file=sys.stderr)
sys.exit(1)
# Apply custom rules
if args.custom_rules:
try:
custom_rules = json.loads(args.custom_rules)
for commit_type, bump_type_str in custom_rules.items():
bump_type = BumpType(bump_type_str.lower())
bumper.add_custom_rule(commit_type, bump_type)
except Exception as e:
print(f"Invalid custom rules: {e}", file=sys.stderr)
sys.exit(1)
# Set ignore types
if args.ignore_types:
bumper.ignore_types = [t.strip() for t in args.ignore_types.split(',')]
# Parse commits
try:
if args.input_format == 'json':
bumper.parse_commits_from_json(input_data)
else:
bumper.parse_commits_from_git_log(input_data)
except Exception as e:
print(f"Error parsing commits: {e}", file=sys.stderr)
sys.exit(1)
# Determine pre-release type
prerelease_type = None
if args.prerelease:
prerelease_type = PreReleaseType(args.prerelease)
# Generate recommendation
try:
recommended_version = bumper.recommend_version(prerelease_type)
bump_type = bumper.determine_bump_type()
except Exception as e:
print(f"Error determining version: {e}", file=sys.stderr)
sys.exit(1)
# Generate output
output_data = {}
if args.output_format == 'json':
output_data = {
'current_version': args.current_version,
'recommended_version': recommended_version.to_string(),
'recommended_version_with_v': recommended_version.to_string(include_v_prefix=True),
'bump_type': bump_type.value,
'prerelease': args.prerelease
}
if args.analysis:
output_data['analysis'] = bumper.analyze_commits()
if args.include_commands:
output_data['commands'] = bumper.generate_bump_commands(recommended_version)
if args.include_files:
output_data['file_updates'] = bumper.generate_file_updates(recommended_version)
output_text = json.dumps(output_data, indent=2)
elif args.output_format == 'commands':
commands = bumper.generate_bump_commands(recommended_version)
output_lines = [
f"# Version Bump Commands",
f"# Current: {args.current_version}",
f"# New: {recommended_version.to_string()}",
f"# Bump Type: {bump_type.value}",
""
]
for category, cmd_list in commands.items():
output_lines.append(f"## {category.upper()}")
for cmd in cmd_list:
output_lines.append(cmd)
output_lines.append("")
output_text = '\n'.join(output_lines)
else: # text format
output_lines = [
f"Current Version: {args.current_version}",
f"Recommended Version: {recommended_version.to_string()}",
f"With v prefix: {recommended_version.to_string(include_v_prefix=True)}",
f"Bump Type: {bump_type.value}",
""
]
if args.analysis:
analysis = bumper.analyze_commits()
output_lines.extend([
"Commit Analysis:",
f"- Total commits: {analysis['total_commits']}",
f"- Breaking changes: {len(analysis['breaking_changes'])}",
f"- New features: {len(analysis['features'])}",
f"- Bug fixes: {len(analysis['fixes'])}",
f"- Ignored commits: {len(analysis['ignored'])}",
""
])
if analysis['breaking_changes']:
output_lines.append("Breaking Changes:")
for change in analysis['breaking_changes']:
scope = f"({change['scope']})" if change['scope'] else ""
output_lines.append(f" - {change['type']}{scope}: {change['description']}")
output_lines.append("")
if args.include_commands:
commands = bumper.generate_bump_commands(recommended_version)
output_lines.append("Bump Commands:")
for category, cmd_list in commands.items():
output_lines.append(f" {category}:")
for cmd in cmd_list:
if not cmd.startswith('#'):
output_lines.append(f" {cmd}")
output_lines.append("")
output_text = '\n'.join(output_lines)
# Write output
if args.output:
with open(args.output, 'w', encoding='utf-8') as f:
f.write(output_text)
else:
print(output_text)
if __name__ == '__main__':
main()

Install this Skill
npx skills add alirezarezvani/claude-skills --skill engineering/release-manager
Details
- Category: Development
- License: MIT
- Author: @alirezarezvani
- Source: GitHub
- Source file: engineering/release-manager/SKILL.md