---
name: coo-advisor
description: Operations leadership for scaling companies. Process design, OKR execution, operational cadence, and scaling playbooks. Use when designing operations, setting up OKRs, building processes, scaling teams, analyzing bottlenecks, planning operational cadence, or when user mentions COO, operations, process improvement, OKRs, scaling, operational efficiency, or execution.
license: MIT
metadata:
  version: 1.0.0
  author: Alireza Rezvani
  category: c-level
  domain: coo-leadership
  updated: 2026-03-05
python-tools: ops_efficiency_analyzer.py, okr_tracker.py
frameworks: scaling-playbook, ops-cadence, process-frameworks
---

COO Advisor
Operational frameworks and tools for turning strategy into execution, scaling processes, and building the organizational engine.
What this skill does
Act as your virtual Chief Operating Officer: design efficient processes, set company goals, and establish the operational routines needed to scale the business. You receive actionable playbooks for each growth stage that identify bottlenecks and ensure every team's work connects directly to company vision. Reach for this guidance when strategy isn't translating into results, or when rapid growth is creating organizational chaos.
Keywords
COO, chief operating officer, operations, operational excellence, process improvement, OKRs, objectives and key results, scaling, operational efficiency, execution, bottleneck analysis, process design, operational cadence, meeting cadence, org scaling, lean operations, continuous improvement
Quick Start
python scripts/ops_efficiency_analyzer.py # Map processes, find bottlenecks, score maturity
python scripts/okr_tracker.py # Cascade OKRs, track progress, flag at-risk items
Core Responsibilities
1. Strategy Execution
The CEO sets direction. The COO makes it happen. Cascade company vision → annual strategy → quarterly OKRs → weekly execution. See references/ops_cadence.md for full OKR cascade framework.
2. Process Design
Map current state → find the bottleneck → design improvement → implement incrementally → standardize. See references/process_frameworks.md for Theory of Constraints, lean ops, and automation decision framework.
Process Maturity Scale:
| Level | Name | Signal |
|---|---|---|
| 1 | Ad hoc | Different every time |
| 2 | Defined | Written but not followed |
| 3 | Measured | KPIs tracked |
| 4 | Managed | Data-driven improvement |
| 5 | Optimized | Continuous improvement loops |
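The levels in this scale are cumulative: a process with dashboards but no written definition is still ad hoc. As an illustrative sketch (the function name and boolean signals are mine, not part of the scale), that logic looks like:

```python
def maturity_level(documented: bool, kpis_tracked: bool,
                   data_driven: bool, improvement_loops: bool) -> int:
    """Score a process against the 5-level maturity scale.

    Levels are cumulative: each signal only counts if all the
    earlier ones also hold.
    """
    level = 1  # level 1 (ad hoc) is the floor
    for passed in (documented, kpis_tracked, data_driven, improvement_loops):
        if not passed:
            break
        level += 1
    return level

# An undocumented process with tracked KPIs is still level 1:
print(maturity_level(False, True, False, False))  # 1
print(maturity_level(True, True, True, False))    # 4
```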
3. Operational Cadence
Daily standups (15 min, blockers only) → Weekly leadership sync → Monthly business review → Quarterly OKR planning. See references/ops_cadence.md for full templates.
4. Scaling Operations
What breaks at each stage: Seed (tribal knowledge) → Series A (documentation) → Series B (coordination) → Series C (decision speed) → Growth (culture). See references/scaling_playbook.md for detailed playbook per stage.
5. Cross-Functional Coordination
RACI for key decisions. Escalation framework: Team lead → Dept head → COO → CEO based on impact scope.
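A minimal sketch of that escalation ladder, assuming illustrative thresholds (the dollar amounts and team counts here are examples I chose, not part of the framework):

```python
def escalation_level(teams_affected: int, customer_facing: bool,
                     revenue_at_risk: float) -> str:
    """Route an issue up the chain (team lead -> dept head -> COO -> CEO)
    based on impact scope. Thresholds are illustrative defaults."""
    if customer_facing and revenue_at_risk >= 100_000:
        return "CEO"
    if teams_affected > 1 and (customer_facing or revenue_at_risk >= 25_000):
        return "COO"
    if teams_affected > 1:
        return "Dept head"
    return "Team lead"

print(escalation_level(teams_affected=1, customer_facing=False, revenue_at_risk=0))
# Team lead
```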
Key Questions a COO Asks
- “What’s the bottleneck? Not what’s annoying — what limits throughput.”
- “How many manual steps? Which break at 3x volume?”
- “Who’s the single point of failure?”
- “Can every team articulate how their work connects to company goals?”
- “The same blocker appeared 3 weeks in a row. Why isn’t it fixed?”
Operational Metrics
| Category | Metric | Target |
|---|---|---|
| Execution | OKR progress (% on track) | > 70% |
| Execution | Quarterly goals hit rate | > 80% |
| Speed | Decision cycle time | < 48 hours |
| Quality | Customer-facing incidents | < 2/month |
| Efficiency | Revenue per employee | Track trend |
| Efficiency | Burn multiple | < 2x |
| People | Regrettable attrition | < 10% |
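The numeric targets above lend themselves to an automated check. A sketch (metric key names are mine; the trend-only metric, revenue per employee, is excluded because it has no fixed threshold):

```python
def flag_metrics(actuals: dict[str, float]) -> list[str]:
    """Compare operational metrics against the targets in the table.
    '>' means higher is better, '<' means lower is better."""
    targets = {
        "okr_on_track_pct":      (">", 70),   # % of OKRs on track
        "quarterly_hit_rate":    (">", 80),   # % of quarterly goals hit
        "decision_cycle_hours":  ("<", 48),
        "incidents_per_month":   ("<", 2),
        "burn_multiple":         ("<", 2.0),
        "regrettable_attrition": ("<", 10),   # %
    }
    flags = []
    for name, value in actuals.items():
        direction, target = targets[name]
        ok = value > target if direction == ">" else value < target
        if not ok:
            flags.append(f"{name}: {value} misses target {direction} {target}")
    return flags

print(flag_metrics({"okr_on_track_pct": 65, "burn_multiple": 1.5}))
```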
Red Flags
- OKRs consistently 1.0 (not ambitious) or < 0.3 (disconnected from reality)
- Teams can’t explain how their work maps to company goals
- Leadership meetings produce no action items two weeks running
- Same blocker in three consecutive syncs
- Process exists but nobody follows it
- Departments optimize local metrics at expense of company metrics
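The first red flag has two failure modes worth checking mechanically. A sketch, assuming end-of-quarter OKR gradings on the usual 0.0 to 1.0 scale:

```python
def okr_red_flags(scores: list[float]) -> list[str]:
    """Flag the two OKR scoring failure modes: a history of perfect
    1.0 scores (targets not ambitious) or any score below 0.3
    (goal disconnected from reality)."""
    flags = []
    if scores and all(s >= 1.0 for s in scores):
        flags.append("all scores at 1.0: targets are sandbagged")
    for i, s in enumerate(scores):
        if s < 0.3:
            flags.append(f"score {s} in quarter {i + 1}: disconnected from reality")
    return flags

print(okr_red_flags([0.7, 0.8]))   # [] (healthy: ambitious but achievable)
print(okr_red_flags([1.0, 1.0]))   # sandbagging flag
```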
Integration with Other C-Suite Roles
| When… | COO works with… | To… |
|---|---|---|
| Strategy shifts | CEO | Translate direction into ops plan |
| Roadmap changes | CPO + CTO | Assess operational impact |
| Revenue targets change | CRO | Adjust capacity planning |
| Budget constraints | CFO | Find efficiency gains |
| Hiring plans | CHRO | Align headcount with ops needs |
| Security incidents | CISO | Coordinate response |
Detailed References
- references/scaling_playbook.md — what changes at each growth stage
- references/ops_cadence.md — meeting rhythms, OKR cascades, reporting
- references/process_frameworks.md — lean ops, TOC, automation decisions
Proactive Triggers
Surface these without being asked when you detect them in company context:
- Same blocker appearing 3+ weeks → process is broken, not just slow
- OKR check-in overdue → prompt quarterly review
- Team growing past a scaling threshold (10→30, 30→80) → flag what will break
- Decision cycle time increasing → authority structure needs adjustment
- Meeting cadence not established → propose rhythm before chaos sets in
Output Artifacts
| Request | You Produce |
|---|---|
| “Set up OKRs” | Cascaded OKR framework (company → dept → team) |
| “We’re scaling fast” | Scaling readiness report with what breaks next |
| “Our process is broken” | Process map with bottleneck identified + fix plan |
| “How efficient are we?” | Ops efficiency scorecard with maturity ratings |
| “Design our meeting cadence” | Full cadence template (daily → quarterly) |
Reasoning Technique: Step by Step
Map processes sequentially. Identify each step, handoff, and decision point. Find the bottleneck using throughput analysis. Propose improvements one step at a time.
Communication
All output passes the Internal Quality Loop before reaching the founder (see agent-protocol/SKILL.md).
- Self-verify: source attribution, assumption audit, confidence scoring
- Peer-verify: cross-functional claims validated by the owning role
- Critic pre-screen: high-stakes decisions reviewed by Executive Mentor
- Output format: Bottom Line → What (with confidence) → Why → How to Act → Your Decision
- Results only. Every finding tagged: 🟢 verified, 🟡 medium, 🔴 assumed.
Context Integration
- Always read company-context.md before responding (if it exists)
- During board meetings: use only your own analysis in Phase 2 (no cross-pollination)
- Invocation: you can request input from other roles: [INVOKE:role|question]
Operational Cadence: Meetings, Async, Decisions, and Reporting
The rhythm of your company determines its output. Bad cadence = constant context-switching, decisions made without information, and a leadership team that's always reactive.
Philosophy
Meetings are a tax. Every hour in a meeting is an hour not spent building, selling, or serving customers. A good cadence minimizes meeting time while ensuring the right people have the right information at the right time.
Async is default, sync is exception. Most information sharing and routine updates should happen in writing. Reserve synchronous time for things that genuinely require real-time discussion: decisions with significant disagreement, complex problem-solving, relationship-building.
Cadence serves strategy. The calendar reflects priorities. If you're doing monthly all-hands but weekly status updates, you've inverted the importance.
Meeting Cadence Templates
Daily Operations
Daily Standup (Engineering / Product Teams)
Format: Async-first (Slack/Loom); sync only if blocked
Sync duration: 15 minutes max
Participants: Team (5–10 people)
Facilitator: Team lead or rotating
ASYNC FORMAT (post in #standup channel):
Yesterday: [What I completed]
Today: [What I'm working on]
Blocked: [Anything blocking me — tag the person who can unblock]
Rules:
- No status reporting in sync standup if everyone can read the async update
- Standups are not problem-solving sessions — take issues offline
- Skip standup if the team has a full-team session that day
- Kill standup if the team consistently has nothing blocked; replace with async
Daily Leadership Check-in (COO)
Format: Async only — read, don't meet
Time: 8:00–8:30 AM
COO morning read:
- Yesterday's key metrics dashboard (5 min)
- Overnight Slack/email escalations (5 min)
- Today's decisions needed list (5 min)
- Any P0/P1 incidents (check status page + on-call logs)
Weekly Cadence
Leadership Sync (Weekly)
Duration: 60–90 minutes
Participants: C-suite + VP level
Owner: COO (or CEO)
Day/Time: Monday or Tuesday, morning
AGENDA TEMPLATE:
00:00–10:00 Metrics pulse (pre-read required — no presenting charts)
- Revenue: ACV, pipeline, churn delta
- Product: shipped last week, blockers this week
- Engineering: incidents, velocity
- CS: escalations, NPS delta
- People: open reqs, attrition flag
10:00–45:00 Priority items (submitted in advance, max 3)
- Item 1: [Owner: Name] [Decision needed / FYI / Input needed]
- Item 2: [Owner: Name]
- Item 3: [Owner: Name]
45:00–60:00 Parking lot / open
- Anything not covered
- Next week flagging
Pre-meeting requirements:
- Metrics dashboard updated by EOD Friday
- Priority items submitted by Sunday 6 PM
- Anyone who hasn't read the pre-read gets no floor time
Output: Decision log updated with outcomes, action items assigned in tracking system
1:1 (Manager ↔ Direct Report)
Duration: 30–45 minutes
Frequency: Weekly (skip-levels: bi-weekly)
Owner: Report (the direct report sets agenda)
1:1 STRUCTURE:
[5 min] What's on your mind / temperature check
[15 min] Their agenda — what they want to discuss
[10 min] Manager agenda — feedback, context, decisions
[5 min] Action items review from last week
1:1 anti-patterns to eliminate:
- Using 1:1 for status updates (that's what standups are for)
- Manager dominating the agenda
- Skipping because "things are fine"
- No written record of what was discussed
Private 1:1 doc: Every manager/report pair maintains a shared doc with running notes, action items, and career development thread.
Cross-Functional Weekly Sync
Duration: 45 minutes
Participants: 2–4 team leads with shared dependencies
Examples: Product + Engineering, Sales + CS, Marketing + Sales
AGENDA:
00–10 Shared metrics (things both teams care about)
10–30 Active collaboration items — what needs coordination this week
30–40 Blockers + dependencies (what do I need from your team?)
40–45 Upcoming: what's coming that the other team should know about
Monthly Cadence
All-Hands / Town Hall
Duration: 60–90 minutes
Participants: Entire company
Owner: CEO + functional heads
Format: In-person preferred; video if distributed
ALL-HANDS AGENDA (60 min version):
00–05 Opening — CEO sets the tone
05–20 Business update
- Where we are vs. plan (actuals vs. budget)
- Key wins and learning moments from last month
- What we're focused on this month
20–40 Functional spotlights (2 functions, 10 min each)
- What we shipped / what we did
- What we learned
- What's next
40–55 Open Q&A (no screened questions — take everything)
55–60 Closing
ALL-HANDS PREP CHECKLIST:
□ CEO talking points reviewed 48h in advance
□ Metrics slides reviewed by Finance for accuracy
□ Q&A prep — leadership team briefs on likely questions
□ Recording setup confirmed
□ Async option for timezones (recording posted within 2h)
□ Action items from Q&A captured and published within 24h
Monthly Business Review (MBR)
Duration: 2 hours
Participants: Leadership team
Owner: COO
MBR AGENDA:
00–20 Financial review (Finance presents)
- Revenue vs. plan, by segment
- Burn rate, runway
- Headcount actual vs. plan
- Key cost drivers
20–60 Functional reviews (each VP, 8 min each)
Standard template per function:
- Metrics: [3 key metrics vs. prior month vs. plan]
- Wins: [top 2-3 wins]
- Gaps: [where we missed and why]
- Next 30 days: [top 3 priorities]
60–90 Strategic topics (pre-submitted)
- Items requiring cross-functional decision
- Risks or issues needing leadership visibility
90–110 Decisions and action items
- Document decisions made
- Assign owners and deadlines
110–120 Retrospective
- What's working in how we operate?
- What needs to change?
MBR pre-read package (published 48h before):
- Financial summary (1 page)
- Each function's 1-pager (see template below)
FUNCTIONAL 1-PAGER TEMPLATE:
Function: [Name] Month: [Month Year]
Owner: [VP Name]
TOP METRICS:
| Metric | Target | Actual | vs. LM | vs. Plan |
|--------|--------|--------|--------|----------|
| [M1] | | | | |
| [M2] | | | | |
| [M3] | | | | |
WINS (2-3 bullets):
•
•
GAPS (be honest — no spin):
•
•
DEPENDENCIES (what I need from other teams):
•
NEXT 30 DAYS (top 3 priorities):
1.
2.
3.
Quarterly Cadence
Quarterly Business Review (QBR)
Duration: Half day (4 hours)
Participants: Leadership team + key functional leads
Owner: CEO + COO
QBR AGENDA (4 hours):
PART 1: Look back (90 min)
- CEO: Business context and narrative (15 min)
- Finance: Full quarter P&L review (20 min)
- Each function: 10-min review against OKRs
Format: Hit/Miss/Partial for each objective + root cause
PART 2: Look forward (90 min)
- Product/Engineering: What ships next quarter (20 min)
- Sales/Marketing: Pipeline and demand plan (20 min)
- People: Headcount plan and key hires (15 min)
- Finance: Budget and forecast (20 min)
- Cross-functional dependencies (15 min)
PART 3: Strategic discussion (60 min)
- 1–2 strategic topics requiring deep discussion
- Pre-submitted and pre-read
PART 4: OKR setting for next quarter (30 min)
- Draft OKRs reviewed and challenged
- Final OKRs locked, or assigned for finalization the following week
Quarterly Leadership Off-site
Duration: 1–2 days (Series B+)
Participants: C-suite + VPs
Purpose: Strategy alignment, relationship building, hard conversations
Off-site agenda principles:
- No laptops during sessions (phones away)
- At least 50% discussion, max 50% presentation
- Include one session on how the leadership team is functioning (not just what the business is doing)
- Output: 1-page summary of decisions and commitments shared with the company
Annual Cadence
Annual Planning Cycle
Timeline: Start 8–10 weeks before fiscal year end
ANNUAL PLANNING TIMELINE:
Week -10: Company strategic priorities draft (CEO + COO)
Week -8: Revenue model + market analysis (Finance + Sales)
Week -7: Functional goal-setting begins
Week -6: Headcount planning by function
Week -5: Draft plans reviewed by COO
Week -4: Cross-functional dependency alignment
Week -3: Budget finalization
Week -2: Board review (if applicable)
Week -1: Final company OKRs published
Week 0: Year kick-off all-hands
Year Kick-off All-Hands
Duration: 2–4 hours
Participants: Entire company
Purpose: Align entire company on year strategy and goals
KICK-OFF AGENDA:
- Last year retrospective: What we accomplished, what we learned
- Market context: Why now, why us
- Year strategy: The 2-3 things that matter most
- OKRs: Company-level goals, each function's goals
- Culture: How we'll work together
- Q&A: Open and honest
Async Communication Frameworks
The Writing-First Culture
All communication defaults to written unless real-time is genuinely necessary. This is how you scale decision-making without scaling meetings.
Written first means:
- Decisions are documented before they're communicated
- Updates are published before questions are asked
- Problems are described before solutions are proposed
Slack Channel Architecture
REQUIRED CHANNELS:
#announcements Read-only. Major company announcements only.
#general Company-wide conversation
#leadership-public Leadership decisions visible to all (transparency)
#incidents P0/P1 incidents only. Auto-resolved when incident is closed.
#metrics Automated metric updates. No discussion here.
#wins Customer wins, team wins. Culture channel.
FUNCTIONAL CHANNELS:
#engineering, #product, #sales, #marketing, #cs, #people, #finance
PROJECT CHANNELS:
#proj-[name] Temporary. Archive when project ships.
DECISION CHANNELS:
#decisions All cross-team decisions logged here with context
Anti-patterns to eliminate:
- DMs for work decisions (decisions belong in channels, visible to team)
- @channel abuse (train people — this means everyone stops what they're doing)
- Thread avoidance (all replies go in threads, period)
- Multiple channels for same function (merge aggressively)
Async Decision Template
When a decision needs input but doesn't require a meeting:
DECISION REQUEST (post in #decisions):
**Context:** [1-3 sentences on why this decision is needed]
**Options considered:**
A) [Option A] — Pros: X. Cons: Y.
B) [Option B] — Pros: X. Cons: Y.
**Recommendation:** [Your recommendation and why]
**Input needed from:** @person1, @person2 (tag specific people)
**Decide by:** [Date/Time — give at least 24 hours]
**If no response:** [Default action if no input received]
Loom / Video for Async Communication
Use async video for:
- Explaining complex technical architecture
- Walking through a design or document with context
- Giving feedback that needs tone/nuance
- Team updates that would otherwise be a meeting
Loom best practices:
- Keep under 5 minutes; break up anything longer
- Always include a summary comment with key points
- Ask viewers to leave timestamp comments for specific questions
Decision-Making Frameworks
RAPID
The most practical decision-making framework for startups scaling to enterprises.
| Role | Meaning | Responsibility |
|---|---|---|
| R — Recommend | Proposes decision with analysis | Does the work, gathers input, makes recommendation |
| A — Agree | Must agree before decision is final | Has veto power; should be used sparingly |
| P — Perform | Executes the decision | Consulted during recommendation phase |
| I — Input | Consulted for perspective | Shares point of view; not binding |
| D — Decide | Makes the final call | One person only — groups don't decide |
How to use RAPID:
- For every significant decision, explicitly assign R, A, P, I, D before work begins
- The D role is always one person — never a committee
- Agree (A) roles should be limited to 2–3 people maximum; more = paralysis
- Post the RAPID assignments in the decision doc so everyone knows the structure
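The two hard rules (one D, at most three A) can be enforced at assignment time. A sketch, with an illustrative dataclass of my own design:

```python
from dataclasses import dataclass, field

@dataclass
class Rapid:
    """RAPID assignment for one decision. Enforces the two hard
    rules: exactly one Decide role, and at most three Agree roles."""
    recommend: str
    decide: str                      # one person, never a committee
    agree: list[str] = field(default_factory=list)
    perform: list[str] = field(default_factory=list)
    input: list[str] = field(default_factory=list)

    def __post_init__(self):
        if not self.decide or "," in self.decide:
            raise ValueError("D must be exactly one person")
        if len(self.agree) > 3:
            raise ValueError("more than 3 A roles leads to paralysis")

# Valid assignment; a fourth Agree role would raise ValueError.
r = Rapid(recommend="VP Engineering", decide="CTO",
          agree=["CTO", "COO"], perform=["Infrastructure team"],
          input=["Product leads", "Finance"])
```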
Example application:
Decision: Migrate from PostgreSQL to distributed database
R: VP Engineering
A: CTO, COO (for cost implications)
P: Infrastructure team
I: Product leads, Finance
D: CTO
RACI
Better for ongoing processes than one-time decisions. Use RACI for recurring operational responsibilities.
| Role | Meaning |
|---|---|
| R — Responsible | Does the work |
| A — Accountable | Owns the outcome; one person only |
| C — Consulted | Input before decisions/actions |
| I — Informed | Told of decisions/actions after the fact |
RACI matrix template:
PROCESS: Customer Escalation Handling
Task | CS Lead | VP CS | Eng Lead | CEO
------------------------|---------|-------|----------|----
Receive escalation | R | I | I | -
Diagnose issue | R | C | C | -
Communicate to customer | R | A | - | I (major)
Resolve technical issue | C | - | R | -
Close escalation | R | A | I | -
Post-mortem (P0/P1) | C | A | R | I
Common RACI mistakes:
- Multiple A roles (breaks accountability)
- R and A always same person (defeats the purpose)
- Too many C roles (everyone's consulted, nothing moves)
- Not distinguishing C from I (different obligations)
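The first mistake (multiple or missing A roles) is easy to catch programmatically. A sketch, using a task-to-roles mapping of my own shape:

```python
def check_raci(matrix: dict[str, dict[str, str]]) -> list[str]:
    """Check a RACI matrix (task -> {role: letter}) for the most
    common mistake: each task must have exactly one Accountable."""
    problems = []
    for task, roles in matrix.items():
        accountable = [r for r, letter in roles.items() if letter == "A"]
        if len(accountable) != 1:
            problems.append(f"{task}: needs exactly one A, found {len(accountable)}")
    return problems

escalation = {
    "Close escalation": {"CS Lead": "R", "VP CS": "A", "Eng Lead": "I"},
    "Diagnose issue":   {"CS Lead": "R", "VP CS": "C", "Eng Lead": "C"},
}
print(check_raci(escalation))  # flags 'Diagnose issue': no A assigned
```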
DRI (Directly Responsible Individual)
Apple's framework; used widely in fast-moving tech companies. Simpler than RAPID/RACI for internal use.
The rule: Every project, deliverable, and decision has exactly one DRI. The DRI is the person who gets credit when it succeeds and gets called on when it fails. No DRI = no accountability.
DRI requirements:
- Listed by name in every project brief
- Has authority to make decisions within scope
- Is responsible for communicating status
- Cannot blame lack of resources — their job is to escalate when blocked
DRI vs. RACI: Use DRI for project ownership and RACI for process ownership. They complement each other.
Decision Log
Every significant decision gets logged. Significant = affects more than one team, costs more than $10K, or is difficult to reverse.
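The significance test above reduces to three checks. A minimal sketch (function name is illustrative):

```python
def needs_decision_log(teams_affected: int, cost_usd: float,
                       hard_to_reverse: bool) -> bool:
    """Apply the significance test: log a decision if it affects
    more than one team, costs more than $10K, or is difficult to
    reverse."""
    return teams_affected > 1 or cost_usd > 10_000 or hard_to_reverse

print(needs_decision_log(teams_affected=1, cost_usd=5_000, hard_to_reverse=False))
# False: single team, cheap, reversible
```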
DECISION LOG FORMAT:
Date: [YYYY-MM-DD]
Decision: [One sentence summary]
Context: [Why was this decision needed? What was the situation?]
Options considered: [What alternatives were evaluated?]
Decision made: [What was decided?]
Rationale: [Why this option?]
Owner: [Who made the final call?]
Reversible: [Yes / No / Partially]
Review date: [When should this decision be revisited?]
Outcome: [Filled in later — what actually happened?]
Reporting Templates
Weekly CEO/COO Dashboard
COMPANY HEALTH — WEEK OF [DATE]
REVENUE
ARR: $[X]M (vs. plan: +/-X%, vs. LW: +/-X%)
New ARR this week: $[X]K
Churned ARR: $[X]K
Pipeline (90-day): $[X]M
PRODUCT
Shipped this week: [Brief list]
P0/P1 incidents: [Count] — [1-line summary if any]
Deploy frequency: [X per week]
CUSTOMER
Active customers: [X]
NPS (rolling 30d): [X]
Open escalations: [X] (P0: [X], P1: [X])
PEOPLE
Headcount: [X] (vs. plan: [X])
Open reqs: [X]
Attrition (30d): [X]
CASH
Cash on hand: $[X]M
Burn (last 30d): $[X]M
Runway: [X] months
🔴 ISSUES (needs leadership attention):
•
•
🟡 WATCH (monitor, no action yet):
•
🟢 WINS:
•
Monthly Investor/Board Update
[COMPANY NAME] — MONTHLY UPDATE — [MONTH YEAR]
THE HEADLINE
[2-3 sentences: what was the defining story of this month?]
KEY METRICS
| Metric | [Month] | vs. Prior | vs. Plan |
|--------|---------|-----------|----------|
| ARR | | | |
| MRR Added | | | |
| Churn | | | |
| NRR | | | |
| Burn | | | |
| Runway | | | |
WINS
1. [Specific, concrete win with numbers]
2. [Second win]
3. [Third win]
CHALLENGES
1. [Honest description of challenge + what you're doing about it]
2. [Second challenge]
KEY DECISIONS MADE
• [Decision + brief rationale]
ASKS FROM INVESTORS
• [Specific ask with context — intros, advice, etc.]
NEXT MONTH PRIORITIES
1.
2.
3.
Quarterly OKR Progress Report
Q[X] OKR PROGRESS — [COMPANY NAME]
SCORING GUIDE:
🟢 On track (>70% confidence of hitting target)
🟡 At risk (50-70% confidence)
🔴 Off track (<50% confidence)
COMPANY OBJECTIVES:
O1: [Objective title]
KR1.1: [Key Result] ............... [X]% 🟢
KR1.2: [Key Result] ............... [X]% 🟡
Objective confidence: 🟢 | Notes: [1 line]
O2: [Objective title]
KR2.1: [Key Result] ............... [X]% 🔴
KR2.2: [Key Result] ............... [X]% 🟢
Objective confidence: 🟡 | Notes: [1 line]
FUNCTIONAL OBJECTIVES:
[Same format per function]
OVERALL QUARTER HEALTH: 🟡
Summary: [2-3 sentences on overall trajectory]
TOP 3 ACTIONS TO GET BACK ON TRACK:
1. [Action + owner + deadline]
2.
3.
Cadence Anti-Patterns to Eliminate
| Anti-Pattern | What It Looks Like | Fix |
|---|---|---|
| Meeting creep | Calendar blocks added over time, never removed | Quarterly calendar audit — delete all recurring meetings, re-add only what's essential |
| Update theater | Meetings where people read from slides | Require pre-reads; ban in-meeting presentations |
| Decision avoidance | Topics recur across multiple meetings | Assign a D (decider) before the meeting. If no D, don't hold the meeting. |
| Sync for async | Using meetings for information sharing | Move updates to Loom/Slack; protect sync time for discussion |
| HIPPO problem | Highest-paid person in room wins | Structure discussions so data is presented before opinions |
| Retrospective theater | Retros with no action items | Every retro must produce ≥1 committed change |
| Silent agenda | Agenda not shared until meeting starts | Agendas published 24h in advance, required reading |
Cadence framework synthesized from Amazon's PR/FAQ culture, Google's OKR playbook, GitLab's remote work handbook, and operational patterns from 50+ Series A–C companies.
Process Frameworks for Startup Operations
Theory of Constraints, Lean, process mapping, automation, and change management — applied to real startup contexts, not factory floors.
Part 1: Theory of Constraints (TOC) Applied to Startups
What TOC Actually Says
Eliyahu Goldratt's core insight: every system has exactly one constraint that limits throughput. Improving anything other than the constraint is waste. The goal isn't to optimize every function — it's to identify the single bottleneck and exploit it until a new constraint emerges.
The Five Focusing Steps:
1. Identify the constraint — what limits the system's output?
2. Exploit it — get maximum output from the constraint without adding resources
3. Subordinate everything else — other activities serve the constraint's needs
4. Elevate it — add resources to increase constraint capacity
5. Repeat — when the constraint moves, find the new one
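Step 1 is a measurement problem: given throughput per stage, the constraint is simply the minimum. A sketch with made-up stage names and numbers:

```python
def find_constraint(throughput: dict[str, float]) -> str:
    """Identify the constraint: the stage with the lowest
    throughput (units per week, tickets per day, whatever the
    system moves). Everything else has spare capacity."""
    return min(throughput, key=throughput.get)

# Illustrative pipeline: review caps the whole system at 4/week,
# so extra build capacity just piles up in front of it.
pipeline = {"design": 12, "build": 9, "review": 4, "deploy": 20}
print(find_constraint(pipeline))  # review
```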
Finding the Constraint in Your Startup
The constraint is almost never where people think it is. Sales thinks it's Marketing. Engineering thinks it's Product. Everyone thinks it's someone else.
Method: Map your value stream (see Part 3), measure throughput at each step, find the step with the lowest throughput or the highest queue in front of it.
Common startup constraints by stage:
| Stage | Most Common Constraint | Why |
|---|---|---|
| Pre-PMF | Learning speed | Not enough customer feedback cycles |
| Series A | Sales capacity | Demand > sales team's ability to close |
| Series B | Engineering velocity | Product backlog growing faster than shipping rate |
| Series C | Onboarding throughput | New customer volume > CS team's onboarding capacity |
| Growth | Hiring throughput | Headcount plan > recruiting team's capacity |
Applying TOC to Product Development
The five visible constraints in product development:
1. Requirements clarity
Symptom: Engineering asks for clarification mid-sprint. Tickets re-opened. Scope creep.
Fix: Never pull a story into sprint until acceptance criteria are written and reviewed. Product manager must be available same-day for clarification.
2. Review and approval bottleneck
Symptom: PRs sit unreviewed for >24 hours. Deploys waiting for sign-off.
Fix: Code review SLA: 2-hour response for small PRs (<100 lines), 4-hour for medium. Design reviews: 24-hour turnaround. Anyone waiting >SLA can escalate to manager.
3. QA throughput
Symptom: "Done" pile grows faster than QA can test. Release-day crunch.
Fix: QA is pulled into sprint planning and sprint review. Testing starts as features finish, not all at the end. Automated test coverage as a sprint exit criterion.
4. Deployment pipeline speed
Symptom: Deploy takes 45+ minutes. Engineers wait. Hotfix urgency causes dangerous shortcuts.
Fix: Measure deploy time weekly. Set a target (10 min for most apps). Build optimization into the engineering roadmap as a real ticket.
5. Feedback loop latency
Symptom: You ship features and don't know if they worked for weeks.
Fix: Every shipped feature has instrumented metrics reviewed within 5 business days. If no metrics exist, the feature doesn't ship.
Applying TOC to Sales
The sales pipeline as a system of constraints:
Lead generation → Qualification → Demo → Proposal → Negotiation → Close
[X] → [X] → [X] → [X] → [X] → [X]
Measure: conversion rate and time-in-stage at each step.
The constraint is the step with the LOWEST conversion rate × volume.
Example diagnosis:
- Lead → Qualified: 40% conversion, 2 days
- Qualified → Demo: 80% conversion, 5 days ← High conversion but slow (queue)
- Demo → Proposal: 60% conversion, 3 days
- Proposal → Close: 30% conversion, 14 days ← Constraint (lowest conversion)
Diagnosis: Proposals are being sent to wrong buyers or proposals aren't compelling. Fix: proposal template audit, champion coaching, economic buyer access earlier in process.
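The diagnosis above can be automated from per-stage conversion rates. A simplified sketch that ranks stages by conversion alone (a fuller version would also weight by volume and time-in-stage):

```python
def pipeline_constraint(stages: list[tuple[str, float]]) -> str:
    """Find the sales constraint: the stage with the lowest
    conversion rate limits throughput of the whole pipeline."""
    return min(stages, key=lambda stage: stage[1])[0]

# The example diagnosis from the text:
stages = [("lead->qualified", 0.40), ("qualified->demo", 0.80),
          ("demo->proposal", 0.60), ("proposal->close", 0.30)]
print(pipeline_constraint(stages))  # proposal->close
```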
Part 2: Lean Operations for Tech Companies
The Lean Toolkit (What's Actually Useful)
Lean Manufacturing was designed for car factories. Most of the original toolkit doesn't apply to software. Here's what does:
Value Stream Mapping — Map the full flow of work from customer request to delivery. Label value-add time vs. wait time. Most processes are 90% wait time and 10% actual work.
5S — Sort, Set in order, Shine, Standardize, Sustain. Applied to digital work:
- Sort: Delete unused tools, channels, documents
- Set in order: Organize information architecture so things are findable
- Shine: Regular cleanup sprints (documentation, tech debt, tool hygiene)
- Standardize: Templates, conventions, naming standards
- Sustain: Assign owners; entropy is the default state
Pull vs. Push — Don't push work onto people's plates. Pull = people take work when they have capacity. Push = work is assigned to people regardless of capacity. Most companies push; lean companies pull.
Kaizen — Continuous small improvements. Build this into your operating rhythm:
- Weekly: each team identifies one small improvement to their process
- Monthly: review and close out improvement items
- Quarterly: broader process retrospective
Waste Categories (TIMWOODS) — Applied to Operations:
| Waste Type | Factory Example | Startup Example |
|---|---|---|
| Transportation | Moving parts | Handing off work between tools with no integration |
| Inventory | Parts stockpile | Unreviewed PRs, unworked backlog items, unread reports |
| Motion | Worker movement | Context switching between apps / communication channels |
| Waiting | Machine idle | Waiting for approvals, waiting for data, waiting for decisions |
| Overproduction | Making more than needed | Features built that weren't validated |
| Overprocessing | Extra steps | 6-step approval for $200 purchase |
| Defects | Rework | Bug fixes, incorrect specs, miscommunicated requirements |
| Skills | Underutilized talent | Senior engineers doing manual QA |
Exercise: For your most important process, walk through each waste category and estimate hours/week wasted. This exercise typically reveals 20–40% improvement opportunities in the first pass.
Cycle Time and Lead Time
Lead time: time from when a request enters the system to when it exits (customer perspective).
Cycle time: time a unit of work is actively being worked on (team perspective).
Lead Time = Cycle Time + Wait Time
Most teams only measure cycle time. Customers only experience lead time. The gap between the two is pure waste.
Measuring in your context:
- Engineering: Lead time = ticket created → in production. Cycle time = in progress → PR merged.
- Sales: Lead time = lead created → closed won. Cycle time = demo completed → proposal sent.
- CS: Lead time = ticket opened → customer confirms resolved. Cycle time = ticket in-progress → resolution sent.
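For engineering tickets, the split falls out of three timestamps. A sketch assuming a single start-of-work timestamp (real tickets may bounce between states):

```python
from datetime import datetime

def lead_and_wait(created: datetime, started: datetime,
                  finished: datetime) -> tuple[float, float]:
    """Split a ticket's life into lead time and wait time (days),
    using Lead Time = Cycle Time + Wait Time."""
    lead = (finished - created).total_seconds() / 86400
    cycle = (finished - started).total_seconds() / 86400
    return lead, lead - cycle

# Ticket filed March 1, picked up March 8, shipped March 10:
lead, wait = lead_and_wait(datetime(2024, 3, 1), datetime(2024, 3, 8),
                           datetime(2024, 3, 10))
print(f"lead {lead:.0f}d, wait {wait:.0f}d")  # lead 9d, wait 7d
```

Seven of the nine days here are queue time, which is exactly the gap the improvement pattern below targets.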
Improvement pattern:
- Measure lead time (not just cycle time)
- Find the steps where tickets sit waiting
- Remove the wait (automation, reduced approval layers, clearer handoff criteria)
WIP Limits
Work-In-Progress limits prevent the multi-tasking trap. When people work on 5 things simultaneously, each thing takes 5x longer and quality drops.
Recommended WIP limits:
- Individual IC: 2–3 active items at once
- Team sprint: WIP = number of engineers × 1.5
- Leadership team: No more than 3 company-level priorities per quarter
Implementation: In Jira/Linear, add a WIP column. Set a hard limit. When the column is full, no new work starts until something ships.
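The hard-limit rule can be expressed as a simple gate. A sketch of the team-level check (engineers × 1.5, rounded down):

```python
def can_start(in_progress: int, engineers: int) -> bool:
    """Hard WIP gate: team WIP limit is engineers x 1.5; no new
    work starts while the column is at or over the limit."""
    limit = int(engineers * 1.5)
    return in_progress < limit

print(can_start(in_progress=6, engineers=4))  # False: limit is 6, column full
print(can_start(in_progress=5, engineers=4))  # True: one slot free
```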
Part 3: Process Mapping Techniques
When to Map a Process
Map a process when:
- It's done by more than 2 people
- It fails regularly (errors, rework, complaints)
- It needs to scale (you're about to add people or volume)
- You're automating it (you must understand the manual process first)
- You're onboarding someone new to it
Don't map processes that are genuinely ad-hoc, one-person, or will change significantly in the next 90 days.
The Three Levels of Process Maps
Level 1: Swim Lane Map (for cross-functional processes)
Best for: Customer onboarding, sales-to-CS handoff, escalation handling, hiring
Example: Sales to CS Handoff
| Step | Sales AE | Sales Ops | CS Manager | CS Rep |
|---|---|---|---|---|
| 1 | Close deal | | | |
| 2 | Fill handoff doc | | | |
| 3 | | Route to CS | | |
| 4 | | | Review & assign | |
| 5 | | | | Send welcome |
| 6 | | | | Schedule kickoff |
Level 2: Flowchart (for decision-heavy processes)
Best for: Escalation routing, incident response, approval workflows
Use standard symbols:
- Rectangle = action/task
- Diamond = decision (yes/no branch)
- Oval = start/end
- Parallelogram = input/output
Level 3: Work Instructions (for execution-level processes)
Best for: Checklists, SOPs, how-to guides
Format:
Process: [Name]
Owner: [Role]
Last reviewed: [Date]
Trigger: [What starts this process]
Step 1: [Action] — [Who does it] — [Tool used] — [Expected output]
Step 2: ...
Exceptions:
- If [condition], then [alternative action]
Done when: [Definition of done]
Process Audit Technique
Run this quarterly on your most critical processes:
1. Walk the process — Literally follow a unit of work from start to finish. Ask the people doing it, not the people managing it.
2. Measure three numbers:
- How long does it actually take? (lead time)
- How often does it go wrong? (error/rework rate)
- What's the cost of a failure? (downstream impact)
3. Score it:
PROCESS HEALTH SCORE:
Lead time vs. target: [+2 on target / 0 delayed / -2 significantly delayed]
Error rate: [+2 <5% / 0 5-15% / -2 >15%]
Documented: [+1 yes / -1 no]
Owner named: [+1 yes / -1 no]
Last reviewed (< 6 months): [+1 yes / -1 no]
Max: 7. Score <3 = needs immediate attention.
Part 4: Automation Decision Framework
The "Should I Automate This?" Test
Not everything should be automated. Bad automation of a broken process = faster broken process.
The five-question filter:
1. Is the process stable? If it changes monthly, automate later. Automating unstable processes locks in the wrong behavior.
2. How often does it happen? Weekly or more frequent = good candidate. Monthly or less = probably not worth it.
3. What's the error rate without automation? If the manual process is accurate 95%+ of the time, automation ROI is lower.
4. What's the cost of failure? Customer-facing, compliance, or financial processes deserve higher automation priority than internal reporting.
5. Is the process well-documented? If you can't describe it in a flowchart, you can't automate it. Document first.
Automation ROI Calculation
Annual hours saved = (minutes per occurrence / 60) × occurrences per year
Annual labor cost saved = hours saved × fully-loaded cost per hour
Net annual value = labor cost saved + error reduction value + speed improvement value
Build/buy cost = development time + maintenance overhead
Payback period = build/buy cost ÷ net annual value
Rule of thumb: automate if payback period < 12 months
Example:
- Process: Weekly sales report compilation
- Time: 3 hours/week manually
- Fully-loaded cost: $75/hour
- Annual manual cost: 3 × 52 × $75 = $11,700
- Automation cost: 40 hours to build = $3,000
- Payback: 3,000 ÷ 11,700 = 3 months → Automate
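The worked example can be reproduced with a small helper. A sketch of the payback formula; the `extra_annual_value` parameter is an assumption covering the error-reduction and speed terms, which the example leaves at zero:

```python
def automation_payback_months(minutes_per_occurrence: float,
                              occurrences_per_year: float,
                              cost_per_hour: float,
                              build_cost: float,
                              extra_annual_value: float = 0.0) -> float:
    """Payback period in months = build cost / net annual value * 12."""
    hours_saved = minutes_per_occurrence / 60 * occurrences_per_year
    net_annual_value = hours_saved * cost_per_hour + extra_annual_value
    return build_cost / net_annual_value * 12

# Weekly sales report: 180 min/week, $75/h fully loaded, $3,000 to build
print(round(automation_payback_months(180, 52, 75, 3000), 1))  # 3.1 months -> automate
```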
Automation Tiers
Tier 1: No-code automation (0–8 hours to implement)
- Tools: Zapier, Make (Integromat), n8n, HubSpot workflows
- Use for: Notification triggers, data syncs between tools, simple conditional routing
- Example: New customer in CRM → create CS ticket → send welcome Slack message
Tier 2: Low-code automation (8–40 hours to implement)
- Tools: Retool, internal scripts, Google Apps Script, Airtable Automations
- Use for: Internal dashboards, data transformation, approval workflows
- Example: Weekly metrics compilation from Salesforce + Mixpanel + HubSpot into Notion dashboard
Tier 3: Engineered automation (40+ hours to implement)
- Built by engineering team as product/infrastructure work
- Use for: Customer-facing workflows, compliance-critical processes, high-volume operations
- Example: Automated customer health score calculation → CS alert → playbook trigger
Automation Prioritization Matrix
                     HIGH FREQUENCY
                           |
          Tier 1 now       |   Tier 2-3 now
          (quick win)      |   (high-value)
                           |
LOW VALUE _________________|_________________ HIGH VALUE
                           |
          Don't bother     |   Plan for later
                           |   (when it's bigger)
                           |
                     LOW FREQUENCY

Place each manual process in the quadrant. Execute top-right first, Tier 1 items second.
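Quadrant placement can be scripted once you have a frequency and value estimate per process. A sketch with illustrative cutoffs (weekly frequency, $5K annual value), both assumptions you should tune to your own cost base:

```python
def automation_quadrant(occurrences_per_year: float, annual_value: float,
                        freq_cutoff: float = 52, value_cutoff: float = 5000) -> str:
    """Bucket a manual process into the prioritization matrix above."""
    high_freq = occurrences_per_year >= freq_cutoff
    high_value = annual_value >= value_cutoff
    if high_freq and high_value:
        return "Tier 2-3 now (high-value)"
    if high_freq:
        return "Tier 1 now (quick win)"
    if high_value:
        return "Plan for later"
    return "Don't bother"

print(automation_quadrant(260, 12000))  # Tier 2-3 now (high-value)
```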
Automation Governance
As automation grows, it needs governance:
Automation registry: Maintain a list of all automations with:
- Name and description
- Owner (person responsible if it breaks)
- Tools used
- Trigger and action
- Last tested date
- Business impact if down
Review cadence: Quarterly review of automation registry. Kill automations nobody uses.
Failure alerting: Every production automation must have failure notifications sent to a named owner. Silent failures are worse than no automation.
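A registry like this can live in a spreadsheet, but a typed record plus a staleness check makes the quarterly review mechanical. A minimal sketch; the field names are assumptions drawn from the registry list above:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Automation:
    name: str
    owner: str              # the person paged when it breaks
    tools: str
    trigger_and_action: str
    last_tested: date
    impact_if_down: str

def stale_automations(registry: list[Automation], max_age_days: int = 90) -> list[str]:
    """Flag entries not tested within the (quarterly) review window."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [a.name for a in registry if a.last_tested < cutoff]

registry = [
    Automation("New-customer welcome", "dana@example.com", "Zapier + Slack",
               "CRM: new customer -> CS ticket + welcome message",
               date(2023, 1, 15), "Onboarding stalls silently"),
]
print(stale_automations(registry))  # flagged unless tested in the last 90 days
```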
Part 5: Change Management for Process Rollouts
Why Process Changes Fail
Most process changes fail not because the process is wrong, but because of how it's rolled out. Common failure modes:
- Top-down mandate: Process designed by leadership and announced to the team, then implemented poorly because people weren't involved and don't understand why.
- No training: "Here's the new process" with no demonstration or practice.
- No feedback loop: Process is rolled out and never adjusted based on what the team discovers.
- No accountability: Process is optional in practice because there are no consequences for ignoring it.
- Old behavior still possible: You introduce a new tool but don't turn off the old way.
The Change Management Framework (ADKAR)
ADKAR (Awareness, Desire, Knowledge, Ability, Reinforcement) is the most practical model for operational change.
A — Awareness: Does everyone understand WHY the change is needed?
- Don't just announce the new process — explain what was broken about the old one
- Share the data: "Our current onboarding takes 45 days; customers who onboard faster have 2x better retention. The new process targets 21 days."
D — Desire: Do people want to change?
- Resistance is information. Listen to it.
- Involve front-line workers in process design. People support what they help build.
- Address WIIFM (What's In It For Me) for each affected group
K — Knowledge: Do people know HOW to do the new process?
- Write it down (work instructions format above)
- Run live demos and practice sessions
- Create a "first time" checklist
A — Ability: Can people actually do the new process?
- Identify where people get stuck (first 2 weeks of rollout)
- Have a designated expert for questions
- Remove friction: if the new process requires 3 clicks where the old required 1, people will revert
R — Reinforcement: Does the change stick?
- Measure adoption (are people actually using the new process?)
- Celebrate early adopters
- Address non-adoption promptly — call it out without shame
Change Rollout Checklist
PRE-LAUNCH:
□ Process designed and documented
□ Stakeholders identified (people affected by change)
□ Champions identified (people who will help adoption)
□ Training materials created
□ Success metrics defined (how will you know it worked?)
□ Rollback plan documented (what if it breaks something?)
□ Launch timeline set and communicated
LAUNCH WEEK:
□ Announcement sent with WHY, WHAT, and WHEN
□ Training sessions held (at least 2 options for different schedules)
□ Feedback channel opened (Slack thread, form, or dedicated meeting)
□ Champions briefed to support peers
2-WEEK CHECK:
□ Adoption rate measured
□ Friction points documented
□ Quick fixes implemented
□ Feedback reviewed and responded to
30-DAY REVIEW:
□ Success metrics reviewed vs. baseline
□ Process adjustments made based on learnings
□ Champions recognized
□ Process documentation updated with lessons learned
90-DAY CLOSE:
□ Full adoption confirmed or non-adoption addressed
□ Process owners confirmed
□ Handoff to BAU (business as usual) operations
Managing Resistance
Types of resistance and responses:
| Resistance Type | What It Sounds Like | Right Response |
|---|---|---|
| Legitimate concern | "This process won't work because X happens" | Acknowledge, investigate, fix or explain |
| Anxiety | "I don't know how to do this" | Training, support, reassurance |
| Loss of control | "This takes away my judgment" | Involve them in design; give them ownership of part of it |
| Passive non-compliance | Silent ignoring of the new process | Direct conversation; make it visible and required |
| Organizational inertia | "We've always done it this way" | Show the cost of the status quo in concrete terms |
The three levers of adoption:
- Make the new way easier than the old way (remove the old path if possible)
- Make non-adoption visible (dashboards showing who's using the process)
- Connect process to meaningful outcomes (show how it affects things people care about)
Process Documentation Standards
Every process should have exactly one owner responsible for keeping it current.
Minimum documentation for any process:
- Process name and one-sentence purpose
- Owner: Named individual, not a team
- Trigger: What starts this process
- Steps: Written at the level that a new employee could execute
- Exceptions: Common edge cases and how to handle them
- Done definition: How you know the process is complete
- Review date: Set a future date when this gets reviewed
Documentation debt kills scale. The most valuable time to document is right after you've run the process for the third time — you've found the edge cases, you know the real steps, and the process is still fresh.
Framework Selection Guide
| Situation | Framework |
|---|---|
| We're slow and can't figure out why | Theory of Constraints — find the bottleneck |
| We have lots of waste and overhead | Lean — waste audit (TIMWOODS) |
| Process is inconsistent across team | Process mapping — Level 1 swim lane |
| Deciding what to automate | Automation decision framework + ROI calc |
| New process keeps getting ignored | ADKAR change management |
| Unclear who's responsible | RACI or DRI framework |
| Too many decisions escalating to leadership | RAPID decision rights |
Frameworks synthesized from: Eliyahu Goldratt's The Goal and Critical Chain; Womack and Jones' Lean Thinking; Prosci ADKAR model; Scaled Agile Framework (SAFe) process guidance; operational playbooks from Stripe, Airbnb, and Shopify operations teams.
Scaling Playbook: What Breaks at Each Growth Stage
Compiled from patterns across 100+ high-growth companies. Not theory — this is what actually breaks and what to do about it.
How to Use This Playbook
Each stage section covers:
- What breaks — the specific failure modes that kill companies at this stage
- Hiring — who to bring in and when
- Process — what to formalize vs. keep loose
- Tools — infrastructure that unlocks the next stage
- Communication — how information flow changes
- Culture — what to protect and what to let go
Benchmarks are medians — your mileage varies by sector, geography, and business model.
Stage 0: Pre-Seed / Seed ($0–$2M ARR, 1–15 people)
Key Benchmarks
| Metric | Benchmark |
|---|---|
| Revenue per employee | $0–$100K (still finding PMF) |
| Manager:IC ratio | N/A (no managers) |
| Burn multiple | 2–5x (acceptable) |
| Runway | 12–18 months minimum |
| Time-to-hire | 2–4 weeks |
What Breaks
Premature process. The #1 mistake at seed stage is adding process before you have a repeatable model. Sprint ceremonies, OKR frameworks, and performance reviews are all theater when you haven't found PMF. Every hour spent in process is an hour not spent learning.
Wrong first hires. Hiring "senior" people who've only worked in structured environments. You need people who can operate in chaos, not people who expect process to already exist.
Founder communication bottleneck. Founders try to be in every decision. Fine at 5 people, fatal at 12. No written decisions means knowledge lives in founders' heads — unscalable.
Technical debt accepted as strategy. "We'll fix it later" said about core data models, auth systems, or billing. Later comes at Series A and it costs 3x more to fix.
Hiring
- Don't hire for scale you don't have. Hire for the next 12 months.
- First 10 hires set culture permanently. Get them wrong and you'll spend years correcting.
- Hire athletes, not specialists. Generalists who can do multiple jobs outperform specialists at this stage.
- Avoid VP titles early. Inflated titles block future hires and create expectations you can't meet.
- Founder-referral bias is real. Your network is homogeneous. Force diversity early.
Who to hire first (in rough order):
- Engineers who can ship product (2–3 generalists)
- First sales/GTM if B2B (founder-led sales first, then one closer)
- Designer/product (often a hybrid)
- Customer success (often a founder at first)
Process
Formalize nothing before PMF. Literally. Run on Slack, shared docs, and founder judgment.
After PMF signals appear, formalize only:
- How you handle customer escalations
- How you deploy code (even basic CI/CD)
- How you onboard new hires (a 1-page checklist is enough)
Decision rule: If a founder has to answer the same question three times, write it down. Once.
Tools
| Function | Seed-Stage Tool |
|---|---|
| Communication | Slack + Google Workspace |
| Project tracking | Linear or Notion (pick one, stay consistent) |
| CRM | HubSpot free or Notion |
| Engineering | GitHub + basic CI (GitHub Actions) |
| Finance | Brex/Mercury + QuickBooks |
| HR | Rippling or Gusto (basic) |
| Analytics | Mixpanel or PostHog (free tier) |
Rule: One tool per function. No tool sprawl. Every extra tool is a coordination tax.
Communication
- Weekly all-hands (30 min max). What shipped, what's stuck, what's next.
- No status meetings. Anyone can see status in Linear/Notion.
- Founder write-ups. Every major decision gets a 1-paragraph Slack post explaining why.
- Group chat discipline. One channel per project/customer. Inbox zero mentality.
Culture
What to build deliberately:
- High ownership: everyone acts like they own the company, because they do
- Direct feedback: brutal honesty delivered with care
- Bias to ship: done > perfect
- Customer obsession: founders talk to customers weekly
What to watch for:
- "Hero culture" where one person saves everything — unsustainable
- Over-indexing on culture fit (code for homogeneity)
- Avoidance of conflict — mistaking silence for agreement
Stage 1: Series A ($2–$10M ARR, 15–50 people)
Key Benchmarks
| Metric | Benchmark |
|---|---|
| Revenue per employee | $100–$200K |
| Manager:IC ratio | 1:6–1:8 |
| Burn multiple | 1.5–2.5x |
| Sales efficiency (CAC payback) | <18 months |
| Churn (B2B SaaS) | <10% net annual |
| Engineering velocity | Feature shipped every 1–2 weeks |
| Time-to-hire | 4–6 weeks |
| Offer acceptance rate | >80% |
What Breaks
Founder-as-manager bottleneck. At 20+ people, founders can't manage everyone. The first layer of management needs to appear — and it's usually picked wrong (best IC ≠ best manager).
Tribal knowledge explosion. "Ask Sarah" stops working when Sarah has 15 things open. Documentation becomes critical — not for bureaucracy, but because institutional knowledge is now a flight risk.
Sales process fragmentation. Without a defined sales process, every rep closes differently. You can't train, debug, or scale what you can't see.
Scope creep in product. With Series A money comes investor pressure to expand scope. Teams try to build three things at once and ship nothing well.
Compensation chaos. Early employees got equity-heavy deals. New hires get market cash. Someone compares, someone gets upset. No comp philosophy = constant re-negotiation.
Recruiting becomes a job in itself. Founders can't hire 30 people themselves. First dedicated recruiter needed by 25 people.
Hiring
Who to hire at Series A:
- Head of Engineering (if founder is CTO): needs to be an operator, not just an architect
- First Sales Manager (when you have 3+ reps): don't promote the best seller
- HR/People Ops (generalist, by 30 people): comp, compliance, recruiting coordination
- Finance (fractional CFO or strong controller): Series A board needs real numbers
- Customer Success Lead: retention is everything at this stage
Hiring mistakes to avoid:
- Hiring "big company" execs who need large teams and established process
- Assuming your Series A lead can recruit (they can intro, not close)
- Taking too long — top candidates have 2–3 offers. Move in <2 weeks from first call to offer.
Leveling: Build a simple career ladder before the compensation complaints start. 3–4 levels per function is enough.
Process
What to formalize at Series A:
- Sprint planning (2-week sprints, public roadmap)
- Sales process (defined stages with entry/exit criteria)
- Onboarding (30/60/90 day plan for each function)
- 1:1 cadence (weekly for direct reports, bi-weekly for skip-levels)
- Incident response (P0/P1/P2 definition, on-call rotation)
- Quarterly planning (OKRs or goals framework — keep it lightweight)
What to keep loose:
- Internal project process (let teams self-organize)
- Meeting formats (let teams evolve their own rituals)
- Tool selection within approved stack
Documentation standard: Write decisions down in a shared wiki. "Decision log" with date, decision, context, owner, and outcome. Takes 5 minutes, saves hours.
Tools
| Function | Series A Tool |
|---|---|
| Project/Product | Linear + Notion |
| CRM | HubSpot or Salesforce (Starter) |
| Engineering | GitHub + CI/CD pipeline + Sentry |
| HR/People | Rippling or Lattice (performance) |
| Finance | NetSuite or QBO + Brex |
| Analytics | Mixpanel/Amplitude + Looker (or Metabase) |
| Customer Success | Intercom + HubSpot or Zendesk |
| Docs | Notion or Confluence |
Communication
Introduce structured communication layers:
- Company all-hands (monthly, 60 min): CEO share, metrics review, team spotlights, Q&A
- Leadership sync (weekly, 60 min): cross-functional issues, blockers, priorities
- Team standups (async or 15 min daily): what's in progress, what's blocked
- 1:1s (weekly): direct report health, career, performance
- Written updates (weekly to investors + board): CEO memo format
Information hierarchy: Everyone in the company should know: (1) company goals this quarter, (2) their team's goals, (3) what they personally own. If they don't, your communication structure is broken.
Culture
Deliberate culture work starts here. You're too big for culture to be accidental.
- Write down values. Real values with examples of what they look like in action. Not "integrity" — "we tell investors bad news before we tell them good news."
- Performance management. First PIPs (Performance Improvement Plans) happen at this stage. Handle them well — the team is watching.
- Equity culture. Make sure people understand what their equity is worth in different outcomes. Lack of transparency breeds resentment.
- First layoff plan. Even if you never use it, know the criteria. Reactive layoffs destroy trust; plan-based ones (even painful) preserve it.
Stage 2: Series B ($10–$30M ARR, 50–150 people)
Key Benchmarks
| Metric | Benchmark |
|---|---|
| Revenue per employee | $150–$300K |
| Manager:IC ratio | 1:5–1:7 |
| Burn multiple | 1.0–1.5x |
| CAC payback | <12 months |
| NRR (net revenue retention) | >110% |
| Engineering: Product ratio | ~3:1 |
| Sales: CS ratio | ~3:1 |
| Time-to-hire (senior) | 6–10 weeks |
| Annual attrition | <15% voluntary |
What Breaks
Middle management void. You now have managers managing managers. The "player-coach" model breaks — people can't be ICs and managers simultaneously at this scale. Force the choice.
Planning misalignment. Sales promises what product hasn't built. Product builds what customers didn't ask for. Engineering ships what QA didn't test. Fixing this requires cross-functional planning ceremonies.
Data fragmentation. Five different versions of "how are we doing." Sales sees Salesforce. Product sees Amplitude. Finance sees spreadsheets. Nobody agrees. You need a single source of truth.
Process debt. The Series A processes are starting to creak. Onboarding that worked for 5 hires/quarter doesn't work for 20. Customer escalation paths built for 50 customers fail at 500.
Cultural fragmentation. Engineering culture ≠ Sales culture ≠ Support culture. Sub-cultures form. The shared identity you had at 30 people requires active work to maintain at 100.
The "brilliant jerk" problem. High performers with bad behavior were tolerated early. Now they're managers with bad behavior, and it's systemic. Act decisively or lose your best people.
Hiring
Who to hire at Series B:
- COO or VP Operations: founder is overwhelmed, someone needs to run the machine
- VP Sales: first Sales Manager won't scale to 20-rep org
- VP Marketing: demand gen and brand need dedicated ownership
- Dedicated Recruiting: 2–3 recruiters minimum; you're hiring 30–50 people/year
- Data/Analytics: dedicated analyst or data engineer to consolidate reporting
- Legal counsel: fractional or in-house; contracts and compliance are getting complex
The "big company exec" trap. Series B is when companies hire their first VP from FAANG or a large SaaS company. 60% of these fail within 18 months. They're used to: large teams, established brand, existing process, political navigation. They struggle with: scrappy execution, no support staff, ambiguous direction. Vet explicitly for startup experience.
Span of control. At this stage, hold managers to 5–8 direct reports. More than 8 = no time for actual management. Less than 3 = management overhead isn't justified.
Process
What to formalize at Series B:
- Quarterly Business Reviews (QBRs) — every function presents metrics, wins, gaps
- Annual planning — budget, headcount plan, strategic priorities
- Cross-functional roadmap alignment — product/sales/marketing in sync quarterly
- Promotion criteria — written, public, applied consistently
- Interview scorecards — structured interviews with defined rubrics
- Change management — how major process changes get communicated and adopted
- Vendor management — evaluation criteria, approval process, contract management
SOPs for critical processes:
- Customer onboarding (if >50 customers)
- Sales handoff from SDR to AE to CS
- Engineering release process
- Incident response playbook
- Contractor/vendor procurement
Tools
| Function | Series B Tool |
|---|---|
| Project/Product | Jira or Linear (with roadmapping) |
| CRM | Salesforce (full) |
| ERP/Finance | NetSuite |
| HR | Workday or BambooHR + Lattice |
| Analytics | Looker or Tableau + data warehouse |
| Customer Success | Gainsight or ChurnZero |
| Engineering | GitHub Enterprise + full CI/CD + observability |
| Security | 1Password Teams + SSO (Okta) + endpoint management |
Communication
At 50+ people, informal communication breaks down. Information no longer flows naturally — it has to be architected.
Communication stack:
- Monthly all-hands (90 min): metrics deep-dive, strategy update, team Q&A
- Weekly leadership team (90 min): cross-functional priorities, decisions, escalations
- Bi-weekly skip-levels (30 min): every manager holds these with their manager's reports
- Quarterly town halls (2 hrs): broader context, financial update, roadmap preview
- Written company update (bi-weekly): CEO to all-hands via Slack/email
The information gradient problem. People at the top know too much. People at the bottom know too little. Fix this with a deliberate "broadcast" culture — any decision affecting more than 5 people gets written up and shared.
Culture
Retention becomes an existential issue. At Series B, you have 50–150 people who've been with you through something hard. They're valuable. And they have options.
- Career ladders are non-negotiable by this stage. People leave when they can't see a future.
- Manager quality determines retention. Invest in manager training. Run manager effectiveness surveys.
- Compensation benchmarking quarterly. If you're more than 10% below market, you're losing people silently.
- Culture carriers. Identify the 10–15 people who embody your culture and make them formally responsible for transmitting it. Give them a platform.
Stage 3: Series C ($30–$75M ARR, 150–500 people)
Key Benchmarks
| Metric | Benchmark |
|---|---|
| Revenue per employee | $200–$400K |
| Manager:IC ratio | 1:5–1:6 |
| Burn multiple | 0.75–1.25x |
| NRR | >115% |
| CAC payback | <9 months |
| Sales cycle (Enterprise) | 60–120 days |
| Engineering team % | 30–40% of headcount |
| Annual attrition target | <12% voluntary |
| Time-to-hire (senior) | 8–12 weeks |
What Breaks
Strategy execution gap. Leadership agrees on strategy. Middle management interprets it differently. ICs execute on their interpretation. By the time work ships, it barely resembles the original strategy. Fix: strategy must cascade in writing with explicit outcomes.
Process bureaucracy. The processes you built at Series B start generating bureaucracy. Approval chains lengthen. Simple decisions require three meetings. The antidote is explicit process owners empowered to eliminate friction.
Org design complexity. Do you have functional teams (all engineers in one org) or product teams (engineers embedded in product squads)? The answer affects everything: career paths, knowledge sharing, delivery speed. Most companies get this wrong twice before getting it right.
Geographic complexity. First international office or remote-heavy team introduces timezone, communication, and culture challenges that don't exist when everyone is in one room.
Leadership team dysfunction. Seven VPs who were all individual contributors two years ago are now running $10M+ organizations. Some have grown into it. Some haven't. This is the stage where hard leadership team changes happen.
Hiring
Series C hiring is about depth, not breadth. You have functional coverage — now hire people who go deep within functions.
- Functional leaders' deputies: VP Engineering needs a Director of Platform Engineering, Director of Product Engineering, etc.
- Internal promotions: 40–60% of leadership roles should be filled internally by now. If you're hiring externally for everything, you've failed at development.
- Specialists: Security, data science, UX research, RevOps — functions that were "shared" become dedicated.
- General Counsel: Legal volume justifies full-time counsel.
Headcount planning discipline. Every hire should have a business case. "The team is busy" is not a business case. "This role will unlock $X in revenue or save Y hours/week" is a business case.
Process
Process consolidation. Audit every process. Kill anything that doesn't have a clear owner and clear outcome. The average Series C company has 40% more process than it needs.
Key processes to have locked at Series C:
- Annual planning cycle (strategy → goals → headcount → budget)
- Quarterly operating review (progress against plan, forecast, adjustments)
- Product development lifecycle (discovery → design → build → launch → measure)
- Revenue operations (forecasting, pipeline management, territory planning)
- People operations (performance cycles, promotion cadence, compensation philosophy)
- Risk management (operational, security, compliance, legal)
Delegation architecture. At 200+ people, the COO cannot know about every decision. Build explicit decision rights: what decisions require CEO/COO approval vs. VP vs. Director vs. IC.
Tools
Consolidate the tech stack. By Series C, you have tool sprawl. The average 200-person company has 100+ SaaS tools. 40% are redundant. Consolidation saves $200–500K/year and reduces security surface.
Must-have by Series C:
- Enterprise SSO (Okta/Google Workspace with MFA everywhere)
- Data warehouse (Snowflake/BigQuery) + BI layer
- HRIS with performance management (Workday, Rippling, BambooHR)
- Revenue intelligence (Gong, Chorus)
- Security tooling (endpoint, SIEM basics, SOC 2 compliance)
Communication
Internal comms becomes a function. You cannot rely on ad-hoc Slack and email at 200+ people. Someone needs to own internal communications.
- Monthly CEO update (written, 500 words max): company performance, strategic context, what's next
- Quarterly all-hands (2 hrs): comprehensive business review, open Q&A
- Leadership alignment sessions (quarterly): leadership team off-site to calibrate on strategy
- Manager cascade (after every major announcement): managers brief their teams with tailored context
Culture
Culture is now a function, not an instinct. By Series C, your original culture-carriers are managers or have left. New people joining have never seen how you worked when you were small.
- Culture explicitly documented — not a values poster, a behavioral handbook
- Onboarding redesigned for culture transmission at scale
- Manager enablement — managers are your primary culture delivery mechanism; invest heavily
- Listening infrastructure — eNPS quarterly, exit interviews, skip-level feedback — all analyzed systematically
Stage 4: Growth Stage ($75M+ ARR, 500+ people)
Key Benchmarks
| Metric | Benchmark |
|---|---|
| Revenue per employee | $300–$600K |
| Manager:IC ratio | 1:4–1:6 |
| Burn multiple (path to profitability) | <0.5x |
| NRR | >120% |
| S&M as % of revenue | 25–35% |
| R&D as % of revenue | 15–25% |
| G&A as % of revenue | 8–12% |
| Rule of 40 | >40 (growth rate + profit margin) |
| Annual attrition target | <10% voluntary |
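The Rule of 40 row in the table is a simple sum, but it is worth computing explicitly during planning. A sketch:

```python
def rule_of_40(growth_rate_pct: float, profit_margin_pct: float) -> float:
    """Rule of 40 = YoY revenue growth % + profit (or free-cash-flow) margin %."""
    return growth_rate_pct + profit_margin_pct

print(rule_of_40(35, 10))   # 45: clears the >40 bar
print(rule_of_40(25, -10))  # 15: below the red-flag threshold of 20
```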
What Breaks
Execution at scale. The larger you are, the harder it is to move fast. The average decision at a 500-person company takes 3x longer than at a 50-person company. This is not inevitable — but fixing it requires explicit investment.
Internal politics. Org boundaries create fiefdoms. VPs protect headcount. Teams optimize for their metrics at the expense of company metrics. This is the #1 culture problem at scale.
Innovation starvation. The core business is optimized, but new bets are starved of resources. The people working on new initiatives are constrained by processes designed for a mature product. Structural solution required: separate P&L, separate team, different metrics.
Middle management bloat. Growth-stage companies often have too many managers and not enough ICs. A manager managing one other manager managing three ICs is a 3-level chain where 2 people add no value. Flatten aggressively.
Hiring
You're now competing for talent with FAANG. Your advantage is mission, equity, and the ability to have impact. Candidates who want to join a Fortune 500 will not join you. Stop trying to attract them.
- Leadership pipeline: promote from within at 50%+ for senior roles
- Talent density over headcount: 30 strong engineers > 50 average engineers
- Diverse hiring: by this stage, lack of diversity is a business problem, not just an ethical one
Operational Priorities at Scale
- Operational efficiency over growth: headcount growth should lag revenue growth
- Process ownership: every major process has a named owner accountable for outcomes
- Quarterly operating model: budget vs. actual, full P&L transparency to VP level
- Automation: manual operational processes that cost >40 hrs/week should be automated
Cross-Stage Principles
The Three Things That Kill Companies at Every Stage
- Running out of cash before finding the next unlock — runway management is sacred
- Hiring the wrong person for a critical role — one bad VP can set you back 18 months
- Moving too slowly — market timing matters; perfect is the enemy of shipped
The Org Design Progression
Seed: Flat | Everyone reports to founder | No structure
Series A: Functional pods | First-line managers | Light structure
Series B: Functional departments | VPs emerge | Defined structure
Series C: Business units or product squads | Directors + VPs | Full structure
Growth: Divisional or matrix | EVPs/SVPs | Corporate structure
Revenue per Employee by Function (B2B SaaS benchmarks)
| Function | Series A | Series B | Series C | Growth |
|---|---|---|---|---|
| Engineering | $400K | $500K | $600K | $700K |
| Sales | $250K | $350K | $450K | $500K |
| Customer Success | $300K | $400K | $500K | $600K |
| Marketing | $500K | $700K | $900K | $1M+ |
| G&A | $600K | $800K | $1M | $1.2M |
Revenue per employee = ARR / headcount in function
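The formula above, applied to a hypothetical Series B engineering org (figures are illustrative, not from the table):

```python
def revenue_per_employee(arr: float, headcount: int) -> float:
    """ARR divided by headcount in the function."""
    return arr / headcount

# Hypothetical: $20M ARR attributed to a 45-person engineering org
print(f"${revenue_per_employee(20_000_000, 45):,.0f} per engineer")
# $444,444 per engineer: near the Series B engineering benchmark of $500K
```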
The Management Span Rule
- Individual contributors being managed: 1 manager per 6–8 ICs
- Managers being managed: 1 director per 4–6 managers
- Directors being managed: 1 VP per 3–5 directors
- VPs being managed: 1 C-level per 5–8 VPs
Violating these spans creates either manager burnout (spans too wide) or management theater (spans too narrow).
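The span rules above imply a rough layer count for a given IC population. A minimal sketch, assuming the midpoint of each recommended range (7 ICs per manager, 5 managers per director, 4 directors per VP — the midpoints are assumptions, not the only valid reading of the ranges):

```python
import math

# Rough layer estimate from the span-of-control rules, using midpoints:
# ~7 ICs per manager, ~5 managers per director, ~4 directors per VP.
SPANS = [("managers", 7), ("directors", 5), ("vps", 4)]

def management_layers(ic_count: int) -> dict:
    """Return the approximate headcount needed at each management layer."""
    layers = {"ics": ic_count}
    count = ic_count
    for title, span in SPANS:
        count = math.ceil(count / span)
        layers[title] = count
        if count == 1:
            break  # one leader can hold this layer; stop adding layers
    return layers

print(management_layers(120))  # 120 ICs → 18 managers → 4 directors → 1 VP
```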
Red Flags by Stage
| Stage | Red Flag | Likely Cause |
|---|---|---|
| Seed | Missed 3+ product deadlines | Wrong team or unclear prioritization |
| Series A | Churn >20% | PMF not actually found, or CS underfunded |
| Series B | >6-month sales cycle on SMB | Pricing/packaging problem |
| Series C | NRR <100% | Product-market fit eroding or CS broken |
| Growth | Rule of 40 <20 | Efficiency problem; hiring ahead of revenue |
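For the Growth-stage row, Rule of 40 is revenue growth rate plus profit margin (commonly FCF or EBITDA margin), both in percentage points. A one-liner makes the red-flag line concrete; the example figures are illustrative:

```python
def rule_of_40(growth_pct: float, margin_pct: float) -> float:
    """Rule of 40 = revenue growth rate + profit margin, in percentage points."""
    return growth_pct + margin_pct

# 35% growth with a -20% FCF margin scores 15 — under the <20 red-flag line.
print(rule_of_40(growth_pct=35, margin_pct=-20))  # → 15
```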
Sources: Sequoia, a16z operating frameworks; First Round Capital COO benchmarks; SaaStr metrics databases; OpenView SaaS benchmarks; Bain operational maturity models.
#!/usr/bin/env python3
"""
okr_tracker.py — OKR Cascade and Alignment Tracker
Tracks OKR progress from company → department → team level.
Calculates scores, flags at-risk key results, and generates alignment reports.
Scoring: Google's 0.0–1.0 scale (target: 0.6–0.7; hitting 1.0 means goal was too easy)
Usage:
python okr_tracker.py # Runs with sample data
python okr_tracker.py --input okrs.json # Custom OKR data
python okr_tracker.py --input okrs.json --output report.txt
python okr_tracker.py --format json # Machine-readable output
"""
import json
import sys
import argparse
from datetime import datetime, date
from typing import Any
# ---------------------------------------------------------------------------
# Scoring Engine
# ---------------------------------------------------------------------------
# OKR health thresholds (Google-style 0.0–1.0 scale)
SCORE_THRESHOLDS = {
"on_track": 0.70, # Above this: healthy
"at_risk": 0.40, # Between at_risk and on_track: needs attention
# Below at_risk: off track
}
STATUS_LABELS = {
"on_track": "🟢 On Track",
"at_risk": "🟡 At Risk",
"off_track": "🔴 Off Track",
"complete": "✅ Complete",
"not_started": "⬜ Not Started",
}
RISK_LABELS = {
"critical": "🔴 Critical",
"high": "🟠 High",
"medium": "🟡 Medium",
"low": "🟢 Low",
}
def calculate_kr_score(kr: dict) -> float:
"""
Calculate a Key Result's progress score (0.0–1.0).
Supports multiple KR types:
- numeric: current_value / target_value
- percentage: current_pct / target_pct
- milestone: milestone_score (0.0–1.0 provided directly)
- boolean: done (1.0) / not done (0.0)
"""
kr_type = kr.get("type", "numeric")
if kr_type == "boolean":
return 1.0 if kr.get("done", False) else 0.0
elif kr_type == "milestone":
# Milestone KRs have explicit score (0.0–1.0) or count of milestones hit
milestones_total = kr.get("milestones_total", 1)
milestones_hit = kr.get("milestones_hit", 0)
explicit_score = kr.get("score")
if explicit_score is not None:
return max(0.0, min(1.0, float(explicit_score)))
return milestones_hit / milestones_total if milestones_total > 0 else 0.0
elif kr_type == "percentage":
target = kr.get("target_pct", 100)
current = kr.get("current_pct", 0)
baseline = kr.get("baseline_pct", 0)
if target == baseline:
return 0.0
score = (current - baseline) / (target - baseline)
return max(0.0, min(1.0, score))
else: # numeric (default)
target = kr.get("target_value", 0)
current = kr.get("current_value", 0)
baseline = kr.get("baseline_value", 0)
if target == baseline:
return 0.0
# Handle "lower is better" metrics (e.g., churn, response time)
if kr.get("lower_is_better", False):
if current <= target:
return 1.0
improvement = baseline - current
needed = baseline - target
score = improvement / needed if needed != 0 else 0.0
else:
score = (current - baseline) / (target - baseline)
return max(0.0, min(1.0, score))
def get_kr_status(score: float, quarter_progress: float, kr: dict) -> str:
"""
Determine KR status based on score, time elapsed in quarter, and trend.
A KR is at-risk if its score is significantly behind the time elapsed.
E.g., if we're 70% through the quarter but KR is at 30%, it's at risk.
"""
if kr.get("done", False):
return "complete"
# Not started
if score == 0.0 and quarter_progress < 0.1:
return "not_started"
# Check against absolute thresholds
if score >= SCORE_THRESHOLDS["on_track"]:
return "on_track"
# Adjust for time: early in the quarter, lower scores are acceptable,
# so scale the at-risk floor down by quarter progress.
adjusted_threshold = SCORE_THRESHOLDS["at_risk"] * (quarter_progress or 0.5)
if score >= min(adjusted_threshold, SCORE_THRESHOLDS["at_risk"]):
return "at_risk"
return "off_track"
def calculate_objective_score(objective: dict, quarter_progress: float) -> dict:
"""
Score an objective based on its key results.
Returns scored objective with KR scores and status.
"""
key_results = objective.get("key_results", [])
if not key_results:
return {**objective, "score": 0.0, "status": "not_started", "key_results_scored": []}
scored_krs = []
for kr in key_results:
score = calculate_kr_score(kr)
status = get_kr_status(score, quarter_progress, kr)
# Calculate time-adjusted gap
expected_score = quarter_progress * 0.85 # Expect 85% of time-proportional progress
gap = expected_score - score
risk_level = _assess_kr_risk(score, status, gap, quarter_progress, kr)
scored_krs.append({
**kr,
"score": round(score, 3),
"score_pct": f"{score * 100:.0f}%",
"status": status,
"status_label": STATUS_LABELS.get(status, status),
"expected_score": round(expected_score, 3),
"gap_vs_expected": round(gap, 3),
"risk_level": risk_level,
"risk_label": RISK_LABELS.get(risk_level, risk_level),
})
# Objective score = weighted average of KR scores
# Weight is explicit in KR data or defaults to equal weight
total_weight = sum(kr.get("weight", 1.0) for kr in key_results)
weighted_score = sum(
kr_scored["score"] * kr.get("weight", 1.0)
for kr_scored, kr in zip(scored_krs, key_results)
)
obj_score = weighted_score / total_weight if total_weight > 0 else 0.0
# Objective status = worst KR status (a chain is only as strong as weakest link)
status_priority = {"off_track": 0, "at_risk": 1, "not_started": 2, "on_track": 3, "complete": 4}
obj_status = min(scored_krs, key=lambda x: status_priority.get(x["status"], 2))["status"]
return {
**objective,
"score": round(obj_score, 3),
"score_pct": f"{obj_score * 100:.0f}%",
"status": obj_status,
"status_label": STATUS_LABELS.get(obj_status, obj_status),
"key_results_scored": scored_krs,
}
def _assess_kr_risk(
score: float,
status: str,
gap: float,
quarter_progress: float,
kr: dict,
) -> str:
"""Assess risk level for a key result."""
if status == "complete" or status == "on_track":
return "low"
weeks_remaining = kr.get("weeks_remaining", max(1, int((1 - quarter_progress) * 13)))
# Critical: off track with <4 weeks left
if status == "off_track" and weeks_remaining <= 4:
return "critical"
# High: significantly behind with limited time
if gap > 0.3 and weeks_remaining <= 6:
return "high"
# High: off track regardless of time
if status == "off_track":
return "high"
# Medium: at risk
if status == "at_risk":
return "medium"
return "low"
# ---------------------------------------------------------------------------
# OKR Cascade and Alignment Analysis
# ---------------------------------------------------------------------------
def build_okr_tree(data: dict, quarter_progress: float) -> dict:
"""
Build scored OKR tree: company → departments → teams.
Returns full hierarchy with scores at every level.
"""
company = data.get("company_okrs", {})
departments = data.get("department_okrs", [])
teams = data.get("team_okrs", [])
# Score company-level OKRs
company_scored = {
"name": company.get("name", "Company"),
"quarter": company.get("quarter", ""),
"objectives": [
calculate_objective_score(obj, quarter_progress)
for obj in company.get("objectives", [])
],
}
# Score department-level OKRs
depts_scored = []
for dept in departments:
dept_objectives = [
calculate_objective_score(obj, quarter_progress)
for obj in dept.get("objectives", [])
]
dept_score = (
sum(o["score"] for o in dept_objectives) / len(dept_objectives)
if dept_objectives else 0.0
)
depts_scored.append({
**dept,
"objectives": dept_objectives,
"overall_score": round(dept_score, 3),
"overall_score_pct": f"{dept_score * 100:.0f}%",
})
# Score team-level OKRs
teams_scored = []
for team in teams:
team_objectives = [
calculate_objective_score(obj, quarter_progress)
for obj in team.get("objectives", [])
]
team_score = (
sum(o["score"] for o in team_objectives) / len(team_objectives)
if team_objectives else 0.0
)
teams_scored.append({
**team,
"objectives": team_objectives,
"overall_score": round(team_score, 3),
"overall_score_pct": f"{team_score * 100:.0f}%",
})
return {
"company": company_scored,
"departments": depts_scored,
"teams": teams_scored,
}
def analyze_alignment(okr_tree: dict) -> dict:
"""
Analyze how team and department OKRs align to company OKRs.
Flags: orphaned OKRs (no company parent), missing coverage (company OKR with no team support).
"""
company_objective_ids = {
obj.get("id") for obj in okr_tree["company"].get("objectives", [])
if obj.get("id")
}
# Collect all alignment references from dept and team OKRs
alignment_map: dict[str, list[str]] = {oid: [] for oid in company_objective_ids}
orphaned = []
all_supporting = []
def check_objectives(objectives: list, owner_name: str, level: str):
for obj in objectives:
supports = obj.get("supports_company_objective_ids", [])
if not supports:
# Check if it's supposed to support something
if obj.get("supports_company_objective_id"):
supports = [obj["supports_company_objective_id"]]
if not supports:
orphaned.append({
"level": level,
"owner": owner_name,
"objective": obj.get("title", obj.get("name", "Unknown")),
"issue": "No link to company objective — may be misaligned or low priority",
})
else:
for cid in supports:
if cid in alignment_map:
alignment_map[cid].append(f"{level}:{owner_name}")
all_supporting.append(cid)
else:
orphaned.append({
"level": level,
"owner": owner_name,
"objective": obj.get("title", obj.get("name", "Unknown")),
"issue": f"References company objective '{cid}' which doesn't exist",
})
for dept in okr_tree["departments"]:
check_objectives(dept["objectives"], dept.get("name", "Unknown Dept"), "Department")
for team in okr_tree["teams"]:
check_objectives(team["objectives"], team.get("name", "Unknown Team"), "Team")
# Find company objectives with no support from below
unsupported = []
for obj in okr_tree["company"].get("objectives", []):
obj_id = obj.get("id")
if obj_id and obj_id not in all_supporting:
unsupported.append({
"objective_id": obj_id,
"objective": obj.get("title", obj.get("name", "Unknown")),
"issue": "No department or team OKR explicitly supports this company objective",
})
coverage_score = (
len(set(all_supporting)) / len(company_objective_ids) * 100
if company_objective_ids else 100
)
return {
"alignment_map": alignment_map,
"orphaned_okrs": orphaned,
"unsupported_company_objectives": unsupported,
"coverage_score_pct": round(coverage_score, 1),
}
def collect_at_risk_krs(okr_tree: dict) -> list[dict]:
"""Collect all at-risk and off-track key results across the full OKR tree."""
at_risk = []
def scan_objectives(objectives: list, owner: str, level: str):
for obj in objectives:
for kr in obj.get("key_results_scored", []):
if kr["status"] in ("at_risk", "off_track"):
at_risk.append({
"level": level,
"owner": owner,
"objective": obj.get("title", obj.get("name", "Unknown")),
"key_result": kr.get("title", kr.get("name", "Unknown")),
"score": kr["score"],
"score_pct": kr["score_pct"],
"status": kr["status"],
"status_label": kr["status_label"],
"risk_level": kr["risk_level"],
"risk_label": kr["risk_label"],
"gap_vs_expected": kr["gap_vs_expected"],
"notes": kr.get("notes", ""),
})
scan_objectives(
okr_tree["company"].get("objectives", []),
okr_tree["company"].get("name", "Company"),
"Company",
)
for dept in okr_tree["departments"]:
scan_objectives(dept["objectives"], dept.get("name", ""), "Department")
for team in okr_tree["teams"]:
scan_objectives(team["objectives"], team.get("name", ""), "Team")
# Sort: off_track before at_risk, then by gap
status_order = {"off_track": 0, "at_risk": 1}
at_risk.sort(key=lambda x: (status_order.get(x["status"], 2), -x.get("gap_vs_expected", 0)))
return at_risk
# ---------------------------------------------------------------------------
# Report Formatter
# ---------------------------------------------------------------------------
def _score_bar(score: float, width: int = 20) -> str:
"""Render a text progress bar for a 0.0–1.0 score."""
filled = round(score * width)
bar = "█" * filled + "░" * (width - filled)
return f"[{bar}] {score * 100:.0f}%"
def format_report(
okr_tree: dict,
alignment: dict,
at_risk_krs: list[dict],
quarter_progress: float,
quarter_label: str,
) -> str:
"""Format full OKR tracking report as plain text."""
lines = []
now = datetime.now().strftime("%Y-%m-%d %H:%M")
company_name = okr_tree["company"].get("name", "Company")
lines.append("=" * 70)
lines.append(f"OKR TRACKING REPORT — {company_name}")
lines.append(f"Quarter: {quarter_label} | Quarter progress: {quarter_progress * 100:.0f}%")
lines.append(f"Generated: {now}")
lines.append("=" * 70)
# --- Executive Summary ---
lines.append("\n📊 EXECUTIVE SUMMARY")
lines.append("-" * 40)
company_objectives = okr_tree["company"].get("objectives", [])
if company_objectives:
company_avg = sum(o["score"] for o in company_objectives) / len(company_objectives)
on_track = sum(1 for o in company_objectives if o["status"] == "on_track")
at_risk = sum(1 for o in company_objectives if o["status"] == "at_risk")
off_track = sum(1 for o in company_objectives if o["status"] == "off_track")
lines.append(f"Company OKR Score: {_score_bar(company_avg)}")
lines.append(f"Objectives: {len(company_objectives)} total — "
f"🟢 {on_track} on track, 🟡 {at_risk} at risk, 🔴 {off_track} off track")
lines.append(f"At-risk KRs (all): {len(at_risk_krs)}")
lines.append(f"Alignment coverage: {alignment['coverage_score_pct']}% of company objectives have team support")
# Overall health assessment
if company_avg >= 0.7:
health = "🟢 HEALTHY — On track for a strong quarter"
elif company_avg >= 0.5:
health = "🟡 CAUTION — Some objectives need attention"
elif company_avg >= 0.3:
health = "🔴 AT RISK — Multiple objectives behind; intervention needed"
else:
health = "🚨 CRITICAL — Quarter in serious jeopardy; executive review required"
lines.append(f"\nOverall Health: {health}")
# --- Company OKRs ---
lines.append("\n\n🏢 COMPANY OKRs")
lines.append("-" * 40)
for obj in company_objectives:
lines.append(f"\n Objective: {obj.get('title', obj.get('name', 'Unknown'))}")
lines.append(f" Owner: {obj.get('owner', 'Unassigned')} | Score: {_score_bar(obj['score'], 15)} {obj['status_label']}")
for kr in obj.get("key_results_scored", []):
risk_marker = f" {kr['risk_label']}" if kr["risk_level"] in ("critical", "high") else ""
lines.append(f"\n KR: {kr.get('title', kr.get('name', 'Unknown'))}")
lines.append(f" Score: {_score_bar(kr['score'], 12)} {kr['status_label']}{risk_marker}")
# Show actual progress
if kr.get("type") == "numeric":
current = kr.get("current_value", "?")
target = kr.get("target_value", "?")
baseline = kr.get("baseline_value", 0)
unit = kr.get("unit", "")
lines.append(f" Progress: {current}{unit} / {target}{unit} (baseline: {baseline}{unit})")
elif kr.get("type") == "percentage":
lines.append(f" Progress: {kr.get('current_pct', '?')}% / {kr.get('target_pct', '?')}%")
elif kr.get("type") == "milestone":
hit = kr.get("milestones_hit", "?")
total = kr.get("milestones_total", "?")
lines.append(f" Milestones: {hit} / {total}")
if kr.get("notes"):
lines.append(f" Note: {kr['notes']}")
# --- Department OKRs ---
lines.append("\n\n🏬 DEPARTMENT OKRs")
lines.append("-" * 40)
for dept in okr_tree["departments"]:
lines.append(f"\n 📁 {dept.get('name', 'Unknown')} | Score: {_score_bar(dept['overall_score'], 15)}")
for obj in dept.get("objectives", []):
lines.append(f"\n Objective: {obj.get('title', obj.get('name', 'Unknown'))}")
lines.append(f" Owner: {obj.get('owner', 'Unassigned')} | {obj['status_label']}")
supports = obj.get("supports_company_objective_ids", [])
if supports:
lines.append(f" Supports: Company Objective(s) {', '.join(supports)}")
for kr in obj.get("key_results_scored", []):
risk_marker = f" {kr['risk_label']}" if kr["risk_level"] in ("critical", "high") else ""
lines.append(f"\n KR: {kr.get('title', kr.get('name', 'Unknown'))}")
lines.append(f" {_score_bar(kr['score'], 10)} {kr['status_label']}{risk_marker}")
# --- Team OKRs ---
if okr_tree["teams"]:
lines.append("\n\n👥 TEAM OKRs")
lines.append("-" * 40)
for team in okr_tree["teams"]:
lines.append(f"\n 📋 {team.get('name', 'Unknown')} | Score: {_score_bar(team['overall_score'], 15)}")
for obj in team.get("objectives", []):
lines.append(f"\n Objective: {obj.get('title', obj.get('name', 'Unknown'))}")
supports = obj.get("supports_company_objective_ids", [])
if supports:
lines.append(f" Supports: {', '.join(supports)}")
for kr in obj.get("key_results_scored", []):
risk_marker = f" {kr['risk_label']}" if kr["risk_level"] in ("critical", "high") else ""
lines.append(
f" • {kr.get('title', kr.get('name', 'Unknown'))}: "
f"{kr['score_pct']} {kr['status_label']}{risk_marker}"
)
# --- At-Risk KRs ---
lines.append("\n\n⚠️ AT-RISK KEY RESULTS (Action Required)")
lines.append("-" * 40)
if not at_risk_krs:
lines.append("✅ No key results currently at risk or off track.")
else:
critical = [kr for kr in at_risk_krs if kr["risk_level"] == "critical"]
high = [kr for kr in at_risk_krs if kr["risk_level"] == "high"]
medium = [kr for kr in at_risk_krs if kr["risk_level"] == "medium"]
for group_label, group in [("🔴 CRITICAL", critical), ("🟠 HIGH", high), ("🟡 MEDIUM", medium)]:
if not group:
continue
lines.append(f"\n{group_label} ({len(group)} items):")
for kr in group:
lines.append(f"\n [{kr['level']}] {kr['owner']}")
lines.append(f" Obj: {kr['objective']}")
lines.append(f" KR: {kr['key_result']}")
lines.append(f" Score: {kr['score_pct']} {kr['status_label']} (gap vs expected: {kr['gap_vs_expected'] * 100:.0f}pp)")
if kr["notes"]:
lines.append(f" Note: {kr['notes']}")
# --- Alignment Report ---
lines.append("\n\n🔗 ALIGNMENT REPORT")
lines.append("-" * 40)
lines.append(f"Alignment coverage: {alignment['coverage_score_pct']}% of company objectives have explicit support\n")
# Show alignment map
lines.append("Company Objective Coverage:")
for obj in company_objectives:
obj_id = obj.get("id", "")
supporters = alignment["alignment_map"].get(obj_id, [])
obj_name = obj.get("title", obj.get("name", obj_id))
count = len(supporters)
marker = "✅" if count > 0 else "⚠️ "
lines.append(f" {marker} [{obj_id}] {obj_name}")
if supporters:
for s in supporters:
lines.append(f" ↑ {s}")
else:
lines.append(f" ↑ (no department or team OKR supports this)")
if alignment["unsupported_company_objectives"]:
lines.append(f"\n⚠️ Unsupported Company Objectives ({len(alignment['unsupported_company_objectives'])}):")
for u in alignment["unsupported_company_objectives"]:
lines.append(f" • [{u['objective_id']}] {u['objective']}")
lines.append(f" → {u['issue']}")
if alignment["orphaned_okrs"]:
lines.append(f"\n⚠️ Orphaned OKRs (not linked to company objectives):")
for o in alignment["orphaned_okrs"]:
lines.append(f" • [{o['level']}] {o['owner']}: {o['objective']}")
lines.append(f" → {o['issue']}")
# --- Recommendations ---
lines.append("\n\n📋 RECOMMENDED ACTIONS")
lines.append("-" * 40)
recs = _generate_recommendations(okr_tree, at_risk_krs, alignment, quarter_progress)
for i, rec in enumerate(recs, 1):
lines.append(f"\n{i}. {rec['title']}")
lines.append(f" {rec['detail']}")
lines.append(f" Owner: {rec['owner']} | When: {rec['when']}")
lines.append("\n" + "=" * 70)
lines.append("END OF REPORT")
lines.append("=" * 70)
return "\n".join(lines)
def _generate_recommendations(
okr_tree: dict,
at_risk_krs: list[dict],
alignment: dict,
quarter_progress: float,
) -> list[dict]:
"""Generate actionable recommendations based on OKR analysis."""
recs = []
# Critical KRs
critical = [kr for kr in at_risk_krs if kr["risk_level"] == "critical"]
if critical:
recs.append({
"title": f"Emergency review: {len(critical)} critical key result(s) need immediate intervention",
"detail": f"Critical KRs: {', '.join(kr['key_result'] for kr in critical[:3])}. "
f"With limited time remaining, these need escalation today.",
"owner": "COO + KR owners",
"when": "This week",
})
# Off-track objectives
off_track_objs = [
o for o in okr_tree["company"].get("objectives", [])
if o["status"] == "off_track"
]
if off_track_objs:
recs.append({
"title": f"Scope reset for {len(off_track_objs)} off-track company objective(s)",
"detail": "When a company objective is off track by mid-quarter, "
"the options are: (1) resource surge, (2) scope reduction, or (3) accept the miss. "
"Choose explicitly — don't let it drift.",
"owner": "CEO + COO",
"when": "Within 1 week",
})
# Alignment gaps
if alignment["coverage_score_pct"] < 80:
recs.append({
"title": "OKR alignment gap — not all company objectives have team support",
"detail": f"Only {alignment['coverage_score_pct']}% of company objectives have explicit team/dept OKRs supporting them. "
"Either add supporting OKRs or acknowledge these objectives are founder-owned.",
"owner": "COO + VPs",
"when": "Next OKR planning cycle",
})
if alignment["orphaned_okrs"]:
recs.append({
"title": f"{len(alignment['orphaned_okrs'])} orphaned OKR(s) with no company objective linkage",
"detail": "Team OKRs that don't connect to company objectives waste capacity. "
"Either link them explicitly or discontinue them.",
"owner": "Team leads + COO",
"when": "OKR review session",
})
# Late quarter: force ranking
if quarter_progress >= 0.67:
at_risk_count = sum(
1 for o in okr_tree["company"].get("objectives", [])
if o["status"] in ("at_risk", "off_track")
)
if at_risk_count > 0:
recs.append({
"title": f"Late quarter: force-rank which at-risk OKRs to save vs. accept as miss",
"detail": f"{at_risk_count} objectives at risk with <{int((1 - quarter_progress) * 13)} weeks left. "
"You cannot save everything. Pick the 1–2 most important and resource them fully. "
"Explicitly accept the others as misses and learn from them.",
"owner": "CEO + COO",
"when": "Immediately",
})
# Measurement gaps
unscored_krs = []
for obj in okr_tree["company"].get("objectives", []):
for kr in obj.get("key_results_scored", []):
if kr["score"] == 0.0 and quarter_progress > 0.25:
unscored_krs.append(kr.get("title", kr.get("name", "Unknown")))
if unscored_krs:
recs.append({
"title": f"{len(unscored_krs)} key result(s) show zero progress 25% into the quarter",
"detail": "KRs with zero progress after 25% of the quarter has elapsed are either not started, "
"unmeasured, or forgotten. Require owners to update scores this week.",
"owner": "KR owners",
"when": "This week — before next leadership sync",
})
return recs
def format_json_output(okr_tree: dict, alignment: dict, at_risk_krs: list[dict]) -> str:
"""Format analysis as machine-readable JSON."""
return json.dumps(
{
"generated_at": datetime.now().isoformat(),
"company_score": (
sum(o["score"] for o in okr_tree["company"].get("objectives", []))
/ max(1, len(okr_tree["company"].get("objectives", [])))
),
"at_risk_count": len(at_risk_krs),
"alignment_coverage_pct": alignment["coverage_score_pct"],
"objectives": okr_tree["company"].get("objectives", []),
"departments": okr_tree["departments"],
"teams": okr_tree["teams"],
"at_risk_key_results": at_risk_krs,
"alignment": alignment,
},
indent=2,
)
# ---------------------------------------------------------------------------
# Main Entrypoint
# ---------------------------------------------------------------------------
def main():
parser = argparse.ArgumentParser(
description="OKR Cascade and Alignment Tracker — COO Advisor Tool",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=__doc__,
)
parser.add_argument("--input", "-i", help="Path to JSON OKR data file", default=None)
parser.add_argument("--output", "-o", help="Path to write report (default: stdout)", default=None)
parser.add_argument(
"--format", "-f",
choices=["text", "json"],
default="text",
help="Output format: text (default) or json",
)
parser.add_argument(
"--quarter-progress",
type=float,
default=None,
help="Override quarter progress (0.0–1.0). Default: auto-calculated from quarter dates.",
)
args = parser.parse_args()
if args.input:
try:
with open(args.input, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: Input file not found: {args.input}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON: {e}", file=sys.stderr)
sys.exit(1)
else:
print("No input file specified — running with sample data.\n", file=sys.stderr)
data = SAMPLE_DATA
# Determine quarter progress
if args.quarter_progress is not None:
quarter_progress = args.quarter_progress
else:
quarter_progress = _calculate_quarter_progress(data)
quarter_label = data.get("company_okrs", {}).get("quarter", "Unknown Quarter")
# Run analysis
okr_tree = build_okr_tree(data, quarter_progress)
alignment = analyze_alignment(okr_tree)
at_risk_krs = collect_at_risk_krs(okr_tree)
# Format output
if args.format == "json":
output = format_json_output(okr_tree, alignment, at_risk_krs)
else:
output = format_report(okr_tree, alignment, at_risk_krs, quarter_progress, quarter_label)
if args.output:
with open(args.output, "w") as f:
f.write(output)
print(f"Report written to: {args.output}")
else:
print(output)
def _calculate_quarter_progress(data: dict) -> float:
"""Auto-calculate quarter progress from start/end dates in data, or default to 0.5."""
q = data.get("company_okrs", {})
start_str = q.get("quarter_start")
end_str = q.get("quarter_end")
if not start_str or not end_str:
return 0.5 # Default to mid-quarter if not specified
try:
start = date.fromisoformat(start_str)
end = date.fromisoformat(end_str)
today = date.today()
total_days = (end - start).days
elapsed_days = (today - start).days
progress = elapsed_days / total_days if total_days > 0 else 0.5
return max(0.0, min(1.0, progress))
except (ValueError, TypeError):
return 0.5
# ---------------------------------------------------------------------------
# Sample Data
# ---------------------------------------------------------------------------
SAMPLE_DATA = {
"company_okrs": {
"name": "AcmeSaaS",
"quarter": "Q1 2025",
"quarter_start": "2025-01-01",
"quarter_end": "2025-03-31",
"objectives": [
{
"id": "CO1",
"title": "Achieve breakout revenue growth",
"owner": "CEO",
"key_results": [
{
"id": "CO1-KR1",
"title": "Reach $5M net new ARR",
"type": "numeric",
"baseline_value": 0,
"current_value": 2800000,
"target_value": 5000000,
"unit": "",
"notes": "Strong January, February softer; pipeline looks better for March",
},
{
"id": "CO1-KR2",
"title": "Achieve 115% NRR",
"type": "percentage",
"baseline_pct": 108,
"current_pct": 110,
"target_pct": 115,
"notes": "Expansion motion improved; churn still elevated in SMB segment",
},
{
"id": "CO1-KR3",
"title": "Close 3 enterprise deals (>$150K ACV)",
"type": "numeric",
"baseline_value": 0,
"current_value": 1,
"target_value": 3,
"unit": " deals",
"notes": "1 closed, 2 in late-stage negotiation",
},
],
},
{
"id": "CO2",
"title": "Build a world-class product that customers love",
"owner": "CPO",
"key_results": [
{
"id": "CO2-KR1",
"title": "Increase feature adoption rate to 65% (% of customers using 3+ core features)",
"type": "percentage",
"baseline_pct": 48,
"current_pct": 52,
"target_pct": 65,
"notes": "Onboarding improvements shipped; adoption curve is moving",
},
{
"id": "CO2-KR2",
"title": "Ship the integration platform (milestone)",
"type": "milestone",
"milestones_total": 4,
"milestones_hit": 1,
"milestones": [
"API design complete",
"Internal alpha",
"Beta with 5 customers",
"GA launch",
],
"notes": "API design shipped. Internal alpha delayed 2 weeks.",
},
{
"id": "CO2-KR3",
"title": "NPS score reaches 45",
"type": "numeric",
"baseline_value": 32,
"current_value": 38,
"target_value": 45,
"unit": "",
},
],
},
{
"id": "CO3",
"title": "Build an operationally excellent company",
"owner": "COO",
"key_results": [
{
"id": "CO3-KR1",
"title": "Reduce burn multiple from 1.8x to 1.3x",
"type": "numeric",
"baseline_value": 1.8,
"current_value": 1.65,
"target_value": 1.3,
"lower_is_better": True,
"unit": "x",
},
{
"id": "CO3-KR2",
"title": "Achieve <30-day customer onboarding (avg)",
"type": "numeric",
"baseline_value": 47,
"current_value": 38,
"target_value": 30,
"lower_is_better": True,
"unit": " days",
"notes": "Good progress; blocked by technical setup step (avg 12 days)",
},
{
"id": "CO3-KR3",
"title": "Voluntary attrition <10%",
"type": "numeric",
"baseline_value": 15,
"current_value": 12,
"target_value": 10,
"lower_is_better": True,
"unit": "%",
"notes": "2 unexpected departures in January; retention initiatives launched",
},
],
},
],
},
"department_okrs": [
{
"name": "Sales",
"owner": "VP Sales",
"objectives": [
{
"title": "Drive net new ARR to hit company growth target",
"owner": "VP Sales",
"supports_company_objective_ids": ["CO1"],
"key_results": [
{
"title": "Close $4M in new business ARR",
"type": "numeric",
"baseline_value": 0,
"current_value": 2200000,
"target_value": 4000000,
"unit": "",
},
{
"title": "Maintain pipeline coverage ratio ≥3x",
"type": "numeric",
"baseline_value": 2.5,
"current_value": 3.1,
"target_value": 3.0,
"unit": "x",
},
{
"title": "Reduce average sales cycle to 42 days",
"type": "numeric",
"baseline_value": 58,
"current_value": 50,
"target_value": 42,
"lower_is_better": True,
"unit": " days",
},
],
}
],
},
{
"name": "Engineering",
"owner": "VP Engineering",
"objectives": [
{
"title": "Deliver the integration platform on schedule",
"owner": "VP Engineering",
"supports_company_objective_ids": ["CO2"],
"key_results": [
{
"title": "Integration platform beta live with 5 customers",
"type": "milestone",
"milestones_total": 3,
"milestones_hit": 1,
"notes": "Alpha delayed — dependency on API gateway refactor",
},
{
"title": "Deploy frequency ≥10/week",
"type": "numeric",
"baseline_value": 6,
"current_value": 9,
"target_value": 10,
"unit": "/week",
},
{
"title": "P0/P1 incidents <2 per month",
"type": "numeric",
"baseline_value": 5,
"current_value": 2.5,
"target_value": 2,
"lower_is_better": True,
"unit": "/month",
},
],
}
],
},
{
"name": "Customer Success",
"owner": "VP CS",
"objectives": [
{
"title": "Drive retention and expansion to fuel NRR growth",
"owner": "VP CS",
"supports_company_objective_ids": ["CO1", "CO2"],
"key_results": [
{
"title": "Gross retention ≥92%",
"type": "percentage",
"baseline_pct": 88,
"current_pct": 89,
"target_pct": 92,
"notes": "3 at-risk accounts in red status",
},
{
"title": "Average onboarding time ≤30 days",
"type": "numeric",
"baseline_value": 47,
"current_value": 38,
"target_value": 30,
"lower_is_better": True,
"unit": " days",
},
{
"title": "Expansion ARR from existing customers: $800K",
"type": "numeric",
"baseline_value": 0,
"current_value": 580000,
"target_value": 800000,
"unit": "",
},
],
}
],
},
],
"team_okrs": [
{
"name": "Platform Engineering",
"department": "Engineering",
"objectives": [
{
"title": "Build the integration API infrastructure",
"supports_company_objective_ids": ["CO2"],
"key_results": [
{
"title": "API gateway v2 deployed to production",
"type": "boolean",
"done": False,
"notes": "Targeting end of week 8",
},
{
"title": "Webhook system handles 10K events/sec",
"type": "boolean",
"done": False,
},
{
"title": "P99 API latency <200ms",
"type": "numeric",
"baseline_value": 380,
"current_value": 290,
"target_value": 200,
"lower_is_better": True,
"unit": "ms",
},
],
}
],
},
{
"name": "Enterprise Sales Team",
"department": "Sales",
"objectives": [
{
"title": "Land 3 enterprise accounts",
"supports_company_objective_ids": ["CO1"],
"key_results": [
{
"title": "3 enterprise deals closed",
"type": "numeric",
"baseline_value": 0,
"current_value": 1,
"target_value": 3,
"unit": " deals",
},
{
"title": "5 enterprise POCs initiated",
"type": "numeric",
"baseline_value": 0,
"current_value": 4,
"target_value": 5,
"unit": " POCs",
},
],
}
],
},
],
}
if __name__ == "__main__":
main()
#!/usr/bin/env python3
"""
ops_efficiency_analyzer.py — Operational Efficiency Analyzer
Analyzes startup operational efficiency using Theory of Constraints,
process maturity scoring, and bottleneck identification.
Usage:
python ops_efficiency_analyzer.py # Runs with sample data
python ops_efficiency_analyzer.py --input data.json # Custom data
python ops_efficiency_analyzer.py --input data.json --output report.txt
Input format: See SAMPLE_DATA at bottom of file.
"""
import json
import sys
import argparse
from datetime import datetime
from typing import Any, Optional
# ---------------------------------------------------------------------------
# Data Models (plain dicts with type aliases for clarity)
# ---------------------------------------------------------------------------
ProcessData = dict[str, Any]
TeamData = dict[str, Any]
MetricsData = dict[str, Any]
# ---------------------------------------------------------------------------
# Process Maturity Scoring
# ---------------------------------------------------------------------------
MATURITY_LEVELS = {
1: "Ad Hoc",
2: "Defined",
3: "Managed",
4: "Optimized",
5: "Innovating",
}
MATURITY_DESCRIPTIONS = {
1: "No documented process. Outcomes depend on individual heroics.",
2: "Process exists and is documented. Inconsistently followed.",
3: "Process is followed consistently. Metrics are tracked.",
4: "Process is optimized based on metrics. Proactively improved.",
5: "Process enables competitive advantage. Continuously innovating.",
}
MATURITY_CRITERIA = {
"documentation": {
"weight": 0.20,
"levels": {
0: "No documentation",
1: "Informal notes or tribal knowledge",
2: "Process documented but not maintained",
3: "Documented, current, accessible",
4: "Documented with examples, edge cases, and owner",
5: "Living doc with version history and improvement log",
},
},
"ownership": {
"weight": 0.15,
"levels": {
0: "No owner",
1: "Unclear ownership, multiple people responsible",
2: "Named team responsible",
3: "Named individual DRI",
4: "DRI with metrics accountability",
5: "DRI with improvement mandate and resources",
},
},
"metrics": {
"weight": 0.20,
"levels": {
0: "No metrics",
1: "Anecdotal measurement",
2: "Some metrics tracked, not regularly reviewed",
3: "Key metrics tracked and reviewed monthly",
4: "Metrics drive decisions, targets set",
5: "Predictive metrics, benchmarked externally",
},
},
"automation": {
"weight": 0.20,
"levels": {
0: "100% manual",
1: "Mostly manual, some tools used",
2: "Key steps automated, significant manual work remains",
3: "Majority automated, manual exception handling",
4: "Mostly automated with exception playbooks",
5: "Fully automated with human oversight only",
},
},
"consistency": {
"weight": 0.15,
"levels": {
0: "Never consistent",
1: "Consistent <50% of time",
2: "Consistent 50-75% of time",
3: "Consistent 75-90% of time",
4: "Consistent >90% of time",
5: "Six Sigma level (>99.7%)",
},
},
"feedback_loop": {
"weight": 0.10,
"levels": {
0: "No feedback loop",
1: "Ad hoc complaints surface issues",
2: "Periodic review when problems arise",
3: "Regular review cadence",
4: "Structured improvement cycles",
5: "Real-time feedback with automated triggers",
},
},
}
def score_process_maturity(process: ProcessData) -> dict[str, Any]:
"""
Score a single process on 1-5 maturity scale.
Returns scored process with dimension breakdown and recommendations.
"""
maturity_inputs = process.get("maturity", {})
total_score = 0.0
dimension_scores = {}
recommendations = []
for dimension, config in MATURITY_CRITERIA.items():
raw_score = maturity_inputs.get(dimension, 0)
# Weighted contribution: (raw / 5) * weight * 5 simplifies to raw * weight
normalized = raw_score * config["weight"]
total_score += normalized
dimension_scores[dimension] = raw_score
# Generate recommendation if below threshold
if raw_score < 3:
severity = "🔴 Critical" if raw_score < 2 else "🟡 Needs work"
recommendations.append({
"dimension": dimension,
"current_score": raw_score,
"target_score": 3,
"severity": severity,
"action": _get_improvement_action(dimension, raw_score),
})
# Clamp to 1-5 range (scores can't be below 1 for a running process)
maturity_score = max(1.0, min(5.0, total_score))
maturity_level = round(maturity_score)
return {
"name": process["name"],
"maturity_score": round(maturity_score, 2),
"maturity_level": maturity_level,
"maturity_label": MATURITY_LEVELS[maturity_level],
"dimension_scores": dimension_scores,
"recommendations": recommendations,
"process_data": process,
}
def _get_improvement_action(dimension: str, current_score: int) -> str:
"""Return a concrete improvement action for a given dimension and score."""
actions = {
"documentation": {
0: "Write a basic SOP this week: trigger, steps, owner, done-definition",
1: "Convert tribal knowledge into a written process doc with clear steps",
2: "Assign a process owner to maintain and update documentation quarterly",
},
"ownership": {
0: "Assign a DRI (Directly Responsible Individual) today",
1: "Clarify ownership: assign one named person, remove ambiguity",
2: "Give the named owner accountability for process metrics",
},
"metrics": {
0: "Define 1-2 metrics that measure if this process is working",
1: "Set up automated metric collection and add to monthly review",
2: "Set targets for each metric and review monthly",
},
"automation": {
0: "Identify the highest-volume manual step; automate it first",
1: "Run automation ROI calc — if payback <12 months, build it",
2: "Automate exception routing and error notifications",
},
"consistency": {
0: "Root-cause why the process fails; fix the #1 failure mode",
1: "Create a checklist for the process; require sign-off",
2: "Add process adherence check to team's weekly review",
},
"feedback_loop": {
0: "Add this process to monthly operational review agenda",
1: "Create a feedback channel (Slack thread, form) for process issues",
2: "Set a quarterly review date for this process",
},
}
return actions.get(dimension, {}).get(current_score, "Improve this dimension")
# ---------------------------------------------------------------------------
# Bottleneck Analysis (Theory of Constraints)
# ---------------------------------------------------------------------------
def analyze_bottlenecks(processes: list[ProcessData]) -> dict[str, Any]:
"""
Identify bottlenecks using throughput analysis.
Bottleneck = step with lowest throughput (or highest queue buildup).
"""
bottlenecks = []
throughput_chain = []
for process in processes:
steps = process.get("steps", [])
if not steps:
continue
step_analysis = []
min_throughput = float("inf")
bottleneck_step = None
for step in steps:
throughput = step.get("throughput_per_day", 0)
queue_depth = step.get("current_queue", 0)
avg_wait_hours = step.get("avg_wait_hours", 0)
# Utilization estimate
capacity = step.get("capacity_per_day", throughput * 1.2)
utilization = (throughput / capacity * 100) if capacity > 0 else 100
step_info = {
"name": step["name"],
"throughput_per_day": throughput,
"queue_depth": queue_depth,
"avg_wait_hours": avg_wait_hours,
"utilization_pct": round(utilization, 1),
"is_bottleneck": False,
}
step_analysis.append(step_info)
if throughput < min_throughput:
min_throughput = throughput
bottleneck_step = step_info
if bottleneck_step:
bottleneck_step["is_bottleneck"] = True
# Calculate flow efficiency
total_lead_time = sum(
s.get("avg_wait_hours", 0) + s.get("avg_process_hours", 1)
for s in steps
)
total_process_time = sum(s.get("avg_process_hours", 1) for s in steps)
flow_efficiency = (
(total_process_time / total_lead_time * 100)
if total_lead_time > 0
else 0
)
bottlenecks.append({
"process": process["name"],
"bottleneck_step": bottleneck_step["name"],
"bottleneck_throughput": min_throughput,
"bottleneck_queue": bottleneck_step["queue_depth"],
"flow_efficiency_pct": round(flow_efficiency, 1),
"steps": step_analysis,
"toc_recommendation": _generate_toc_recommendation(
bottleneck_step, process
),
})
throughput_chain.append({
"process": process["name"],
"steps": step_analysis,
})
# Rank bottlenecks by severity: days of backlog (queue depth ÷ daily throughput)
for b in bottlenecks:
b["severity_score"] = b["bottleneck_queue"] / (b["bottleneck_throughput"] or 1)
bottlenecks.sort(key=lambda x: x["severity_score"], reverse=True)
return {
"bottlenecks": bottlenecks,
"throughput_chain": throughput_chain,
}
def _generate_toc_recommendation(bottleneck_step: dict, process: ProcessData) -> str:
"""Generate a Theory of Constraints recommendation for a bottleneck."""
util = bottleneck_step["utilization_pct"]
queue = bottleneck_step["queue_depth"]
step_name = bottleneck_step["name"]
if util >= 90:
return (
f"ELEVATE: '{step_name}' is at {util}% utilization — at capacity. "
f"Add resources (people, automation, or parallel processing) immediately. "
f"Queue of {queue} units will grow until capacity is increased."
)
elif util >= 70:
return (
f"EXPLOIT: '{step_name}' has capacity headroom but is the constraint. "
f"Eliminate non-value-add work in this step. Protect it from interruptions. "
f"Ensure upstream steps feed it steadily, not in batches."
)
else:
return (
f"INVESTIGATE: '{step_name}' shows low throughput ({bottleneck_step['throughput_per_day']}/day) "
f"despite available capacity. Root cause may be upstream blocking, "
f"unclear handoffs, or quality issues requiring rework."
)
# ---------------------------------------------------------------------------
# Team Structure Analysis
# ---------------------------------------------------------------------------
def analyze_team_structure(team: TeamData) -> dict[str, Any]:
"""
Analyze team structure for span of control, layer count, and hiring gaps.
"""
issues = []
recommendations = []
warnings = []
total_headcount = team.get("total_headcount", 0)
departments = team.get("departments", [])
# Span of control analysis
span_issues = []
for dept in departments:
for manager in dept.get("managers", []):
direct_reports = manager.get("direct_reports", 0)
manages_managers = manager.get("manages_managers", False)
optimal_min = 3 if manages_managers else 5
optimal_max = 5 if manages_managers else 8
if direct_reports < optimal_min:
span_issues.append({
"manager": manager["name"],
"dept": dept["name"],
"reports": direct_reports,
"issue": "Under-span",
"recommendation": f"Merge team or promote ICs — {direct_reports} reports is management overhead",
})
elif direct_reports > optimal_max:
span_issues.append({
"manager": manager["name"],
"dept": dept["name"],
"reports": direct_reports,
"issue": "Over-span",
"recommendation": f"Split team — {direct_reports} reports means minimal 1:1 time and poor feedback loops",
})
# Management layers analysis
max_layers = team.get("management_layers", 0)
expected_layers = _expected_layers(total_headcount)
if max_layers > expected_layers + 1:
issues.append({
"type": "Over-layered",
"detail": f"{max_layers} management layers for {total_headcount} people. "
f"Expected: {expected_layers}. Excess layers slow decisions.",
"recommendation": "Flatten: remove middle management layers that don't add decision value",
})
# Revenue per employee by department
annual_revenue = team.get("annual_revenue_usd", 0)
dept_analysis = []
for dept in departments:
headcount = dept.get("headcount", 0)
if headcount > 0 and annual_revenue > 0:
rev_per_employee = annual_revenue / headcount
benchmark = _dept_revenue_benchmark(dept["name"], team.get("stage", "series_a"))
efficiency_pct = (rev_per_employee / benchmark * 100) if benchmark > 0 else None
dept_analysis.append({
"department": dept["name"],
"headcount": headcount,
"revenue_per_employee": round(rev_per_employee),
"benchmark": benchmark,
"efficiency_vs_benchmark_pct": round(efficiency_pct, 1) if efficiency_pct is not None else "N/A",
"status": _efficiency_status(efficiency_pct),
})
# Open req health
open_reqs = team.get("open_requisitions", 0)
req_to_headcount_ratio = (open_reqs / total_headcount * 100) if total_headcount > 0 else 0
if req_to_headcount_ratio > 20:
warnings.append(
f"High open req ratio: {open_reqs} open reqs against {total_headcount} headcount "
f"({req_to_headcount_ratio:.0f}%). Hiring at this pace strains managers, interviewers, and onboarding capacity."
)
return {
"total_headcount": total_headcount,
"management_layers": max_layers,
"expected_layers": expected_layers,
"span_of_control_issues": span_issues,
"structural_issues": issues,
"department_efficiency": dept_analysis,
"open_req_health": {
"open_reqs": open_reqs,
"ratio_pct": round(req_to_headcount_ratio, 1),
"warnings": warnings,
},
}
def _expected_layers(headcount: int) -> int:
if headcount <= 15:
return 1
elif headcount <= 50:
return 2
elif headcount <= 150:
return 3
elif headcount <= 500:
return 4
else:
return 5
def _dept_revenue_benchmark(dept_name: str, stage: str) -> int:
"""Revenue per employee benchmark by department and stage (USD)."""
benchmarks = {
"series_a": {
"engineering": 400000,
"sales": 250000,
"customer_success": 300000,
"marketing": 500000,
"operations": 400000,
"product": 400000,
"default": 200000,
},
"series_b": {
"engineering": 500000,
"sales": 350000,
"customer_success": 400000,
"marketing": 700000,
"operations": 500000,
"product": 500000,
"default": 300000,
},
"series_c": {
"engineering": 600000,
"sales": 450000,
"customer_success": 500000,
"marketing": 900000,
"operations": 600000,
"product": 600000,
"default": 400000,
},
}
stage_data = benchmarks.get(stage, benchmarks["series_a"])
dept_key = dept_name.lower().replace(" ", "_").replace("-", "_")
return stage_data.get(dept_key, stage_data["default"])
def _efficiency_status(efficiency_pct: Optional[float]) -> str:
if efficiency_pct is None:
return "N/A"
if efficiency_pct >= 90:
return "🟢 On benchmark"
elif efficiency_pct >= 70:
return "🟡 Below benchmark"
else:
return "🔴 Significantly below"
# ---------------------------------------------------------------------------
# Improvement Plan Generator
# ---------------------------------------------------------------------------
def generate_improvement_plan(
process_scores: list[dict],
bottleneck_analysis: dict,
team_analysis: dict,
metrics: MetricsData,
) -> list[dict]:
"""
Generate a prioritized improvement plan combining all analysis outputs.
Priority = Impact × Urgency / Effort
"""
items = []
# Priority 1: Process bottlenecks (Theory of Constraints — fix the constraint first)
for b in bottleneck_analysis.get("bottlenecks", [])[:3]:
items.append({
"priority": 1,
"category": "Bottleneck",
"item": f"Resolve bottleneck in '{b['process']}' at step '{b['bottleneck_step']}'",
"detail": b["toc_recommendation"],
"impact": "HIGH — constraint limits entire system throughput",
"effort": "MEDIUM",
"owner_suggestion": "COO + process owner",
"timebox": "2-4 weeks",
"success_metric": f"Throughput at {b['bottleneck_step']} increases by 25%+",
})
# Priority 2: Critical process maturity gaps
critical_processes = [
p for p in process_scores if p["maturity_score"] < 2.0
]
for proc in sorted(critical_processes, key=lambda x: x["maturity_score"]):
for rec in proc["recommendations"][:2]: # Top 2 recs per critical process
items.append({
"priority": 2,
"category": "Process Maturity",
"item": f"Fix {rec['dimension']} in '{proc['name']}' (score: {rec['current_score']}/5)",
"detail": rec["action"],
"impact": "HIGH — ad-hoc processes create inconsistency and risk",
"effort": "LOW-MEDIUM",
"owner_suggestion": "Process owner",
"timebox": "1-2 weeks",
"success_metric": f"Dimension score improves to 3/5",
})
# Priority 3: Team structural issues
for issue in team_analysis.get("structural_issues", []):
items.append({
"priority": 3,
"category": "Org Structure",
"item": issue["type"],
"detail": issue["detail"],
"impact": "MEDIUM — structural issues compound over time",
"effort": "HIGH",
"owner_suggestion": "COO + People",
"timebox": "1-2 quarters",
"success_metric": "Management layer count normalized",
})
for span_issue in team_analysis.get("span_of_control_issues", []):
severity = "HIGH" if span_issue["issue"] == "Over-span" else "MEDIUM"
items.append({
"priority": 3,
"category": "Span of Control",
"item": f"{span_issue['issue']}: {span_issue['manager']} ({span_issue['dept']})",
"detail": span_issue["recommendation"],
"impact": severity,
"effort": "MEDIUM",
"owner_suggestion": f"VP {span_issue['dept']}",
"timebox": "1 quarter",
"success_metric": "Span within 5-8 for ICs, 3-5 for managers",
})
# Priority 4: Maturity improvements for non-critical processes
medium_processes = [
p for p in process_scores if 2.0 <= p["maturity_score"] < 3.5
]
for proc in sorted(medium_processes, key=lambda x: x["maturity_score"])[:3]:
if proc["recommendations"]:
top_rec = proc["recommendations"][0]
items.append({
"priority": 4,
"category": "Process Improvement",
"item": f"Improve {top_rec['dimension']} in '{proc['name']}'",
"detail": top_rec["action"],
"impact": "MEDIUM",
"effort": "LOW",
"owner_suggestion": "Process owner",
"timebox": "2-4 weeks",
"success_metric": f"Dimension score reaches 3/5",
})
# Priority 5: Metrics-driven flags
burn_multiple = metrics.get("burn_multiple")
if burn_multiple and burn_multiple > 2.0:
items.append({
"priority": 2,
"category": "Financial Efficiency",
"item": f"Burn multiple of {burn_multiple:.1f}x is above healthy range",
"detail": "Burn multiple >1.5x indicates spending exceeds efficient growth. Review headcount-to-revenue ratio by department.",
"impact": "HIGH",
"effort": "MEDIUM",
"owner_suggestion": "COO + CFO",
"timebox": "30 days to diagnose, 60-90 days to act",
"success_metric": "Burn multiple <1.5x within 2 quarters",
})
nrr = metrics.get("net_revenue_retention_pct")
if nrr and nrr < 100:
items.append({
"priority": 1,
"category": "Revenue Health",
"item": f"NRR of {nrr}% — losing more from churn/contraction than gaining from expansion",
"detail": "NRR <100% means the customer base shrinks without new sales. Investigate churn root causes immediately.",
"impact": "CRITICAL",
"effort": "HIGH",
"owner_suggestion": "COO + VP CS",
"timebox": "Immediate — 30 days to root cause, 90 days to fix",
"success_metric": "NRR >100% within 2 quarters",
})
# Sort by priority then impact
priority_order = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3}
items.sort(key=lambda x: (x["priority"], priority_order.get(x["impact"].split(" — ")[0], 9)))
return items
# ---------------------------------------------------------------------------
# Report Formatter
# ---------------------------------------------------------------------------
def format_report(
process_scores: list[dict],
bottleneck_analysis: dict,
team_analysis: dict,
improvement_plan: list[dict],
metrics: MetricsData,
) -> str:
"""Format the full analysis report as plain text."""
lines = []
now = datetime.now().strftime("%Y-%m-%d %H:%M")
lines.append("=" * 70)
lines.append("OPERATIONAL EFFICIENCY ANALYSIS REPORT")
lines.append(f"Generated: {now}")
lines.append("=" * 70)
# --- Executive Summary ---
lines.append("\n📊 EXECUTIVE SUMMARY")
lines.append("-" * 40)
avg_maturity = (
sum(p["maturity_score"] for p in process_scores) / len(process_scores)
if process_scores else 0
)
critical_count = sum(1 for p in process_scores if p["maturity_score"] < 2.0)
bottleneck_count = len(bottleneck_analysis.get("bottlenecks", []))
plan_items = len(improvement_plan)
lines.append(f"Average Process Maturity: {avg_maturity:.1f}/5.0 ({MATURITY_LEVELS.get(round(avg_maturity), 'Unknown')})")
lines.append(f"Critical Process Gaps: {critical_count}")
lines.append(f"Active Bottlenecks: {bottleneck_count}")
lines.append(f"Improvement Plan Items: {plan_items}")
if metrics:
lines.append("\nKey Business Metrics:")
if metrics.get("burn_multiple"):
flag = " ⚠️" if metrics["burn_multiple"] > 2.0 else ""
lines.append(f" Burn Multiple: {metrics['burn_multiple']:.1f}x{flag}")
if metrics.get("net_revenue_retention_pct"):
flag = " ⚠️" if metrics["net_revenue_retention_pct"] < 100 else ""
lines.append(f" NRR: {metrics['net_revenue_retention_pct']}%{flag}")
if metrics.get("cac_payback_months"):
flag = " ⚠️" if metrics["cac_payback_months"] > 18 else ""
lines.append(f" CAC Payback: {metrics['cac_payback_months']} months{flag}")
# --- Process Maturity Scores ---
lines.append("\n\n📋 PROCESS MATURITY SCORES")
lines.append("-" * 40)
lines.append(f"{'Process':<35} {'Score':>6} {'Level':<12} {'Status'}")
lines.append(f"{'─'*35} {'─'*6} {'─'*12} {'─'*20}")
for p in sorted(process_scores, key=lambda x: x["maturity_score"]):
score = p["maturity_score"]
label = p["maturity_label"]
status = "🔴 Critical" if score < 2 else ("🟡 Needs work" if score < 3.5 else "🟢 Healthy")
lines.append(f"{p['name']:<35} {score:>6.1f} {label:<12} {status}")
# Dimension heatmap
lines.append("\n\nDimension Breakdown (scores 0-5):")
lines.append(f"{'Process':<30} {'Doc':>4} {'Own':>4} {'Met':>4} {'Aut':>4} {'Con':>4} {'Fbk':>4}")
lines.append(f"{'─'*30} {'─'*4} {'─'*4} {'─'*4} {'─'*4} {'─'*4} {'─'*4}")
for p in sorted(process_scores, key=lambda x: x["maturity_score"]):
d = p["dimension_scores"]
lines.append(
f"{p['name']:<30} {d.get('documentation',0):>4} {d.get('ownership',0):>4} "
f"{d.get('metrics',0):>4} {d.get('automation',0):>4} "
f"{d.get('consistency',0):>4} {d.get('feedback_loop',0):>4}"
)
# --- Bottleneck Analysis ---
lines.append("\n\n🔍 BOTTLENECK ANALYSIS (Theory of Constraints)")
lines.append("-" * 40)
bottlenecks = bottleneck_analysis.get("bottlenecks", [])
if not bottlenecks:
lines.append("No process steps defined for bottleneck analysis.")
else:
for i, b in enumerate(bottlenecks, 1):
lines.append(f"\n{i}. {b['process']}")
lines.append(f" Bottleneck step: {b['bottleneck_step']}")
lines.append(f" Throughput: {b['bottleneck_throughput']}/day")
lines.append(f" Queue depth: {b['bottleneck_queue']} units")
lines.append(f" Flow efficiency: {b['flow_efficiency_pct']}%")
lines.append(f" Recommendation: {b['toc_recommendation']}")
lines.append(f"\n Step-by-step throughput:")
for step in b["steps"]:
marker = " ← BOTTLENECK" if step["is_bottleneck"] else ""
lines.append(
f" {step['name']:<30} {step['throughput_per_day']:>4}/day "
f"Queue: {step['queue_depth']:>4} Util: {step['utilization_pct']:>5.1f}%{marker}"
)
# --- Team Structure ---
lines.append("\n\n👥 TEAM STRUCTURE ANALYSIS")
lines.append("-" * 40)
lines.append(f"Total headcount: {team_analysis['total_headcount']}")
lines.append(f"Management layers: {team_analysis['management_layers']} (expected: {team_analysis['expected_layers']})")
span_issues = team_analysis.get("span_of_control_issues", [])
if span_issues:
lines.append(f"\n⚠️ Span of Control Issues ({len(span_issues)}):")
for issue in span_issues:
lines.append(f" {issue['issue']}: {issue['manager']} ({issue['dept']}) — {issue['reports']} reports")
lines.append(f" → {issue['recommendation']}")
dept_eff = team_analysis.get("department_efficiency", [])
if dept_eff:
lines.append(f"\nDepartment Revenue Efficiency:")
lines.append(f"{'Department':<20} {'HC':>4} {'Rev/Head':>10} {'Benchmark':>10} {'vs Bench':>9} {'Status'}")
lines.append(f"{'─'*20} {'─'*4} {'─'*10} {'─'*10} {'─'*9} {'─'*20}")
for d in dept_eff:
rev = f"${d['revenue_per_employee']:,}" if d['revenue_per_employee'] else "N/A"
bench = f"${d['benchmark']:,}" if d['benchmark'] else "N/A"
vs_bench = f"{d['efficiency_vs_benchmark_pct']}%" if d['efficiency_vs_benchmark_pct'] != "N/A" else "N/A"
lines.append(
f"{d['department']:<20} {d['headcount']:>4} {rev:>10} {bench:>10} {vs_bench:>9} {d['status']}"
)
# --- Improvement Plan ---
lines.append("\n\n🎯 PRIORITIZED IMPROVEMENT PLAN")
lines.append("-" * 40)
lines.append("Items ranked by priority (1=highest). Fix Priority 1 before starting Priority 2.\n")
current_priority = None
for i, item in enumerate(improvement_plan, 1):
if item["priority"] != current_priority:
current_priority = item["priority"]
lines.append(f"\nPRIORITY {current_priority}")
lines.append("─" * 30)
lines.append(f"\n{i}. [{item['category']}] {item['item']}")
lines.append(f" Detail: {item['detail']}")
lines.append(f" Impact: {item['impact']}")
lines.append(f" Effort: {item['effort']}")
lines.append(f" Owner: {item['owner_suggestion']}")
lines.append(f" Timebox: {item['timebox']}")
lines.append(f" Success: {item['success_metric']}")
lines.append("\n" + "=" * 70)
lines.append("END OF REPORT")
lines.append("=" * 70)
return "\n".join(lines)
# ---------------------------------------------------------------------------
# Main Entrypoint
# ---------------------------------------------------------------------------
def run_analysis(data: dict) -> str:
"""Run the full analysis pipeline on input data."""
processes = data.get("processes", [])
team = data.get("team", {})
metrics = data.get("metrics", {})
# 1. Score process maturity
process_scores = [score_process_maturity(p) for p in processes]
# 2. Analyze bottlenecks
bottleneck_analysis = analyze_bottlenecks(processes)
# 3. Analyze team structure
team_analysis = analyze_team_structure(team)
# 4. Generate improvement plan
improvement_plan = generate_improvement_plan(
process_scores, bottleneck_analysis, team_analysis, metrics
)
# 5. Format and return report
return format_report(
process_scores, bottleneck_analysis, team_analysis, improvement_plan, metrics
)
def main():
parser = argparse.ArgumentParser(
description="Operational Efficiency Analyzer — COO Advisor Tool",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=__doc__,
)
parser.add_argument(
"--input", "-i",
help="Path to JSON input file (default: use built-in sample data)",
default=None,
)
parser.add_argument(
"--output", "-o",
help="Path to write report (default: stdout)",
default=None,
)
args = parser.parse_args()
if args.input:
try:
with open(args.input, "r") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: Input file not found: {args.input}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in input file: {e}", file=sys.stderr)
sys.exit(1)
else:
print("No input file specified — running with sample data.\n")
data = SAMPLE_DATA
report = run_analysis(data)
if args.output:
with open(args.output, "w") as f:
f.write(report)
print(f"Report written to: {args.output}")
else:
print(report)
# ---------------------------------------------------------------------------
# Sample Data
# ---------------------------------------------------------------------------
SAMPLE_DATA = {
"company": "AcmeSaaS",
"stage": "series_b",
"metrics": {
"annual_revenue_usd": 18000000,
"burn_multiple": 1.8,
"net_revenue_retention_pct": 108,
"cac_payback_months": 14,
"headcount": 85,
"monthly_churn_pct": 1.2,
},
"processes": [
{
"name": "Customer Onboarding",
"category": "Customer Success",
"maturity": {
"documentation": 3,
"ownership": 4,
"metrics": 3,
"automation": 2,
"consistency": 3,
"feedback_loop": 2,
},
"steps": [
{
"name": "Contract signed → kickoff scheduled",
"throughput_per_day": 4,
"capacity_per_day": 6,
"current_queue": 3,
"avg_wait_hours": 4,
"avg_process_hours": 1,
},
{
"name": "Technical setup & integration",
"throughput_per_day": 2,
"capacity_per_day": 3,
"current_queue": 8,
"avg_wait_hours": 24,
"avg_process_hours": 8,
},
{
"name": "Training & enablement",
"throughput_per_day": 3,
"capacity_per_day": 4,
"current_queue": 2,
"avg_wait_hours": 8,
"avg_process_hours": 4,
},
{
"name": "Go-live confirmation",
"throughput_per_day": 4,
"capacity_per_day": 6,
"current_queue": 1,
"avg_wait_hours": 2,
"avg_process_hours": 1,
},
],
},
{
"name": "Sales Deal Qualification",
"category": "Sales",
"maturity": {
"documentation": 2,
"ownership": 3,
"metrics": 4,
"automation": 2,
"consistency": 2,
"feedback_loop": 3,
},
"steps": [
{
"name": "Inbound lead review",
"throughput_per_day": 15,
"capacity_per_day": 20,
"current_queue": 5,
"avg_wait_hours": 2,
"avg_process_hours": 0.5,
},
{
"name": "BANT qualification call",
"throughput_per_day": 8,
"capacity_per_day": 10,
"current_queue": 12,
"avg_wait_hours": 24,
"avg_process_hours": 1,
},
{
"name": "Demo scheduling & prep",
"throughput_per_day": 6,
"capacity_per_day": 8,
"current_queue": 4,
"avg_wait_hours": 8,
"avg_process_hours": 0.5,
},
],
},
{
"name": "Engineering Deployment",
"category": "Engineering",
"maturity": {
"documentation": 4,
"ownership": 5,
"metrics": 4,
"automation": 4,
"consistency": 5,
"feedback_loop": 4,
},
"steps": [
{
"name": "PR submitted",
"throughput_per_day": 20,
"capacity_per_day": 25,
"current_queue": 8,
"avg_wait_hours": 3,
"avg_process_hours": 2,
},
{
"name": "Code review",
"throughput_per_day": 18,
"capacity_per_day": 22,
"current_queue": 10,
"avg_wait_hours": 4,
"avg_process_hours": 1,
},
{
"name": "CI pipeline",
"throughput_per_day": 18,
"capacity_per_day": 30,
"current_queue": 2,
"avg_wait_hours": 0.5,
"avg_process_hours": 0.5,
},
{
"name": "Deploy to production",
"throughput_per_day": 16,
"capacity_per_day": 20,
"current_queue": 1,
"avg_wait_hours": 0.5,
"avg_process_hours": 0.25,
},
],
},
{
"name": "Incident Response",
"category": "Engineering / Operations",
"maturity": {
"documentation": 2,
"ownership": 2,
"metrics": 1,
"automation": 1,
"consistency": 2,
"feedback_loop": 1,
},
"steps": [],
},
{
"name": "Employee Onboarding",
"category": "People",
"maturity": {
"documentation": 2,
"ownership": 2,
"metrics": 1,
"automation": 1,
"consistency": 2,
"feedback_loop": 2,
},
"steps": [],
},
{
"name": "Vendor Procurement",
"category": "Operations",
"maturity": {
"documentation": 1,
"ownership": 1,
"metrics": 0,
"automation": 0,
"consistency": 1,
"feedback_loop": 0,
},
"steps": [],
},
],
"team": {
"total_headcount": 85,
"annual_revenue_usd": 18000000,
"stage": "series_b",
"management_layers": 3,
"open_requisitions": 18,
"departments": [
{
"name": "Engineering",
"headcount": 32,
"managers": [
{"name": "VP Engineering", "direct_reports": 4, "manages_managers": True},
{"name": "Engineering Manager (Platform)", "direct_reports": 7, "manages_managers": False},
{"name": "Engineering Manager (Product)", "direct_reports": 8, "manages_managers": False},
{"name": "Engineering Manager (Infra)", "direct_reports": 9, "manages_managers": False},
],
},
{
"name": "Sales",
"headcount": 18,
"managers": [
{"name": "VP Sales", "direct_reports": 3, "manages_managers": True},
{"name": "Sales Manager (SMB)", "direct_reports": 6, "manages_managers": False},
{"name": "Sales Manager (Enterprise)", "direct_reports": 4, "manages_managers": False},
],
},
{
"name": "Customer Success",
"headcount": 12,
"managers": [
{"name": "VP CS", "direct_reports": 2, "manages_managers": False},
],
},
{
"name": "Marketing",
"headcount": 8,
"managers": [
{"name": "VP Marketing", "direct_reports": 7, "manages_managers": False},
],
},
{
"name": "Operations",
"headcount": 6,
"managers": [
{"name": "COO", "direct_reports": 5, "manages_managers": True},
],
},
{
"name": "Product",
"headcount": 9,
"managers": [
{"name": "VP Product", "direct_reports": 8, "manages_managers": False},
],
},
],
},
}
if __name__ == "__main__":
main()
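Because each dimension's normalized contribution reduces to `raw_score * weight`, the overall maturity score is just a dot product of the six ratings and their weights. A minimal sketch reproducing the `Customer Onboarding` entry from `SAMPLE_DATA`:

```python
# Dimension weights mirror MATURITY_CRITERIA in ops_efficiency_analyzer.py.
WEIGHTS = {
    "documentation": 0.20, "ownership": 0.15, "metrics": 0.20,
    "automation": 0.20, "consistency": 0.15, "feedback_loop": 0.10,
}

def maturity_score(ratings: dict) -> float:
    # (raw / 5) * weight * 5 simplifies to raw * weight
    return round(sum(ratings.get(d, 0) * w for d, w in WEIGHTS.items()), 2)

onboarding = {"documentation": 3, "ownership": 4, "metrics": 3,
              "automation": 2, "consistency": 3, "feedback_loop": 2}
print(maturity_score(onboarding))  # 2.85 → level 3, "Managed"
```

Since the weights sum to 1.0, the maximum possible score is 5.0 and a process rated 3 on every dimension scores exactly 3.0.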
Install this Skill
Skills give your AI agent a consistent, structured approach to this task — better output than a one-off prompt.
npx skills add alirezarezvani/claude-skills --skill c-level-advisor/coo-advisor
Community skill by @alirezarezvani.
Prefer no terminal? Download the ZIP and place it manually.
Details
- Category: Leadership
- License: MIT
- Author: @alirezarezvani
- Source: GitHub
- Source file: c-level-advisor/coo-advisor/SKILL.md