Manage Corrective and Preventive Actions — root cause analysis, CAPA documentation, effectiveness verification, and closing the quality loop per ISO standards.
Streamline compliance by resolving quality issues permanently through systematic investigations and corrective action planning. You will generate documented evidence that proves problems are fixed and verified, keeping your quality loop closed per ISO standards. Reach for this workflow whenever you encounter audit findings, customer complaints, or recurring errors that require formal Corrective and Preventive Action (CAPA) management.
name: "capa-officer"
description: CAPA system management for medical device QMS. Covers root cause analysis, corrective action planning, effectiveness verification, and CAPA metrics. Use for CAPA investigations, 5-Why analysis, fishbone diagrams, root cause determination, corrective action tracking, effectiveness verification, or CAPA program optimization.
triggers:
CAPA investigation
root cause analysis
5 Why analysis
fishbone diagram
corrective action
preventive action
effectiveness verification
CAPA metrics
nonconformance investigation
quality issue investigation
CAPA tracking
audit finding CAPA
CAPA Officer
Corrective and Preventive Action (CAPA) management within Quality Management Systems, focusing on systematic root cause analysis, action implementation, and effectiveness verification.
CAPA Officer, Process Owner, Subject Matter Expert
Minor
CAPA Officer, Process Owner
Evidence Collection Checklist
Problem description with specific details (what, where, when, who, how much)
Timeline of events leading to issue
Relevant records and documentation
Interview notes from involved personnel
Photos or physical evidence (if applicable)
Related complaints, NCs, or previous CAPAs
Process parameters and specifications
Root Cause Analysis
Select and apply appropriate RCA methodology based on problem characteristics.
RCA Method Selection Decision Tree
Is the issue safety-critical or involves system reliability?
├── Yes → Use FAULT TREE ANALYSIS
└── No → Is human error the suspected primary cause?
    ├── Yes → Use HUMAN FACTORS ANALYSIS
    └── No → How many potential contributing factors?
        ├── 1-2 factors (linear causation) → Use 5 WHY ANALYSIS
        ├── 3-6 factors (complex, systemic) → Use FISHBONE DIAGRAM
        └── Unknown/proactive assessment → Use FMEA
5 Why Analysis
Use when: Single-cause issues with linear causation, process deviations with clear failure point.
Template:
PROBLEM: [Clear, specific statement]

WHY 1: Why did [problem] occur?
BECAUSE: [First-level cause]
EVIDENCE: [Supporting data]

WHY 2: Why did [first-level cause] occur?
BECAUSE: [Second-level cause]
EVIDENCE: [Supporting data]

WHY 3: Why did [second-level cause] occur?
BECAUSE: [Third-level cause]
EVIDENCE: [Supporting data]

WHY 4: Why did [third-level cause] occur?
BECAUSE: [Fourth-level cause]
EVIDENCE: [Supporting data]

WHY 5: Why did [fourth-level cause] occur?
BECAUSE: [Root cause]
EVIDENCE: [Supporting data]
Example - Calibration Overdue:
PROBLEM: pH meter (EQ-042) found 2 months overdue for calibration

WHY 1: Why was calibration overdue?
BECAUSE: Equipment was not on calibration schedule
EVIDENCE: Calibration schedule reviewed, EQ-042 not listed

WHY 2: Why was it not on the schedule?
BECAUSE: Schedule not updated when equipment was purchased
EVIDENCE: Purchase date 2023-06-15, schedule dated 2023-01-01

WHY 3: Why was the schedule not updated?
BECAUSE: No process requires schedule update at equipment purchase
EVIDENCE: SOP-EQ-001 reviewed, no such requirement

WHY 4: Why is there no such requirement?
BECAUSE: Procedure written before equipment tracking was centralized
EVIDENCE: SOP last revised 2019, equipment system implemented 2021

WHY 5: Why has procedure not been updated?
BECAUSE: Periodic review did not assess compatibility with new systems
EVIDENCE: No review against new equipment system documented

ROOT CAUSE: Procedure review process does not assess compatibility
with organizational systems implemented after original procedure creation.
Fishbone Diagram Categories (6M)
| Category | Focus Areas | Typical Causes |
|----------|-------------|----------------|
| Man (People) | Training, competency, workload | Skill gaps, fatigue, communication |
| Machine (Equipment) | Calibration, maintenance, age | Wear, malfunction, inadequate capacity |
| Method (Process) | Procedures, work instructions | Unclear steps, missing controls |
| Material | Specifications, suppliers, storage | Out-of-spec, degradation, contamination |
| Measurement | Calibration, methods, interpretation | Instrument error, wrong method |
| Mother Nature (Environment) | Temperature, humidity, cleanliness | Environmental excursions |
See references/rca-methodologies.md for complete method details and templates.
Root Cause Validation
Before proceeding to action planning, validate root cause:
Root cause can be verified with objective evidence
If root cause is eliminated, problem would not recur
Allow adequate implementation period (minimum 30-90 days, depending on CAPA severity)
Collect post-implementation data
Compare to pre-implementation baseline
Evaluate against success criteria
Verify no recurrence during verification period
Document verification evidence
Determine CAPA effectiveness
Validation: All criteria met with objective evidence; no recurrence observed
Verification Timeline Guidelines
| CAPA Severity | Wait Period | Verification Window |
|---------------|-------------|---------------------|
| Critical | 30 days | 30-90 days post-implementation |
| Major | 60 days | 60-180 days post-implementation |
| Minor | 90 days | 90-365 days post-implementation |
Verification Methods
| Method | Use When | Evidence Required |
|--------|----------|-------------------|
| Data trend analysis | Quantifiable issues | Pre/post comparison, trend charts |
| Process audit | Procedure compliance issues | Audit checklist, interview notes |
| Record review | Documentation issues | Sample records, compliance rate |
| Testing/inspection | Product quality issues | Test results, pass/fail data |
| Interview/observation | Training issues | Interview notes, observation records |
Effectiveness Determination
Did recurrence occur during verification period?
├── Yes → CAPA INEFFECTIVE (re-investigate root cause)
└── No → Were all effectiveness criteria met?
    ├── Yes → CAPA EFFECTIVE (proceed to closure)
    └── No → Extent of gap?
        ├── Minor gap → Extend verification or accept with justification
        └── Significant gap → CAPA INEFFECTIVE (revise actions)
See references/effectiveness-verification-guide.md for detailed procedures.
CAPA Metrics and Reporting
Monitor CAPA program performance through key indicators.
Verification Planning
Verification planning must occur BEFORE corrective action implementation:
| Stage | Planning Activity | Owner |
|-------|-------------------|-------|
| CAPA Initiation | Define preliminary verification approach | CAPA Owner |
| Root Cause Analysis | Refine criteria based on root cause | Investigation Team |
| Action Planning | Finalize verification method and timeline | CAPA Owner |
| Implementation | Schedule verification activities | Quality Assurance |
Verification Timeline Guidelines
| CAPA Severity | Minimum Wait Period | Verification Window |
|---------------|---------------------|---------------------|
| Critical (Safety) | 30 days | 30-90 days post-implementation |
| Major | 60 days | 60-180 days post-implementation |
| Minor | 90 days | 90-365 days post-implementation |
Rationale: Waiting period ensures sufficient data collection and accounts for process variation.
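The waiting period and verification window above can be made operational by computing the verification dates directly from the implementation-complete date. A minimal sketch of that calculation, assuming the severities and day counts in the table (the function name and dictionary are illustrative, not part of any standard tool):

```python
from datetime import date, timedelta

# (minimum wait, verification window end) in days, per the timeline table
VERIFICATION_TIMELINES = {
    "Critical": (30, 90),
    "Major": (60, 180),
    "Minor": (90, 365),
}

def verification_schedule(severity: str, implementation_complete: date):
    """Return (earliest verification start, verification deadline) for a CAPA."""
    wait_days, window_days = VERIFICATION_TIMELINES[severity]
    start = implementation_complete + timedelta(days=wait_days)
    deadline = implementation_complete + timedelta(days=window_days)
    return start, deadline

start, deadline = verification_schedule("Major", date(2024, 6, 1))
print(start, deadline)  # 2024-07-31 2024-11-28
```

Scheduling both dates at implementation time keeps the verification from being forgotten and documents that the minimum wait was honored.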
Verification Plan Components
VERIFICATION PLAN TEMPLATE

CAPA Number: [CAPA-XXXX]
Problem Statement: [Original issue]
Root Cause: [Identified root cause]
Corrective Action: [Implemented action]

VERIFICATION METHOD:
[ ] Data Trend Analysis
[ ] Process Audit
[ ] Record Review
[ ] Testing/Inspection
[ ] Interview/Observation
[ ] Multiple Methods (specify)

EFFECTIVENESS CRITERIA:
1. [Measurable criterion 1]
2. [Measurable criterion 2]
3. [Measurable criterion 3]

SUCCESS THRESHOLD:
- [Quantitative threshold, e.g., "Zero recurrence for 90 days"]
- [Qualitative threshold, e.g., "Procedure followed correctly 100%"]

DATA COLLECTION:
- Source: [Where data will come from]
- Sample Size: [Number of records/instances to review]
- Time Period: [Start and end dates]
- Responsible: [Who collects data]

VERIFICATION SCHEDULE:
- Implementation Complete: [Date]
- Waiting Period Ends: [Date]
- Verification Start: [Date]
- Verification Complete: [Date]
- Report Due: [Date]

APPROVAL:
CAPA Owner: _____________ Date: _______
Quality Assurance: _____________ Date: _______
Verification Methods
1. Data Trend Analysis
Best for: Quantifiable issues with measurable outcomes (defect rates, cycle times, complaint trends)
Procedure:
Collect post-implementation data for defined period
Compare to pre-implementation baseline
Apply statistical analysis if sample size permits
Document trend direction and magnitude
Example Criteria:
Defect rate reduced by ≥50% from baseline
Zero recurrence of specific failure mode
Process capability (Cpk) improved to ≥1.33
Evidence Required:
Pre-implementation baseline data
Post-implementation trend data
Statistical analysis (if applicable)
Trend charts with annotation
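The example criteria above are directly computable. A sketch of the two calculations, assuming the standard capability formula Cpk = min(USL − mean, mean − LSL) / (3 × sigma); the function names are illustrative:

```python
import statistics

def defect_rate_reduction(baseline_defects, baseline_units, post_defects, post_units):
    """Percent reduction in defect rate from pre- to post-implementation."""
    before = baseline_defects / baseline_units
    after = post_defects / post_units
    return (before - after) / before * 100

def cpk(samples, lsl, usl):
    """Process capability index: min(USL - mean, mean - LSL) / (3 * sigma)."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Defect rate fell from 40/1000 to 10/1000 -> 75% reduction, meeting the >=50% criterion
print(round(defect_rate_reduction(40, 1000, 10, 1000)))  # 75
```

Comparing the computed value against the pre-stated threshold (for example, "reduction ≥ 50%" or "Cpk ≥ 1.33") is what turns the trend chart into objective verification evidence.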
2. Process Audit
Best for: Procedure compliance issues, process control failures, systemic problems
Procedure:
Develop audit checklist based on corrective action
Conduct unannounced process audit
Interview operators and supervisors
Review records generated since implementation
Document compliance percentage
Example Criteria:
100% compliance with revised procedure
All operators demonstrate competency
No deviations observed during audit
Evidence Required:
Audit checklist completed
Interview notes
Record samples reviewed
Photos/observations (if applicable)
3. Record Review
Best for: Documentation issues, completeness problems, traceability failures
Procedure:
Define sample size based on volume (minimum 10 or 10%, whichever greater)
Review records generated post-implementation
Evaluate against specified requirements
Calculate compliance rate
Example Criteria:
100% of records meet completeness requirements
All required signatures present
Traceability maintained throughout
Evidence Required:
List of records reviewed
Compliance checklist results
Non-compliance summary (if any)
4. Testing/Inspection
Best for: Product quality issues, equipment failures, specification non-conformances
Procedure:
Define test protocol based on corrective action
Conduct testing on post-implementation units
Compare results to acceptance criteria
Document pass/fail rates
Example Criteria:
100% of units pass revised inspection criteria
All test results within specification
Zero failures of targeted parameter
Evidence Required:
Test protocol/method
Test results data
Pass/fail summary
Comparison to pre-implementation results
5. Interview/Observation
Best for: Training issues, communication problems, human factors causes
Procedure:
Develop structured interview questions
Interview representative sample of affected personnel
Observe process execution in real-time
Document responses and observations
Example Criteria:
All interviewed personnel demonstrate knowledge
Observed practices match documented procedure
No unsafe acts or workarounds observed
Evidence Required:
Interview questions and responses
Observation notes
Training records (supporting)
Effectiveness Criteria
Defining Good Criteria
Criteria must be SMART:
| Element | Requirement | Example |
|---------|-------------|---------|
| Specific | Clearly defined what to measure | "Calibration overdue rate," not "equipment issues" |
| Measurable | Quantifiable or objectively verifiable | "<2% overdue rate," not "improved timeliness" |
| Achievable | Realistic given the corrective action | Within capability of implemented solution |
| Relevant | Directly related to root cause | Addresses the actual problem |
| Time-bound | Specified evaluation period | "For 90 consecutive days" |
Criteria by Issue Type
| Issue Type | Typical Criteria | Threshold |
|------------|------------------|-----------|
| Nonconformance | Recurrence rate | Zero recurrence |
| Process deviation | Compliance rate | ≥95% compliance |
| Complaint | Complaint trend | ≥50% reduction |
| Calibration | Overdue rate | <2% overdue |
| Training | Competency pass rate | 100% pass |
| Documentation | Completeness rate | 100% complete |
| Supplier | Incoming reject rate | ≤1% reject rate |
Sample Size Guidelines
| Population Size | Minimum Sample |
|-----------------|----------------|
| <10 | All (100%) |
| 10-50 | 10 |
| 51-100 | 15 |
| 101-500 | 20 |
| >500 | 25 or 10%, whichever is less |
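The sample size guidelines above reduce to a simple lookup. A minimal sketch (the function name is illustrative):

```python
def verification_sample_size(population: int) -> int:
    """Minimum records to review during verification, per the sample size table."""
    if population < 10:
        return population              # review all records (100%)
    if population <= 50:
        return 10
    if population <= 100:
        return 15
    if population <= 500:
        return 20
    return min(25, population // 10)   # 25 or 10%, whichever is less

print(verification_sample_size(8))     # 8
print(verification_sample_size(75))    # 15
print(verification_sample_size(2000))  # 25
```

Note that for populations above 500, 10% always exceeds 25, so the "whichever is less" rule caps the sample at 25 records.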
Closure Requirements
Closure Checklist
CAPA Closure Prerequisites:
All corrective actions implemented
Implementation evidence documented
Verification waiting period complete
Verification activities performed
All effectiveness criteria met
Verification evidence documented
No recurrence during verification period
CAPA owner review complete
Quality Assurance review complete
Documentation complete and filed
Effectiveness Status Determination
EFFECTIVENESS DECISION TREE:

Did recurrence occur during verification period?
├── Yes → CAPA INEFFECTIVE (escalate per ineffective process)
└── No → Were all effectiveness criteria met?
    ├── Yes → Were any related issues identified?
    │     ├── Yes → Open new CAPA if needed, close original
    │     └── No → CAPA EFFECTIVE - proceed to closure
    └── No → How many criteria missed?
        ├── Minor gap (1 criterion, marginal miss) →
        │     Extend verification period OR accept with justification
        └── Significant gap → CAPA INEFFECTIVE

EFFECTIVENESS DETERMINATION:
[ ] EFFECTIVE - All criteria met, no recurrence
[ ] EFFECTIVE WITH CONDITIONS - Minor gap, justified acceptance
[ ] INEFFECTIVE - Significant gaps or recurrence
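The determination logic can also be expressed as a small function, which is useful when the decision is embedded in a tracking tool. A sketch following the tree above (the function and parameter names are illustrative):

```python
def effectiveness_status(recurred: bool, criteria_met: int, criteria_total: int,
                         marginal_miss: bool = False) -> str:
    """Map verification results to an effectiveness determination."""
    if recurred:
        # Any recurrence during the verification period is ineffective
        return "INEFFECTIVE"
    if criteria_met == criteria_total:
        return "EFFECTIVE"
    # One criterion missed by a small margin may be accepted with justification
    if criteria_total - criteria_met == 1 and marginal_miss:
        return "EFFECTIVE WITH CONDITIONS"
    return "INEFFECTIVE"

print(effectiveness_status(False, 3, 3))  # EFFECTIVE
```

The `marginal_miss` flag stands in for the human judgment ("minor gap, justified acceptance") that the tree requires; a tool can record the flag, but the justification itself still needs documented approval.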
Closure Documentation
EFFECTIVENESS VERIFICATION REPORT

CAPA Number: [CAPA-XXXX]
Verification Complete Date: [Date]
Verified By: [Name, Title]

VERIFICATION SUMMARY:

| Criterion | Target | Actual | Status |
|-----------|--------|--------|--------|
| [Criterion 1] | [Target] | [Result] | ☑ Met / ☐ Not Met |
| [Criterion 2] | [Target] | [Result] | ☑ Met / ☐ Not Met |
| [Criterion 3] | [Target] | [Result] | ☑ Met / ☐ Not Met |

RECURRENCE CHECK:
- Recurrence during verification period: [ ] Yes [ ] No
- Related issues identified: [ ] Yes [ ] No
- If yes, describe: [Description]

EVIDENCE SUMMARY:
[List of evidence documents, record numbers, data sources]

EFFECTIVENESS DETERMINATION:
[ ] EFFECTIVE
[ ] EFFECTIVE WITH CONDITIONS: [Justification]
[ ] INEFFECTIVE: [Reason]

RECOMMENDED ACTION:
[ ] Close CAPA
[ ] Extend verification period to [Date]
[ ] Open new CAPA [CAPA-XXXX] for [Issue]
[ ] Re-investigate (return to root cause analysis)

APPROVALS:
CAPA Owner: _____________ Date: _______
Quality Assurance: _____________ Date: _______
Management (if Major/Critical): _____________ Date: _______
Ineffective CAPA Process
Definition of Ineffective
CAPA is ineffective when:
Original problem recurs during or after verification period
Effectiveness criteria not met
Root cause still present
Corrective action created new problems
Ineffective CAPA Workflow
INEFFECTIVE CAPA DETECTED
  │
  ├── 1. Immediate Actions
  │      ├── Reopen CAPA (do not close as effective)
  │      ├── Implement containment for recurrence
  │      └── Notify CAPA owner and management
  │
  ├── 2. Root Cause Re-evaluation
  │      └── Was original root cause correct?
  │            ├── No → Conduct new root cause analysis
  │            └── Yes → Was corrective action appropriate?
  │                  ├── No → Develop new corrective action
  │                  └── Yes → Was implementation adequate?
  │                        ├── No → Re-implement with improvements
  │                        └── Yes → Escalate (systemic issue)
  │
  ├── 3. Escalation Criteria
  │      ├── Second ineffective attempt → Management review required
  │      ├── Safety-related recurrence → Immediate escalation
  │      └── Pattern across multiple CAPAs → Systemic CAPA
  │
  └── 4. Documentation
         ├── Document ineffective status with evidence
         ├── Record re-investigation results
         ├── Update CAPA metrics/trending
         └── Include in management review
Is the issue safety-critical or involves system reliability?
├── Yes → Use FAULT TREE ANALYSIS
└── No → Is human error the suspected primary cause?
    ├── Yes → Use HUMAN FACTORS ANALYSIS
    └── No → How many potential contributing factors?
        ├── 1-2 factors → Use 5 WHY ANALYSIS
        ├── 3-6 factors → Use FISHBONE DIAGRAM
        └── Unknown/Many → Use FMEA (proactive) or Fishbone (reactive)
5 Why Analysis
Overview
Simple, iterative technique asking "why" repeatedly (typically 5 times) to drill from symptoms to root cause.
PROBLEM STATEMENT:
[Clear, specific description of what happened, when, where, and impact]

WHY 1: Why did [problem] occur?
BECAUSE: [First-level cause]
EVIDENCE: [Data/observation supporting this cause]

WHY 2: Why did [first-level cause] occur?
BECAUSE: [Second-level cause]
EVIDENCE: [Data/observation supporting this cause]

WHY 3: Why did [second-level cause] occur?
BECAUSE: [Third-level cause]
EVIDENCE: [Data/observation supporting this cause]

WHY 4: Why did [third-level cause] occur?
BECAUSE: [Fourth-level cause]
EVIDENCE: [Data/observation supporting this cause]

WHY 5: Why did [fourth-level cause] occur?
BECAUSE: [Root cause - typically systemic or management system failure]
EVIDENCE: [Data/observation supporting this cause]

ROOT CAUSE VALIDATION:
- [ ] Can the root cause be verified with evidence?
- [ ] If root cause is eliminated, would problem recur?
- [ ] Is the root cause within organizational control?
- [ ] Does the root cause explain all symptoms?
Example: Calibration Overdue
PROBLEM: pH meter (EQ-042) found 2 months overdue for calibration

WHY 1: Why was calibration overdue?
BECAUSE: The equipment was not on the calibration schedule
EVIDENCE: Calibration schedule reviewed, EQ-042 not listed

WHY 2: Why was it not on the calibration schedule?
BECAUSE: The schedule was not updated when equipment was purchased
EVIDENCE: Purchase date 2023-06-15, schedule dated 2023-01-01

WHY 3: Why was the schedule not updated?
BECAUSE: No process requires schedule update at equipment purchase
EVIDENCE: Equipment procedure SOP-EQ-001 reviewed, no such requirement

WHY 4: Why is there no requirement to update the schedule?
BECAUSE: The procedure was written before equipment tracking was centralized
EVIDENCE: SOP-EQ-001 last revised 2019, equipment system implemented 2021

WHY 5: Why has the procedure not been updated?
BECAUSE: Periodic procedure review did not assess compatibility with new systems
EVIDENCE: No documented review of SOP-EQ-001 against new equipment system

ROOT CAUSE: Procedure review process does not assess compatibility
with organizational systems implemented after original procedure creation
Fishbone Diagram
Overview
Also called Ishikawa or cause-and-effect diagram. Organizes potential causes into categories branching from the problem statement.
Fault Tree Analysis
Overview
Top-down, deductive analysis starting with an undesired event and systematically identifying all potential causes using Boolean logic (AND/OR gates).
When to Use
Safety-critical system failures
Complex system reliability analysis
Events with multiple failure pathways
Regulatory-required investigations (FDA, MDR)
FTA Symbols
| Symbol | Name | Meaning |
|--------|------|---------|
| Rectangle | Top Event / Intermediate Event | Undesired event or intermediate fault |
| Circle | Basic Event | Primary fault requiring no further analysis |
| Diamond | Undeveloped Event | Event not fully analyzed (data limitation) |
| AND Gate | Requires all inputs | All child events must occur for parent |
| OR Gate | Requires any input | Any child event causes parent |
FTA Template
TOP EVENT: [Undesired event under investigation]

LEVEL 1 (Immediate Causes):

[Top Event]
     │
     └── OR GATE ──┬── [Cause 1.1]
                   ├── [Cause 1.2]
                   └── [Cause 1.3]

LEVEL 2 (Contributing Causes):

[Cause 1.1]
     │
     └── AND GATE ──┬── [Cause 2.1]
                    └── [Cause 2.2]

MINIMAL CUT SETS:
(Combinations of basic events that cause top event)
1. {Basic Event A, Basic Event B} ← Both required (AND)
2. {Basic Event C} ← Single point failure (OR)
3. {Basic Event D, Basic Event E} ← Both required (AND)

CRITICAL PATH ANALYSIS:
Most likely failure pathway: [Description]
Single points of failure: [List]

RECOMMENDATIONS:
- Address single points of failure first
- Add redundancy where AND gates show vulnerability
- Prioritize controls on highest probability paths
Cut Set Analysis
Minimal cut sets identify the smallest combinations of basic events that cause the top event:
Single-element cut sets: Single points of failure (highest priority)
Two-element cut sets: Dual failure scenarios
Probability calculation: P(Top Event) = P(union of minimal cut sets); for rare, independent events this is well approximated by the sum of the individual cut set probabilities
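The cut set arithmetic can be sketched numerically. A minimal example assuming independent basic events and the rare-event approximation (summing cut set probabilities); the event labels and probabilities are illustrative:

```python
def cut_set_probability(cut_set, p):
    """P(all basic events in one minimal cut set occur), assuming independence."""
    prob = 1.0
    for event in cut_set:
        prob *= p[event]
    return prob

def top_event_probability(cut_sets, p):
    """Rare-event approximation: sum of minimal cut set probabilities."""
    return sum(cut_set_probability(cs, p) for cs in cut_sets)

# {A, B} requires both events (AND gate); {C} is a single point of failure
p = {"A": 0.01, "B": 0.02, "C": 0.001}
cut_sets = [{"A", "B"}, {"C"}]
print(top_event_probability(cut_sets, p))  # ~0.0012 (0.01*0.02 + 0.001)
```

The numbers make the prioritization rule concrete: the single-element cut set {C} dominates the top-event probability, which is why single points of failure are addressed first.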
Human Factors Analysis
Overview
Systematic analysis of human error focusing on cognitive, physical, and organizational factors contributing to performance failures.
HFACS Categories
Human Factors Analysis and Classification System:
| Level | Category | Examples |
|-------|----------|----------|
| Unsafe Acts | Errors, violations | Skill-based, decision, perceptual errors |
| Preconditions | Conditions for unsafe acts | Fatigue, mental state, CRM, physical environment |
| Unsafe Supervision | Supervisory failures | Inadequate supervision, planned inappropriate ops |
| Organizational Influences | Organizational failures | Resource management, organizational climate |
Human Error Types
| Type | Description | Example | Mitigation |
|------|-------------|---------|------------|
| Slip | Execution error in routine task | Wrong button pressed | Error-proofing, forcing functions |
| Lapse | Memory failure | Forgot step in procedure | Checklists, reminders |
| Mistake | Planning/decision error | Wrong procedure selected | Training, decision aids |
| Violation | Intentional deviation | Skipped step to save time | Culture change, supervision |
Human Factors Investigation Template
INCIDENT DESCRIPTION:
[What happened, who was involved, when, where]

UNSAFE ACTS ANALYSIS:
Type of Error: [ ] Slip [ ] Lapse [ ] Mistake [ ] Violation
Description: [Specific action or inaction]
Task Being Performed: [Activity at time of error]
Experience Level: [Novice/Intermediate/Expert]

PRECONDITIONS FOR UNSAFE ACTS:

Cognitive Factors:
- [ ] Task complexity exceeded capability
- [ ] Time pressure
- [ ] Distraction/interruption
- [ ] Mental fatigue

Physical Factors:
- [ ] Physical fatigue
- [ ] Inadequate lighting
- [ ] Noise interference
- [ ] Workspace ergonomics

Team Factors:
- [ ] Communication breakdown
- [ ] Coordination failure
- [ ] Inadequate leadership

SUPERVISORY FACTORS:
- [ ] Inadequate supervision
- [ ] Failed to correct known problem
- [ ] Inappropriate staffing
- [ ] Authorized unnecessary risk

ORGANIZATIONAL FACTORS:
- [ ] Resource management deficiency
- [ ] Organizational process issue
- [ ] Organizational culture/climate

ROOT CAUSE(S):
[Human factors root causes identified]

CORRECTIVE ACTIONS:

| Action | Target Factor | Priority |
|--------|---------------|----------|
| [Action 1] | [Factor addressed] | High |
| [Action 2] | [Factor addressed] | Medium |
Failure Mode and Effects Analysis
Overview
Proactive, systematic technique identifying potential failure modes, their causes, and effects before failures occur.
FMEA Types
| Type | Application | Scope |
|------|-------------|-------|
| Design FMEA (DFMEA) | Product design | Component and system design failures |
| Process FMEA (PFMEA) | Manufacturing process | Process step failures |
| System FMEA | System-level analysis | System interaction failures |
Risk Priority Number (RPN)
RPN = Severity (S) × Occurrence (O) × Detection (D)
Severity Scale (1-10):
| Rating | Effect | Criteria |
|--------|--------|----------|
| 10 | Hazardous | Failure affects safe operation, no warning |
| 8-9 | Very High | Primary function lost, high impact |
| 6-7 | High | Performance degraded, customer dissatisfied |
| 4-5 | Moderate | Some performance loss, moderate impact |
| 2-3 | Low | Minor effect, slight inconvenience |
| 1 | None | No discernible effect |
Occurrence Scale (1-10):
| Rating | Likelihood | Failure Rate |
|--------|------------|--------------|
| 10 | Very High | >1 in 10 |
| 7-9 | High | 1 in 20 - 1 in 100 |
| 4-6 | Moderate | 1 in 400 - 1 in 2,000 |
| 2-3 | Low | 1 in 15,000 - 1 in 150,000 |
| 1 | Remote | <1 in 1,500,000 |
Detection Scale (1-10):
| Rating | Detection | Criteria |
|--------|-----------|----------|
| 10 | Absolute Uncertainty | No inspection/control, defect will reach customer |
| 7-9 | Very Remote to Remote | Controls unlikely to detect |
| 4-6 | Moderate | Controls may detect |
| 2-3 | High | Controls likely to detect |
| 1 | Almost Certain | Controls will almost certainly detect |
FMEA Template
PROCESS/PRODUCT: [Name]
FMEA TEAM: [Members]
DATE: [Date]

| Item/Step | Failure Mode | Effect | S | Cause | O | Controls | D | RPN | Action |
|-----------|--------------|--------|---|-------|---|----------|---|-----|--------|
| [Item 1] | [How it fails] | [Impact] | 8 | [Why] | 4 | [Current] | 6 | 192 | [Action] |
| [Item 2] | [How it fails] | [Impact] | 6 | [Why] | 3 | [Current] | 4 | 72 | [Action] |

RPN THRESHOLD: Actions required for RPN > [threshold]
HIGH SEVERITY RULE: Actions required for S >= 9 regardless of RPN

ACTION PRIORITIZATION:
1. Address all items with S >= 9 first
2. Address items with highest RPN
3. Focus on reducing Occurrence (prevention)
4. Then improve Detection (inspection)
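The RPN arithmetic and the high-severity rule from the template can be sketched in a few lines; the function names and the threshold default are illustrative, not a standard:

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number = S x O x D, each rated 1-10."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("FMEA ratings must be between 1 and 10")
    return severity * occurrence * detection

def needs_action(severity, occurrence, detection, threshold=100) -> bool:
    """Action required above the RPN threshold, or for S >= 9 regardless of RPN."""
    return severity >= 9 or rpn(severity, occurrence, detection) > threshold

print(rpn(8, 4, 6))           # 192
print(needs_action(9, 1, 1))  # True (high-severity rule overrides low RPN)
```

The second example is the important one: an S = 9 item with RPN = 9 still demands action, which is why RPN ranking alone is never sufficient for prioritization.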
Selecting the Right Method
Decision Flowchart
START: Investigation Required
  │
  ├── Is this a proactive assessment (no failure yet)?
  │      └── Yes → Use FMEA
  │
  ├── Is the issue safety-critical?
  │      └── Yes → Use FAULT TREE ANALYSIS
  │
  ├── Is human error the primary concern?
  │      └── Yes → Use HUMAN FACTORS ANALYSIS
  │
  ├── Are there multiple contributing factors (3+)?
  │      ├── Yes → Use FISHBONE DIAGRAM
  │      └── No → Use 5 WHY ANALYSIS
  │
  └── Uncertain? → Start with 5 WHY, escalate to FISHBONE if needed
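For tooling that routes investigations automatically, the flowchart reduces to a short ordered check. A sketch mirroring the decision order above (function and parameter names are illustrative):

```python
from typing import Optional

def select_rca_method(proactive: bool, safety_critical: bool,
                      human_error: bool, factor_count: Optional[int]) -> str:
    """Pick an RCA method following the decision flowchart, checked in order."""
    if proactive:
        return "FMEA"
    if safety_critical:
        return "Fault Tree Analysis"
    if human_error:
        return "Human Factors Analysis"
    if factor_count is None or factor_count <= 2:
        return "5 Why Analysis"  # when uncertain, start simple and escalate
    return "Fishbone Diagram"

print(select_rca_method(False, True, False, 5))  # Fault Tree Analysis
```

The order of the checks matters: safety-critical issues go to fault tree analysis even when several contributing factors are suspected.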
Hybrid Approach
For complex investigations, combine methods:
Initial screening: 5 Why for quick cause identification
Detailed analysis: Fishbone to explore all categories
Validation: Fault Tree for critical failure paths
Systemic factors: Human Factors for people-related causes
Prevention: FMEA for future risk mitigation
Documentation Requirements
| Method | Required Outputs | Retention |
|--------|------------------|-----------|
| 5 Why | Completed template with evidence | CAPA record |
| Fishbone | Diagram + prioritized causes | CAPA record |
| Fault Tree | FTA diagram + cut set analysis | DHF/CAPA record |
| Human Factors | HFACS analysis + actions | CAPA record |
| FMEA | FMEA worksheet + action tracking | Design file |
#!/usr/bin/env python3
"""CAPA Tracker - Corrective and Preventive Action Management Tool

Tracks CAPA status, calculates metrics, identifies overdue items,
and generates reports for management review.

Usage:
    python capa_tracker.py --capas capas.json
    python capa_tracker.py --interactive
    python capa_tracker.py --capas capas.json --output json
"""

import argparse
import json
import sys
from dataclasses import dataclass, field, asdict
from datetime import datetime, timedelta
from typing import List, Dict, Optional
from enum import Enum


class CAPAStatus(Enum):
    OPEN = "Open"
    INVESTIGATION = "Investigation"
    ACTION_PLANNING = "Action Planning"
    IMPLEMENTATION = "Implementation"
    VERIFICATION = "Verification"
    CLOSED_EFFECTIVE = "Closed - Effective"
    CLOSED_INEFFECTIVE = "Closed - Ineffective"


class CAPASeverity(Enum):
    CRITICAL = "Critical"
    MAJOR = "Major"
    MINOR = "Minor"


class CAPASource(Enum):
    COMPLAINT = "Customer Complaint"
    AUDIT = "Internal Audit"
    EXTERNAL_AUDIT = "External Audit"
    NONCONFORMANCE = "Nonconformance"
    MANAGEMENT_REVIEW = "Management Review"
    TREND_ANALYSIS = "Trend Analysis"
    REGULATORY = "Regulatory Feedback"
    OTHER = "Other"


@dataclass
class CAPA:
    capa_number: str
    title: str
    description: str
    source: CAPASource
    severity: CAPASeverity
    status: CAPAStatus
    open_date: str
    target_date: str
    owner: str
    root_cause: str = ""
    corrective_action: str = ""
    verification_date: Optional[str] = None
    close_date: Optional[str] = None
    days_open: int = 0
    is_overdue: bool = False


@dataclass
class CAPAMetrics:
    total_capas: int
    open_capas: int
    closed_capas: int
    overdue_capas: int
    avg_cycle_time: float
    effectiveness_rate: float
    by_status: Dict[str, int]
    by_severity: Dict[str, int]
    by_source: Dict[str, int]
    overdue_list: List[Dict]
    recommendations: List[str]


class CAPATracker:
    """CAPA tracking and metrics calculator."""

    # Target cycle times by severity (days)
    TARGET_CYCLE_TIMES = {
        CAPASeverity.CRITICAL: 30,
        CAPASeverity.MAJOR: 60,
        CAPASeverity.MINOR: 90,
    }

    def __init__(self, capas: List[CAPA]):
        self.capas = capas
        self.today = datetime.now()
        self._calculate_derived_fields()

    def _calculate_derived_fields(self):
        """Calculate days open and overdue status."""
        for capa in self.capas:
            open_date = datetime.strptime(capa.open_date, "%Y-%m-%d")
            if capa.close_date:
                close_date = datetime.strptime(capa.close_date, "%Y-%m-%d")
                capa.days_open = (close_date - open_date).days
            else:
                capa.days_open = (self.today - open_date).days
            target_date = datetime.strptime(capa.target_date, "%Y-%m-%d")
            if not capa.close_date and self.today > target_date:
                capa.is_overdue = True

    def calculate_metrics(self) -> CAPAMetrics:
        """Calculate comprehensive CAPA metrics."""
        total = len(self.capas)

        # Status counts
        closed_statuses = [CAPAStatus.CLOSED_EFFECTIVE, CAPAStatus.CLOSED_INEFFECTIVE]
        open_capas = [c for c in self.capas if c.status not in closed_statuses]
        closed_capas = [c for c in self.capas if c.status in closed_statuses]
        overdue_capas = [c for c in self.capas if c.is_overdue]

        # Average cycle time (closed CAPAs only)
        if closed_capas:
            avg_cycle = sum(c.days_open for c in closed_capas) / len(closed_capas)
        else:
            avg_cycle = 0.0

        # Effectiveness rate
        effective = [c for c in self.capas if c.status == CAPAStatus.CLOSED_EFFECTIVE]
        ineffective = [c for c in self.capas if c.status == CAPAStatus.CLOSED_INEFFECTIVE]
        if effective or ineffective:
            effectiveness = len(effective) / (len(effective) + len(ineffective)) * 100
        else:
            effectiveness = 0.0

        # Counts by category
        by_status = {}
        for status in CAPAStatus:
            count = len([c for c in self.capas if c.status == status])
            if count > 0:
                by_status[status.value] = count

        by_severity = {}
        for severity in CAPASeverity:
            count = len([c for c in self.capas if c.severity == severity])
            if count > 0:
                by_severity[severity.value] = count

        by_source = {}
        for source in CAPASource:
            count = len([c for c in self.capas if c.source == source])
            if count > 0:
                by_source[source.value] = count

        # Overdue list
        overdue_list = []
        for capa in sorted(overdue_capas, key=lambda c: c.days_open, reverse=True):
            target = datetime.strptime(capa.target_date, "%Y-%m-%d")
            days_overdue = (self.today - target).days
            overdue_list.append({
                "capa_number": capa.capa_number,
                "title": capa.title,
                "severity": capa.severity.value,
                "status": capa.status.value,
                "days_overdue": days_overdue,
                "owner": capa.owner
            })

        # Generate recommendations
        recommendations = self._generate_recommendations(
            open_capas, overdue_capas, effectiveness, avg_cycle
        )

        return CAPAMetrics(
            total_capas=total,
            open_capas=len(open_capas),
            closed_capas=len(closed_capas),
            overdue_capas=len(overdue_capas),
            avg_cycle_time=round(avg_cycle, 1),
            effectiveness_rate=round(effectiveness, 1),
            by_status=by_status,
            by_severity=by_severity,
            by_source=by_source,
            overdue_list=overdue_list,
            recommendations=recommendations
        )

    def _generate_recommendations(
        self,
        open_capas: List[CAPA],
        overdue_capas: List[CAPA],
        effectiveness: float,
        avg_cycle: float
    ) -> List[str]:
        """Generate actionable recommendations."""
        recommendations = []

        # Overdue CAPAs
        if overdue_capas:
            critical_overdue = [c for c in overdue_capas
                                if c.severity == CAPASeverity.CRITICAL]
            if critical_overdue:
                recommendations.append(
                    f"URGENT: {len(critical_overdue)} critical CAPA(s) overdue. "
                    "Escalate to management immediately."
                )
            else:
                recommendations.append(
                    f"ACTION: {len(overdue_capas)} CAPA(s) overdue. "
                    "Review and update target dates or expedite closure."
                )

        # Effectiveness rate
        if effectiveness < 80 and effectiveness > 0:
            recommendations.append(
                f"CONCERN: Effectiveness rate at {effectiveness:.0f}%. "
                "Review root cause analysis quality and corrective action adequacy."
            )

        # Cycle time
        if avg_cycle > 60:
            recommendations.append(
                f"IMPROVEMENT: Average cycle time is {avg_cycle:.0f} days. "
                "Target is 60 days. Review investigation and approval bottlenecks."
            )

        # Investigation backlog
        in_investigation = [c for c in open_capas
                            if c.status == CAPAStatus.INVESTIGATION]
        if len(in_investigation) > 5:
            recommendations.append(
                f"WORKLOAD: {len(in_investigation)} CAPAs in investigation phase. "
                "Consider additional resources or prioritization."
            )

        # Stuck in verification
        in_verification = [c for c in open_capas
                           if c.status == CAPAStatus.VERIFICATION]
        old_verification = [c for c in in_verification if c.days_open > 120]
        if old_verification:
            recommendations.append(
                f"STALLED: {len(old_verification)} CAPA(s) in verification >120 days. "
                "Complete effectiveness checks or extend with justification."
            )

        # Source patterns
        complaint_capas = [c for c in self.capas if c.source == CAPASource.COMPLAINT]
        if len(complaint_capas) > len(self.capas) * 0.4:
            recommendations.append(
                "TREND: >40% of CAPAs from customer complaints. "
                "Review preventive action effectiveness and quality controls."
            )

        if not recommendations:
            recommendations.append(
                "CAPA program operating within targets. "
                "Continue monitoring key metrics."
            )

        return recommendations

    def get_aging_report(self) -> Dict:
        """Generate aging analysis of open CAPAs."""
        open_statuses = [
            CAPAStatus.OPEN, CAPAStatus.INVESTIGATION,
            CAPAStatus.ACTION_PLANNING, CAPAStatus.IMPLEMENTATION,
            CAPAStatus.VERIFICATION
        ]
        open_capas = [c for c in self.capas if c.status in open_statuses]

        aging_buckets = {
            "0-30 days": [],
            "31-60 days": [],
            "61-90 days": [],
            "91-120 days": [],
            ">120 days": []
        }

        for capa in open_capas:
            days = capa.days_open
            if days <= 30:
                bucket = "0-30 days"
            elif days <= 60:
                bucket = "31-60 days"
            elif days <= 90:
                bucket = "61-90 days"
            elif days <= 120:
                bucket = "91-120 days"
            else:
                bucket = ">120 days"
            aging_buckets[bucket].append({
                "capa_number": capa.capa_number,
                "title": capa.title,
                "days_open": days,
                "status": capa.status.value,
                "severity": capa.severity.value
            })

        return aging_buckets


def format_text_output(metrics: CAPAMetrics, aging: Dict) -> str:
    """Format metrics as text report."""
    lines = [
        "=" * 70,
        "CAPA STATUS REPORT",
        "=" * 70,
        f"Generated: {datetime.now().strftime('%Y-%m-%d %H:%M')}",
        "",
        "SUMMARY METRICS",
        "-" * 40,
        f"Total CAPAs: {metrics.total_capas}",
        f"Open CAPAs: {metrics.open_capas}",
        f"Closed CAPAs: {metrics.closed_capas}",
        f"Overdue CAPAs: {metrics.overdue_capas}",
        f"Avg Cycle Time: {metrics.avg_cycle_time} days",
        f"Effectiveness Rate: {metrics.effectiveness_rate}%",
        "",
        "STATUS DISTRIBUTION",
        "-" * 40,
    ]
    for status, count in metrics.by_status.items():
        bar = "█" * min(count, 20)
        lines.append(f"  {status:<25} {bar} {count}")

    lines.extend([
        "",
        "SEVERITY DISTRIBUTION",
        "-" * 40,
    ])
    for severity, count in metrics.by_severity.items():
        bar = "█" * min(count, 20)
        lines.append(f"  {severity:<25} {bar} {count}")

    lines.extend([
        "",
        "SOURCE DISTRIBUTION",
        "-" * 40,
    ])
    for source, count in metrics.by_source.items():
        bar = "█" * min(count, 20)
        lines.append(f"  {source:<25} {bar} {count}")

    lines.extend([
        "",
        "AGING ANALYSIS",
        "-" * 40,
    ])
    for bucket, capas in aging.items():
        lines.append(f"  {bucket}: {len(capas)} CAPA(s)")

    if metrics.overdue_list:
        lines.extend([
            "",
            "OVERDUE CAPAs",
            "-" * 40,
            f"{'CAPA #':<12} {'Title':<25} {'Days':<6} {'Owner':<15}",
            "-" * 60,
        ])
        for item in metrics.overdue_list[:10]:
            title = item["title"][:24] if len(item["title"]) > 24 else item["title"]
            lines.append(
                f"{item['capa_number']:<12} {title:<25} "
                f"{item['days_overdue']:<6} {item['owner']:<15}"
            )
        if len(metrics.overdue_list) > 10:
            lines.append(f"... and {len(metrics.overdue_list) - 10} more")

    lines.extend([
        "",
        "RECOMMENDATIONS",
        "-" * 40,
    ])
    for i, rec in enumerate(metrics.recommendations, 1):
        lines.append(f"{i}. {rec}")

    lines.append("=" * 70)
    return "\n".join(lines)


def interactive_mode():
    """Run interactive CAPA entry mode."""
    print("=" * 60)
    print("CAPA Tracker - Interactive Mode")
    print("=" * 60)
    capas = []
    print("\nEnter CAPAs (blank CAPA number to finish):\n")
    while True:
        capa_num = input("CAPA Number (e.g., CAPA-2024-001): ").strip()
        if not capa_num:
            break
        title = input("Title: ").strip()
        description = input("Description: ").strip()
        print("Source options: C=Complaint, A=Audit, N=Nonconformance, M=Management Review, T=Trend, O=Other")
        source_input = input("Source [C/A/N/M/T/O]: ").strip().upper()
        source_map = {
            "C": CAPASource.COMPLAINT,
            "A": CAPASource.AUDIT,
            "N": CAPASource.NONCONFORMANCE,
            "M": CAPASource.MANAGEMENT_REVIEW,
            "T": CAPASource.TREND_ANALYSIS,
            "O": CAPASource.OTHER
        }
        source = source_map.get(source_input, CAPASource.OTHER)
        print("Severity: C=Critical, M=Major, I=Minor")
        severity_input = input("Severity [C/M/I]: ").strip().upper()
        severity_map = {
            "C": CAPASeverity.CRITICAL,
            "M": CAPASeverity.MAJOR,
            "I": CAPASeverity.MINOR
        }
        severity = severity_map.get(severity_input, CAPASeverity.MINOR)
        print("Status: O=Open, I=Investigation, P=Action Planning, M=Implementation, V=Verification, E=Closed Effective, N=Closed Ineffective")
        status_input = input("Status [O/I/P/M/V/E/N]: ").strip().upper()
        status_map = {
            "O": CAPAStatus.OPEN,
            "I": CAPAStatus.INVESTIGATION,
            "P": CAPAStatus.ACTION_PLANNING,
            "M": CAPAStatus.IMPLEMENTATION,
            "V": CAPAStatus.VERIFICATION,
            "E": CAPAStatus.CLOSED_EFFECTIVE,
            "N": CAPAStatus.CLOSED_INEFFECTIVE
        }
        status = status_map.get(status_input, CAPAStatus.OPEN)
        open_date = input("Open Date (YYYY-MM-DD): ").strip()
        target_date = input("Target Date (YYYY-MM-DD): ").strip()
        owner = input("Owner: ").strip()
        close_date = None
        if status in [CAPAStatus.CLOSED_EFFECTIVE, CAPAStatus.CLOSED_INEFFECTIVE]:
            close_date = input("Close Date (YYYY-MM-DD): ").strip()
        capas.append(CAPA(
            capa_number=capa_num,
            title=title,
            description=description,
            source=source,
            severity=severity,
            status=status,
            open_date=open_date,
            target_date=target_date,
            owner=owner,
            close_date=close_date if close_date else None
        ))
        print(f"\nAdded: {capa_num}\n")

    if not capas:
        print("No CAPAs entered. Exiting.")
        return

    tracker = CAPATracker(capas)
    metrics = tracker.calculate_metrics()
    aging = tracker.get_aging_report()
    print("\n" + format_text_output(metrics, aging))


def main():
    parser = argparse.ArgumentParser(
        description="CAPA Tracking and Metrics Tool"
    )
    parser.add_argument(
        "--capas", type=str, help="JSON file with CAPA data"
    )
    parser.add_argument(
        "--output", choices=["text", "json"], default="text", help="Output format"
    )
    parser.add_argument(
        "--interactive", action="store_true", help="Run in interactive mode"
    )
    parser.add_argument(
        "--sample", action="store_true", help="Generate sample CAPA data file"
    )
    args = parser.parse_args()

    if args.interactive:
        interactive_mode()
        return

    if args.sample:
        sample_data = {
            "capas": [
                {
                    "capa_number": "CAPA-2024-001",
                    "title": "Calibration overdue for pH meter",
                    "description": "pH meter EQ-042 found 2 months overdue",
                    "source": "AUDIT",
                    "severity": "MAJOR",
                    "status": "VERIFICATION",
                    "open_date": "2024-06-15",
                    "target_date": "2024-08-15",
                    "owner": "J.
Smith", "root_cause": "No trigger for schedule update at equipment purchase", "corrective_action": "Updated SOP-EQ-001 to require schedule update" }, { "capa_number": "CAPA-2024-002", "title": "Customer complaint - labeling error", "description": "Wrong lot number on product label", "source": "COMPLAINT", "severity": "CRITICAL", "status": "INVESTIGATION", "open_date": "2024-09-01", "target_date": "2024-10-01", "owner": "M. Jones" }, { "capa_number": "CAPA-2024-003", "title": "Training records incomplete", "description": "Missing effectiveness verification for 3 operators", "source": "AUDIT", "severity": "MINOR", "status": "CLOSED_EFFECTIVE", "open_date": "2024-03-10", "target_date": "2024-06-10", "owner": "A. Brown", "close_date": "2024-05-20" } ] } print(json.dumps(sample_data, indent=2)) return if args.capas: with open(args.capas, "r") as f: data = json.load(f) capas = [] for c in data.get("capas", []): try: source = CAPASource[c.get("source", "OTHER").upper()] except KeyError: source = CAPASource.OTHER try: severity = CAPASeverity[c.get("severity", "MINOR").upper()] except KeyError: severity = CAPASeverity.MINOR try: status = CAPAStatus[c.get("status", "OPEN").upper()] except KeyError: status = CAPAStatus.OPEN capas.append(CAPA( capa_number=c["capa_number"], title=c.get("title", ""), description=c.get("description", ""), source=source, severity=severity, status=status, open_date=c["open_date"], target_date=c["target_date"], owner=c.get("owner", ""), root_cause=c.get("root_cause", ""), corrective_action=c.get("corrective_action", ""), verification_date=c.get("verification_date"), close_date=c.get("close_date") )) else: # Demo data if no file provided capas = [ CAPA( capa_number="CAPA-2024-001", title="Calibration overdue", description="pH meter overdue", source=CAPASource.AUDIT, severity=CAPASeverity.MAJOR, status=CAPAStatus.VERIFICATION, open_date="2024-06-15", target_date="2024-08-15", owner="J. 
Smith" ), CAPA( capa_number="CAPA-2024-002", title="Labeling error complaint", description="Wrong lot number", source=CAPASource.COMPLAINT, severity=CAPASeverity.CRITICAL, status=CAPAStatus.INVESTIGATION, open_date="2024-09-01", target_date="2024-10-01", owner="M. Jones" ), CAPA( capa_number="CAPA-2024-003", title="Training records incomplete", description="Missing effectiveness verification", source=CAPASource.AUDIT, severity=CAPASeverity.MINOR, status=CAPAStatus.CLOSED_EFFECTIVE, open_date="2024-03-10", target_date="2024-06-10", owner="A. Brown", close_date="2024-05-20" ) ] tracker = CAPATracker(capas) metrics = tracker.calculate_metrics() aging = tracker.get_aging_report() if args.output == "json": output = { "metrics": asdict(metrics), "aging": aging } print(json.dumps(output, indent=2)) else: print(format_text_output(metrics, aging))if __name__ == "__main__": main()
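The aging thresholds used by `get_aging_report` above (30/60/90/120-day buckets) can be exercised on their own. The sketch below re-implements just the bucketing rule for illustration; `bucket_for` is a hypothetical standalone helper, not part of the script:

```python
# Minimal sketch of the aging-bucket rule used by get_aging_report().
# bucket_for() is a hypothetical helper, shown only to make the
# threshold boundaries (inclusive upper bounds) easy to check.
def bucket_for(days_open: int) -> str:
    """Map a CAPA's age in days to the report's aging bucket."""
    if days_open <= 30:
        return "0-30 days"
    elif days_open <= 60:
        return "31-60 days"
    elif days_open <= 90:
        return "61-90 days"
    elif days_open <= 120:
        return "91-120 days"
    return ">120 days"

print(bucket_for(45))   # 31-60 days
print(bucket_for(150))  # >120 days
```

Note that each upper bound is inclusive, so a CAPA at exactly 120 days still lands in the "91-120 days" bucket; only 121+ counts as stalled.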
#!/usr/bin/env python3
"""Root Cause Analyzer - Structured root cause analysis for CAPA investigations.

Supports multiple analysis methodologies:
- 5-Why Analysis
- Fishbone (Ishikawa) Diagram
- Fault Tree Analysis
- Kepner-Tregoe Problem Analysis

Generates structured root cause reports and CAPA recommendations.

Usage:
    python root_cause_analyzer.py --method 5why --problem "High defect rate in assembly line"
    python root_cause_analyzer.py --interactive
    python root_cause_analyzer.py --data investigation.json --output json
"""
import argparse
import json
import sys
from dataclasses import dataclass, field, asdict
from typing import List, Dict, Optional
from enum import Enum
from datetime import datetime


class AnalysisMethod(Enum):
    FIVE_WHY = "5-Why"
    FISHBONE = "Fishbone"
    FAULT_TREE = "Fault Tree"
    KEPNER_TREGOE = "Kepner-Tregoe"


class RootCauseCategory(Enum):
    MAN = "Man (People)"
    MACHINE = "Machine (Equipment)"
    MATERIAL = "Material"
    METHOD = "Method (Process)"
    MEASUREMENT = "Measurement"
    ENVIRONMENT = "Environment"
    MANAGEMENT = "Management (Policy)"
    SOFTWARE = "Software/Data"


class SeverityLevel(Enum):
    LOW = "Low"
    MEDIUM = "Medium"
    HIGH = "High"
    CRITICAL = "Critical"


@dataclass
class WhyStep:
    """A single step in 5-Why analysis."""
    level: int
    question: str
    answer: str
    evidence: str = ""
    verified: bool = False


@dataclass
class FishboneCause:
    """A cause in fishbone analysis."""
    category: str
    cause: str
    sub_causes: List[str] = field(default_factory=list)
    is_root: bool = False
    evidence: str = ""


@dataclass
class FaultEvent:
    """An event in fault tree analysis."""
    event_id: str
    description: str
    is_basic: bool = True  # Basic events have no children
    gate_type: str = "OR"  # OR, AND
    children: List[str] = field(default_factory=list)
    probability: Optional[float] = None


@dataclass
class RootCauseFinding:
    """Identified root cause with evidence."""
    cause_id: str
    description: str
    category: str
    evidence: List[str] = field(default_factory=list)
    contributing_factors: List[str] = field(default_factory=list)
    systemic: bool = False  # Whether it's a systemic vs. local issue


@dataclass
class CAPARecommendation:
    """Corrective or preventive action recommendation."""
    action_id: str
    action_type: str  # "Corrective" or "Preventive"
    description: str
    addresses_cause: str  # cause_id
    priority: str
    estimated_effort: str
    responsible_role: str
    effectiveness_criteria: List[str] = field(default_factory=list)


@dataclass
class RootCauseAnalysis:
    """Complete root cause analysis result."""
    investigation_id: str
    problem_statement: str
    analysis_method: str
    root_causes: List[RootCauseFinding]
    recommendations: List[CAPARecommendation]
    analysis_details: Dict
    confidence_level: float
    investigator_notes: List[str] = field(default_factory=list)


class RootCauseAnalyzer:
    """Performs structured root cause analysis."""

    def __init__(self):
        self.analysis_steps = []
        self.findings = []

    def analyze_5why(self, problem: str, whys: List[Dict] = None) -> Dict:
        """Perform 5-Why analysis."""
        steps = []
        if whys:
            for i, w in enumerate(whys, 1):
                steps.append(WhyStep(
                    level=i,
                    question=w.get("question", f"Why did this occur? (Level {i})"),
                    answer=w.get("answer", ""),
                    evidence=w.get("evidence", ""),
                    verified=w.get("verified", False)
                ))
        # Analyze depth and quality
        depth = len(steps)
        has_root = any(
            s.answer and ("system" in s.answer.lower()
                          or "policy" in s.answer.lower()
                          or "process" in s.answer.lower())
            for s in steps
        )
        return {
            "method": "5-Why Analysis",
            "steps": [asdict(s) for s in steps],
            "depth": depth,
            "reached_systemic_cause": has_root,
            "quality_score": min(100, depth * 20 + (20 if has_root else 0))
        }

    def analyze_fishbone(self, problem: str, causes: List[Dict] = None) -> Dict:
        """Perform fishbone (Ishikawa) analysis."""
        categories = {}
        fishbone_causes = []
        if causes:
            for c in causes:
                cat = c.get("category", "Method")
                cause = c.get("cause", "")
                sub = c.get("sub_causes", [])
                if cat not in categories:
                    categories[cat] = []
                categories[cat].append({
                    "cause": cause,
                    "sub_causes": sub,
                    "is_root": c.get("is_root", False),
                    "evidence": c.get("evidence", "")
                })
                fishbone_causes.append(FishboneCause(
                    category=cat,
                    cause=cause,
                    sub_causes=sub,
                    is_root=c.get("is_root", False),
                    evidence=c.get("evidence", "")
                ))
        root_causes = [fc for fc in fishbone_causes if fc.is_root]
        return {
            "method": "Fishbone (Ishikawa) Analysis",
            "problem": problem,
            "categories": categories,
            "total_causes": len(fishbone_causes),
            "root_causes_identified": len(root_causes),
            "categories_covered": list(categories.keys()),
            "recommended_categories": [c.value for c in RootCauseCategory],
            "missing_categories": [c.value for c in RootCauseCategory
                                   if c.value.split(" (")[0] not in categories]
        }

    def analyze_fault_tree(self, top_event: str, events: List[Dict] = None) -> Dict:
        """Perform fault tree analysis."""
        fault_events = {}
        if events:
            for e in events:
                fault_events[e["event_id"]] = FaultEvent(
                    event_id=e["event_id"],
                    description=e.get("description", ""),
                    is_basic=e.get("is_basic", True),
                    gate_type=e.get("gate_type", "OR"),
                    children=e.get("children", []),
                    probability=e.get("probability")
                )
        # Find basic events (root causes)
        basic_events = {eid: ev for eid, ev in fault_events.items() if ev.is_basic}
        intermediate_events = {eid: ev for eid, ev in fault_events.items() if not ev.is_basic}
        return {
            "method": "Fault Tree Analysis",
            "top_event": top_event,
            "total_events": len(fault_events),
            "basic_events": len(basic_events),
            "intermediate_events": len(intermediate_events),
            "basic_event_details": [asdict(e) for e in basic_events.values()],
            "cut_sets": self._find_cut_sets(fault_events)
        }

    def _find_cut_sets(self, events: Dict[str, FaultEvent]) -> List[List[str]]:
        """Find minimal cut sets (combinations of basic events that cause top event)."""
        # Simplified cut set analysis
        cut_sets = []
        for eid, event in events.items():
            if not event.is_basic and event.gate_type == "AND":
                cut_sets.append(event.children)
        return cut_sets[:5]  # Return top 5

    def generate_recommendations(
        self,
        root_causes: List[RootCauseFinding],
        problem: str
    ) -> List[CAPARecommendation]:
        """Generate CAPA recommendations based on root causes."""
        recommendations = []
        for i, cause in enumerate(root_causes, 1):
            # Corrective action (fix the immediate cause)
            recommendations.append(CAPARecommendation(
                action_id=f"CA-{i:03d}",
                action_type="Corrective",
                description=f"Address immediate cause: {cause.description}",
                addresses_cause=cause.cause_id,
                priority=self._assess_priority(cause),
                estimated_effort=self._estimate_effort(cause),
                responsible_role=self._suggest_responsible(cause),
                effectiveness_criteria=[
                    f"Elimination of {cause.description} confirmed by audit",
                    "No recurrence within 90 days",
                    "Metrics return to acceptable range"
                ]
            ))
            # Preventive action (prevent recurrence in other areas)
            if cause.systemic:
                recommendations.append(CAPARecommendation(
                    action_id=f"PA-{i:03d}",
                    action_type="Preventive",
                    description="Systemic prevention: Update process/procedure to prevent similar issues",
                    addresses_cause=cause.cause_id,
                    priority="Medium",
                    estimated_effort="2-4 weeks",
                    responsible_role="Quality Manager",
                    effectiveness_criteria=[
                        "Updated procedure approved and implemented",
                        "Training completed for affected personnel",
                        "No similar issues in related processes within 6 months"
                    ]
                ))
        return recommendations

    def _assess_priority(self, cause: RootCauseFinding) -> str:
        if cause.systemic or "safety" in cause.description.lower():
            return "High"
        elif "quality" in cause.description.lower():
            return "Medium"
        return "Low"

    def _estimate_effort(self, cause: RootCauseFinding) -> str:
        if cause.systemic:
            return "4-8 weeks"
        elif len(cause.contributing_factors) > 3:
            return "2-4 weeks"
        return "1-2 weeks"

    def _suggest_responsible(self, cause: RootCauseFinding) -> str:
        category_roles = {
            "Man": "Training Manager",
            "Machine": "Engineering Manager",
            "Material": "Supply Chain Manager",
            "Method": "Process Owner",
            "Measurement": "Quality Engineer",
            "Environment": "Facilities Manager",
            "Management": "Department Head",
            "Software": "IT/Software Manager"
        }
        cat_key = cause.category.split(" (")[0] if "(" in cause.category else cause.category
        return category_roles.get(cat_key, "Quality Manager")

    def full_analysis(
        self,
        problem: str,
        method: str = "5-Why",
        analysis_data: Dict = None
    ) -> RootCauseAnalysis:
        """Perform complete root cause analysis."""
        investigation_id = f"RCA-{datetime.now().strftime('%Y%m%d-%H%M')}"
        analysis_details = {}
        root_causes = []

        if method == "5-Why" and analysis_data:
            analysis_details = self.analyze_5why(problem, analysis_data.get("whys", []))
            # Extract root cause from deepest why
            steps = analysis_details.get("steps", [])
            if steps:
                last_step = steps[-1]
                root_causes.append(RootCauseFinding(
                    cause_id="RC-001",
                    description=last_step.get("answer", "Unknown"),
                    category="Systemic",
                    evidence=[s.get("evidence", "") for s in steps if s.get("evidence")],
                    systemic=analysis_details.get("reached_systemic_cause", False)
                ))
        elif method == "Fishbone" and analysis_data:
            analysis_details = self.analyze_fishbone(problem, analysis_data.get("causes", []))
            for i, cat in enumerate(analysis_data.get("causes", [])):
                if cat.get("is_root"):
                    root_causes.append(RootCauseFinding(
                        cause_id=f"RC-{i+1:03d}",
                        description=cat.get("cause", ""),
                        category=cat.get("category", ""),
                        evidence=[cat.get("evidence", "")] if cat.get("evidence") else [],
                        # RootCauseFinding has no sub_causes field; carry
                        # fishbone sub-causes as contributing factors
                        contributing_factors=cat.get("sub_causes", []),
                        systemic=True
                    ))

        recommendations = self.generate_recommendations(root_causes, problem)

        # Confidence based on evidence and method
        confidence = 0.7
        if root_causes and any(rc.evidence for rc in root_causes):
            confidence = 0.85
        if len(root_causes) > 1:
            confidence = min(0.95, confidence + 0.05)

        return RootCauseAnalysis(
            investigation_id=investigation_id,
            problem_statement=problem,
            analysis_method=method,
            root_causes=root_causes,
            recommendations=recommendations,
            analysis_details=analysis_details,
            confidence_level=confidence
        )


def format_rca_text(rca: RootCauseAnalysis) -> str:
    """Format RCA report as text."""
    lines = [
        "=" * 70,
        "ROOT CAUSE ANALYSIS REPORT",
        "=" * 70,
        f"Investigation ID: {rca.investigation_id}",
        f"Analysis Method: {rca.analysis_method}",
        f"Confidence Level: {rca.confidence_level:.0%}",
        "",
        "PROBLEM STATEMENT",
        "-" * 40,
        f"  {rca.problem_statement}",
        "",
        "ROOT CAUSES IDENTIFIED",
        "-" * 40,
    ]
    for rc in rca.root_causes:
        lines.extend([
            "",
            f"  [{rc.cause_id}] {rc.description}",
            f"    Category: {rc.category}",
            f"    Systemic: {'Yes' if rc.systemic else 'No'}",
        ])
        if rc.evidence:
            lines.append("    Evidence:")
            for ev in rc.evidence:
                if ev:
                    lines.append(f"      • {ev}")
        if rc.contributing_factors:
            lines.append("    Contributing Factors:")
            for cf in rc.contributing_factors:
                lines.append(f"      - {cf}")
    lines.extend([
        "",
        "RECOMMENDED ACTIONS",
        "-" * 40,
    ])
    for rec in rca.recommendations:
        lines.extend([
            "",
            f"  [{rec.action_id}] {rec.action_type}: {rec.description}",
            f"    Priority: {rec.priority} | Effort: {rec.estimated_effort}",
            f"    Responsible: {rec.responsible_role}",
            "    Effectiveness Criteria:",
        ])
        for ec in rec.effectiveness_criteria:
            lines.append(f"      ✓ {ec}")
    if "steps" in rca.analysis_details:
        lines.extend([
            "",
            "5-WHY CHAIN",
            "-" * 40,
        ])
        for step in rca.analysis_details["steps"]:
            lines.extend([
                "",
                f"  Why {step['level']}: {step['question']}",
                f"    → {step['answer']}",
            ])
            if step.get("evidence"):
                lines.append(f"    Evidence: {step['evidence']}")
    lines.append("=" * 70)
    return "\n".join(lines)


def main():
    parser = argparse.ArgumentParser(description="Root Cause Analyzer for CAPA Investigations")
    parser.add_argument("--problem", type=str, help="Problem statement")
    parser.add_argument("--method", choices=["5why", "fishbone", "fault-tree", "kt"],
                        default="5why", help="Analysis method")
    parser.add_argument("--data", type=str, help="JSON file with analysis data")
    parser.add_argument("--output", choices=["text", "json"], default="text",
                        help="Output format")
    parser.add_argument("--interactive", action="store_true", help="Interactive mode")
    args = parser.parse_args()

    analyzer = RootCauseAnalyzer()

    if args.data:
        with open(args.data) as f:
            data = json.load(f)
        problem = data.get("problem", "Unknown problem")
        method = data.get("method", "5-Why")
        rca = analyzer.full_analysis(problem, method, data)
    elif args.problem:
        method_map = {"5why": "5-Why", "fishbone": "Fishbone",
                      "fault-tree": "Fault Tree", "kt": "Kepner-Tregoe"}
        rca = analyzer.full_analysis(args.problem, method_map.get(args.method, "5-Why"))
    else:
        # Demo
        demo_data = {
            "method": "5-Why",
            "whys": [
                {"question": "Why did the product fail inspection?",
                 "answer": "Surface defect detected on 15% of units",
                 "evidence": "QC inspection records"},
                {"question": "Why did surface defects occur?",
                 "answer": "Injection molding temperature was outside spec",
                 "evidence": "Process monitoring data"},
                {"question": "Why was temperature outside spec?",
                 "answer": "Temperature controller calibration drift",
                 "evidence": "Calibration log"},
                {"question": "Why did calibration drift go undetected?",
                 "answer": "No automated alert for drift, manual checks missed it",
                 "evidence": "SOP review"},
                {"question": "Why was there no automated alert?",
                 "answer": "Process monitoring system lacks drift detection capability - systemic gap",
                 "evidence": "System requirements review"}
            ]
        }
        rca = analyzer.full_analysis("High defect rate in injection molding process",
                                     "5-Why", demo_data)

    if args.output == "json":
        result = {
            "investigation_id": rca.investigation_id,
            "problem": rca.problem_statement,
            "method": rca.analysis_method,
            "root_causes": [asdict(rc) for rc in rca.root_causes],
            "recommendations": [asdict(rec) for rec in rca.recommendations],
            "analysis_details": rca.analysis_details,
            "confidence": rca.confidence_level
        }
        print(json.dumps(result, indent=2, default=str))
    else:
        print(format_rca_text(rca))


if __name__ == "__main__":
    main()
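The 5-Why quality score computed by `analyze_5why` (20 points per why-level, plus a 20-point bonus when a systemic cause is reached, capped at 100) can be checked in isolation. The sketch below is a hypothetical standalone re-implementation of that scoring rule, shown only to make the arithmetic explicit:

```python
# Standalone sketch of analyze_5why's scoring rule (hypothetical helper):
# 20 points per why-level, +20 if a systemic cause was reached, capped at 100.
def quality_score(depth: int, reached_systemic: bool) -> int:
    """Score a 5-Why chain by depth and whether it reached a systemic cause."""
    return min(100, depth * 20 + (20 if reached_systemic else 0))

print(quality_score(5, True))   # 100 (5*20 + 20 = 120, capped at 100)
print(quality_score(3, False))  # 60
```

The cap means a full five-level chain scores 100 with or without the systemic bonus, so the bonus mainly rewards shallower chains that still surface a systemic cause.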