Bundled resources:
references/management-review-guide.md
references/quality-kpi-framework.md
scripts/management_review_tracker.py
scripts/quality_effectiveness_monitor.py
name: "quality-manager-qmr"
description: Senior Quality Manager Responsible Person (QMR) for HealthTech and MedTech companies. Provides quality system governance, management review leadership, regulatory compliance oversight, and quality performance monitoring per ISO 13485 Clause 5.5.2.
triggers:
management review
quality policy
quality objectives
QMR responsibilities
quality system effectiveness
quality KPIs
cost of quality
quality performance
management accountability
regulatory oversight
quality culture
quality governance
Senior Quality Manager Responsible Person (QMR)
Quality system accountability, management review leadership, and regulatory compliance oversight per ISO 13485 Clause 5.5.2 requirements.
Table of Contents
QMR Responsibilities
ISO 13485 Clause 5.5.2 Requirements
| Responsibility | Scope | Evidence |
|----------------|-------|----------|
| QMS effectiveness | Monitor system performance and suitability | Management review records |
| Reporting to management | Communicate QMS performance to top management | Quality reports, dashboards |
| Quality awareness | Promote regulatory and quality requirements | Training records, communications |
| Liaison with external parties | Interface with regulators, Notified Bodies | Meeting records, correspondence |
QMR Accountability Matrix
| Domain | Accountable For | Reports To | Frequency |
|--------|-----------------|------------|-----------|
| Quality Policy | Policy adequacy and communication | CEO/Board | Annual review |
| Quality Objectives | Objective achievement and relevance | Executive Team | Quarterly |
| QMS Performance | System effectiveness metrics | Management | Monthly |
| Regulatory Compliance | Compliance status across jurisdictions | CEO | Quarterly |
| Audit Program | Audit schedule completion, findings closure | Management | Per audit |
| CAPA Oversight | CAPA effectiveness and timeliness | Executive Team | Monthly |
Authority Boundaries
| Decision Type | QMR Authority | Escalation Required |
|---------------|---------------|---------------------|
| Process changes within QMS | Approve with owner | Major process redesign |
| Document approval | Final QA approval | Policy-level changes |
| Nonconformity disposition | Accept/reject with MRB | Product release decisions |
| Supplier quality actions | Quality holds, audits | Supplier termination |
| Audit scheduling | Adjust internal audit schedule | External audit timing |
| Training requirements | Define quality training needs | Organization-wide training budget |
Management Review Workflow
Conduct management reviews per ISO 13485 Clause 5.6 requirements.
Workflow: Prepare and Execute Management Review
Schedule management review (minimum annually, typically quarterly or semi-annually)
Notify all required attendees minimum 2 weeks prior
Collect required inputs from process owners:
Audit results (internal and external)
Customer feedback (complaints, satisfaction, returns)
Process performance and product conformity
CAPA status and effectiveness
Previous review action items
Changes affecting QMS (regulatory, organizational)
Recommendations for improvement
Compile input summary report with trend analysis
Prepare presentation materials with supporting data
Distribute agenda and input package 1 week prior
Conduct review meeting per agenda
Validation: All required inputs reviewed; decisions documented with owners and due dates
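The input-collection gate above lends itself to a simple completeness check. A minimal sketch in Python (the function name and status strings are illustrative assumptions; the required input topics follow ISO 13485 Clause 5.6.2):

```python
# Sketch: verify every required management review input is collected
# before distributing the agenda. The "Complete" status string is an
# illustrative convention, not a mandated value.

REQUIRED_INPUTS = [
    "Audit results", "Customer feedback", "Process performance",
    "Product conformity", "CAPA status", "Previous actions",
    "QMS changes", "Recommendations",
]

def missing_inputs(collected: dict) -> list:
    """Return required inputs not yet marked 'Complete'."""
    return [name for name in REQUIRED_INPUTS
            if collected.get(name) != "Complete"]
```

A preparation checklist can then hold agenda distribution until `missing_inputs` returns an empty list.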
Required Attendees
| Role | Requirement | Input Responsibility |
|------|-------------|----------------------|
| CEO/General Manager | Required | Strategic decisions |
| QMR | Chair | Overall QMS status |
| Department Heads | Required | Process performance |
| RA Manager | Required | Regulatory changes |
| Production Manager | Required | Product conformity |
| Customer Quality | Required | Complaint data |
MANAGEMENT REVIEW INPUT SUMMARY
Review Period: [Start Date] to [End Date]
Review Date: [Scheduled Date]
Prepared By: [QMR Name]
1. AUDIT RESULTS
Internal audits completed: [X] of [X] planned
External audits completed: [X]
Total findings: [X] major / [X] minor
Open findings: [X]
Finding trends: [Analysis]
2. CUSTOMER FEEDBACK
Complaints received: [X]
Complaint rate: [X per 1000 units]
Customer satisfaction score: [X.X/5.0]
Returns: [X] units ([X]%)
Top issues: [Categories]
3. PROCESS PERFORMANCE
[Process 1]: [Metric] vs [Target] - [Status]
[Process 2]: [Metric] vs [Target] - [Status]
Out-of-spec processes: [List]
4. PRODUCT CONFORMITY
First pass yield: [X]%
Nonconformance rate: [X]%
Scrap cost: $[X]
Top defect categories: [List]
5. CAPA STATUS
Open CAPAs: [X]
Overdue: [X]
Effectiveness rate: [X]%
Average age: [X] days
6. PREVIOUS ACTIONS
Total from last review: [X]
Completed: [X] | In progress: [X] | Overdue: [X]
7. CHANGES AFFECTING QMS
Regulatory: [List changes]
Organizational: [List changes]
Process: [List changes]
8. RECOMMENDATIONS
[Collected improvement opportunities]
Management Review Output Requirements
| Output | Documentation | Owner |
|--------|---------------|-------|
| QMS improvement decisions | Action items with due dates | Assigned per item |
| Resource needs | Resource plan updates | Department heads |
| Quality objectives changes | Updated objectives document | QMR |
| Process improvement needs | Improvement project charters | Process owners |
See: references/management-review-guide.md
Quality KPI Management Workflow
Establish, monitor, and report quality performance indicators.
Workflow: Establish Quality KPI Framework
Identify quality objectives requiring measurement
Select KPIs per objective using SMART criteria:
Specific: Clear definition and calculation
Measurable: Quantifiable with available data
Actionable: Team can influence results
Relevant: Aligned to quality objectives
Time-bound: Defined measurement frequency
Define target values based on baseline data and benchmarks
Assign data source and collection responsibility
Establish reporting frequency per KPI category
Configure dashboard displays and trend analysis
Define escalation thresholds and alert triggers
Validation: Each KPI has owner, target, data source, and escalation criteria
Core Quality KPIs
| Category | KPI | Target | Calculation |
|----------|-----|--------|-------------|
| Process | First Pass Yield | >95% | (Units passed first time / Total units) × 100 |
| Process | Nonconformance Rate | <1% | (NC count / Total units) × 100 |
| CAPA | CAPA Closure Rate | >90% | (On-time closures / Due closures) × 100 |
| CAPA | CAPA Effectiveness | >85% | (Effective CAPAs / Verified CAPAs) × 100 |
| Audit | Finding Closure Rate | >90% | (On-time closures / Due closures) × 100 |
| Audit | Repeat Finding Rate | <10% | (Repeat findings / Total findings) × 100 |
| Customer | Complaint Rate | <0.1% | (Complaints / Units sold) × 100 |
| Customer | Satisfaction Score | >4.0/5.0 | Average of survey scores |
KPI Review Frequency
| KPI Type | Review Frequency | Trend Period | Audience |
|----------|------------------|--------------|----------|
| Safety/Compliance | Daily monitoring | Weekly | Operations |
| Production Quality | Weekly | Monthly | Department heads |
| Customer Quality | Monthly | Quarterly | Executive team |
| Strategic Quality | Quarterly | Annual | Board/C-suite |
KPI Performance Thresholds
| Performance Level | Status | Action Required |
|-------------------|--------|-----------------|
| >110% of target | Exceeding | Consider raising target |
| 100-110% of target | Meeting | Maintain current approach |
| 90-100% of target | Approaching | Monitor closely |
| 80-90% of target | Below | Improvement plan required |
| <80% of target | Critical | Immediate intervention |
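The threshold bands above translate directly into a status helper. A minimal sketch, assuming a higher-is-better KPI (for lower-is-better metrics such as complaint rate, invert the ratio before applying the bands):

```python
def performance_status(actual: float, target: float) -> tuple:
    """Map performance vs. target to status bands.
    Assumes a higher-is-better KPI (sketch; name is illustrative)."""
    ratio = actual / target * 100
    if ratio > 110:
        return ("Exceeding", "Consider raising target")
    if ratio >= 100:
        return ("Meeting", "Maintain current approach")
    if ratio >= 90:
        return ("Approaching", "Monitor closely")
    if ratio >= 80:
        return ("Below", "Improvement plan required")
    return ("Critical", "Immediate intervention")
```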
See: references/quality-kpi-framework.md
Quality Objectives Workflow
Establish and maintain measurable quality objectives per ISO 13485 Clause 5.4.1.
Workflow: Annual Quality Objectives Setting
Review prior year objective achievement
Analyze quality performance trends and gaps
Align with organizational strategic plan
Draft objectives with measurable targets
Validate resource availability for achievement
Obtain executive approval
Communicate objectives organization-wide
Validation: Each objective is measurable, has owner, target, and timeline
Quality Objective Structure
QUALITY OBJECTIVE [Number]
Objective Statement: [Clear, measurable statement]
Aligned to Policy Element: [Quality policy section]
Target: [Specific measurable target]
Baseline: [Current performance]
Owner: [Name and title]
Due Date: [Target achievement date]
Success Criteria:
- [Criterion 1]
- [Criterion 2]
Measurement Method: [How progress is tracked]
Reporting Frequency: [Monthly/Quarterly]
Supporting Initiatives:
- [Initiative 1]
- [Initiative 2]
Resource Requirements:
- [Resource 1]
- [Resource 2]
Objective Categories
| Category | Example Objectives | Typical Targets |
|----------|--------------------|-----------------|
| Customer Quality | Reduce complaint rate | <0.1% of units sold |
| Process Quality | Improve first pass yield | >96% |
| Compliance | Maintain certification | Zero major NCs |
| Efficiency | Reduce quality costs | <4% of revenue |
| Culture | Increase training completion | >98% on-time |
Quarterly Objective Review
| Review Element | Assessment | Action |
|----------------|------------|--------|
| Progress vs. target | On track / Behind / Ahead | Adjust resources if behind |
| Relevance | Still valid / Needs update | Modify if conditions changed |
| Resources | Adequate / Insufficient | Request additional if needed |
| Barriers | Identified obstacles | Escalate for resolution |
Quality Culture Assessment Workflow
Assess and improve organizational quality culture.
Workflow: Annual Quality Culture Assessment
Design or select quality culture survey instrument
Define survey population (all employees or sample)
Communicate survey purpose and confidentiality
Administer survey with 2-week response window
Analyze results by department, role, and tenure
Identify strengths and improvement areas
Develop action plan for culture gaps
Validation: Response rate >60%; action plan addresses bottom 3 scores
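The validation step can be checked mechanically. An illustrative sketch (function and key names are assumptions):

```python
def survey_validation(responses: int, population: int,
                      dimension_scores: dict) -> dict:
    """Check response rate >60% and select the bottom 3 scoring
    dimensions as action plan targets (per the validation above)."""
    rate = responses / population * 100
    bottom3 = sorted(dimension_scores, key=dimension_scores.get)[:3]
    return {"response_rate_ok": rate > 60,
            "action_plan_targets": bottom3}
```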
Quality Culture Dimensions
| Dimension | Indicators | Assessment Method |
|-----------|------------|-------------------|
| Leadership commitment | Management visible support for quality | Survey, observation |
| Quality ownership | Employees feel responsible for quality | Survey |
| Communication | Quality information flows effectively | Survey, audit |
| Continuous improvement | Suggestions submitted and implemented | Metrics |
| Training and competence | Employees feel adequately trained | Survey, records |
| Problem solving | Issues addressed at root cause | CAPA analysis |
Culture Survey Categories
| Category | Sample Questions |
|----------|------------------|
| Leadership | "Management demonstrates commitment to quality" |
| Resources | "I have the tools and training to do quality work" |
| Communication | "Quality expectations are clearly communicated" |
| Empowerment | "I am encouraged to report quality issues" |
| Recognition | "Quality achievements are recognized" |
Culture Improvement Actions
| Gap Identified | Potential Actions |
|----------------|-------------------|
| Low leadership visibility | Quality gemba walks, all-hands quality updates |
| Inadequate training | Competency-based training program |
| Poor communication | Quality newsletters, department huddles |
| Low reporting | Anonymous reporting system, no-blame culture |
| Lack of recognition | Quality award program, team celebrations |
Regulatory Compliance Oversight
Monitor and maintain regulatory compliance across jurisdictions.
Multi-Jurisdictional Compliance Matrix
| Jurisdiction | Regulation | Requirement | Status Tracking |
|--------------|------------|-------------|-----------------|
| EU | MDR 2017/745 | CE marking, Notified Body | Technical file, annual review |
| USA | 21 CFR 820 | FDA registration, QSR compliance | Annual registration, inspections |
| International | ISO 13485 | QMS certification | Surveillance audits |
| Germany | MPG/MPDG | National implementation | Competent authority filings |
Compliance Monitoring Workflow
Maintain regulatory requirement register
Subscribe to regulatory update services
Assess impact of regulatory changes monthly
Update affected processes within 90 days of effective date
Verify training completion for regulatory changes
Document compliance status in management review
Maintain inspection readiness checklist
Validation: All applicable requirements mapped; no expired registrations
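The "no expired registrations" validation can be automated against the regulatory requirement register. A minimal sketch (names are illustrative; the 90-day renewal window mirrors the process-update window in the workflow above):

```python
from datetime import date, timedelta

def registration_alerts(registrations: dict, today: date) -> list:
    """Flag expired registrations and those expiring within 90 days.

    registrations maps a registration name to its expiry date.
    """
    alerts = []
    for name, expiry in sorted(registrations.items()):
        if expiry < today:
            alerts.append(f"EXPIRED: {name}")
        elif expiry <= today + timedelta(days=90):
            alerts.append(f"RENEW SOON: {name}")
    return alerts
```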
Regulatory Authority Interface
| Activity | QMR Role | Preparation Required |
|----------|----------|----------------------|
| Notified Body audit | Primary contact | Audit package, personnel schedules |
| FDA inspection | Host, escort coordinator | Inspection readiness review |
| Competent Authority inquiry | Response coordinator | Technical file access |
| Regulatory meeting | Attendee or delegate | Briefing materials |
Inspection Readiness Checklist
| Area | Ready | Action Needed |
|------|-------|---------------|
| Document control system current | ☐ | |
| Training records complete | ☐ | |
| CAPA system current, no overdue items | ☐ | |
| Complaint files complete | ☐ | |
| Equipment calibration current | ☐ | |
| Supplier qualification files complete | ☐ | |
| Management review records available | ☐ | |
| Internal audit program current | ☐ | |
Decision Frameworks
Escalation Decision Tree
Issue Identified
│
▼
Is it a regulatory violation?
│
Yes─┴─No
│ │
▼ ▼
Escalate to Is it a safety issue?
Executive │
immediately Yes─┴─No
│ │
▼ ▼
Escalate to Does it affect
Safety Team multiple departments?
│
Yes─┴─No
│ │
▼ ▼
Escalate to Handle at
Executive department level
Quality Investment Prioritization
| Criteria | Weight | Score Method |
|----------|--------|--------------|
| Regulatory requirement | 30% | Required=10, Recommended=5, Optional=2 |
| Customer impact | 25% | Direct=10, Indirect=5, None=0 |
| Cost savings potential | 20% | >$100K=10, $50-100K=7, <$50K=3 |
| Implementation complexity | 15% | Simple=10, Moderate=5, Complex=2 |
| Strategic alignment | 10% | Core=10, Supporting=5, Peripheral=2 |
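The weighted criteria above reduce to a single priority score. A minimal sketch (the dictionary keys are shorthand labels for the table rows):

```python
# Weights from the prioritization criteria; keys are shorthand labels.
WEIGHTS = {
    "regulatory": 0.30, "customer": 0.25, "savings": 0.20,
    "complexity": 0.15, "strategic": 0.10,
}

def priority_score(scores: dict) -> float:
    """Weighted priority score on a 0-10 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
```

Candidate investments can then be ranked by `priority_score`, highest first.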
Resource Allocation Matrix
| Resource Type | Allocation Authority | Escalation Threshold |
|---------------|----------------------|----------------------|
| Quality personnel | QMR | >1 FTE addition |
| Quality equipment | QMR | >$25K |
| External consultants | QMR | >$50K or >30 days |
| Quality systems | Executive approval | >$100K |
Scripts
Management Review Tracker Features:
Track input collection status from process owners
Monitor action item completion and aging
Generate metrics summary for review
Produce recommendations for review focus areas
References
Quick Reference: Management Review Inputs (ISO 13485 Clause 5.6.2)
| Input | Source | Required |
|-------|--------|----------|
| Feedback | Customer complaints, surveys | Yes |
| Audit results | Internal and external audits | Yes |
| Process performance | Process metrics | Yes |
| Product conformity | Inspection, NC data | Yes |
| CAPA status | CAPA system | Yes |
| Previous actions | Prior review records | Yes |
| Changes | Regulatory, organizational | Yes |
| Recommendations | All sources | Yes |
Quick Reference: Management Review Outputs (ISO 13485 Clause 5.6.3)
| Output | Documentation Required |
|--------|------------------------|
| Improvement to QMS and processes | Action items with owners |
| Improvement to product | Project initiation if needed |
| Resource needs | Resource plan updates |
Management Review Guide
ISO 13485 Clause 5.6 management review requirements, inputs, outputs, and action tracking.
Table of Contents
Review Requirements
ISO 13485:2016 Clause 5.6
| Requirement | Specification |
|-------------|---------------|
| Frequency | Planned intervals (typically quarterly or semi-annually) |
| Participants | Top management involvement required |
| Documentation | Records must be maintained |
| Inputs | All required inputs must be reviewed |
| Outputs | Decisions and actions documented |
Review Schedule
| Review Type | Frequency | Focus | Participants |
|-------------|-----------|-------|--------------|
| Full Management Review | Semi-annual or Annual | Complete QMS performance | CEO, QMR, all department heads |
| Quarterly Quality Review | Quarterly | Key metrics and actions | QMR, Quality team, affected managers |
| Monthly Quality Update | Monthly | Operational metrics | QMR, Quality team leads |
Planning Checklist
Required Inputs
ISO 13485 Required Input Topics
| Input | Source | Data Period | Responsible |
|-------|--------|-------------|-------------|
| Audit results | Internal and external audits | Since last review | QA Manager |
| Customer feedback | Complaints, surveys, returns | Since last review | Customer Quality |
| Process performance | Process metrics, yields | Since last review | Process owners |
| Product conformity | Inspection data, NCRs | Since last review | QC Manager |
| CAPA status | Open/closed CAPAs | Current status | CAPA Officer |
| Previous review actions | Action item tracker | Since last review | QMR |
| Changes to QMS | Regulatory, standard changes | Since last review | RA Manager |
| Recommendations | Improvement opportunities | Ongoing collection | All managers |
Input Data Collection Template
MANAGEMENT REVIEW INPUT SUMMARY
Review Period: [Start Date] to [End Date]
Prepared By: [Name]
Date Prepared: [Date]
1. AUDIT RESULTS
Internal Audits Completed: [Number]
External Audits Completed: [Number]
Major Findings: [Number] | Minor Findings: [Number]
Open Audit Actions: [Number]
Summary: [Brief narrative]
2. CUSTOMER FEEDBACK
Total Complaints: [Number]
Complaint Rate: [X per 1000 units]
Customer Satisfaction Score: [Score]
Top Complaint Categories:
- [Category 1]: [Count]
- [Category 2]: [Count]
Trend: [Improving/Stable/Declining]
3. PROCESS PERFORMANCE
| Process | Target | Actual | Status |
|---------|--------|--------|--------|
| [Process 1] | [Target] | [Actual] | [Met/Not Met] |
4. PRODUCT CONFORMITY
First Pass Yield: [%]
Nonconformance Rate: [%]
Reject/Scrap Cost: [$]
Top NC Categories:
- [Category 1]: [Count]
5. CAPA STATUS
Open CAPAs: [Number]
Overdue CAPAs: [Number]
Effectiveness Rate: [%]
Average Closure Time: [Days]
6. PREVIOUS ACTIONS
Total Actions from Last Review: [Number]
Completed: [Number] | In Progress: [Number] | Overdue: [Number]
7. QMS CHANGES
Regulatory Changes: [List]
Standard Updates: [List]
Internal Changes: [List]
8. RECOMMENDATIONS
[List improvement opportunities collected]
Data Analysis Guidelines
| Input | Analysis Required | Red Flags |
|-------|-------------------|-----------|
| Audit results | Trend by area, repeat findings | Major NC in same area twice |
| Complaints | Pareto analysis, rate trending | Increasing rate, safety issues |
| Process performance | Control charts, capability | Out of control, Cpk <1.33 |
| Product conformity | Defect Pareto, yield trending | Declining yield, new defect types |
| CAPA | Aging analysis, effectiveness | >10% overdue, <80% effective |
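The quantitative red-flag thresholds above can be encoded as a pre-review screen. A minimal sketch (the function name is illustrative; the qualitative flags, such as repeat findings and safety issues, still need human judgment):

```python
def quantitative_red_flags(cpk: float, capa_overdue_pct: float,
                           capa_effective_pct: float) -> list:
    """Apply the numeric red-flag thresholds from the guidelines above."""
    flags = []
    if cpk < 1.33:
        flags.append("Process capability below Cpk 1.33")
    if capa_overdue_pct > 10:
        flags.append("More than 10% of CAPAs overdue")
    if capa_effective_pct < 80:
        flags.append("CAPA effectiveness below 80%")
    return flags
```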
Review Agenda
Standard Agenda Template
MANAGEMENT REVIEW AGENDA
Date: [Date]
Time: [Start] - [End]
Location: [Room/Virtual Link]
Chair: [QMR Name]
1. OPENING (10 min)
- Call to order and attendance
- Approval of previous meeting minutes
- Review of previous action items
2. QMS PERFORMANCE (30 min)
- Audit results summary
- Process performance metrics
- Product conformity data
- Customer feedback analysis
3. COMPLIANCE STATUS (20 min)
- Regulatory compliance status
- Certification status
- Changes affecting QMS
4. CAPA AND IMPROVEMENT (20 min)
- CAPA status and trends
- Improvement initiatives status
- Recommendations for improvement
5. RESOURCE REVIEW (15 min)
- Resource adequacy assessment
- Training and competency status
- Infrastructure needs
6. STRATEGIC ITEMS (15 min)
- Quality objectives progress
- Quality policy adequacy
- Strategic quality initiatives
7. DECISIONS AND ACTIONS (15 min)
- Decisions required
- New action items
- Next review planning
8. CLOSING (5 min)
- Summary of decisions
- Action item review
- Adjournment
Time Allocation by Review Type
| Review Type | Duration | Focus Areas |
|-------------|----------|-------------|
| Full Annual Review | 3-4 hours | All inputs, strategic planning |
| Semi-annual Review | 2-3 hours | All inputs, trend analysis |
| Quarterly Review | 1.5-2 hours | Key metrics, action tracking |
Required Outputs
ISO 13485 Required Output Topics
| Output | Description | Documentation |
|--------|-------------|---------------|
| Improvement decisions | QMS and process improvements | Action items with owners |
| Resource decisions | Changes to resource allocation | Resource plan updates |
| Quality objectives | Changes to objectives or targets | Updated objectives document |
| QMS changes | Decisions on system modifications | Change requests initiated |
Output Documentation Template
MANAGEMENT REVIEW OUTPUTS
Review Date: [Date]
Review Type: [Annual/Semi-annual/Quarterly]
DECISIONS MADE:
1. QMS IMPROVEMENT DECISIONS
| Decision | Rationale | Owner | Due Date |
|----------|-----------|-------|----------|
| [Decision 1] | [Why] | [Who] | [When] |
2. RESOURCE DECISIONS
| Decision | Resources Required | Budget Impact | Owner |
|----------|-------------------|----------------|-------|
| [Decision 1] | [What needed] | [$] | [Who] |
3. QUALITY OBJECTIVES
| Objective | Current | Target | Change | Rationale |
|-----------|---------|--------|--------|-----------|
| [Objective 1] | [Current target] | [New target] | [+/-] | [Why] |
4. QMS CHANGES APPROVED
| Change | Scope | Implementation Date | Owner |
|--------|-------|---------------------|-------|
| [Change 1] | [Affected areas] | [Date] | [Who] |
CONCLUSIONS:
- Overall QMS effectiveness: [Effective/Needs Improvement]
- Quality policy adequacy: [Adequate/Needs Update]
- Quality objectives progress: [On Track/Behind/Ahead]
NEXT REVIEW:
Date: [Date]
Special Focus Areas: [Areas requiring attention]
Action Tracking
Action Item Format
ACTION ITEM
ID: MR-[Year]-[Number]
Source: Management Review [Date]
Category: [ ] Improvement [ ] Resource [ ] Compliance [ ] Other
Description: [Specific action to be taken]
Owner: [Name, Title]
Due Date: [Date]
Priority: [ ] High [ ] Medium [ ] Low
Success Criteria: [How completion will be verified]
Resources Required: [People, budget, equipment]
Dependencies: [Other actions or conditions]
Status Updates:
| Date | Update | Updated By |
|------|--------|------------|
| [Date] | [Progress note] | [Name] |
Completion:
Completed Date: [Date]
Evidence: [Reference to evidence of completion]
Verified By: [Name, Date]
Action Status Categories
| Status | Definition | Color Code |
|--------|------------|------------|
| Not Started | Assigned but work not begun | Gray |
| In Progress | Work underway | Blue |
| On Hold | Blocked, awaiting dependency | Yellow |
| Overdue | Past due date, not complete | Red |
| Complete | Finished, pending verification | Green |
| Verified | Completion verified | Dark Green |
| Cancelled | No longer required | Strikethrough |
Action Tracking Dashboard
MANAGEMENT REVIEW ACTION TRACKER
Review: [Date]
Last Updated: [Date]
SUMMARY:
Total Actions: [Number]
| Status | Count | % |
|--------|-------|---|
| Complete/Verified | [N] | [%] |
| In Progress | [N] | [%] |
| Not Started | [N] | [%] |
| Overdue | [N] | [%] |
| On Hold | [N] | [%] |
OVERDUE ACTIONS (Requires Escalation):
| ID | Description | Owner | Due Date | Days Overdue |
|----|-------------|-------|----------|--------------|
| [ID] | [Brief] | [Name] | [Date] | [Days] |
UPCOMING DUE (Next 30 Days):
| ID | Description | Owner | Due Date |
|----|-------------|-------|----------|
| [ID] | [Brief] | [Name] | [Date] |
Documentation Templates
Meeting Minutes Template
MANAGEMENT REVIEW MEETING MINUTES
Date: [Date]
Time: [Start] - [End]
Location: [Location]
Chair: [Name]
Recorder: [Name]
ATTENDEES:
| Name | Title | Present |
|------|-------|---------|
| [Name] | [Title] | ☑ Yes / ☐ No |
AGENDA ITEMS REVIEWED:
1. [Topic]
Discussion: [Summary of discussion]
Decision: [Decision made, if any]
Action: [Action assigned, if any]
2. [Topic]
...
DECISIONS SUMMARY:
1. [Decision 1]
2. [Decision 2]
ACTIONS ASSIGNED:
| ID | Action | Owner | Due Date |
|----|--------|-------|----------|
| MR-XX-01 | [Action] | [Name] | [Date] |
NEXT MEETING:
Date: [Date]
Preliminary Agenda Items: [Topics to cover]
APPROVAL:
Chair: _________________ Date: _______
QMR: _________________ Date: _______
Review Effectiveness Metrics
| Metric | Target | Calculation |
|--------|--------|-------------|
| Action completion rate | >90% | Completed on time / Total actions |
| Review attendance | 100% required | Required attendees present / Required |
| Input completeness | 100% | Inputs provided / Required inputs |
| Decision documentation | 100% | Documented decisions / Decisions made |
| Time to complete review | Per schedule | Actual date - Planned date |
Quality KPI Framework
Quality performance indicators, targets, and monitoring guidelines for QMS effectiveness.
Table of Contents
KPI Categories
KPI Hierarchy
| Level | Audience | Update Frequency | Example |
|-------|----------|------------------|---------|
| Strategic | Board, C-suite | Quarterly | Quality cost ratio |
| Tactical | Department heads | Monthly | CAPA closure rate |
| Operational | Team leads | Weekly/Daily | First pass yield |
KPI Selection Criteria
| Criterion | Requirement |
|-----------|-------------|
| Measurable | Quantifiable with available data |
| Actionable | Team can influence the metric |
| Relevant | Aligned to quality objectives |
| Timely | Can be measured at useful frequency |
| Owned | Clear accountability assigned |
Core Quality KPIs
Process Performance
| KPI | Definition | Target | Calculation |
|-----|------------|--------|-------------|
| First Pass Yield | % units passing without rework | >95% | (Units passed first time / Total units) × 100 |
| Process Capability (Cpk) | Process performance vs. spec | >1.33 | min((USL-μ)/(3σ), (μ-LSL)/(3σ)) |
| Nonconformance Rate | NC events per production volume | <1% | (NC count / Total units) × 100 |
| Right First Time | % activities completed correctly first time | >98% | (Correct completions / Total attempts) × 100 |
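The Cpk formula above maps one-to-one onto code. A minimal sketch:

```python
def cpk(usl: float, lsl: float, mean: float, sigma: float) -> float:
    """Process capability index: min((USL-μ)/(3σ), (μ-LSL)/(3σ))."""
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))
```

A process centered 4σ inside each spec limit gives Cpk ≈ 1.33, the capability floor used in the red-flag guidance.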
CAPA Effectiveness
| KPI | Definition | Target | Calculation |
|-----|------------|--------|-------------|
| CAPA Closure Rate | % CAPAs closed on time | >90% | (On-time closures / Due closures) × 100 |
| CAPA Effectiveness Rate | % CAPAs effective at verification | >85% | (Effective CAPAs / Verified CAPAs) × 100 |
| Average CAPA Age | Mean days from open to close | <60 days | Sum(Close date - Open date) / Count |
| Overdue CAPA Rate | % CAPAs past due date | <10% | (Overdue CAPAs / Open CAPAs) × 100 |
| Recurrence Rate | % issues recurring after CAPA | <5% | (Recurred issues / Closed CAPAs) × 100 |
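The Average CAPA Age formula above, sketched with date arithmetic (the function name is an illustrative assumption):

```python
from datetime import date

def average_capa_age(capas: list) -> float:
    """Mean days from open to close: Sum(close - open) / count.

    capas is a list of (open_date, close_date) pairs.
    """
    return sum((closed - opened).days for opened, closed in capas) / len(capas)
```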
Audit Performance
| KPI | Definition | Target | Calculation |
|-----|------------|--------|-------------|
| Audit Schedule Compliance | % audits completed per schedule | >95% | (Audits completed / Audits scheduled) × 100 |
| Finding Closure Rate | % findings closed on time | >90% | (On-time closures / Due closures) × 100 |
| Repeat Finding Rate | % findings recurring from prior audits | <10% | (Repeat findings / Total findings) × 100 |
| Major NC Rate | Major NCs per audit | <1 | Total major NCs / Total audits |
Document Control
| KPI | Definition | Target | Calculation |
|-----|------------|--------|-------------|
| Document Review Compliance | % documents reviewed on schedule | >95% | (On-time reviews / Due reviews) × 100 |
| Change Request Cycle Time | Days from request to implementation | <30 days | Average(Implementation - Request date) |
| Obsolete Document Incidents | Uses of obsolete documents | 0 | Count of incidents |
Customer Quality KPIs
Complaint Management
| KPI | Definition | Target | Calculation |
|-----|------------|--------|-------------|
| Complaint Rate | Complaints per units sold | <0.1% | (Complaints / Units sold) × 100 |
| Complaint Response Time | Time to acknowledge complaint | <24 hours | Average(Response date - Receipt date) |
| Complaint Investigation Time | Days to complete investigation | <30 days | Average(Close date - Receipt date) |
| Complaint Closure Rate | % complaints closed on time | >90% | (On-time closures / Due closures) × 100 |
Customer Satisfaction
| KPI | Definition | Target | Calculation |
|-----|------------|--------|-------------|
| Customer Satisfaction Score | Survey-based satisfaction rating | >4.0/5.0 | Average of survey scores |
| Net Promoter Score (NPS) | Customer loyalty indicator | >50 | % Promoters - % Detractors |
| Return Rate | % units returned by customers | <1% | (Units returned / Units sold) × 100 |
| Warranty Claim Rate | Warranty claims per units under warranty | <0.5% | (Claims / Units under warranty) × 100 |
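NPS as defined above ("% Promoters - % Detractors") can be computed from raw 0-10 survey ratings. A minimal sketch using the conventional bands (9-10 promoters, 0-6 detractors; the function name is illustrative):

```python
def net_promoter_score(ratings: list) -> int:
    """% promoters (9-10) minus % detractors (0-6), rounded."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round((promoters - detractors) / n * 100)
```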
Field Quality
| KPI | Definition | Target | Calculation |
|-----|------------|--------|-------------|
| Field Failure Rate | Failures in customer use | <0.1% | (Field failures / Units in field) × 100 |
| Mean Time Between Failures | Average operating time before failure | Varies | Total operating hours / Number of failures |
| Service Call Rate | Service calls per installed base | <5%/year | (Service calls / Installed units) × 100 |
Compliance KPIs
Regulatory Compliance
| KPI | Definition | Target | Calculation |
|-----|------------|--------|-------------|
| Regulatory Submission Success | % submissions accepted first time | >90% | (Accepted submissions / Total submissions) × 100 |
| Inspection Readiness Score | Self-assessment compliance score | >90% | (Compliant items / Total items) × 100 |
| Reportable Event Timeliness | % events reported within required time | 100% | (On-time reports / Required reports) × 100 |
| Registration Currency | % registrations current | 100% | (Current registrations / Required registrations) × 100 |
Certification Status
| KPI | Definition | Target | Calculation |
|-----|------------|--------|-------------|
| Certification Maintenance | Active certifications vs. required | 100% | (Active certs / Required certs) × 100 |
| Surveillance Audit Outcomes | Pass rate on surveillance audits | 100% | (Passed audits / Conducted audits) × 100 |
| Certification NC Rate | NCs per certification audit | <3 minor, 0 major | Count per audit |
Training Compliance
| KPI | Definition | Target | Calculation |
|-----|------------|--------|-------------|
| Training Completion Rate | % required training completed | >95% | (Completed / Required) × 100 |
| Training Currency | % employees with current training | >98% | (Current / Total requiring) × 100 |
| Training Effectiveness | % passing competency assessments | >90% | (Passed / Assessed) × 100 |
Cost of Quality
Cost Categories
| Category | Definition | Examples |
|----------|------------|----------|
| Prevention | Costs to prevent defects | Training, quality planning, process validation |
| Appraisal | Costs to detect defects | Inspection, testing, audits, calibration |
| Internal Failure | Costs of defects found internally | Rework, scrap, re-inspection, downgrading |
| External Failure | Costs of defects found by customer | Returns, complaints, warranty, recalls |
Cost of Quality KPIs
| KPI | Definition | Target | Calculation |
|-----|------------|--------|-------------|
| Total Cost of Quality | Sum of all quality costs | <5% of revenue | Prevention + Appraisal + Failure costs |
| Prevention/Appraisal Ratio | Prevention vs. detection investment | >1.0 | Prevention costs / Appraisal costs |
| Failure Cost Ratio | Failure costs as % of CoQ | <30% | (Internal + External failure) / Total CoQ |
| Quality Cost Trend | Change in CoQ over time | Decreasing | (Current CoQ - Prior CoQ) / Prior CoQ |
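The CoQ ratios above roll up from the four cost categories. A minimal sketch (parameter and key names are illustrative):

```python
def coq_summary(prevention: float, appraisal: float,
                internal_failure: float, external_failure: float,
                revenue: float) -> dict:
    """Roll up cost-of-quality KPIs per the formulas above."""
    total = prevention + appraisal + internal_failure + external_failure
    return {
        "total_coq": total,
        "coq_pct_of_revenue": total / revenue * 100,
        "prevention_appraisal_ratio": prevention / appraisal,
        "failure_cost_pct": (internal_failure + external_failure) / total * 100,
    }
```

A failure cost share above 30%, or CoQ above 5% of revenue, breaches the targets above and warrants prevention-side investment.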
Cost Collection Categories
COST OF QUALITY WORKSHEET
Period: [Start] to [End]
PREVENTION COSTS:
| Category | Description | Amount |
|----------|-------------|--------|
| Quality planning | QMS development, quality planning | $ |
| Training | Quality training programs | $ |
| Process validation | Validation activities | $ |
| Supplier qualification | Supplier quality programs | $ |
| Preventive maintenance | Equipment maintenance | $ |
| SUBTOTAL PREVENTION | | $ |
APPRAISAL COSTS:
| Category | Description | Amount |
|----------|-------------|--------|
| Incoming inspection | Supplier material inspection | $ |
| In-process inspection | Production quality checks | $ |
| Final inspection | Finished goods testing | $ |
| Audit costs | Internal and external audits | $ |
| Calibration | Equipment calibration | $ |
| SUBTOTAL APPRAISAL | | $ |
INTERNAL FAILURE COSTS:
| Category | Description | Amount |
|----------|-------------|--------|
| Scrap | Scrapped materials and product | $ |
| Rework | Labor and materials to correct | $ |
| Re-inspection | Repeat inspection costs | $ |
| Downgrading | Revenue loss from downgrading | $ |
| Root cause analysis | Investigation costs | $ |
| SUBTOTAL INTERNAL FAILURE | | $ |
EXTERNAL FAILURE COSTS:
| Category | Description | Amount |
|----------|-------------|--------|
| Returns processing | Handling returned product | $ |
| Warranty costs | Warranty claims and repairs | $ |
| Complaint handling | Investigation and resolution | $ |
| Recalls | Recall execution costs | $ |
| Liability | Legal and settlement costs | $ |
| SUBTOTAL EXTERNAL FAILURE | | $ |
TOTAL COST OF QUALITY: $
AS % OF REVENUE: %
Dashboard Templates
Executive Quality Dashboard
EXECUTIVE QUALITY DASHBOARD
Period: [Month/Quarter]
KEY METRICS AT A GLANCE:
┌─────────────────┬─────────┬─────────┬─────────┐
│ Metric │ Target │ Actual │ Trend │
├─────────────────┼─────────┼─────────┼─────────┤
│ Customer Sat │ >4.0 │ [X.X] │ [↑/↓/→] │
│ Complaint Rate │ <0.1% │ [X.XX%] │ [↑/↓/→] │
│ First Pass Yield│ >95% │ [XX%] │ [↑/↓/→] │
│ CAPA Closure │ >90% │ [XX%] │ [↑/↓/→] │
│ Audit Findings │ <3/audit│ [X.X] │ [↑/↓/→] │
│ Quality Cost │ <5% │ [X.X%] │ [↑/↓/→] │
└─────────────────┴─────────┴─────────┴─────────┘
ALERTS:
[ ] Critical: [Any critical issues requiring immediate attention]
[ ] Warning: [Issues approaching threshold]
[ ] Info: [Notable improvements or changes]
QUALITY OBJECTIVES PROGRESS:
| Objective | Target | YTD | Status |
|-----------|--------|-----|--------|
| [Obj 1] | [Target] | [Actual] | [On Track/Behind] |
Operational Quality Dashboard
OPERATIONAL QUALITY DASHBOARD
Week/Month: [Period]
PRODUCTION QUALITY:
├── First Pass Yield: [XX%] (Target: 95%)
├── Rework Rate: [X.X%] (Target: <2%)
├── Scrap Rate: [X.X%] (Target: <1%)
└── NC Count: [XX] (Prior: [XX])
CAPA STATUS:
├── Open CAPAs: [XX]
│ ├── Critical: [X]
│ ├── Major: [XX]
│ └── Minor: [XX]
├── Overdue: [X] [!ALERT if >0]
├── Avg Age: [XX] days
└── Closed This Period: [XX]
AUDIT STATUS:
├── Audits Completed: [X] of [X] scheduled
├── Open Findings: [XX]
│ ├── Major: [X]
│ └── Minor: [XX]
└── Overdue Actions: [X]
COMPLAINTS:
├── Received: [XX]
├── Open: [XX]
├── Avg Response Time: [X.X] days
└── Top Category: [Category]
KPI Target Setting Guidelines
| Performance Level | Action |
|-------------------|--------|
| >110% of target | Consider raising target |
| 100-110% of target | Maintain current target |
| 90-100% of target | Monitor closely |
| 80-90% of target | Improvement plan required |
| <80% of target | Immediate intervention |
Review Frequency by KPI Type
| KPI Type | Review Frequency | Trend Period |
|----------|------------------|--------------|
| Safety/Compliance | Daily monitoring | Weekly |
| Production | Daily/Weekly | Monthly |
| Customer | Weekly/Monthly | Quarterly |
| Strategic | Monthly/Quarterly | Annual |
| Cost | Monthly | Quarterly |
#!/usr/bin/env python3
"""
Management Review Tracker - QMS Management Review Preparation and Tracking

Tracks management review inputs, action items, and generates review reports
for ISO 13485 compliance.

Usage:
    python management_review_tracker.py --data review_data.json
    python management_review_tracker.py --interactive
    python management_review_tracker.py --data review_data.json --output json
"""
import argparse
import json
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum
from typing import Dict, List, Optional


class ActionStatus(Enum):
    NOT_STARTED = "Not Started"
    IN_PROGRESS = "In Progress"
    ON_HOLD = "On Hold"
    OVERDUE = "Overdue"
    COMPLETE = "Complete"
    VERIFIED = "Verified"


class ActionPriority(Enum):
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"


class InputStatus(Enum):
    NOT_COLLECTED = "Not Collected"
    IN_PROGRESS = "In Progress"
    COMPLETE = "Complete"
    REVIEWED = "Reviewed"


@dataclass
class ReviewInput:
    topic: str
    responsible: str
    status: InputStatus
    data_period: str
    summary: str = ""
    concerns: List[str] = field(default_factory=list)


@dataclass
class ActionItem:
    action_id: str
    description: str
    owner: str
    due_date: str
    priority: ActionPriority
    status: ActionStatus
    source_review: str
    category: str = "Improvement"
    completion_date: Optional[str] = None
    notes: str = ""


@dataclass
class ReviewMetrics:
    complaint_rate: float = 0.0
    complaint_count: int = 0
    capa_open: int = 0
    capa_overdue: int = 0
    capa_effectiveness: float = 0.0
    audit_findings_open: int = 0
    audit_findings_major: int = 0
    first_pass_yield: float = 0.0
    customer_satisfaction: float = 0.0
    training_compliance: float = 0.0


@dataclass
class ManagementReview:
    review_date: str
    review_type: str
    period_start: str
    period_end: str
    inputs: List[ReviewInput]
    actions: List[ActionItem]
    metrics: ReviewMetrics
    decisions: List[str] = field(default_factory=list)
    attendees: List[str] = field(default_factory=list)


class ManagementReviewTracker:
    """Tracks and reports management review status."""

    # Required ISO 13485 management review inputs and their owners
    REQUIRED_INPUTS = [
        ("Audit Results", "QA Manager"),
        ("Customer Feedback", "Customer Quality"),
        ("Process Performance", "Operations"),
        ("Product Conformity", "QC Manager"),
        ("CAPA Status", "CAPA Officer"),
        ("Previous Actions", "QMR"),
        ("QMS Changes", "RA Manager"),
        ("Recommendations", "All Managers"),
    ]
    def __init__(self, review: ManagementReview):
        self.review = review
        self.today = datetime.now()

    def check_input_readiness(self) -> Dict:
        """Check readiness of all required inputs."""
        readiness = {
            "total_required": len(self.REQUIRED_INPUTS),
            "complete": 0,
            "in_progress": 0,
            "not_started": 0,
            "missing_topics": [],
            "readiness_score": 0.0,
        }
        input_topics = {inp.topic: inp for inp in self.review.inputs}
        for topic, responsible in self.REQUIRED_INPUTS:
            if topic in input_topics:
                inp = input_topics[topic]
                if inp.status in [InputStatus.COMPLETE, InputStatus.REVIEWED]:
                    readiness["complete"] += 1
                elif inp.status == InputStatus.IN_PROGRESS:
                    readiness["in_progress"] += 1
                else:
                    readiness["not_started"] += 1
            else:
                readiness["missing_topics"].append(topic)
                readiness["not_started"] += 1
        readiness["readiness_score"] = round(
            (readiness["complete"] / readiness["total_required"]) * 100, 1
        )
        return readiness

    def analyze_actions(self) -> Dict:
        """Analyze action item status."""
        analysis = {
            "total": len(self.review.actions),
            "by_status": {},
            "by_priority": {},
            "overdue": [],
            "due_soon": [],
            "completion_rate": 0.0,
        }
        completed = 0
        for action in self.review.actions:
            # Count by status
            status = action.status.value
            analysis["by_status"][status] = analysis["by_status"].get(status, 0) + 1
            # Count by priority
            priority = action.priority.value
            analysis["by_priority"][priority] = analysis["by_priority"].get(priority, 0) + 1
            # Check completion
            if action.status in [ActionStatus.COMPLETE, ActionStatus.VERIFIED]:
                completed += 1
            # Check overdue / due within 14 days
            if action.due_date:
                due = datetime.strptime(action.due_date, "%Y-%m-%d")
                if due < self.today and action.status not in [
                    ActionStatus.COMPLETE, ActionStatus.VERIFIED
                ]:
                    days_overdue = (self.today - due).days
                    analysis["overdue"].append({
                        "action_id": action.action_id,
                        "description": action.description[:50],
                        "owner": action.owner,
                        "days_overdue": days_overdue,
                    })
                elif due <= self.today + timedelta(days=14) and action.status not in [
                    ActionStatus.COMPLETE, ActionStatus.VERIFIED
                ]:
                    days_until = (due - self.today).days
                    analysis["due_soon"].append({
                        "action_id": action.action_id,
                        "description": action.description[:50],
                        "owner": action.owner,
                        "days_until_due": days_until,
                    })
        if analysis["total"] > 0:
            analysis["completion_rate"] = round((completed / analysis["total"]) * 100, 1)
        return analysis
    def assess_metrics(self) -> Dict:
        """Assess quality metrics against targets."""
        metrics = self.review.metrics
        assessment = {
            "metrics": [],
            "alerts": [],
            "overall_status": "On Track",
        }
        # (name, value, target, direction) - direction says which side of target is good
        checks = [
            ("Complaint Rate", metrics.complaint_rate, 0.1, "lower"),
            ("CAPA Overdue", metrics.capa_overdue, 0, "lower"),
            ("CAPA Effectiveness", metrics.capa_effectiveness, 85.0, "higher"),
            ("First Pass Yield", metrics.first_pass_yield, 95.0, "higher"),
            ("Customer Satisfaction", metrics.customer_satisfaction, 4.0, "higher"),
            ("Training Compliance", metrics.training_compliance, 95.0, "higher"),
        ]
        warnings = 0
        critical = 0
        for name, value, target, direction in checks:
            if direction == "lower":
                status = "Pass" if value <= target else "Fail"
                threshold = target * 1.2
                warning = value > target and value <= threshold
            else:
                status = "Pass" if value >= target else "Fail"
                threshold = target * 0.9
                warning = value < target and value >= threshold
            assessment["metrics"].append({
                "name": name,
                "value": value,
                "target": target,
                "status": status,
            })
            if status == "Fail":
                if warning:
                    warnings += 1
                    assessment["alerts"].append(f"WARNING: {name} at {value} (target: {target})")
                else:
                    critical += 1
                    assessment["alerts"].append(f"CRITICAL: {name} at {value} (target: {target})")
        if critical > 0:
            assessment["overall_status"] = "Critical"
        elif warnings > 0:
            assessment["overall_status"] = "Needs Attention"
        return assessment

    def generate_recommendations(self) -> List[str]:
        """Generate recommendations based on analysis."""
        recommendations = []
        # Check input readiness
        readiness = self.check_input_readiness()
        if readiness["readiness_score"] < 100:
            recommendations.append(
                f"Complete remaining review inputs: {', '.join(readiness['missing_topics'])}"
            )
        # Check actions
        action_analysis = self.analyze_actions()
        if action_analysis["overdue"]:
            recommendations.append(
                f"Address {len(action_analysis['overdue'])} overdue action(s) immediately"
            )
        # Check metrics
        metrics_assessment = self.assess_metrics()
        if metrics_assessment["overall_status"] == "Critical":
            recommendations.append(
                "Escalate critical metric failures to senior management"
            )
        # CAPA specific
        if self.review.metrics.capa_overdue > 0:
            recommendations.append(
                f"Expedite closure of {self.review.metrics.capa_overdue} overdue CAPA(s)"
            )
        if self.review.metrics.capa_effectiveness < 85:
            recommendations.append(
                "Review root cause analysis quality for ineffective CAPAs"
            )
        # Audit findings
        if self.review.metrics.audit_findings_major > 0:
            recommendations.append(
                f"Prioritize resolution of {self.review.metrics.audit_findings_major} major audit finding(s)"
            )
        if not recommendations:
            recommendations.append("Quality system performing within targets. Maintain monitoring.")
        return recommendations

    def generate_report(self) -> Dict:
        """Generate complete review status report."""
        return {
            "review_date": self.review.review_date,
            "review_type": self.review.review_type,
            "period": f"{self.review.period_start} to {self.review.period_end}",
            "input_readiness": self.check_input_readiness(),
            "action_analysis": self.analyze_actions(),
            "metrics_assessment": self.assess_metrics(),
            "recommendations": self.generate_recommendations(),
        }
def format_text_report(report: Dict) -> str:
    """Format report as text output."""
    lines = [
        "=" * 70,
        "MANAGEMENT REVIEW STATUS REPORT",
        "=" * 70,
        f"Review Date: {report['review_date']}",
        f"Review Type: {report['review_type']}",
        f"Period: {report['period']}",
        "",
        "INPUT READINESS",
        "-" * 40,
        f"Readiness Score: {report['input_readiness']['readiness_score']}%",
        f"Complete: {report['input_readiness']['complete']}/{report['input_readiness']['total_required']}",
    ]
    if report['input_readiness']['missing_topics']:
        lines.append(f"Missing: {', '.join(report['input_readiness']['missing_topics'])}")
    lines.extend([
        "",
        "ACTION STATUS",
        "-" * 40,
        f"Total Actions: {report['action_analysis']['total']}",
        f"Completion Rate: {report['action_analysis']['completion_rate']}%",
    ])
    for status, count in report['action_analysis']['by_status'].items():
        lines.append(f"  {status}: {count}")
    if report['action_analysis']['overdue']:
        lines.extend(["", "OVERDUE ACTIONS:"])
        for item in report['action_analysis']['overdue']:
            lines.append(f"  [{item['action_id']}] {item['description']} - {item['days_overdue']} days overdue")
    lines.extend([
        "",
        "METRICS ASSESSMENT",
        "-" * 40,
        f"Overall Status: {report['metrics_assessment']['overall_status']}",
        "",
        f"  {'Metric':<25}{'Value':<10}{'Target':<10}{'Status':<10}",
        "-" * 55,
    ])
    for metric in report['metrics_assessment']['metrics']:
        lines.append(
            f"  {metric['name']:<25}{metric['value']:<10}{metric['target']:<10}{metric['status']:<10}"
        )
    if report['metrics_assessment']['alerts']:
        lines.extend(["", "ALERTS:"])
        for alert in report['metrics_assessment']['alerts']:
            lines.append(f"  ! {alert}")
    lines.extend(["", "RECOMMENDATIONS", "-" * 40])
    for i, rec in enumerate(report['recommendations'], 1):
        lines.append(f"  {i}. {rec}")
    lines.append("=" * 70)
    return "\n".join(lines)
def interactive_mode():
    """Run interactive review data entry."""
    print("=" * 60)
    print("Management Review Tracker - Interactive Mode")
    print("=" * 60)
    review_date = input("\nReview Date (YYYY-MM-DD): ").strip()
    review_type = input("Review Type (Annual/Semi-annual/Quarterly): ").strip()
    period_start = input("Period Start (YYYY-MM-DD): ").strip()
    period_end = input("Period End (YYYY-MM-DD): ").strip()
    print("\nEnter Quality Metrics:")
    metrics = ReviewMetrics(
        complaint_rate=float(input("Complaint Rate (%): ") or 0),
        complaint_count=int(input("Complaint Count: ") or 0),
        capa_open=int(input("Open CAPAs: ") or 0),
        capa_overdue=int(input("Overdue CAPAs: ") or 0),
        capa_effectiveness=float(input("CAPA Effectiveness (%): ") or 0),
        audit_findings_open=int(input("Open Audit Findings: ") or 0),
        audit_findings_major=int(input("Major Audit Findings: ") or 0),
        first_pass_yield=float(input("First Pass Yield (%): ") or 0),
        customer_satisfaction=float(input("Customer Satisfaction (1-5): ") or 0),
        training_compliance=float(input("Training Compliance (%): ") or 0),
    )
    # Create a review with all required inputs marked complete
    inputs = [
        ReviewInput(topic=topic, responsible=resp, status=InputStatus.COMPLETE,
                    data_period=f"{period_start} to {period_end}")
        for topic, resp in ManagementReviewTracker.REQUIRED_INPUTS
    ]
    review = ManagementReview(
        review_date=review_date,
        review_type=review_type,
        period_start=period_start,
        period_end=period_end,
        inputs=inputs,
        actions=[],
        metrics=metrics,
    )
    tracker = ManagementReviewTracker(review)
    report = tracker.generate_report()
    print("\n" + format_text_report(report))
def main():
    parser = argparse.ArgumentParser(description="Management Review Tracker")
    parser.add_argument("--data", type=str, help="JSON file with review data")
    parser.add_argument("--output", choices=["text", "json"], default="text", help="Output format")
    parser.add_argument("--interactive", action="store_true", help="Run in interactive mode")
    parser.add_argument("--sample", action="store_true", help="Generate sample review data")
    args = parser.parse_args()
    if args.interactive:
        interactive_mode()
        return
    if args.sample:
        sample = {
            "review_date": "2024-06-30",
            "review_type": "Semi-annual",
            "period_start": "2024-01-01",
            "period_end": "2024-06-30",
            "inputs": [
                {"topic": "Audit Results", "responsible": "QA Manager", "status": "Complete", "data_period": "H1 2024"},
                {"topic": "Customer Feedback", "responsible": "Customer Quality", "status": "Complete", "data_period": "H1 2024"},
                {"topic": "Process Performance", "responsible": "Operations", "status": "In Progress", "data_period": "H1 2024"},
                {"topic": "CAPA Status", "responsible": "CAPA Officer", "status": "Complete", "data_period": "Current"}
            ],
            "actions": [
                {
                    "action_id": "MR-2024-001",
                    "description": "Implement enhanced CAPA tracking system",
                    "owner": "QA Manager",
                    "due_date": "2024-09-30",
                    "priority": "High",
                    "status": "In Progress",
                    "source_review": "2024-Q1"
                }
            ],
            "metrics": {
                "complaint_rate": 0.08,
                "complaint_count": 12,
                "capa_open": 8,
                "capa_overdue": 2,
                "capa_effectiveness": 88.0,
                "audit_findings_open": 5,
                "audit_findings_major": 1,
                "first_pass_yield": 96.5,
                "customer_satisfaction": 4.2,
                "training_compliance": 97.0
            }
        }
        print(json.dumps(sample, indent=2))
        return
    # Load review data if provided; otherwise fall back to demo data
    if args.data:
        with open(args.data, "r") as f:
            data = json.load(f)
        inputs = [
            ReviewInput(
                topic=inp["topic"],
                responsible=inp["responsible"],
                status=InputStatus[inp["status"].upper().replace(" ", "_")],
                data_period=inp.get("data_period", ""),
            )
            for inp in data.get("inputs", [])
        ]
        actions = [
            ActionItem(
                action_id=act["action_id"],
                description=act["description"],
                owner=act["owner"],
                due_date=act["due_date"],
                priority=ActionPriority[act["priority"].upper()],
                status=ActionStatus[act["status"].upper().replace(" ", "_")],
                source_review=act.get("source_review", ""),
            )
            for act in data.get("actions", [])
        ]
        metrics = ReviewMetrics(**data.get("metrics", {}))
        review = ManagementReview(
            review_date=data["review_date"],
            review_type=data["review_type"],
            period_start=data["period_start"],
            period_end=data["period_end"],
            inputs=inputs,
            actions=actions,
            metrics=metrics,
        )
    else:
        # Demo data
        review = ManagementReview(
            review_date="2024-06-30",
            review_type="Semi-annual",
            period_start="2024-01-01",
            period_end="2024-06-30",
            inputs=[
                ReviewInput("Audit Results", "QA Manager", InputStatus.COMPLETE, "H1 2024"),
                ReviewInput("Customer Feedback", "Customer Quality", InputStatus.COMPLETE, "H1 2024"),
                ReviewInput("CAPA Status", "CAPA Officer", InputStatus.COMPLETE, "Current"),
            ],
            actions=[
                ActionItem("MR-2024-001", "Implement CAPA tracking", "QA Mgr", "2024-09-30",
                           ActionPriority.HIGH, ActionStatus.IN_PROGRESS, "2024-Q1"),
            ],
            metrics=ReviewMetrics(
                complaint_rate=0.08, capa_open=8, capa_overdue=2,
                capa_effectiveness=88.0, first_pass_yield=96.5,
                customer_satisfaction=4.2, training_compliance=97.0,
            ),
        )
    tracker = ManagementReviewTracker(review)
    report = tracker.generate_report()
    if args.output == "json":
        print(json.dumps(report, indent=2))
    else:
        print(format_text_report(report))


if __name__ == "__main__":
    main()
#!/usr/bin/env python3
"""
Quality Management System Effectiveness Monitor

Quantitatively assesses QMS effectiveness using leading and lagging indicators.
Tracks trends, calculates control limits, and predicts potential quality issues
before they become failures. Integrates with CAPA and management review processes.

Supported metrics:
    - Complaint rates, defect rates, rework rates
    - Supplier performance
    - CAPA effectiveness
    - Audit findings trends
    - Non-conformance statistics

Usage:
    python quality_effectiveness_monitor.py --metrics metrics.csv --dashboard
    python quality_effectiveness_monitor.py --qms-data qms_data.json --predict
    python quality_effectiveness_monitor.py --interactive
"""
import argparse
import csv
import json
from dataclasses import dataclass, asdict
from statistics import mean
from typing import Dict, List, Tuple


@dataclass
class QualityMetric:
    """A single quality metric data point."""
    metric_id: str
    metric_name: str
    category: str
    date: str
    value: float
    unit: str
    target: float
    upper_limit: float
    lower_limit: float
    trend_direction: str = ""  # "up", "down", or "stable"
    sigma_level: float = 0.0
    is_alert: bool = False
    is_critical: bool = False


@dataclass
class QMSReport:
    """QMS effectiveness report."""
    report_period: Tuple[str, str]
    overall_effectiveness_score: float
    metrics_count: int
    metrics_in_control: int
    metrics_out_of_control: int
    critical_alerts: int
    trends_analysis: Dict
    predictive_alerts: List[Dict]
    improvement_opportunities: List[Dict]
    management_review_summary: str


class QMSEffectivenessMonitor:
    """Monitors and analyzes QMS effectiveness."""

    SIGNAL_INDICATORS = {
        "complaint_rate": {"unit": "per 1000 units", "target": 0, "upper_limit": 1.5},
        "defect_rate": {"unit": "PPM", "target": 100, "upper_limit": 500},
        "rework_rate": {"unit": "%", "target": 2.0, "upper_limit": 5.0},
        "on_time_delivery": {"unit": "%", "target": 98, "lower_limit": 95},
        "audit_findings": {"unit": "count/month", "target": 0, "upper_limit": 3},
        "capa_closure_rate": {"unit": "% within target", "target": 100, "lower_limit": 90},
        "supplier_defect_rate": {"unit": "PPM", "target": 200, "upper_limit": 1000},
    }
    def __init__(self):
        self.metrics = []

    def load_csv(self, csv_path: str) -> List[QualityMetric]:
        """Load metrics from a CSV file."""
        metrics = []
        with open(csv_path, 'r', encoding='utf-8') as f:
            reader = csv.DictReader(f)
            for row in reader:
                metric = QualityMetric(
                    metric_id=row.get('metric_id', ''),
                    metric_name=row.get('metric_name', ''),
                    category=row.get('category', 'General'),
                    date=row.get('date', ''),
                    value=float(row.get('value', 0)),
                    unit=row.get('unit', ''),
                    target=float(row.get('target', 0)),
                    upper_limit=float(row.get('upper_limit', 0)),
                    lower_limit=float(row.get('lower_limit', 0)),
                )
                metrics.append(metric)
        self.metrics = metrics
        return metrics

    def calculate_sigma_level(self, metric: QualityMetric, historical_values: List[float]) -> float:
        """Estimate process sigma level from a defect rate expressed as DPMO."""
        if metric.unit == "PPM" or "rate" in metric.metric_name.lower():
            if historical_values:
                avg_defect_rate = mean(historical_values)
                if avg_defect_rate > 0:
                    dpmo = avg_defect_rate
                    # Rough linear approximation; a rigorous conversion uses the
                    # normal quantile with the conventional 1.5-sigma shift
                    sigma = 6.0 - (dpmo / 1_000_000) * 10
                    return max(0.0, min(6.0, sigma))
        return 0.0
    def analyze_trend(self, values: List[float]) -> Tuple[str, float]:
        """Analyze trend direction and significance (R-squared)."""
        if len(values) < 3:
            return "insufficient_data", 0.0
        x = list(range(len(values)))
        y = values
        # Least-squares linear regression slope
        n = len(x)
        sum_x = sum(x)
        sum_y = sum(y)
        sum_xy = sum(x[i] * y[i] for i in range(n))
        sum_x2 = sum(xi * xi for xi in x)
        denom = n * sum_x2 - sum_x * sum_x
        slope = (n * sum_xy - sum_x * sum_y) / denom if denom != 0 else 0
        # Determine trend direction
        if slope > 0.01:
            direction = "up"
        elif slope < -0.01:
            direction = "down"
        else:
            direction = "stable"
        # Calculate R-squared for the fitted line
        if slope != 0:
            intercept = (sum_y - slope * sum_x) / n
            y_pred = [slope * xi + intercept for xi in x]
            ss_res = sum((y[i] - y_pred[i]) ** 2 for i in range(n))
            ss_tot = sum((y[i] - mean(y)) ** 2 for i in range(n))
            r2 = 1 - (ss_res / ss_tot) if ss_tot > 0 else 0
        else:
            r2 = 0
        return direction, r2

    def detect_alerts(self, metrics: List[QualityMetric]) -> List[Dict]:
        """Detect metrics that require attention."""
        alerts = []
        for metric in metrics:
            # Immediate control limit violations
            if metric.upper_limit and metric.value > metric.upper_limit:
                alerts.append({
                    "metric_id": metric.metric_id,
                    "metric_name": metric.metric_name,
                    "issue": "exceeds_upper_limit",
                    "value": metric.value,
                    "limit": metric.upper_limit,
                    "severity": "critical" if metric.category in ["Customer", "Regulatory"] else "high",
                })
            if metric.lower_limit and metric.value < metric.lower_limit:
                alerts.append({
                    "metric_id": metric.metric_id,
                    "metric_name": metric.metric_name,
                    "issue": "below_lower_limit",
                    "value": metric.value,
                    "limit": metric.lower_limit,
                    "severity": "critical" if metric.category in ["Customer", "Regulatory"] else "high",
                })
            # Adverse trend: simplified check on the trend_direction flag
            # (a fuller check would group by metric_name and test for 3+
            # consecutive points moving in the same direction)
            if metric.trend_direction in ["up", "down"] and metric.sigma_level > 3:
                alerts.append({
                    "metric_id": metric.metric_id,
                    "metric_name": metric.metric_name,
                    "issue": f"adverse_trend_{metric.trend_direction}",
                    "value": metric.value,
                    "severity": "medium",
                })
        return alerts
    def predict_failures(self, metrics: List[QualityMetric], forecast_days: int = 30) -> List[Dict]:
        """Predict potential failures based on trends."""
        predictions = []
        # Group metrics by name to build time series
        grouped = {}
        for m in metrics:
            grouped.setdefault(m.metric_name, []).append(m)
        for metric_name, metric_list in grouped.items():
            if len(metric_list) < 5:
                continue
            # Sort by date and extrapolate linearly
            metric_list.sort(key=lambda m: m.date)
            values = [m.value for m in metric_list]
            x = list(range(len(values)))
            y = values
            n = len(x)
            sum_x = sum(x)
            sum_y = sum(y)
            sum_xy = sum(x[i] * y[i] for i in range(n))
            sum_x2 = sum(xi * xi for xi in x)
            denom = n * sum_x2 - sum_x * sum_x
            slope = (n * sum_xy - sum_x * sum_y) / denom if denom != 0 else 0
            if slope != 0:
                # Forecast the next value one step ahead
                next_value = y[-1] + slope
                target = metric_list[0].target
                upper_limit = metric_list[0].upper_limit
                if (target and next_value > target * 1.2) or (upper_limit and next_value > upper_limit * 0.9):
                    predictions.append({
                        "metric": metric_name,
                        "current_value": y[-1],
                        "forecast_value": round(next_value, 2),
                        "forecast_days": forecast_days,
                        "trend_slope": round(slope, 3),
                        "risk_level": "high" if upper_limit and next_value > upper_limit else "medium",
                    })
        return predictions

    def calculate_effectiveness_score(self, metrics: List[QualityMetric]) -> float:
        """Calculate overall QMS effectiveness score (0-100)."""
        if not metrics:
            return 0.0
        scores = []
        for m in metrics:
            if m.target != 0:
                # Score based on relative distance to target
                deviation = abs(m.value - m.target) / max(abs(m.target), 1)
                score = max(0, 100 - deviation * 100)
            elif m.upper_limit:
                # For lower-is-better metrics (defects, etc.) score against the limit
                score = max(0, 100 - (m.value / m.upper_limit) * 100 * 0.5)
            else:
                score = 50  # Neutral if no target or limit
            scores.append(score)
        # Penalize for active high-severity alerts
        alerts = self.detect_alerts(metrics)
        penalty = len([a for a in alerts if a["severity"] in ["critical", "high"]]) * 5
        return max(0, min(100, mean(scores) - penalty))

    def identify_improvement_opportunities(self, metrics: List[QualityMetric]) -> List[Dict]:
        """Identify metrics with the highest improvement potential."""
        opportunities = []
        for m in metrics:
            if m.upper_limit and m.value > m.upper_limit * 0.8:
                gap = m.upper_limit - m.value
                if gap > 0:
                    improvement_pct = (gap / m.upper_limit) * 100
                    opportunities.append({
                        "metric": m.metric_name,
                        "current": m.value,
                        "target": m.upper_limit,
                        "gap": round(gap, 2),
                        "improvement_potential_pct": round(improvement_pct, 1),
                        "recommended_action": f"Reduce {m.metric_name} by at least {round(gap, 2)} {m.unit}",
                        "impact": "High" if m.category in ["Customer", "Regulatory"] else "Medium",
                    })
        # Sort by improvement potential, largest first
        opportunities.sort(key=lambda x: x["improvement_potential_pct"], reverse=True)
        return opportunities[:10]
    def generate_management_review_summary(self, report: QMSReport) -> str:
        """Generate executive summary for management review."""
        summary = [
            f"QMS EFFECTIVENESS REVIEW - {report.report_period[0]} to {report.report_period[1]}",
            "",
            f"Overall Effectiveness Score: {report.overall_effectiveness_score:.1f}/100",
            f"Metrics Tracked: {report.metrics_count} | In Control: {report.metrics_in_control} | Alerts: {report.critical_alerts}",
            "",
        ]
        if report.critical_alerts > 0:
            summary.append("🔴 CRITICAL ALERTS REQUIRING IMMEDIATE ATTENTION:")
            for alert in [a for a in report.predictive_alerts if a.get("risk_level") == "high"]:
                summary.append(f"  • {alert['metric']}: forecast {alert['forecast_value']} (from {alert['current_value']})")
            summary.append("")
        summary.append("📈 TOP IMPROVEMENT OPPORTUNITIES:")
        for i, opp in enumerate(report.improvement_opportunities[:3], 1):
            summary.append(f"  {i}. {opp['metric']}: {opp['recommended_action']} (Impact: {opp['impact']})")
        summary.append("")
        summary.append("🎯 RECOMMENDED ACTIONS:")
        summary.append("  1. Address all high-severity alerts within 30 days")
        summary.append("  2. Launch improvement projects for top 3 opportunities")
        summary.append("  3. Review CAPA effectiveness for recurring issues")
        summary.append("  4. Update risk assessments based on predictive trends")
        return "\n".join(summary)

    def analyze(
        self,
        metrics: List[QualityMetric],
        start_date: str = None,
        end_date: str = None,
    ) -> QMSReport:
        """Perform comprehensive QMS effectiveness analysis."""
        in_control = sum(1 for m in metrics if not m.is_alert and not m.is_critical)
        out_of_control = len(metrics) - in_control
        alerts = self.detect_alerts(metrics)
        critical_alerts = len([a for a in alerts if a["severity"] in ["critical", "high"]])
        predictions = self.predict_failures(metrics)
        improvement_opps = self.identify_improvement_opportunities(metrics)
        effectiveness = self.calculate_effectiveness_score(metrics)
        # Trend analysis by category (period averages; a fuller implementation
        # would analyze each category's time series)
        trends = {}
        categories = set(m.category for m in metrics)
        for cat in categories:
            cat_metrics = [m for m in metrics if m.category == cat]
            if len(cat_metrics) >= 2:
                trends[cat] = {
                    "metric_count": len(cat_metrics),
                    "avg_value": round(mean([m.value for m in cat_metrics]), 2),
                    "alerts": len([a for a in alerts if any(m.metric_name == a["metric_name"] for m in cat_metrics)]),
                }
        period = (start_date or metrics[0].date, end_date or metrics[-1].date) if metrics else ("", "")
        report = QMSReport(
            report_period=period,
            overall_effectiveness_score=effectiveness,
            metrics_count=len(metrics),
            metrics_in_control=in_control,
            metrics_out_of_control=out_of_control,
            critical_alerts=critical_alerts,
            trends_analysis=trends,
            predictive_alerts=predictions,
            improvement_opportunities=improvement_opps,
            management_review_summary="",  # filled in below
        )
        report.management_review_summary = self.generate_management_review_summary(report)
        return report
def format_qms_report(report: QMSReport) -> str:
    """Format QMS report as text."""
    lines = [
        "=" * 80,
        "QMS EFFECTIVENESS MONITORING REPORT",
        "=" * 80,
        f"Period: {report.report_period[0]} to {report.report_period[1]}",
        f"Overall Score: {report.overall_effectiveness_score:.1f}/100",
        "",
        "METRIC STATUS",
        "-" * 40,
        f"  Total Metrics: {report.metrics_count}",
        f"  In Control: {report.metrics_in_control}",
        f"  Out of Control: {report.metrics_out_of_control}",
        f"  Critical Alerts: {report.critical_alerts}",
        "",
        "TREND ANALYSIS BY CATEGORY",
        "-" * 40,
    ]
    for category, data in report.trends_analysis.items():
        lines.append(f"  {category}: {data['avg_value']} (alerts: {data['alerts']})")
    if report.predictive_alerts:
        lines.extend(["", "PREDICTIVE ALERTS (Next 30 days)", "-" * 40])
        for alert in report.predictive_alerts[:5]:
            lines.append(f"  ⚠ {alert['metric']}: {alert['current_value']} → {alert['forecast_value']} ({alert['risk_level']})")
    if report.improvement_opportunities:
        lines.extend(["", "TOP IMPROVEMENT OPPORTUNITIES", "-" * 40])
        for i, opp in enumerate(report.improvement_opportunities[:5], 1):
            lines.append(f"  {i}. {opp['metric']}: {opp['recommended_action']}")
    lines.extend([
        "",
        "MANAGEMENT REVIEW SUMMARY",
        "-" * 40,
        report.management_review_summary,
        "=" * 80,
    ])
    return "\n".join(lines)
def main():
    parser = argparse.ArgumentParser(description="QMS Effectiveness Monitor")
    parser.add_argument("--metrics", type=str, help="CSV file with quality metrics")
    parser.add_argument("--qms-data", type=str, help="JSON file with QMS data")
    parser.add_argument("--dashboard", action="store_true", help="Generate dashboard summary")
    parser.add_argument("--predict", action="store_true", help="Include predictive analytics")
    parser.add_argument("--output", choices=["text", "json"], default="text")
    parser.add_argument("--interactive", action="store_true", help="Interactive mode")
    args = parser.parse_args()
    monitor = QMSEffectivenessMonitor()
    if args.metrics:
        metrics = monitor.load_csv(args.metrics)
        report = monitor.analyze(metrics)
    elif args.qms_data:
        with open(args.qms_data) as f:
            data = json.load(f)
        # Convert to QualityMetric objects
        metrics = [QualityMetric(**m) for m in data.get("metrics", [])]
        report = monitor.analyze(metrics)
    else:
        # Demo data
        demo_metrics = [
            QualityMetric("M001", "Customer Complaint Rate", "Customer", "2026-03-01", 0.8, "per 1000", 1.0, 1.5, 0.5),
            QualityMetric("M002", "Defect Rate PPM", "Quality", "2026-03-01", 125, "PPM", 100, 500, 0, trend_direction="down", sigma_level=4.2),
            QualityMetric("M003", "On-Time Delivery", "Operations", "2026-03-01", 96.5, "%", 98, 0, 95, trend_direction="down"),
            QualityMetric("M004", "CAPA Closure Rate", "Quality", "2026-03-01", 92.0, "%", 100, 0, 90, is_alert=True),
            QualityMetric("M005", "Supplier Defect Rate", "Supplier", "2026-03-01", 450, "PPM", 200, 1000, 0, is_critical=True),
        ]
        # Simulate a 30-day time series from the demo snapshot
        all_metrics = []
        for i in range(30):
            for dm in demo_metrics:
                all_metrics.append(QualityMetric(
                    metric_id=dm.metric_id,
                    metric_name=dm.metric_name,
                    category=dm.category,
                    date=f"2026-03-{i + 1:02d}",
                    value=dm.value + (i * 0.1) if dm.metric_name == "Customer Complaint Rate" else dm.value,
                    unit=dm.unit,
                    target=dm.target,
                    upper_limit=dm.upper_limit,
                    lower_limit=dm.lower_limit,
                ))
        report = monitor.analyze(all_metrics)
    if args.output == "json":
        print(json.dumps(asdict(report), indent=2))
    else:
        print(format_qms_report(report))


if __name__ == "__main__":
    main()