Sales Engineer
Technical sales support — RFP analysis, feature comparison matrices, POC planning, and translating technical capabilities into business value for enterprise deals.
What this skill does
Win more enterprise deals by automating RFP responses, competitive feature comparisons, and proof-of-concept planning to produce tailored technical proposals that clearly translate product capabilities into business value. Use this skill when responding to bid requests, preparing sales demos, or analyzing competitor strengths and weaknesses.
name: sales-engineer
description: Analyzes RFP/RFI responses for coverage gaps, builds competitive feature comparison matrices, and plans proof-of-concept (POC) engagements for pre-sales engineering. Use when responding to RFPs, bids, or proposal requests; comparing product features against competitors; planning or scoring a customer POC or sales demo; preparing a technical proposal; or performing win/loss competitor analysis. Handles tasks described as "RFP response", "bid response", "proposal response", "competitor comparison", "feature matrix", "POC planning", "sales demo prep", or "pre-sales engineering".
Sales Engineer Skill
5-Phase Workflow
Phase 1: Discovery & Research
Objective: Understand customer requirements, technical environment, and business drivers.
Checklist:
- Conduct technical discovery calls with stakeholders
- Map customer’s current architecture and pain points
- Identify integration requirements and constraints
- Document security and compliance requirements
- Assess competitive landscape for this opportunity
Tools: Run rfp_response_analyzer.py to score initial requirement alignment.
python scripts/rfp_response_analyzer.py assets/sample_rfp_data.json --format json > phase1_rfp_results.json
Output: Technical discovery document, requirement map, initial coverage assessment.
Validation checkpoint: Coverage score must be >50% and must-have gaps ≤3 before proceeding to Phase 2. Check with:
python scripts/rfp_response_analyzer.py assets/sample_rfp_data.json --format json | python -c "import sys,json; r=json.load(sys.stdin)['coverage_summary']; print('PROCEED' if r['overall_coverage_percentage']>50 and r['must_have_gaps']<=3 else 'REVIEW')"
Phase 2: Solution Design
Objective: Design a solution architecture that addresses customer requirements.
Checklist:
- Map product capabilities to customer requirements
- Design integration architecture
- Identify customization needs and development effort
- Build competitive differentiation strategy
- Create solution architecture diagrams
Tools: Run competitive_matrix_builder.py using Phase 1 data to identify differentiators and vulnerabilities.
python scripts/competitive_matrix_builder.py competitive_data.json --format json > phase2_competitive.json
python -c "import json; d=json.load(open('phase2_competitive.json')); print('Differentiators:', d['differentiators']); print('Vulnerabilities:', d['vulnerabilities'])"
Output: Solution architecture, competitive positioning, technical differentiation strategy.
Validation checkpoint: Confirm at least one strong differentiator exists per customer priority before proceeding to Phase 3. If no differentiators found, escalate to Product Team (see Integration Points).
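This checkpoint can be sketched as a simple cross-check; note that the `priority_area` tag and the priorities list below are hypothetical, since the actual shape of the differentiator entries is defined by competitive_matrix_builder.py:

```python
def checkpoint_phase2(differentiators, priorities):
    """Return the customer priorities that no differentiator addresses.

    Hypothetical shape: each differentiator dict carries a 'priority_area'
    tag; the real phase2_competitive.json schema may differ.
    """
    covered = {d.get("priority_area") for d in differentiators}
    return [p for p in priorities if p not in covered]

# Illustrative data: priorities come from Phase 1 discovery notes.
diffs = [
    {"feature": "Row-level security", "priority_area": "security"},
    {"feature": "200+ native connectors", "priority_area": "integration"},
]
gaps = checkpoint_phase2(diffs, ["security", "integration", "scalability"])
print("ESCALATE:" if gaps else "PROCEED", gaps)  # scalability is uncovered
```

An empty list means every stated priority has at least one differentiator and Phase 3 can begin; any remaining entries go to the Product Team escalation path.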
Phase 3: Demo Preparation & Delivery
Objective: Deliver compelling technical demonstrations tailored to stakeholder priorities.
Checklist:
- Build demo environment matching customer’s use case
- Create demo script with talking points per stakeholder role
- Prepare objection handling responses
- Rehearse failure scenarios and recovery paths
- Collect feedback and adjust approach
Templates: Use assets/demo_script_template.md for structured demo preparation.
Output: Customized demo, stakeholder-specific talking points, feedback capture.
Validation checkpoint: Demo script must cover every must-have requirement flagged in phase1_rfp_results.json before delivery. Cross-reference with:
python -c "import json; rfp=json.load(open('phase1_rfp_results.json')); [print('UNCOVERED:', r['id'], r['requirement']) for r in rfp['requirements_detail'] if r['priority']=='must-have' and r['coverage_status']=='gap']"
Phase 4: POC & Evaluation
Objective: Execute a structured proof-of-concept that validates the solution.
Checklist:
- Define POC scope, success criteria, and timeline
- Allocate resources and set up environment
- Execute phased testing (core, advanced, edge cases)
- Track progress against success criteria
- Generate evaluation scorecard
Tools: Run poc_planner.py to generate the complete POC plan.
python scripts/poc_planner.py poc_data.json --format json > phase4_poc_plan.json
python -c "import json; p=json.load(open('phase4_poc_plan.json')); print('Go/No-Go:', p['recommendation'])"
Templates: Use assets/poc_scorecard_template.md for evaluation tracking.
Output: POC plan, evaluation scorecard, go/no-go recommendation.
Validation checkpoint: POC conversion requires a scorecard score above 60% (a weighted average above 3.0 on the 5-point scale) across all evaluation dimensions (functionality, performance, integration, usability, support). If any dimension scores below 60%, document the gaps and loop back to Phase 2 for solution redesign.
Phase 5: Proposal & Closing
Objective: Deliver a technical proposal that supports the commercial close.
Checklist:
- Compile POC results and success metrics
- Create technical proposal with implementation plan
- Address outstanding objections with evidence
- Support pricing and packaging discussions
- Conduct win/loss analysis post-decision
Templates: Use assets/technical_proposal_template.md for the proposal document.
Output: Technical proposal, implementation timeline, risk mitigation plan.
Python Automation Tools
1. RFP Response Analyzer
Script: scripts/rfp_response_analyzer.py
Purpose: Parse RFP/RFI requirements, score coverage, identify gaps, and generate bid/no-bid recommendations.
Coverage Categories: Full (100%), Partial (50%), Planned (25%), Gap (0%).
Priority Weighting: Must-Have 3×, Should-Have 2×, Nice-to-Have 1×.
Bid/No-Bid Logic:
- Bid: Coverage >70% AND must-have gaps ≤1
- Conditional Bid: Coverage 50–70% OR must-have gaps 2–3
- No-Bid: Coverage <50% OR must-have gaps >3
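A minimal sketch of how these thresholds combine, assuming the Conditional band takes precedence whenever 2–3 must-have gaps remain; the authoritative logic lives in scripts/rfp_response_analyzer.py:

```python
def bid_decision(coverage_pct: float, must_have_gaps: int) -> str:
    """Illustrative re-implementation of the bid/no-bid thresholds."""
    # No-Bid: coverage below 50% or more than 3 must-have gaps
    if coverage_pct < 50 or must_have_gaps > 3:
        return "NO-BID"
    # Conditional: coverage in the 50-70% band, or 2-3 must-have gaps remain
    if coverage_pct <= 70 or must_have_gaps >= 2:
        return "CONDITIONAL BID"
    # Bid: coverage above 70% with at most 1 must-have gap
    return "BID"

print(bid_decision(84.5, 0))  # the sample RFP (84.5%, 0 gaps) scores BID
```

With the sample data in assets/, this reproduces the `"decision": "BID"` shown in expected_output.json.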
Usage:
python scripts/rfp_response_analyzer.py assets/sample_rfp_data.json # human-readable
python scripts/rfp_response_analyzer.py assets/sample_rfp_data.json --format json # JSON output
python scripts/rfp_response_analyzer.py --help
Input Format: See assets/sample_rfp_data.json for the complete schema.
2. Competitive Matrix Builder
Script: scripts/competitive_matrix_builder.py
Purpose: Generate feature comparison matrices, calculate competitive scores, identify differentiators and vulnerabilities.
Feature Scoring: Full (3), Partial (2), Limited (1), None (0).
Usage:
python scripts/competitive_matrix_builder.py competitive_data.json # human-readable
python scripts/competitive_matrix_builder.py competitive_data.json --format json # JSON output
Output Includes: Feature comparison matrix, weighted competitive scores, differentiators, vulnerabilities, and win themes.
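The scoring model can be sketched as follows; the input shape (`weight`, `support`) is illustrative only, not the actual competitive_data.json schema:

```python
# Feature scoring scale from the section above: Full 3, Partial 2, Limited 1, None 0.
FEATURE_SCORES = {"full": 3, "partial": 2, "limited": 1, "none": 0}

def weighted_scores(features, vendors):
    """Score each vendor as sum(weight * capability) over the maximum possible."""
    totals = {}
    for vendor in vendors:
        earned = sum(f["weight"] * FEATURE_SCORES[f["support"][vendor]] for f in features)
        possible = sum(f["weight"] * 3 for f in features)  # 3 = Full on every feature
        totals[vendor] = round(100 * earned / possible, 1)
    return totals

# Hypothetical two-feature matrix comparing "us" against one competitor.
features = [
    {"name": "SSO", "weight": 3, "support": {"us": "full", "rival": "partial"}},
    {"name": "NLQ", "weight": 2, "support": {"us": "limited", "rival": "full"}},
]
print(weighted_scores(features, ["us", "rival"]))  # {'us': 73.3, 'rival': 80.0}
```

Features where our score beats the competitor's become differentiators; the reverse become vulnerabilities to pre-empt in objection handling.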
3. POC Planner
Script: scripts/poc_planner.py
Purpose: Generate structured POC plans with timeline, resource allocation, success criteria, and evaluation scorecards.
Default Phase Breakdown:
- Week 1: Setup — environment provisioning, data migration, configuration
- Weeks 2–3: Core Testing — primary use cases, integration testing
- Week 4: Advanced Testing — edge cases, performance, security
- Week 5: Evaluation — scorecard completion, stakeholder review, go/no-go
Usage:
python scripts/poc_planner.py poc_data.json # human-readable
python scripts/poc_planner.py poc_data.json --format json # JSON output
Output Includes: Phased POC plan, resource allocation, success criteria, evaluation scorecard, risk register, and go/no-go recommendation framework.
Reference Knowledge Bases
| Reference | Description |
|---|---|
| references/rfp-response-guide.md | RFP/RFI response best practices, compliance matrix, bid/no-bid framework |
| references/competitive-positioning-framework.md | Competitive analysis methodology, battlecard creation, objection handling |
| references/poc-best-practices.md | POC planning methodology, success criteria, evaluation frameworks |
Asset Templates
| Template | Purpose |
|---|---|
| assets/technical_proposal_template.md | Technical proposal with executive summary, solution architecture, implementation plan |
| assets/demo_script_template.md | Demo script with agenda, talking points, objection handling |
| assets/poc_scorecard_template.md | POC evaluation scorecard with weighted scoring |
| assets/sample_rfp_data.json | Sample RFP data for testing the analyzer |
| assets/expected_output.json | Expected output from rfp_response_analyzer.py |
Integration Points
- Marketing Skills (../../marketing-skill/): leverage competitive intelligence and messaging frameworks
- Product Team (../../product-team/): coordinate on roadmap items flagged as "Planned" in RFP analysis
- C-Level Advisory (../../c-level-advisor/): escalate strategic deals requiring executive engagement
- Customer Success (../customer-success-manager/): hand off POC results and success criteria to the CSM
Last Updated: February 2026
Status: Production-ready
Tools: 3 Python automation scripts
References: 3 knowledge base documents
Templates: 5 asset files
Demo Script Template
Demo Information
| Field | Value |
|---|---|
| Customer | [Customer Name] |
| Date/Time | [Date and Time] |
| Duration | [XX minutes] |
| Demo Environment | [Environment URL/Details] |
| Presenter | [Sales Engineer Name] |
| AE/Account Executive | [AE Name] |
Pre-Demo Checklist
- Demo environment tested and confirmed working
- Sample data loaded and validated
- Backup demo environment prepared
- Screen sharing tested with correct resolution
- Browser tabs pre-loaded with key screens
- Recording setup confirmed (if applicable)
- Customer-specific branding applied (if applicable)
- Network and VPN connectivity verified
- All integrations connected and tested
- Backup slides prepared in case of technical issues
Attendees and Roles
| Name | Title | Role in Evaluation | Key Interest |
|---|---|---|---|
| [Name] | [CTO/VP Eng] | Decision Maker | ROI, strategic fit |
| [Name] | [Director] | Champion | Solving [specific problem] |
| [Name] | [Manager] | Technical Evaluator | Architecture, integrations |
| [Name] | [Analyst] | End User | Day-to-day usability |
Agenda
| Time | Duration | Topic | Lead |
|---|---|---|---|
| 0:00 | 5 min | Welcome and introductions | AE |
| 0:05 | 5 min | Agenda and objectives | SE |
| 0:10 | 20 min | Core demo (Use Cases 1-3) | SE |
| 0:30 | 10 min | Integration demo | SE |
| 0:40 | 5 min | Admin and security overview | SE |
| 0:45 | 10 min | Q&A | SE + AE |
| 0:55 | 5 min | Next steps and wrap-up | AE |
Demo Flow
Opening (5 minutes)
Talking Points:
- Thank attendees for their time
- Recap what we learned in discovery: "[Summarize 2-3 key challenges]"
- Set expectations: "Today I'll show you how we address [Challenge 1], [Challenge 2], and [Challenge 3]"
- Frame the demo: "I'll be using [data type] similar to what you described in our earlier conversations"
Transition: "Let me start with the challenge you mentioned is most pressing: [Challenge 1]."
Use Case 1: [Name] (7 minutes)
Business Context: [1-2 sentences on why this matters to the customer]
Demo Steps:
Step 1: [Navigate to / Click on / Show...]
- What to say: "[Explain what they're seeing and why it matters]"
- Highlight: [Specific feature or capability to emphasize]
Step 2: [Navigate to / Click on / Show...]
- What to say: "[Connect this to their specific pain point]"
- Highlight: [Differentiator from competitor]
Step 3: [Navigate to / Click on / Show...]
- What to say: "[Quantify the value - time saved, errors reduced, etc.]"
- Highlight: [Ease of use or power of the feature]
Key Message: "[One sentence summarizing the value demonstrated]"
Transition: "Now that you've seen how we handle [Use Case 1], let me show you [Use Case 2]."
Use Case 2: [Name] (7 minutes)
Business Context: [1-2 sentences on why this matters to the customer]
Demo Steps:
Step 1: [Navigate to / Click on / Show...]
- What to say: "[Explanation]"
- Highlight: [Key capability]
Step 2: [Navigate to / Click on / Show...]
- What to say: "[Explanation]"
- Highlight: [Key capability]
Step 3: [Navigate to / Click on / Show...]
- What to say: "[Explanation]"
- Highlight: [Key capability]
Key Message: "[One sentence summarizing the value demonstrated]"
Transition: "[Transition statement to next section]"
Use Case 3: [Name] (6 minutes)
Business Context: [1-2 sentences on why this matters to the customer]
Demo Steps:
Step 1: [Description]
- What to say: "[Explanation]"
- Highlight: [Key capability]
Step 2: [Description]
- What to say: "[Explanation]"
- Highlight: [Key capability]
Key Message: "[One sentence summarizing the value demonstrated]"
Integration Demo (10 minutes)
Context: "You mentioned that integration with [System X] and [System Y] is critical. Let me show you how that works."
Demo Steps:
Show integration configuration:
- What to say: "Setting up the connection takes [X minutes/clicks]"
- Highlight: Native connector, no custom code required
Show data flow:
- What to say: "Data syncs in [real-time/X minute intervals]"
- Highlight: Reliability, error handling, monitoring
Show end-to-end workflow:
- What to say: "Here's the complete flow from [source] to [destination]"
- Highlight: Automation, reduced manual effort
Admin and Security (5 minutes)
Demo Steps:
Show RBAC configuration:
- What to say: "Administrators can define roles and permissions at [granularity level]"
Show audit log:
- What to say: "Every action is logged for compliance and security review"
Show SSO setup:
- What to say: "Single sign-on integrates with your existing identity provider"
Objection Handling
Anticipated Objections
| Objection | Response |
|---|---|
| "[Feature X] looks limited compared to [Competitor]" | "Great observation. Our approach to [Feature X] focuses on [benefit]. What specific aspect of [Feature X] is most important to your workflow? [Then demonstrate or explain how we address the specific need]" |
| "How does this handle [edge case]?" | "That's an important scenario. [If supported: Let me show you how that works.] [If not directly: Here's how our customers typically handle that use case...]" |
| "What about performance at our scale?" | "Excellent question. Our platform handles [benchmark data]. For your specific scale of [X], we'd recommend [architecture approach]. We can validate this in a POC." |
| "The implementation timeline seems long" | "The timeline I shared is for the full solution. We can phase the rollout to deliver value sooner. Phase 1 would give you [core capability] within [X weeks]." |
| "What happens if we outgrow this?" | "Our architecture is designed for growth. [Describe scaling approach]. We have customers who have scaled from [X] to [Y] without re-architecture." |
Recovery Strategies
If the demo breaks:
- Stay calm: "Let me switch to [backup environment / backup approach]"
- Explain what they would have seen
- Offer to follow up with a recorded walkthrough
- Pivot to the next demo section
If an unexpected question derails the flow:
- Acknowledge: "That's an excellent question"
- Briefly answer or note it for follow-up
- Return to the demo flow: "Let me continue with [next section] and we can dive deeper into that during Q&A"
If the audience seems disengaged:
- Pause and ask: "Before I continue, is this addressing what you're looking for?"
- Adjust focus based on their response
- Skip ahead to the section most relevant to their interests
Post-Demo Actions
- Send thank-you email with recording link (if recorded)
- Share demo environment access credentials (if applicable)
- Send follow-up document addressing unanswered questions
- Schedule next meeting (POC kickoff, technical deep-dive, etc.)
- Update CRM with demo notes and next steps
- Debrief with AE on stakeholder reactions and concerns
- Log key objections and responses for battlecard updates
Notes
[Space for real-time notes during the demo]
Questions Raised
- [Question] - [Answer / Follow-up needed]
- [Question] - [Answer / Follow-up needed]
Feedback Received
- [Positive feedback]
- [Concerns raised]
Next Steps Agreed
- [Action item] - [Owner] - [Date]
- [Action item] - [Owner] - [Date]
{
"rfp_info": {
"rfp_name": "Enterprise Data Analytics Platform RFP",
"customer": "Acme Financial Services",
"due_date": "2026-03-15",
"strategic_value": "high",
"deal_value": "$450,000 ARR"
},
"coverage_summary": {
"overall_coverage_percentage": 84.5,
"total_requirements": 21,
"full": 14,
"partial": 3,
"planned": 2,
"gap": 2,
"must_have_gaps": 0
},
"category_scores": {
"Data Integration": {
"coverage_percentage": 90.0,
"requirements_count": 4,
"full": 3,
"partial": 1,
"planned": 0,
"gap": 0,
"effort_hours": 34
},
"Analytics & Visualization": {
"coverage_percentage": 77.8,
"requirements_count": 4,
"full": 2,
"partial": 1,
"planned": 1,
"gap": 0,
"effort_hours": 56
},
"Security & Compliance": {
"coverage_percentage": 81.8,
"requirements_count": 4,
"full": 3,
"partial": 0,
"planned": 0,
"gap": 1,
"effort_hours": 50
},
"Performance & Scalability": {
"coverage_percentage": 87.5,
"requirements_count": 3,
"full": 2,
"partial": 1,
"planned": 0,
"gap": 0,
"effort_hours": 32
},
"API & Extensibility": {
"coverage_percentage": 87.5,
"requirements_count": 3,
"full": 2,
"partial": 0,
"planned": 1,
"gap": 0,
"effort_hours": 38
},
"Support & SLA": {
"coverage_percentage": 100.0,
"requirements_count": 2,
"full": 2,
"partial": 0,
"planned": 0,
"gap": 0,
"effort_hours": 4
},
"Deployment": {
"coverage_percentage": 0.0,
"requirements_count": 1,
"full": 0,
"partial": 0,
"planned": 0,
"gap": 1,
"effort_hours": 80
}
},
"bid_recommendation": {
"decision": "BID",
"confidence": "high",
"overall_coverage_percentage": 84.5,
"must_have_gaps": 0,
"strategic_value": "high",
"reasons": [
"Coverage score 84.5% exceeds 70% threshold"
]
},
"gap_analysis": [
{
"id": "R-004",
"requirement": "Change data capture (CDC) for real-time sync",
"category": "Data Integration",
"priority": "should-have",
"coverage_status": "partial",
"severity": "high",
"effort_hours": 16,
"mitigation": "Document supported CDC sources; provide configuration guide for non-standard sources"
},
{
"id": "R-007",
"requirement": "Natural language query interface for business users",
"category": "Analytics & Visualization",
"priority": "should-have",
"coverage_status": "planned",
"severity": "high",
"effort_hours": 24,
"mitigation": "Share roadmap timeline; offer guided query builder as interim solution"
},
{
"id": "R-012",
"requirement": "HIPAA compliance for healthcare data handling",
"category": "Security & Compliance",
"priority": "should-have",
"coverage_status": "gap",
"severity": "high",
"effort_hours": 40,
"mitigation": "Evaluate HIPAA certification timeline with compliance team; consider data masking as interim"
},
{
"id": "R-015",
"requirement": "Multi-region deployment with data residency controls",
"category": "Performance & Scalability",
"priority": "should-have",
"coverage_status": "partial",
"severity": "high",
"effort_hours": 20,
"mitigation": "Confirm customer region requirements; provide APAC beta access if needed"
},
{
"id": "R-008",
"requirement": "Predictive analytics and ML model integration",
"category": "Analytics & Visualization",
"priority": "nice-to-have",
"coverage_status": "partial",
"severity": "low",
"effort_hours": 20,
"mitigation": "Demonstrate Python integration for custom models; provide example notebooks"
},
{
"id": "R-018",
"requirement": "Custom plugin/extension framework",
"category": "API & Extensibility",
"priority": "nice-to-have",
"coverage_status": "planned",
"severity": "low",
"effort_hours": 30,
"mitigation": "Current API extensibility covers most use cases; plugin framework will expand options"
},
{
"id": "R-021",
"requirement": "On-premise deployment option",
"category": "Deployment",
"priority": "nice-to-have",
"coverage_status": "gap",
"severity": "low",
"effort_hours": 80,
"mitigation": "Position cloud-first architecture benefits; offer VPC deployment as alternative"
}
],
"risk_assessment": [
{
"risk": "High customization effort",
"impact": "high",
"description": "230 hours estimated for non-full requirements",
"mitigation": "Evaluate resource availability and timeline feasibility before committing"
}
],
"effort_estimate": {
"total_hours": 294,
"gap_closure_hours": 230,
"full_coverage_hours": 64
},
"requirements_detail": [
{
"id": "R-001",
"requirement": "Real-time data ingestion from multiple sources (APIs, databases, streaming)",
"category": "Data Integration",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 8,
"notes": "Native connectors for 200+ data sources",
"mitigation": ""
},
{
"id": "R-002",
"requirement": "Support for SQL and NoSQL data sources",
"category": "Data Integration",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 4,
"notes": "Supports PostgreSQL, MySQL, MongoDB, Cassandra, and more",
"mitigation": ""
},
{
"id": "R-003",
"requirement": "Automated ETL pipeline creation with visual designer",
"category": "Data Integration",
"priority": "should-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 2.0,
"weighted_score": 2.0,
"max_weighted": 2.0,
"effort_hours": 6,
"notes": "Drag-and-drop pipeline builder included",
"mitigation": ""
},
{
"id": "R-004",
"requirement": "Change data capture (CDC) for real-time sync",
"category": "Data Integration",
"priority": "should-have",
"coverage_status": "partial",
"coverage_score": 0.5,
"weight": 2.0,
"weighted_score": 1.0,
"max_weighted": 2.0,
"effort_hours": 16,
"notes": "CDC supported for major databases; some require custom configuration",
"mitigation": "Document supported CDC sources; provide configuration guide for non-standard sources"
},
{
"id": "R-005",
"requirement": "Interactive dashboard creation with drag-and-drop",
"category": "Analytics & Visualization",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 4,
"notes": "Full drag-and-drop dashboard builder with 50+ chart types",
"mitigation": ""
},
{
"id": "R-006",
"requirement": "Embedded analytics with white-labeling support",
"category": "Analytics & Visualization",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 8,
"notes": "Full embedding SDK with CSS customization",
"mitigation": ""
},
{
"id": "R-007",
"requirement": "Natural language query interface for business users",
"category": "Analytics & Visualization",
"priority": "should-have",
"coverage_status": "planned",
"coverage_score": 0.25,
"weight": 2.0,
"weighted_score": 0.5,
"max_weighted": 2.0,
"effort_hours": 24,
"notes": "NLQ feature on roadmap for Q3 2026",
"mitigation": "Share roadmap timeline; offer guided query builder as interim solution"
},
{
"id": "R-008",
"requirement": "Predictive analytics and ML model integration",
"category": "Analytics & Visualization",
"priority": "nice-to-have",
"coverage_status": "partial",
"coverage_score": 0.5,
"weight": 1.0,
"weighted_score": 0.5,
"max_weighted": 1.0,
"effort_hours": 20,
"notes": "Python/R integration available; no built-in ML models",
"mitigation": "Demonstrate Python integration for custom models; provide example notebooks"
},
{
"id": "R-009",
"requirement": "Role-based access control (RBAC) with row-level security",
"category": "Security & Compliance",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 6,
"notes": "Granular RBAC with row-level and column-level security",
"mitigation": ""
},
{
"id": "R-010",
"requirement": "SOC 2 Type II certification",
"category": "Security & Compliance",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 2,
"notes": "Current SOC 2 Type II report available upon NDA",
"mitigation": ""
},
{
"id": "R-011",
"requirement": "Data encryption at rest and in transit (AES-256, TLS 1.3)",
"category": "Security & Compliance",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 2,
"notes": "AES-256 at rest, TLS 1.3 in transit, customer-managed keys supported",
"mitigation": ""
},
{
"id": "R-012",
"requirement": "HIPAA compliance for healthcare data handling",
"category": "Security & Compliance",
"priority": "should-have",
"coverage_status": "gap",
"coverage_score": 0.0,
"weight": 2.0,
"weighted_score": 0.0,
"max_weighted": 2.0,
"effort_hours": 40,
"notes": "HIPAA BAA not currently offered",
"mitigation": "Evaluate HIPAA certification timeline with compliance team; consider data masking as interim"
},
{
"id": "R-013",
"requirement": "Horizontal scaling to handle 10B+ rows",
"category": "Performance & Scalability",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 8,
"notes": "Distributed query engine scales to 50B+ rows",
"mitigation": ""
},
{
"id": "R-014",
"requirement": "Sub-second query response for cached dashboards",
"category": "Performance & Scalability",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 4,
"notes": "Intelligent caching layer with <500ms p95 for cached queries",
"mitigation": ""
},
{
"id": "R-015",
"requirement": "Multi-region deployment with data residency controls",
"category": "Performance & Scalability",
"priority": "should-have",
"coverage_status": "partial",
"coverage_score": 0.5,
"weight": 2.0,
"weighted_score": 1.0,
"max_weighted": 2.0,
"effort_hours": 20,
"notes": "US and EU regions available; APAC region in beta",
"mitigation": "Confirm customer region requirements; provide APAC beta access if needed"
},
{
"id": "R-016",
"requirement": "RESTful API with comprehensive documentation",
"category": "API & Extensibility",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 4,
"notes": "Full REST API with OpenAPI spec and interactive documentation",
"mitigation": ""
},
{
"id": "R-017",
"requirement": "Webhook support for event-driven workflows",
"category": "API & Extensibility",
"priority": "should-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 2.0,
"weighted_score": 2.0,
"max_weighted": 2.0,
"effort_hours": 4,
"notes": "Webhook support for 30+ event types",
"mitigation": ""
},
{
"id": "R-018",
"requirement": "Custom plugin/extension framework",
"category": "API & Extensibility",
"priority": "nice-to-have",
"coverage_status": "planned",
"coverage_score": 0.25,
"weight": 1.0,
"weighted_score": 0.25,
"max_weighted": 1.0,
"effort_hours": 30,
"notes": "Plugin framework on roadmap for Q4 2026",
"mitigation": "Current API extensibility covers most use cases; plugin framework will expand options"
},
{
"id": "R-019",
"requirement": "24/7 enterprise support with 1-hour critical response time",
"category": "Support & SLA",
"priority": "must-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 3.0,
"weighted_score": 3.0,
"max_weighted": 3.0,
"effort_hours": 2,
"notes": "Premium support tier includes 24/7 coverage with 30-min critical response SLA",
"mitigation": ""
},
{
"id": "R-020",
"requirement": "Dedicated customer success manager",
"category": "Support & SLA",
"priority": "should-have",
"coverage_status": "full",
"coverage_score": 1.0,
"weight": 2.0,
"weighted_score": 2.0,
"max_weighted": 2.0,
"effort_hours": 2,
"notes": "Included in Enterprise tier",
"mitigation": ""
},
{
"id": "R-021",
"requirement": "On-premise deployment option",
"category": "Deployment",
"priority": "nice-to-have",
"coverage_status": "gap",
"coverage_score": 0.0,
"weight": 1.0,
"weighted_score": 0.0,
"max_weighted": 1.0,
"effort_hours": 80,
"notes": "Cloud-only platform; no on-premise offering",
"mitigation": "Position cloud-first architecture benefits; offer VPC deployment as alternative"
}
]
}
POC Evaluation Scorecard
Scorecard Information
| Field | Value |
|---|---|
| POC Name | [POC Name] |
| Customer | [Customer Name] |
| Vendor/Product | [Product Name] |
| Evaluation Period | [Start Date] - [End Date] |
| Evaluated By | [Names and Roles] |
| Date Completed | [Date] |
Scoring Scale
| Score | Label | Definition |
|---|---|---|
| 5 | Exceeds | Superior capability; exceeds requirements with notable strengths |
| 4 | Meets | Full capability; meets all requirements with no significant gaps |
| 3 | Partial | Acceptable capability; minor gaps that can be addressed |
| 2 | Below | Below expectations; significant gaps that impact value |
| 1 | Fails | Does not meet requirements; critical gaps |
| N/A | Not Evaluated | Not tested during this POC |
Evaluation Categories
1. Functionality (Weight: 30%)
| Criterion | Score (1-5) | Evidence / Notes |
|---|---|---|
| Core feature completeness | | |
| Use case coverage | | |
| Customization flexibility | | |
| Workflow automation | | |
| Data handling and transformation | | |
| Reporting and analytics | | |
Category Score: ___/5.0
Category Notes: [Summary of functionality evaluation, key strengths and gaps]
2. Performance (Weight: 20%)
| Criterion | Score (1-5) | Evidence / Notes |
|---|---|---|
| Response time under expected load | | |
| Response time under peak load | | |
| Throughput capacity | | |
| Scalability characteristics | | |
| Resource utilization | | |
| Batch processing performance | | |
Category Score: ___/5.0
Category Notes: [Summary of performance evaluation, benchmark results]
3. Integration (Weight: 20%)
| Criterion | Score (1-5) | Evidence / Notes |
|---|---|---|
| API completeness and documentation | | |
| Data migration ease | | |
| Third-party connector availability | | |
| Authentication/SSO integration | | |
| Real-time sync reliability | | |
| Error handling and recovery | | |
Category Score: ___/5.0
Category Notes: [Summary of integration evaluation, systems tested]
4. Usability (Weight: 15%)
| Criterion | Score (1-5) | Evidence / Notes |
|---|---|---|
| User interface intuitiveness | | |
| Learning curve assessment | | |
| Documentation quality | | |
| Admin console functionality | | |
| Mobile experience | | |
| Accessibility compliance | | |
Category Score: ___/5.0
Category Notes: [Summary of usability evaluation, user feedback]
5. Support (Weight: 15%)
| Criterion | Score (1-5) | Evidence / Notes |
|---|---|---|
| Technical support responsiveness | | |
| Knowledge base quality | | |
| Training resources availability | | |
| Community and ecosystem | | |
| Issue resolution speed | | |
| Proactive engagement quality | | |
Category Score: ___/5.0
Category Notes: [Summary of support evaluation during POC]
Score Summary
| Category | Weight | Score | Weighted Score |
|---|---|---|---|
| Functionality | 30% | ___/5.0 | ___ |
| Performance | 20% | ___/5.0 | ___ |
| Integration | 20% | ___/5.0 | ___ |
| Usability | 15% | ___/5.0 | ___ |
| Support | 15% | ___/5.0 | ___ |
| Overall | 100% | | ___/5.0 |
Decision Thresholds
| Weighted Average | Decision |
|---|---|
| >= 4.0 | Strong Pass - Proceed to procurement |
| 3.5 - 3.9 | Pass - Proceed with noted conditions |
| 3.0 - 3.4 | Conditional - Requires further evaluation |
| < 3.0 | Fail - Does not meet requirements |
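A sketch of how the weighted summary maps onto these thresholds; the category keys are illustrative, since the template itself is filled in by hand:

```python
# Category weights from the Score Summary table above.
WEIGHTS = {"functionality": 0.30, "performance": 0.20, "integration": 0.20,
           "usability": 0.15, "support": 0.15}

def poc_decision(scores):
    """Weighted average on the 1-5 scale, mapped to the decision thresholds."""
    avg = sum(WEIGHTS[cat] * score for cat, score in scores.items())
    if avg >= 4.0:
        decision = "Strong Pass"
    elif avg >= 3.5:
        decision = "Pass"
    elif avg >= 3.0:
        decision = "Conditional"
    else:
        decision = "Fail"
    return round(avg, 2), decision

# Example scorecard: strong functionality/support, middling usability.
print(poc_decision({"functionality": 4.2, "performance": 3.8, "integration": 4.0,
                    "usability": 3.5, "support": 4.5}))  # → (4.02, 'Strong Pass')
```

Because the weights sum to 1.0, the weighted average stays on the same 1-5 scale as the individual category scores.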
Success Criteria Results
| # | Criterion | Priority | Target | Actual | Pass/Fail |
|---|---|---|---|---|---|
| 1 | [Criterion 1] | Must-Have | [Target] | [Result] | [ ] |
| 2 | [Criterion 2] | Must-Have | [Target] | [Result] | [ ] |
| 3 | [Criterion 3] | Must-Have | [Target] | [Result] | [ ] |
| 4 | [Criterion 4] | Should-Have | [Target] | [Result] | [ ] |
| 5 | [Criterion 5] | Should-Have | [Target] | [Result] | [ ] |
| 6 | [Criterion 6] | Nice-to-Have | [Target] | [Result] | [ ] |
Must-Have Pass Rate: ___% Overall Pass Rate: ___%
Issues Log
| # | Issue | Severity | Status | Resolution | Impact on Score |
|---|---|---|---|---|---|
| 1 | [Issue] | [Critical/High/Medium/Low] | [Open/Resolved] | [Resolution] | [Category affected] |
| 2 | [Issue] | [Critical/High/Medium/Low] | [Open/Resolved] | [Resolution] | [Category affected] |
Stakeholder Feedback
[Stakeholder Name 1] - [Role]
Rating: ___/5 Comments: [Feedback]
[Stakeholder Name 2] - [Role]
Rating: ___/5 Comments: [Feedback]
[Stakeholder Name 3] - [Role]
Rating: ___/5 Comments: [Feedback]
Recommendation
Decision: [ ] GO / [ ] CONDITIONAL GO / [ ] NO-GO
Rationale: [2-3 paragraphs explaining the recommendation based on scorecard results, success criteria outcomes, stakeholder feedback, and overall evaluation]
Conditions (if Conditional GO):
- [Condition 1 that must be met before proceeding]
- [Condition 2 that must be met before proceeding]
Key Strengths:
- [Strength 1]
- [Strength 2]
- [Strength 3]
Key Concerns:
- [Concern 1 with proposed mitigation]
- [Concern 2 with proposed mitigation]
Next Steps:
- [Action item] - [Owner] - [Date]
- [Action item] - [Owner] - [Date]
- [Action item] - [Owner] - [Date]
Sign-Off
| Role | Name | Signature | Date |
|---|---|---|---|
| Technical Evaluator | |||
| Business Sponsor | |||
| Decision Maker | |||
| Sales Engineer |
{
"rfp_name": "Enterprise Data Analytics Platform RFP",
"customer": "Acme Financial Services",
"due_date": "2026-03-15",
"deal_value": "$450,000 ARR",
"strategic_value": "high",
"requirements": [
{
"id": "R-001",
"requirement": "Real-time data ingestion from multiple sources (APIs, databases, streaming)",
"category": "Data Integration",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 8,
"notes": "Native connectors for 200+ data sources",
"mitigation": ""
},
{
"id": "R-002",
"requirement": "Support for SQL and NoSQL data sources",
"category": "Data Integration",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 4,
"notes": "Supports PostgreSQL, MySQL, MongoDB, Cassandra, and more",
"mitigation": ""
},
{
"id": "R-003",
"requirement": "Automated ETL pipeline creation with visual designer",
"category": "Data Integration",
"priority": "should-have",
"coverage_status": "full",
"effort_hours": 6,
"notes": "Drag-and-drop pipeline builder included",
"mitigation": ""
},
{
"id": "R-004",
"requirement": "Change data capture (CDC) for real-time sync",
"category": "Data Integration",
"priority": "should-have",
"coverage_status": "partial",
"effort_hours": 16,
"notes": "CDC supported for major databases; some require custom configuration",
"mitigation": "Document supported CDC sources; provide configuration guide for non-standard sources"
},
{
"id": "R-005",
"requirement": "Interactive dashboard creation with drag-and-drop",
"category": "Analytics & Visualization",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 4,
"notes": "Full drag-and-drop dashboard builder with 50+ chart types",
"mitigation": ""
},
{
"id": "R-006",
"requirement": "Embedded analytics with white-labeling support",
"category": "Analytics & Visualization",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 8,
"notes": "Full embedding SDK with CSS customization",
"mitigation": ""
},
{
"id": "R-007",
"requirement": "Natural language query interface for business users",
"category": "Analytics & Visualization",
"priority": "should-have",
"coverage_status": "planned",
"effort_hours": 24,
"notes": "NLQ feature on roadmap for Q3 2026",
"mitigation": "Share roadmap timeline; offer guided query builder as interim solution"
},
{
"id": "R-008",
"requirement": "Predictive analytics and ML model integration",
"category": "Analytics & Visualization",
"priority": "nice-to-have",
"coverage_status": "partial",
"effort_hours": 20,
"notes": "Python/R integration available; no built-in ML models",
"mitigation": "Demonstrate Python integration for custom models; provide example notebooks"
},
{
"id": "R-009",
"requirement": "Role-based access control (RBAC) with row-level security",
"category": "Security & Compliance",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 6,
"notes": "Granular RBAC with row-level and column-level security",
"mitigation": ""
},
{
"id": "R-010",
"requirement": "SOC 2 Type II certification",
"category": "Security & Compliance",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 2,
"notes": "Current SOC 2 Type II report available upon NDA",
"mitigation": ""
},
{
"id": "R-011",
"requirement": "Data encryption at rest and in transit (AES-256, TLS 1.3)",
"category": "Security & Compliance",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 2,
"notes": "AES-256 at rest, TLS 1.3 in transit, customer-managed keys supported",
"mitigation": ""
},
{
"id": "R-012",
"requirement": "HIPAA compliance for healthcare data handling",
"category": "Security & Compliance",
"priority": "should-have",
"coverage_status": "gap",
"effort_hours": 40,
"notes": "HIPAA BAA not currently offered",
"mitigation": "Evaluate HIPAA certification timeline with compliance team; consider data masking as interim"
},
{
"id": "R-013",
"requirement": "Horizontal scaling to handle 10B+ rows",
"category": "Performance & Scalability",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 8,
"notes": "Distributed query engine scales to 50B+ rows",
"mitigation": ""
},
{
"id": "R-014",
"requirement": "Sub-second query response for cached dashboards",
"category": "Performance & Scalability",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 4,
"notes": "Intelligent caching layer with <500ms p95 for cached queries",
"mitigation": ""
},
{
"id": "R-015",
"requirement": "Multi-region deployment with data residency controls",
"category": "Performance & Scalability",
"priority": "should-have",
"coverage_status": "partial",
"effort_hours": 20,
"notes": "US and EU regions available; APAC region in beta",
"mitigation": "Confirm customer region requirements; provide APAC beta access if needed"
},
{
"id": "R-016",
"requirement": "RESTful API with comprehensive documentation",
"category": "API & Extensibility",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 4,
"notes": "Full REST API with OpenAPI spec and interactive documentation",
"mitigation": ""
},
{
"id": "R-017",
"requirement": "Webhook support for event-driven workflows",
"category": "API & Extensibility",
"priority": "should-have",
"coverage_status": "full",
"effort_hours": 4,
"notes": "Webhook support for 30+ event types",
"mitigation": ""
},
{
"id": "R-018",
"requirement": "Custom plugin/extension framework",
"category": "API & Extensibility",
"priority": "nice-to-have",
"coverage_status": "planned",
"effort_hours": 30,
"notes": "Plugin framework on roadmap for Q4 2026",
"mitigation": "Current API extensibility covers most use cases; plugin framework will expand options"
},
{
"id": "R-019",
"requirement": "24/7 enterprise support with 1-hour critical response time",
"category": "Support & SLA",
"priority": "must-have",
"coverage_status": "full",
"effort_hours": 2,
"notes": "Premium support tier includes 24/7 coverage with 30-min critical response SLA",
"mitigation": ""
},
{
"id": "R-020",
"requirement": "Dedicated customer success manager",
"category": "Support & SLA",
"priority": "should-have",
"coverage_status": "full",
"effort_hours": 2,
"notes": "Included in Enterprise tier",
"mitigation": ""
},
{
"id": "R-021",
"requirement": "On-premise deployment option",
"category": "Deployment",
"priority": "nice-to-have",
"coverage_status": "gap",
"effort_hours": 80,
"notes": "Cloud-only platform; no on-premise offering",
"mitigation": "Position cloud-first architecture benefits; offer VPC deployment as alternative"
}
]
}
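The Phase 1 validation checkpoint can be recomputed directly from a requirements file shaped like the sample above. A minimal sketch — the per-status credit values are illustrative assumptions, not necessarily what `rfp_response_analyzer.py` uses:

```python
# Illustrative credit per coverage_status; the analyzer script may weight differently.
CREDIT = {"full": 1.0, "partial": 0.5, "planned": 0.25, "gap": 0.0}

def coverage_summary(data: dict) -> dict:
    """Coverage score (0-100) and must-have gap count for a parsed RFP dataset."""
    reqs = data["requirements"]
    score = 100 * sum(CREDIT[r["coverage_status"]] for r in reqs) / len(reqs)
    gaps = sum(1 for r in reqs
               if r["priority"] == "must-have" and r["coverage_status"] != "full")
    return {"coverage_score": round(score, 1), "must_have_gaps": gaps}

# Tiny inline sample in the same shape as assets/sample_rfp_data.json.
sample = {"requirements": [
    {"id": "R-001", "priority": "must-have", "coverage_status": "full"},
    {"id": "R-009", "priority": "must-have", "coverage_status": "full"},
    {"id": "R-004", "priority": "should-have", "coverage_status": "partial"},
]}
result = coverage_summary(sample)
print("PROCEED" if result["coverage_score"] > 50 and result["must_have_gaps"] <= 3
      else "REVIEW")  # PROCEED
```

In practice, load `assets/sample_rfp_data.json` with `json.load` and pass the parsed dict.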
Technical Proposal Template
Document Information
| Field | Value |
|---|---|
| Customer | [Customer Name] |
| Opportunity | [Opportunity Name / RFP Reference] |
| Prepared By | [Sales Engineer Name] |
| Date | [Date] |
| Version | [Version Number] |
| Classification | [Confidential / Internal] |
1. Executive Summary
Business Context
[2-3 paragraphs summarizing the customer's business challenges and strategic objectives that this solution addresses. Focus on business outcomes, not technical features.]
Proposed Solution
[1-2 paragraphs describing the solution at a high level, emphasizing how it addresses the specific challenges identified above.]
Key Value Propositions
- [Value 1]: [Quantified benefit, e.g., "Reduce reporting time by 60%"]
- [Value 2]: [Quantified benefit]
- [Value 3]: [Quantified benefit]
Recommended Approach
[Brief overview of the implementation approach, timeline, and key milestones.]
2. Requirements Summary
Coverage Overview
| Category | Requirements | Full | Partial | Planned | Gap | Coverage |
|---|---|---|---|---|---|---|
| [Category 1] | [N] | [N] | [N] | [N] | [N] | [X%] |
| [Category 2] | [N] | [N] | [N] | [N] | [N] | [X%] |
| Total | [N] | [N] | [N] | [N] | [N] | [X%] |
Key Differentiators
- [Differentiator 1 with brief explanation]
- [Differentiator 2 with brief explanation]
- [Differentiator 3 with brief explanation]
Gap Mitigation Plan
| Gap | Priority | Mitigation Strategy | Timeline |
|---|---|---|---|
| [Gap 1] | [Must/Should/Nice] | [Strategy] | [Date] |
| [Gap 2] | [Must/Should/Nice] | [Strategy] | [Date] |
3. Solution Architecture
Architecture Overview
[High-level architecture description. Include or reference an architecture diagram.]
[ASCII architecture diagram or reference to attached diagram]
Example:
+------------------+ +------------------+ +------------------+
| Data Sources | --> | Our Platform | --> | Delivery |
| - System A | | - Ingestion | | - Dashboards |
| - System B | | - Processing | | - API |
| - System C | | - Analytics | | - Exports |
+------------------+ +------------------+ +------------------+
|
+------------------+
| Management |
| - Security |
| - Monitoring |
| - Admin |
+------------------+
Component Details
[Component 1]
- Purpose: [What this component does]
- Technology: [Underlying technology]
- Scaling: [How it scales]
- Availability: [HA/DR approach]
[Component 2]
- Purpose: [What this component does]
- Technology: [Underlying technology]
- Scaling: [How it scales]
- Availability: [HA/DR approach]
Integration Architecture
| Integration Point | Protocol | Direction | Frequency | Authentication |
|---|---|---|---|---|
| [System A] | REST API | Inbound | Real-time | OAuth 2.0 |
| [System B] | JDBC | Inbound | Batch (hourly) | Service Account |
| [System C] | Webhook | Outbound | Event-driven | API Key |
Security Architecture
- Authentication: [SSO, SAML, OAuth, etc.]
- Authorization: [RBAC, row-level security, etc.]
- Encryption: [At rest, in transit, key management]
- Compliance: [SOC 2, GDPR, HIPAA, etc.]
- Network: [VPC, firewall, IP restrictions]
4. Implementation Plan
Phase Overview
| Phase | Duration | Focus | Deliverables |
|---|---|---|---|
| Phase 1: Foundation | [X weeks] | Environment setup, core configuration | Working environment, admin access |
| Phase 2: Core Implementation | [X weeks] | Primary use cases, integrations | [Deliverables] |
| Phase 3: Advanced Features | [X weeks] | Advanced scenarios, optimization | [Deliverables] |
| Phase 4: Go-Live | [X weeks] | Testing, training, cutover | Production deployment |
Detailed Timeline
Week 1-2: [Phase 1 - Foundation]
- Environment provisioning
- Security configuration
- Data source connectivity
Week 3-6: [Phase 2 - Core Implementation]
- Use case 1 implementation
- Use case 2 implementation
- Integration testing
Week 7-8: [Phase 3 - Advanced Features]
- Advanced analytics
- Custom workflows
- Performance optimization
Week 9-10: [Phase 4 - Go-Live]
- User acceptance testing
- Training sessions
- Production cutover
- Post-launch support
Resource Requirements
| Role | Hours | Phase(s) | Provider |
|---|---|---|---|
| Solutions Architect | [X] | All | [Vendor] |
| Implementation Engineer | [X] | 1-3 | [Vendor] |
| Project Manager | [X] | All | [Vendor] |
| Customer IT Admin | [X] | 1, 4 | [Customer] |
| Customer Business Lead | [X] | 2-4 | [Customer] |
Training Plan
| Audience | Format | Duration | Content |
|---|---|---|---|
| Administrators | Workshop | [X hours] | Configuration, security, monitoring |
| Power Users | Workshop | [X hours] | Advanced features, reporting, automation |
| End Users | Webinar | [X hours] | Core workflows, self-service analytics |
5. Risk Mitigation
| Risk | Probability | Impact | Mitigation |
|---|---|---|---|
| [Risk 1] | [H/M/L] | [H/M/L] | [Strategy] |
| [Risk 2] | [H/M/L] | [H/M/L] | [Strategy] |
| [Risk 3] | [H/M/L] | [H/M/L] | [Strategy] |
6. Commercial Summary
Pricing Overview
| Component | Annual Cost |
|---|---|
| Platform License | $[X] |
| Implementation Services | $[X] |
| Training | $[X] |
| Premium Support | $[X] |
| Total Year 1 | $[X] |
| Annual Renewal | $[X] |
ROI Projection
| Metric | Current State | With Solution | Improvement |
|---|---|---|---|
| [Metric 1] | [Value] | [Value] | [%] |
| [Metric 2] | [Value] | [Value] | [%] |
| [Metric 3] | [Value] | [Value] | [%] |
Estimated payback period: [X months]
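The payback figure follows directly from the table: year-1 cost divided by the monthly value delivered. A sketch with made-up numbers:

```python
def payback_months(year1_cost: float, annual_benefit: float) -> float:
    """Months until cumulative quantified benefit covers the year-1 investment."""
    return round(12 * year1_cost / annual_benefit, 1)

# Hypothetical figures: $500k year-1 cost against $1.2M/year of quantified benefit.
print(payback_months(500_000, 1_200_000))  # 5.0
```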
7. Next Steps
- [Next step 1 with owner and date]
- [Next step 2 with owner and date]
- [Next step 3 with owner and date]
Appendices
A. Detailed Compliance Matrix
[Reference to full requirement-by-requirement response]
B. Reference Customers
[2-3 relevant customer references with industry, use case, and outcomes]
C. Architecture Diagrams
[Detailed architecture diagrams]
D. Product Roadmap (Relevant Items)
[Roadmap items relevant to this proposal with estimated delivery dates]
Competitive Positioning Framework
A comprehensive guide for Sales Engineers to analyze competitors, build battlecards, handle objections, and position for wins.
Competitive Analysis Methodology
1. Intelligence Gathering
Primary Sources:
- Competitor product documentation and release notes
- Analyst reports (Gartner, Forrester, IDC)
- Customer feedback from win/loss reviews
- Industry conferences and webinars
- Public case studies and testimonials
- Open-source repositories and API documentation
Secondary Sources:
- Glassdoor reviews (engineering culture, product direction)
- Job postings (technology stack, expansion areas)
- Patent filings (future direction signals)
- Social media and community forums
- Partner ecosystem announcements
2. Feature Comparison Best Practices
Feature Scoring Scale:
| Score | Label | Definition |
|---|---|---|
| 3 | Full | Complete, production-ready feature support |
| 2 | Partial | Feature exists but with limitations or caveats |
| 1 | Limited | Minimal implementation, significant gaps |
| 0 | None | Feature not available |
Comparison Categories:
Organize features into weighted categories that reflect customer priorities:
| Category | Typical Weight | What to Evaluate |
|---|---|---|
| Core Functionality | 25-35% | Primary use case coverage |
| Integration & API | 15-25% | Ecosystem connectivity |
| Security & Compliance | 15-20% | Enterprise readiness |
| Scalability & Performance | 10-20% | Growth capacity |
| Usability & UX | 10-15% | Time to value |
| Support & Services | 5-10% | Vendor partnership quality |
Weighting Guidelines:
- Adjust weights based on the specific customer's priorities
- Security-sensitive industries (healthcare, finance) should weight compliance higher
- High-growth companies should weight scalability higher
- Enterprise deals should weight integration and support higher
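Once weights are fixed, the comparison itself is mechanical. A sketch using the 0-3 scale above — every product name, weight, and score below is hypothetical:

```python
# Hypothetical category weights and 0-3 feature scores (see the scale above).
weights = {"core": 0.30, "integration": 0.20, "security": 0.20,
           "scalability": 0.15, "usability": 0.10, "support": 0.05}

products = {
    "ours":       {"core": 3, "integration": 3, "security": 2,
                   "scalability": 3, "usability": 2, "support": 3},
    "competitor": {"core": 2, "integration": 3, "security": 3,
                   "scalability": 2, "usability": 3, "support": 2},
}

def weighted_score(scores: dict[str, int]) -> float:
    """Weighted 0-3 score, normalized to 0-100 for readability."""
    return round(100 * sum(weights[c] * scores[c] for c in weights) / 3, 1)

for name, scores in products.items():
    print(name, weighted_score(scores))  # ours 90.0, competitor 83.3

# Differentiators: categories where our score strictly beats the competitor's.
diffs = [c for c in weights if products["ours"][c] > products["competitor"][c]]
print("differentiators:", diffs)  # ['core', 'scalability', 'support']
```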
3. Differentiator Identification
A differentiator is a feature or capability where your product scores highest among all compared products. Strong differentiators have these properties:
- Unique: Only your product offers this capability
- Valuable: Customers care about this capability
- Defensible: Not easily replicated by competitors
- Demonstrable: Can be shown in a demo or POC
Differentiator Categories:
| Type | Description | Example |
|---|---|---|
| Feature Differentiator | Unique product capability | Native ML-powered anomaly detection |
| Architecture Differentiator | Fundamental design advantage | Multi-tenant with data isolation |
| Ecosystem Differentiator | Partner or integration advantage | 200+ native integrations |
| Service Differentiator | Support or engagement model | Dedicated SE throughout contract |
| Economic Differentiator | Pricing or TCO advantage | Usage-based pricing with no minimums |
4. Vulnerability Assessment
Vulnerabilities are features where competitors score higher than your product. Address vulnerabilities proactively:
Vulnerability Response Strategies:
- Acknowledge and redirect: Confirm the gap, then pivot to your strength areas
- Reframe the requirement: Show why the customer's real need is better met differently
- Demonstrate workaround: Show how existing capabilities address the underlying need
- Commit to roadmap: Provide a credible timeline for native support
- Partner solution: Identify an integration partner that fills the gap
Objection Handling
Common Technical Objections
"Your product lacks [Feature X]"
Response Framework:
- Acknowledge: "You're right that [Feature X] is not a standalone feature today."
- Explore: "Help me understand the specific use case you need [Feature X] for."
- Redirect: "Our approach to solving that is [alternative], which actually provides [benefit]."
- Evidence: "Customer [reference] had the same concern and found [outcome]."
"Competitor [Y] has better [Capability]"
Response Framework:
- Acknowledge: "I understand [Competitor Y] has invested in [Capability]."
- Qualify: "Can you share what specific aspects of [Capability] are most important?"
- Differentiate: "While they focus on [approach], we take a different approach with [our method] because [reason]."
- Quantify: "The practical difference in real-world usage is [metric/evidence]."
"Your product is too expensive"
Response Framework:
- Acknowledge: "I appreciate you sharing that concern."
- Reframe: "Let's look at total cost of ownership rather than license cost alone."
- Quantify: "When you factor in [implementation, training, maintenance, time-to-value], the TCO comparison shows..."
- Value: "Based on our analysis, the ROI timeline is [X months], delivering [Y value]."
"We're concerned about vendor lock-in"
Response Framework:
- Acknowledge: "That's a smart concern for any technology investment."
- Evidence: "Our architecture uses [open standards, APIs, data portability features]."
- Demonstrate: "Here's how data export and migration work [show the feature]."
- Reference: "We can connect you with customers who evaluated this exact concern."
Objection Handling Principles
- Never disparage competitors. Focus on your strengths, not their weaknesses.
- Ask questions first. Understand the real concern behind the objection.
- Use evidence. Reference customers, benchmarks, and demonstrations.
- Be honest about gaps. Credibility is your most valuable asset.
- Redirect to value. Connect every response back to business outcomes.
Win/Loss Analysis
Post-Decision Review Process
Timing: Conduct within 2 weeks of the decision for accurate recall.
Interview Questions (for wins):
- What was the deciding factor in choosing us?
- Which features or capabilities were most compelling?
- How did our demo/POC compare to alternatives?
- What concerns did you have that were resolved during the process?
- What could we have done better in the evaluation process?
Interview Questions (for losses):
- What was the primary reason for choosing the competitor?
- Were there specific requirements we did not meet?
- How did our demo/POC compare to the winning vendor?
- What would have changed your decision?
- Would you consider us for future evaluations?
Win/Loss Data Tracking
| Data Point | Purpose |
|---|---|
| Deal size | Pattern analysis by segment |
| Industry | Vertical-specific insights |
| Competitor | Head-to-head record |
| Decision factors | Feature priority validation |
| Sales cycle length | Process efficiency |
| Stakeholder roles | Engagement strategy |
| Technical requirements | Capability gap tracking |
| POC outcome | POC process improvement |
Analysis Dimensions
- By Competitor: Win rate per competitor, common objections, feature gaps
- By Segment: Enterprise vs mid-market vs SMB patterns
- By Industry: Vertical-specific win factors
- By Deal Size: Large vs small deal dynamics
- By Feature Category: Which capabilities drive wins vs losses
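These dimensions are straightforward to compute from tracked records. A sketch, with hypothetical records shaped after the tracking table above:

```python
from collections import defaultdict

# Hypothetical win/loss records in the shape suggested by the tracking table.
records = [
    {"competitor": "VendorA", "outcome": "win",  "segment": "enterprise"},
    {"competitor": "VendorA", "outcome": "loss", "segment": "mid-market"},
    {"competitor": "VendorB", "outcome": "win",  "segment": "enterprise"},
    {"competitor": "VendorA", "outcome": "win",  "segment": "enterprise"},
]

def win_rate_by(records: list[dict], dimension: str) -> dict[str, float]:
    """Win rate per value of the given analysis dimension."""
    wins, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[dimension]] += 1
        wins[r[dimension]] += r["outcome"] == "win"
    return {k: round(wins[k] / totals[k], 2) for k in totals}

print(win_rate_by(records, "competitor"))  # {'VendorA': 0.67, 'VendorB': 1.0}
print(win_rate_by(records, "segment"))
```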
Battlecard Creation
Battlecard Structure
Page 1: Quick Reference
- Competitor overview (company size, funding, market position)
- Key strengths (top 3)
- Key weaknesses (top 3)
- Ideal customer profile for the competitor
- Our win rate against this competitor
Page 2: Feature Comparison
- Category-by-category comparison (summary view)
- Top differentiators (features where we lead)
- Top vulnerabilities (features where they lead)
- Parity features (features at same level)
Page 3: Talk Track
- Opening positioning statement
- Discovery questions that expose competitor weaknesses
- Objection responses for their key strengths
- Proof points (customer references, benchmarks, case studies)
- Trap-setting questions for demos and POCs
Page 4: Win Strategies
- Recommended evaluation criteria that favor our strengths
- Demo scenarios that highlight our differentiators
- POC success criteria that align with our capabilities
- Pricing and packaging positioning
- Stakeholder engagement strategy
Battlecard Maintenance
- Monthly review: Update feature scores based on new releases
- Quarterly refresh: Incorporate win/loss analysis findings
- Trigger-based update: Major competitor release, pricing change, or acquisition
Competitive Positioning During Evaluations
Evaluation Stage Tactics
| Stage | Tactic |
|---|---|
| Discovery | Ask questions that expose competitor weaknesses |
| Demo | Lead with differentiators, show end-to-end workflows |
| POC | Define success criteria aligned with your strengths |
| Proposal | Quantify TCO advantage, emphasize implementation risk |
| Negotiation | Leverage competitive urgency, offer migration assistance |
Influencing Evaluation Criteria
The sales engineer's most impactful opportunity is shaping the evaluation criteria before the formal process begins:
- Map criteria to strengths: Propose evaluation categories where you excel
- Weight appropriately: Ensure critical categories (where you lead) carry higher weight
- Define metrics: Specific, measurable criteria favor the more capable product
- Include non-obvious criteria: Total cost of ownership, time-to-value, ecosystem breadth
Last Updated: February 2026
Proof of Concept (POC) Best Practices
A comprehensive guide for Sales Engineers planning, executing, and evaluating proof-of-concept engagements.
POC Planning Methodology
1. Pre-POC Qualification
Not every deal warrants a POC. Qualify before committing resources:
POC-Worthy Indicators:
- Deal value justifies 80-200+ hours of SE and engineering time
- Customer has an identified champion who will actively participate
- Clear decision timeline with POC as a defined evaluation step
- Budget is allocated or allocation process is underway
- Technical stakeholders are available for the evaluation period
POC Red Flags:
- "Free trial" request with no commitment to evaluate
- No identified decision-maker or budget owner
- Competitor has already been selected; POC is for validation only
- Customer expects production-grade environment for extended period
- No defined success criteria or evaluation framework
2. Scope Definition
The most critical success factor is a well-defined scope. An uncontrolled scope leads to extended timelines, unmet expectations, and lost deals.
Scope Elements:
- Use cases: 3-5 specific scenarios to validate (not "everything")
- Integrations: Which systems must connect during the POC
- Data: What data will be used (sample, synthetic, production subset)
- Users: Who will access the POC environment and in what roles
- Duration: Fixed timeline with clear milestones
- Success criteria: Measurable, objective criteria for each use case
Scope Control Tactics:
- Document scope in writing with customer sign-off
- Define what is explicitly out of scope
- Create a change request process for scope additions
- Set a maximum number of use cases per complexity tier
3. Timeline Planning
Standard 5-Week Framework:
| Week | Phase | Focus | Key Activities |
|---|---|---|---|
| 1 | Setup | Foundation | Environment, data, access, kickoff |
| 2-3 | Core Testing | Validation | Primary use cases, integrations, workflows |
| 4 | Advanced Testing | Edge cases | Performance, security, scale, administration |
| 5 | Evaluation | Decision | Scorecard, review, recommendation |
Timeline Adjustments by Complexity:
| Complexity | Duration | Use Cases | Integrations |
|---|---|---|---|
| Low | 3 weeks | 2-3 | 0-1 |
| Medium | 5 weeks | 3-5 | 2-3 |
| High | 6-8 weeks | 5-8 | 4+ |
Timeline Rules:
- Never exceed 8 weeks. Longer POCs lose momentum and stakeholder attention.
- Front-load the most impressive capabilities to build early momentum.
- Schedule stakeholder checkpoints at the end of each phase.
- Build 20% buffer into each phase for unexpected issues.
4. Resource Planning
SE Allocation:
| Activity | Hours/Week (Medium Complexity) |
|---|---|
| Environment setup and configuration | 15-20 (Week 1 only) |
| Use case execution and testing | 20-25 |
| Stakeholder communication | 3-5 |
| Documentation and reporting | 3-5 |
| Issue resolution | 5-8 |
Engineering Support:
- Allocate dedicated engineering support for complex integrations
- Establish an escalation path for blocking issues
- Pre-schedule engineering availability during Core Testing phase
- Request customer IT support for integration access and credentials
Customer Resources:
- Technical sponsor for daily communication
- Business stakeholders for use case validation
- IT/Security for environment access and compliance review
- End users for usability feedback (if applicable)
Success Criteria Definition
Writing Effective Success Criteria
Each criterion must be:
- Specific: Clearly defined with no ambiguity
- Measurable: Quantifiable metric or clear pass/fail
- Agreed: Documented and signed off by both parties
- Relevant: Tied to a business outcome or technical requirement
- Time-bound: Evaluated within the POC timeline
Success Criteria Categories
Functionality Criteria:
- "System processes [X] transactions per hour without errors"
- "Workflow automation reduces manual steps from [Y] to [Z]"
- "Report generation completes within [N] seconds for [M] records"
- "All [X] defined use cases completed successfully"
Performance Criteria:
- "API response time <200ms at p95 under [N] concurrent users"
- "Batch processing completes [X] records in under [Y] minutes"
- "System maintains performance with [N]x expected data volume"
Integration Criteria:
- "Bidirectional sync with [System X] operates within [Y] minute latency"
- "SSO integration with [IdP] supports all required authentication flows"
- "Data import from [Source] completes with <1% error rate"
Usability Criteria:
- "New users complete [task] within [N] minutes without assistance"
- "Admin configuration for [scenario] requires fewer than [N] steps"
- "Stakeholder satisfaction rating >= 4.0/5.0"
Anti-Patterns in Success Criteria
- Too vague: "System performs well" (what is "well"?)
- Too many: more than 15 criteria dilute focus and extend the timeline
- Unmeasurable: "Users like the interface" (how do you measure "like"?)
- Biased toward feature count: "Must have Feature X" instead of "Must solve Problem Y"
- Moving target: Criteria that change mid-POC without formal agreement
Stakeholder Management
Stakeholder Map
| Role | Priority | Engagement Strategy |
|---|---|---|
| Decision Maker | High | Executive briefings, ROI summaries |
| Champion | Critical | Daily communication, progress updates |
| Technical Evaluator | High | Hands-on access, deep-dive sessions |
| End User | Medium | Usability testing, feedback sessions |
| IT/Security | High | Compliance reviews, architecture sessions |
| Procurement | Low-Medium | TCO documentation, reference connections |
Engagement Cadence
- Daily: Champion check-in (10 min, Slack/email)
- Weekly: Progress report to all stakeholders (written summary)
- Phase transitions: Formal review meeting with demo of progress
- Final: Executive presentation with scorecard results and recommendation
Managing Stakeholder Expectations
- Set clear boundaries: Define what will and will not be demonstrated
- Communicate early and often: No surprises; surface issues immediately
- Document everything: Meeting notes, decisions, change requests
- Celebrate wins: Highlight successful milestones to maintain momentum
- Address concerns immediately: Delays in resolution erode confidence
Evaluation Frameworks
Weighted Scorecard Model
The evaluation scorecard provides an objective, comparable assessment:
| Category | Weight | Score (1-5) | Weighted Score |
|---|---|---|---|
| Functionality | 30% | ||
| Performance | 20% | ||
| Integration | 20% | ||
| Usability | 15% | ||
| Support | 15% | ||
| Total | 100% | | |
Scoring Scale:
- 5: Exceeds requirements - superior capability demonstrated
- 4: Meets requirements - full capability with minor enhancements possible
- 3: Partially meets - acceptable but notable gaps remain
- 2: Below expectations - significant gaps that impact value
- 1: Does not meet - critical failure for this category
Decision Thresholds:
- Weighted average >= 4.0: Strong Pass - proceed to procurement
- Weighted average 3.5-3.9: Pass - proceed with noted conditions
- Weighted average 3.0-3.4: Conditional - requires further evaluation or negotiation
- Weighted average < 3.0: Fail - does not meet requirements
Go/No-Go Decision Framework
The go/no-go decision should be based on multiple factors, not just the scorecard:
Go Indicators:
- Scorecard score >= 3.5
- All must-have success criteria met
- Champion and decision-maker both express positive sentiment
- No unresolved critical technical blockers
- Clear implementation path identified
No-Go Indicators:
- Scorecard score < 3.0
- Critical success criteria failed without clear resolution
- Decision-maker expresses significant concerns
- Multiple unresolved technical blockers
- Competitive alternative clearly preferred by evaluators
Conditional Go Indicators:
- Scorecard score 3.0-3.5 with clear path to improvement
- 1-2 minor success criteria not met but with workarounds
- Mixed stakeholder sentiment that can be addressed
- Blockers identified but resolution path confirmed with engineering
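The indicator lists above can be approximated as a simple decision function. This is a deliberate simplification — it ignores stakeholder sentiment and competitive context, which the framework says to weigh as well:

```python
def go_no_go(score: float, must_haves_met: bool, critical_blockers: int) -> str:
    """Rough mapping of the go/no-go indicators above to a recommendation."""
    if score >= 3.5 and must_haves_met and critical_blockers == 0:
        return "GO"
    if score < 3.0 or (not must_haves_met and critical_blockers > 0):
        return "NO-GO"
    return "CONDITIONAL GO"

print(go_no_go(4.1, True, 0))   # GO
print(go_no_go(3.2, True, 0))   # CONDITIONAL GO
print(go_no_go(2.6, False, 2))  # NO-GO
```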
Common POC Failure Modes
1. Scope Creep
- Symptom: Customer continuously adds requirements during the POC.
- Prevention: Written scope agreement with a change request process.
- Recovery: Renegotiate the timeline or defer additions to Phase 2.
2. Champion Absence
- Symptom: Champion becomes unavailable or disengaged mid-POC.
- Prevention: Identify a backup champion and schedule regular touchpoints.
- Recovery: Escalate to the decision-maker; demonstrate value already achieved.
3. Data Issues
- Symptom: Customer data is unavailable, poor quality, or incompatible.
- Prevention: Request sample data before kickoff; prepare synthetic data.
- Recovery: Use synthetic data for core testing; document data requirements for implementation.
4. Environment Problems
- Symptom: POC environment is unstable, slow, or inaccessible.
- Prevention: Use a dedicated, pre-configured environment; test before kickoff.
- Recovery: Have a backup environment; communicate honestly about delays.
5. Moving Goalposts
- Symptom: Evaluation criteria change mid-POC, often influenced by competitor demos.
- Prevention: Get written sign-off on criteria before starting; reference the agreement when changes arise.
- Recovery: Agree to evaluate new criteria as an addendum, not a replacement; highlight what has already been validated.
6. Extended Timeline
Symptom: POC drags beyond planned duration without clear progress. Prevention: Set hard deadlines in the agreement. Schedule decision meetings in advance. Recovery: Force a checkpoint. Present results to date and ask for a go/no-go with current evidence.
7. Technical Blockers
Symptom: Unexpected technical issues prevent completion of key use cases. Prevention: Conduct technical discovery before committing to POC. Have engineering on standby. Recovery: Escalate immediately. Provide transparent status updates. Offer alternative approaches.
POC Documentation
Required Artifacts
| Document | When | Owner |
|---|---|---|
| Scope agreement | Pre-POC | SE + Customer |
| Environment setup guide | Week 1 | SE |
| Progress reports | Weekly | SE |
| Phase review presentations | Phase transitions | SE |
| Issue log | Ongoing | SE |
| Final evaluation report | Week 5 | SE + Customer |
| Lessons learned | Post-POC | SE |
Final Report Template
- Executive Summary - POC objectives, approach, and outcome
- Scope and Success Criteria - What was tested and how
- Results Summary - Success criteria outcomes with evidence
- Evaluation Scorecard - Weighted scores across all categories
- Issues and Resolutions - Problems encountered and how they were addressed
- Recommendation - Go/No-Go with rationale
- Implementation Considerations - Next steps, timeline, and resource needs
Last Updated: February 2026
RFP/RFI Response Guide
A comprehensive reference for Sales Engineers responding to Requests for Proposal (RFP) and Requests for Information (RFI).
RFP Response Best Practices
1. Pre-Response Assessment
Before investing time in a response, conduct a thorough bid/no-bid assessment:
Bid Criteria Checklist:
- Do we have a pre-existing relationship with the customer?
- Is there an identified champion or sponsor?
- Do our capabilities align with >70% of requirements?
- Is the deal size justified against the response effort?
- Do we understand the competitive landscape?
- Is the timeline realistic for our solution?
Red Flags for No-Bid:
- No prior customer engagement (blind RFP)
- Requirement language mirrors a competitor's product
- Timeline is unrealistically short
- Must-have requirements fall outside our platform
- Budget is undefined or misaligned with our pricing
2. Response Organization
Executive Summary (1-2 pages):
- Lead with business outcomes, not features
- Reference the customer's specific challenges
- Quantify value proposition with relevant metrics
- State confidence level and key differentiators
Solution Overview:
- Map directly to the customer's stated requirements
- Use the customer's language and terminology
- Include architecture diagrams for technical sections
- Address integration with existing systems
Compliance Matrix:
- Mirror the RFP's requirement numbering exactly
- Use consistent coverage categories: Full, Partial, Planned, Gap
- Provide clear explanations for each response
- Include roadmap dates for "Planned" items
3. Coverage Classification
| Status | Score | Definition | Response Approach |
|---|---|---|---|
| Full | 100% | Current product fully meets requirement | Describe capability with evidence |
| Partial | 50% | Met with configuration or workaround | Explain approach and any limitations |
| Planned | 25% | On product roadmap | Provide timeline and interim solution |
| Gap | 0% | Not currently supported | Acknowledge gap and propose alternatives |
4. Priority-Weighted Scoring
Not all requirements are equal. Weight them by business impact:
- Must-Have (3x weight): Core requirements that are deal-breakers. Gaps here typically result in disqualification.
- Should-Have (2x weight): Important requirements that influence the decision significantly.
- Nice-to-Have (1x weight): Desirable but not critical. Often used as tie-breakers.
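Combining the coverage percentages from the classification table with these priority weights gives a single weighted coverage score. A minimal sketch, assuming each requirement is reduced to a (coverage, priority) pair:

```python
# Coverage percentages from the classification table above.
COVERAGE_PCT = {"full": 100, "partial": 50, "planned": 25, "gap": 0}
# Priority multipliers: must-have 3x, should-have 2x, nice-to-have 1x.
PRIORITY_WEIGHT = {"must-have": 3, "should-have": 2, "nice-to-have": 1}


def weighted_coverage(requirements: list[tuple[str, str]]) -> float:
    """Return weighted coverage as a percentage of the maximum possible.

    requirements: list of (coverage_status, priority) tuples.
    """
    earned = sum(COVERAGE_PCT[c] * PRIORITY_WEIGHT[p] for c, p in requirements)
    possible = sum(100 * PRIORITY_WEIGHT[p] for _, p in requirements)
    return round(earned / possible * 100, 1) if possible else 0.0


reqs = [("full", "must-have"), ("partial", "should-have"), ("gap", "nice-to-have")]
print(weighted_coverage(reqs))  # 66.7
```

A gap on a must-have drags the score three times harder than a gap on a nice-to-have, which is why must-have gaps are tracked separately in the Phase 1 validation checkpoint.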
5. Response Writing Tips
Do:
- Answer the question directly before elaborating
- Use the customer's terminology, not internal jargon
- Provide specific examples, case studies, and metrics
- Include screenshots or architecture diagrams where relevant
- Cross-reference related answers to avoid redundancy
- Proofread for consistency across sections (multiple authors)
Avoid:
- Marketing fluff or vague language ("best-in-class", "world-class")
- Answering a question you were not asked
- Contradictions between sections
- Overselling capabilities you do not have
- Ignoring the question format (tables vs. narrative)
Bid/No-Bid Decision Framework
Decision Matrix
| Factor | Weight | Score (1-5) | Weighted |
|---|---|---|---|
| Technical fit | 25% | | |
| Relationship strength | 20% | | |
| Competitive position | 20% | | |
| Deal value vs effort | 15% | | |
| Strategic importance | 10% | | |
| Win probability | 10% | | |
| Total | 100% | | |
Scoring Guide:
- 5: Strong advantage
- 4: Slight advantage
- 3: Neutral / competitive parity
- 2: Slight disadvantage
- 1: Significant disadvantage
Decision Thresholds:
- Score >= 3.5: Bid - proceed with full response
- Score 2.5 - 3.4: Conditional Bid - proceed with executive approval
- Score < 2.5: No-Bid - decline or submit information-only response
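The decision matrix and thresholds above reduce to a weighted sum. A minimal sketch, assuming factor scores are collected on the 1-5 scale (factor key names are illustrative):

```python
# Factor weights from the decision matrix above (sum to 1.0).
BID_WEIGHTS = {
    "technical_fit": 0.25,
    "relationship_strength": 0.20,
    "competitive_position": 0.20,
    "value_vs_effort": 0.15,
    "strategic_importance": 0.10,
    "win_probability": 0.10,
}


def bid_decision(scores: dict[str, int]) -> tuple[float, str]:
    """Return (weighted score, verdict) using the thresholds above."""
    total = sum(BID_WEIGHTS[f] * scores[f] for f in BID_WEIGHTS)
    if total >= 3.5:
        verdict = "Bid"
    elif total >= 2.5:
        verdict = "Conditional Bid"
    else:
        verdict = "No-Bid"
    return round(total, 2), verdict


scores = {"technical_fit": 4, "relationship_strength": 3,
          "competitive_position": 3, "value_vs_effort": 3,
          "strategic_importance": 2, "win_probability": 3}
print(bid_decision(scores))  # (3.15, 'Conditional Bid')
```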
Effort Estimation
Estimate the total effort required and compare against deal value:
| Response Component | Typical Effort (hours) |
|---|---|
| Requirements analysis | 4-8 |
| Technical writing | 16-40 |
| Architecture diagrams | 4-8 |
| Demo preparation | 8-16 |
| Internal review | 4-8 |
| Final formatting | 2-4 |
| Total | 38-84 hours |
Rule of thumb: The response effort should not exceed 2% of the deal value.
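The 2% rule can be made concrete by converting effort hours into cost. A minimal sketch, assuming a blended loaded hourly rate (the $150 default is illustrative, not a standard figure):

```python
def response_effort_ok(effort_hours: float, deal_value: float,
                       loaded_rate: float = 150.0) -> bool:
    """2% rule: loaded cost of the response should not exceed 2% of deal value.

    loaded_rate is an assumed blended hourly cost for the response team.
    """
    return effort_hours * loaded_rate <= 0.02 * deal_value


# A mid-range 60-hour response at $150/hr costs $9,000, so the
# rule of thumb implies a deal value of at least $450,000.
print(response_effort_ok(60, 500_000))  # True
print(response_effort_ok(60, 300_000))  # False
```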
Compliance Matrix Structure
Standard Format
| Req ID | Requirement Description | Priority | Compliance | Response | Evidence |
|--------|------------------------|----------|------------|----------|----------|
| R-001 | SSO via SAML 2.0 | Must | Full | Native SAML 2.0 support... | Config guide |
| R-002 | Custom reporting | Should | Partial | Standard reports + API... | API docs |
Section Organization
Organize requirements by category for clarity:
- Functional Requirements - Core features and capabilities
- Technical Requirements - Architecture, APIs, performance
- Security & Compliance - Authentication, encryption, certifications
- Integration Requirements - Third-party systems, data flows
- Support & SLA - Support tiers, response times, uptime
- Vendor Qualifications - Company size, financials, references
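The standard matrix format above can be emitted programmatically once requirements live in structured data, which keeps numbering and column order consistent across large RFPs. A minimal sketch; the field names are illustrative:

```python
def render_compliance_matrix(rows: list[dict[str, str]]) -> str:
    """Render requirement rows as a markdown compliance matrix.

    rows: dicts with req_id, description, priority, compliance,
    response, and evidence keys (illustrative field names).
    """
    lines = [
        "| Req ID | Requirement Description | Priority | Compliance | Response | Evidence |",
        "|---|---|---|---|---|---|",
    ]
    for r in rows:
        lines.append(
            "| {req_id} | {description} | {priority} | {compliance} "
            "| {response} | {evidence} |".format(**r)
        )
    return "\n".join(lines)


rows = [{
    "req_id": "R-001", "description": "SSO via SAML 2.0",
    "priority": "Must", "compliance": "Full",
    "response": "Native SAML 2.0 support", "evidence": "Config guide",
}]
print(render_compliance_matrix(rows))
```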
Common Pitfalls
1. The Wired RFP
Symptom: Requirements language matches a competitor's product feature list. Response: Focus on outcomes over features. Highlight areas of differentiation. Ask clarifying questions that expose broader needs.
2. Feature Checklist Syndrome
Symptom: RFP is a massive feature checklist with no context about business problems. Response: Group features by business outcome. Add context in your response that demonstrates understanding of the underlying need.
3. Scope Creep in Response
Symptom: Team keeps adding content that was not requested. Response: Assign a response manager to enforce scope. Answer what was asked, provide references for additional information.
4. Inconsistent Messaging
Symptom: Multiple authors provide contradictory information. Response: Assign a single editor for final review. Create a response style guide. Use consistent terminology throughout.
5. Overcommitting on Gaps
Symptom: Marking "Planned" items as "Full" to improve scores. Response: Never misrepresent coverage. Planned items with firm timelines and interim workarounds are better than misrepresentations discovered during the POC.
RFP Response Timeline Management
Typical Response Timeline
| Day | Activity |
|---|---|
| Day 1 | Receive RFP, conduct initial review, assign team |
| Day 2-3 | Bid/no-bid decision, questions submission |
| Day 4-7 | Requirements analysis, coverage assessment |
| Day 8-14 | Draft responses, architecture diagrams |
| Day 15-17 | Internal review, quality check |
| Day 18-19 | Final edits, formatting, executive review |
| Day 20 | Submission |
Time-Saving Strategies
- Maintain a response library - Reusable answers for common requirements
- Pre-built architecture diagrams - Template diagrams for common integration patterns
- Standardized compliance language - Pre-approved language for security and compliance sections
- Question templates - Standard clarifying questions for common ambiguities
Last Updated: February 2026
#!/usr/bin/env python3
"""Competitive Matrix Builder - Generate feature comparison matrices and positioning analysis.
Builds feature-by-feature comparison matrices, calculates weighted competitive
scores, identifies differentiators and vulnerabilities, and generates win themes.
Usage:
python competitive_matrix_builder.py competitive_data.json
python competitive_matrix_builder.py competitive_data.json --format json
python competitive_matrix_builder.py competitive_data.json --format text
"""
import argparse
import json
import sys
from typing import Any
# Feature scoring levels
FEATURE_SCORES: dict[str, int] = {
"full": 3,
"partial": 2,
"limited": 1,
"none": 0,
}
FEATURE_LABELS: dict[int, str] = {
3: "Full",
2: "Partial",
1: "Limited",
0: "None",
}
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def load_competitive_data(filepath: str) -> dict[str, Any]:
"""Load and validate competitive data from a JSON file.
Args:
filepath: Path to the JSON file containing competitive data.
Returns:
Parsed competitive data dictionary.
Raises:
SystemExit: If the file cannot be read or parsed.
"""
try:
with open(filepath, "r", encoding="utf-8") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {filepath}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {filepath}: {e}", file=sys.stderr)
sys.exit(1)
if "categories" not in data:
print("Error: JSON must contain a 'categories' array.", file=sys.stderr)
sys.exit(1)
if "our_product" not in data:
print("Error: JSON must contain 'our_product' name.", file=sys.stderr)
sys.exit(1)
if "competitors" not in data or not data["competitors"]:
print("Error: JSON must contain a non-empty 'competitors' array.", file=sys.stderr)
sys.exit(1)
return data
def normalize_score(score_value: Any) -> int:
"""Normalize a score value to an integer.
Args:
score_value: Score as string label or integer.
Returns:
Normalized integer score (0-3).
"""
if isinstance(score_value, str):
return FEATURE_SCORES.get(score_value.lower(), 0)
if isinstance(score_value, (int, float)):
return max(0, min(3, int(score_value)))
return 0
def build_comparison_matrix(data: dict[str, Any]) -> dict[str, Any]:
"""Build the feature comparison matrix from input data.
Args:
data: Competitive data with categories, features, and scores.
Returns:
Comparison matrix with per-feature and per-category scores.
"""
our_product = data["our_product"]
competitors = data["competitors"]
all_products = [our_product] + competitors
matrix: list[dict[str, Any]] = []
category_summaries: dict[str, dict[str, Any]] = {}
for category in data["categories"]:
cat_name = category["name"]
cat_weight = category.get("weight", 1.0)
cat_features = category.get("features", [])
cat_scores: dict[str, list[int]] = {p: [] for p in all_products}
for feature in cat_features:
feature_name = feature["name"]
scores: dict[str, int] = {}
for product in all_products:
raw_score = feature.get("scores", {}).get(product, 0)
scores[product] = normalize_score(raw_score)
cat_scores[product].append(scores[product])
# Determine leader for this feature
max_score = max(scores.values())
leaders = [p for p, s in scores.items() if s == max_score]
matrix.append({
"category": cat_name,
"feature": feature_name,
"scores": scores,
"leaders": leaders,
"our_score": scores[our_product],
"max_score": max_score,
"we_lead": our_product in leaders and len(leaders) == 1,
"we_trail": scores[our_product] < max_score,
})
# Category summary
cat_product_scores = {}
for product in all_products:
product_scores = cat_scores[product]
total = sum(product_scores)
max_possible = len(product_scores) * 3
pct = safe_divide(total, max_possible) * 100
cat_product_scores[product] = {
"total_score": total,
"max_possible": max_possible,
"percentage": round(pct, 1),
}
category_summaries[cat_name] = {
"weight": cat_weight,
"feature_count": len(cat_features),
"product_scores": cat_product_scores,
}
return {
"our_product": our_product,
"competitors": competitors,
"all_products": all_products,
"matrix": matrix,
"category_summaries": category_summaries,
}
def compute_competitive_scores(
comparison: dict[str, Any],
) -> dict[str, dict[str, Any]]:
"""Compute weighted competitive scores for each product.
Args:
comparison: Comparison matrix data.
Returns:
Product scores with weighted and unweighted totals.
"""
all_products = comparison["all_products"]
category_summaries = comparison["category_summaries"]
product_scores: dict[str, dict[str, float]] = {
p: {"weighted_total": 0.0, "max_weighted": 0.0, "unweighted_total": 0, "max_unweighted": 0}
for p in all_products
}
for cat_name, cat_data in category_summaries.items():
weight = cat_data["weight"]
for product in all_products:
p_data = cat_data["product_scores"][product]
product_scores[product]["weighted_total"] += p_data["total_score"] * weight
product_scores[product]["max_weighted"] += p_data["max_possible"] * weight
product_scores[product]["unweighted_total"] += p_data["total_score"]
product_scores[product]["max_unweighted"] += p_data["max_possible"]
result = {}
for product in all_products:
ps = product_scores[product]
weighted_pct = safe_divide(ps["weighted_total"], ps["max_weighted"]) * 100
unweighted_pct = safe_divide(ps["unweighted_total"], ps["max_unweighted"]) * 100
result[product] = {
"weighted_score": round(weighted_pct, 1),
"unweighted_score": round(unweighted_pct, 1),
"weighted_total": round(ps["weighted_total"], 2),
"max_weighted": round(ps["max_weighted"], 2),
}
return result
def identify_differentiators(comparison: dict[str, Any]) -> list[dict[str, Any]]:
"""Identify features where our product leads all competitors.
Args:
comparison: Comparison matrix data.
Returns:
List of differentiator features with details.
"""
differentiators = []
for entry in comparison["matrix"]:
if entry["we_lead"] and entry["our_score"] >= 2:
# Calculate gap from nearest competitor
competitor_scores = [
entry["scores"][c] for c in comparison["competitors"]
]
max_competitor = max(competitor_scores) if competitor_scores else 0
gap = entry["our_score"] - max_competitor
differentiators.append({
"feature": entry["feature"],
"category": entry["category"],
"our_score": entry["our_score"],
"our_label": FEATURE_LABELS.get(entry["our_score"], "Unknown"),
"best_competitor_score": max_competitor,
"gap": gap,
})
# Sort by gap size descending
differentiators.sort(key=lambda d: d["gap"], reverse=True)
return differentiators
def identify_vulnerabilities(comparison: dict[str, Any]) -> list[dict[str, Any]]:
"""Identify features where competitors lead our product.
Args:
comparison: Comparison matrix data.
Returns:
List of vulnerability features with details.
"""
vulnerabilities = []
for entry in comparison["matrix"]:
if entry["we_trail"]:
# Find which competitor leads
leader_scores = {
p: entry["scores"][p]
for p in comparison["competitors"]
if entry["scores"][p] == entry["max_score"]
}
gap = entry["max_score"] - entry["our_score"]
vulnerabilities.append({
"feature": entry["feature"],
"category": entry["category"],
"our_score": entry["our_score"],
"our_label": FEATURE_LABELS.get(entry["our_score"], "Unknown"),
"leading_competitors": leader_scores,
"gap": gap,
})
# Sort by gap size descending
vulnerabilities.sort(key=lambda v: v["gap"], reverse=True)
return vulnerabilities
def generate_win_themes(
differentiators: list[dict[str, Any]],
competitive_scores: dict[str, dict[str, Any]],
our_product: str,
) -> list[str]:
"""Generate win themes based on differentiators and competitive position.
Args:
differentiators: List of differentiator features.
competitive_scores: Product competitive scores.
our_product: Our product name.
Returns:
List of win theme strings.
"""
themes = []
# Theme from top differentiators
if differentiators:
top_diff_categories = list({d["category"] for d in differentiators[:5]})
for cat in top_diff_categories[:3]:
cat_diffs = [d for d in differentiators if d["category"] == cat]
feature_names = [d["feature"] for d in cat_diffs[:3]]
themes.append(
f"Superior {cat} capabilities: {', '.join(feature_names)}"
)
# Theme from overall competitive position
our_score = competitive_scores.get(our_product, {}).get("weighted_score", 0)
competitor_scores = [
(p, s["weighted_score"])
for p, s in competitive_scores.items()
if p != our_product
]
if competitor_scores:
best_competitor_name, best_competitor_score = max(
competitor_scores, key=lambda x: x[1]
)
if our_score > best_competitor_score:
themes.append(
f"Overall strongest solution ({our_score:.1f}% vs {best_competitor_name} at {best_competitor_score:.1f}%)"
)
# Theme from breadth of coverage
strong_diffs = [d for d in differentiators if d["gap"] >= 2]
if len(strong_diffs) >= 3:
themes.append(
f"Clear technical leadership across {len(strong_diffs)} key features with significant competitive gaps"
)
if not themes:
themes.append("Competitive parity - emphasize implementation quality, support, and total cost of ownership")
return themes
def analyze_competitive(data: dict[str, Any]) -> dict[str, Any]:
"""Run the complete competitive analysis pipeline.
Args:
data: Parsed competitive data dictionary.
Returns:
Complete analysis results dictionary.
"""
comparison = build_comparison_matrix(data)
competitive_scores = compute_competitive_scores(comparison)
differentiators = identify_differentiators(comparison)
vulnerabilities = identify_vulnerabilities(comparison)
win_themes = generate_win_themes(
differentiators, competitive_scores, comparison["our_product"]
)
return {
"analysis_info": {
"our_product": comparison["our_product"],
"competitors": comparison["competitors"],
"total_features": len(comparison["matrix"]),
"total_categories": len(comparison["category_summaries"]),
},
"competitive_scores": competitive_scores,
"category_breakdown": comparison["category_summaries"],
"comparison_matrix": comparison["matrix"],
"differentiators": differentiators,
"vulnerabilities": vulnerabilities,
"win_themes": win_themes,
}
def format_text(result: dict[str, Any]) -> str:
"""Format analysis results as human-readable text.
Args:
result: Complete analysis results dictionary.
Returns:
Formatted text string.
"""
lines = []
info = result["analysis_info"]
all_products = [info["our_product"]] + info["competitors"]
lines.append("=" * 80)
lines.append("COMPETITIVE MATRIX ANALYSIS")
lines.append("=" * 80)
lines.append(f"Our Product: {info['our_product']}")
lines.append(f"Competitors: {', '.join(info['competitors'])}")
lines.append(f"Features: {info['total_features']}")
lines.append(f"Categories: {info['total_categories']}")
lines.append("")
# Competitive scores
lines.append("-" * 80)
lines.append("COMPETITIVE SCORES")
lines.append("-" * 80)
lines.append(f"{'Product':<25} {'Weighted':>10} {'Unweighted':>12}")
lines.append("-" * 80)
# Sort by weighted score descending
sorted_scores = sorted(
result["competitive_scores"].items(),
key=lambda x: x[1]["weighted_score"],
reverse=True,
)
for product, scores in sorted_scores:
marker = " <-- US" if product == info["our_product"] else ""
lines.append(
f"{product:<25} {scores['weighted_score']:>9.1f}% {scores['unweighted_score']:>11.1f}%{marker}"
)
lines.append("")
# Feature matrix
lines.append("-" * 80)
lines.append("FEATURE COMPARISON MATRIX")
lines.append("-" * 80)
# Build header
product_cols = " ".join(f"{p[:10]:>10}" for p in all_products)
lines.append(f"{'Feature':<30} {product_cols}")
lines.append("-" * 80)
current_category = ""
for entry in result["comparison_matrix"]:
if entry["category"] != current_category:
current_category = entry["category"]
cat_data = result["category_breakdown"].get(current_category, {})
weight = cat_data.get("weight", 1.0)
lines.append(f"\n [{current_category}] (weight: {weight}x)")
score_cols = " ".join(
f"{FEATURE_LABELS.get(entry['scores'].get(p, 0), 'N/A'):>10}"
for p in all_products
)
lead_marker = " *" if entry["we_lead"] else (" !" if entry["we_trail"] else "")
feature_display = entry["feature"][:28]
lines.append(f" {feature_display:<28} {score_cols}{lead_marker}")
lines.append("")
lines.append(" * = We lead | ! = We trail")
lines.append("")
# Differentiators
diffs = result["differentiators"]
if diffs:
lines.append("-" * 80)
lines.append(f"DIFFERENTIATORS ({len(diffs)} features where we lead)")
lines.append("-" * 80)
for d in diffs:
lines.append(
f" + {d['feature']} [{d['category']}] "
f"- Us: {d['our_label']} vs Best Competitor: {FEATURE_LABELS.get(d['best_competitor_score'], 'N/A')} "
f"(gap: +{d['gap']})"
)
lines.append("")
# Vulnerabilities
vulns = result["vulnerabilities"]
if vulns:
lines.append("-" * 80)
lines.append(f"VULNERABILITIES ({len(vulns)} features where competitors lead)")
lines.append("-" * 80)
for v in vulns:
leaders = ", ".join(
f"{p}: {FEATURE_LABELS.get(s, 'N/A')}"
for p, s in v["leading_competitors"].items()
)
lines.append(
f" - {v['feature']} [{v['category']}] "
f"- Us: {v['our_label']} vs {leaders} "
f"(gap: -{v['gap']})"
)
lines.append("")
# Win themes
themes = result["win_themes"]
lines.append("-" * 80)
lines.append("WIN THEMES")
lines.append("-" * 80)
for i, theme in enumerate(themes, 1):
lines.append(f" {i}. {theme}")
lines.append("")
lines.append("=" * 80)
return "\n".join(lines)
def main() -> None:
"""Main entry point for the Competitive Matrix Builder."""
parser = argparse.ArgumentParser(
description="Build competitive feature comparison matrices and positioning analysis.",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=(
"Feature Scoring:\n"
" Full (3) - Complete feature support\n"
" Partial (2) - Partial or limited support\n"
" Limited (1) - Minimal or basic support\n"
" None (0) - Feature not available\n"
"\n"
"Example:\n"
" python competitive_matrix_builder.py competitive_data.json --format json\n"
),
)
parser.add_argument(
"input_file",
help="Path to JSON file containing competitive data",
)
parser.add_argument(
"--format",
choices=["json", "text"],
default="text",
dest="output_format",
help="Output format: json or text (default: text)",
)
args = parser.parse_args()
data = load_competitive_data(args.input_file)
result = analyze_competitive(data)
if args.output_format == "json":
print(json.dumps(result, indent=2))
else:
print(format_text(result))
if __name__ == "__main__":
main()
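The script above validates its input before analysis. A minimal file satisfying that validation can be generated as follows; the product and category names are illustrative:

```python
import json

# Schema mirrors what load_competitive_data() checks: 'our_product',
# a non-empty 'competitors' list, and 'categories' whose features map
# product names to score labels ("full"/"partial"/"limited"/"none")
# or integers 0-3.
sample = {
    "our_product": "OurPlatform",
    "competitors": ["CompetitorA"],
    "categories": [
        {
            "name": "Security",
            "weight": 2.0,
            "features": [
                {
                    "name": "SSO (SAML 2.0)",
                    "scores": {"OurPlatform": "full", "CompetitorA": "partial"},
                },
            ],
        },
    ],
}

with open("competitive_data.json", "w", encoding="utf-8") as f:
    json.dump(sample, f, indent=2)
```

Running `python competitive_matrix_builder.py competitive_data.json` against this file produces a one-feature matrix where OurPlatform leads on SSO.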
#!/usr/bin/env python3
"""POC Planner - Plan proof-of-concept engagements with timeline, resources, and scorecards.
Generates structured POC plans including phased timelines, resource allocation,
success criteria with measurable metrics, evaluation scorecards, risk identification,
and go/no-go recommendation frameworks.
Usage:
python poc_planner.py poc_data.json
python poc_planner.py poc_data.json --format json
python poc_planner.py poc_data.json --format text
"""
import argparse
import json
import sys
from typing import Any
# Default phase definitions
DEFAULT_PHASES = [
{
"name": "Setup",
"duration_weeks": 1,
"description": "Environment provisioning, data migration, initial configuration",
"activities": [
"Provision POC environment",
"Configure authentication and access",
"Migrate sample data sets",
"Set up monitoring and logging",
"Conduct kickoff meeting with stakeholders",
],
},
{
"name": "Core Testing",
"duration_weeks": 2,
"description": "Primary use case validation and integration testing",
"activities": [
"Execute primary use case scenarios",
"Test core integrations",
"Validate data flow and transformations",
"Conduct mid-point review with stakeholders",
"Document findings and adjust test plan",
],
},
{
"name": "Advanced Testing",
"duration_weeks": 1,
"description": "Edge cases, performance testing, and security validation",
"activities": [
"Execute edge case scenarios",
"Run performance and load tests",
"Validate security controls and compliance",
"Test disaster recovery and failover",
"Test administrative workflows",
],
},
{
"name": "Evaluation",
"duration_weeks": 1,
"description": "Scorecard completion, stakeholder review, and go/no-go decision",
"activities": [
"Complete evaluation scorecard",
"Compile POC results documentation",
"Conduct final stakeholder review",
"Present go/no-go recommendation",
"Gather lessons learned",
],
},
]
# Evaluation categories with default weights
DEFAULT_EVAL_CATEGORIES = {
"Functionality": {
"weight": 0.30,
"criteria": [
"Core feature completeness",
"Use case coverage",
"Customization flexibility",
"Workflow automation",
],
},
"Performance": {
"weight": 0.20,
"criteria": [
"Response time under load",
"Throughput capacity",
"Scalability characteristics",
"Resource utilization",
],
},
"Integration": {
"weight": 0.20,
"criteria": [
"API completeness and documentation",
"Data migration ease",
"Third-party connector availability",
"Authentication/SSO integration",
],
},
"Usability": {
"weight": 0.15,
"criteria": [
"User interface intuitiveness",
"Learning curve assessment",
"Documentation quality",
"Admin console functionality",
],
},
"Support": {
"weight": 0.15,
"criteria": [
"Technical support responsiveness",
"Knowledge base quality",
"Training resources availability",
"Community and ecosystem",
],
},
}
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def load_poc_data(filepath: str) -> dict[str, Any]:
"""Load and validate POC data from a JSON file.
Args:
filepath: Path to the JSON file containing POC data.
Returns:
Parsed POC data dictionary.
Raises:
SystemExit: If the file cannot be read or parsed.
"""
try:
with open(filepath, "r", encoding="utf-8") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {filepath}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {filepath}: {e}", file=sys.stderr)
sys.exit(1)
if "poc_name" not in data:
print("Error: JSON must contain 'poc_name' field.", file=sys.stderr)
sys.exit(1)
return data
def estimate_resources(data: dict[str, Any], phases: list[dict[str, Any]]) -> dict[str, Any]:
"""Estimate resource requirements for the POC.
Args:
data: POC data with scope and requirements.
phases: List of phase definitions.
Returns:
Resource allocation dictionary.
"""
total_weeks = sum(p["duration_weeks"] for p in phases)
complexity = data.get("complexity", "medium").lower()
scope_items = data.get("scope_items", [])
num_integrations = data.get("num_integrations", 0)
# Base SE hours per week by complexity
se_hours_per_week = {"low": 15, "medium": 25, "high": 35}.get(complexity, 25)
# Engineering support hours
eng_base = {"low": 5, "medium": 10, "high": 20}.get(complexity, 10)
eng_integration_hours = num_integrations * 8
# Customer resource hours
customer_hours_per_week = {"low": 5, "medium": 8, "high": 12}.get(complexity, 8)
se_total = se_hours_per_week * total_weeks
eng_total = (eng_base * total_weeks) + eng_integration_hours
customer_total = customer_hours_per_week * total_weeks
# Phase-level breakdown
phase_resources = []
for phase in phases:
weeks = phase["duration_weeks"]
# Setup phase has higher SE and eng effort
se_multiplier = 1.3 if phase["name"] == "Setup" else (
1.0 if phase["name"] in ("Core Testing", "Advanced Testing") else 0.7
)
eng_multiplier = 1.5 if phase["name"] == "Setup" else (
1.0 if phase["name"] == "Core Testing" else (
1.2 if phase["name"] == "Advanced Testing" else 0.5
)
)
phase_resources.append({
"phase": phase["name"],
"duration_weeks": weeks,
"se_hours": round(se_hours_per_week * weeks * se_multiplier),
"engineering_hours": round(eng_base * weeks * eng_multiplier),
"customer_hours": round(customer_hours_per_week * weeks),
})
return {
"total_duration_weeks": total_weeks,
"complexity": complexity,
"totals": {
"se_hours": se_total,
"engineering_hours": eng_total,
"customer_hours": customer_total,
"total_hours": se_total + eng_total + customer_total,
},
"phase_breakdown": phase_resources,
"additional_resources": {
"integration_hours": eng_integration_hours,
"num_integrations": num_integrations,
},
}
def generate_success_criteria(data: dict[str, Any]) -> list[dict[str, Any]]:
"""Generate success criteria based on POC scope and requirements.
Args:
data: POC data with scope and requirements.
Returns:
List of success criteria with metrics.
"""
criteria = []
# Custom criteria from input
custom_criteria = data.get("success_criteria", [])
for cc in custom_criteria:
criteria.append({
"criterion": cc.get("criterion", "Unnamed criterion"),
"metric": cc.get("metric", "Pass/Fail"),
"target": cc.get("target", "Met"),
"category": cc.get("category", "Functionality"),
"priority": cc.get("priority", "must-have"),
})
# Auto-generated criteria based on scope
scope_items = data.get("scope_items", [])
for item in scope_items:
if isinstance(item, str):
criteria.append({
"criterion": f"Validate: {item}",
"metric": "Pass/Fail",
"target": "Pass",
"category": "Functionality",
"priority": "must-have",
})
elif isinstance(item, dict):
criteria.append({
"criterion": item.get("name", "Unnamed scope item"),
"metric": item.get("metric", "Pass/Fail"),
"target": item.get("target", "Pass"),
"category": item.get("category", "Functionality"),
"priority": item.get("priority", "must-have"),
})
# Default criteria if none provided
if not criteria:
criteria = [
{
"criterion": "Core use case validation",
"metric": "Percentage of use cases successfully demonstrated",
"target": ">90%",
"category": "Functionality",
"priority": "must-have",
},
{
"criterion": "Performance under expected load",
"metric": "Response time at target concurrency",
"target": "<2 seconds p95",
"category": "Performance",
"priority": "must-have",
},
{
"criterion": "Integration with existing systems",
"metric": "Number of integrations successfully tested",
"target": "All planned integrations",
"category": "Integration",
"priority": "must-have",
},
{
"criterion": "User acceptance",
"metric": "Stakeholder satisfaction score",
"target": ">4.0/5.0",
"category": "Usability",
"priority": "should-have",
},
]
return criteria
def generate_evaluation_scorecard(data: dict[str, Any]) -> dict[str, Any]:
"""Generate the POC evaluation scorecard template.
Args:
data: POC data.
Returns:
Evaluation scorecard structure.
"""
custom_categories = data.get("evaluation_categories", {})
# Merge custom categories with defaults
categories = {}
for cat_name, cat_data in DEFAULT_EVAL_CATEGORIES.items():
if cat_name in custom_categories:
custom = custom_categories[cat_name]
categories[cat_name] = {
"weight": custom.get("weight", cat_data["weight"]),
"criteria": custom.get("criteria", cat_data["criteria"]),
"score": None,
"notes": "",
}
else:
categories[cat_name] = {
"weight": cat_data["weight"],
"criteria": cat_data["criteria"],
"score": None,
"notes": "",
}
# Normalize weights to sum to 1.0
total_weight = sum(c["weight"] for c in categories.values())
if total_weight > 0 and abs(total_weight - 1.0) > 0.01:
for cat in categories.values():
cat["weight"] = round(safe_divide(cat["weight"], total_weight), 2)
return {
"scoring_scale": {
"5": "Exceeds requirements - superior capability",
"4": "Meets requirements - full capability",
"3": "Partially meets - acceptable with minor gaps",
"2": "Below expectations - significant gaps",
"1": "Does not meet - critical gaps",
},
"categories": categories,
"pass_threshold": 3.5,
"strong_pass_threshold": 4.0,
}
def identify_risks(data: dict[str, Any], resources: dict[str, Any]) -> list[dict[str, Any]]:
"""Identify POC risks and generate mitigation strategies.
Args:
data: POC data.
resources: Resource allocation data.
Returns:
List of risk entries with probability, impact, and mitigation.
"""
risks = []
complexity = data.get("complexity", "medium").lower()
num_integrations = data.get("num_integrations", 0)
total_weeks = resources["total_duration_weeks"]
stakeholders = data.get("stakeholders", [])
# Timeline risk
if total_weeks > 6:
risks.append({
"risk": "Extended timeline may lose stakeholder attention",
"probability": "high",
"impact": "high",
"mitigation": "Schedule weekly progress checkpoints; deliver early wins in week 2",
"category": "Timeline",
})
elif total_weeks >= 4:
risks.append({
"risk": "Timeline may slip due to unforeseen technical issues",
"probability": "medium",
"impact": "medium",
"mitigation": "Build 20% buffer into each phase; identify critical path early",
"category": "Timeline",
})
# Integration risks
if num_integrations > 3:
risks.append({
"risk": "Multiple integrations increase complexity and failure points",
"probability": "high",
"impact": "high",
"mitigation": "Prioritize integrations by business value; test incrementally; have fallback demo data",
"category": "Technical",
})
elif num_integrations > 0:
risks.append({
"risk": "Integration dependencies may cause delays",
"probability": "medium",
"impact": "medium",
"mitigation": "Engage customer IT early; confirm API access and credentials in setup phase",
"category": "Technical",
})
# Data risks
risks.append({
"risk": "Customer data quality or availability issues",
"probability": "medium",
"impact": "high",
"mitigation": "Request sample data early; prepare synthetic data as fallback; validate data format in setup",
"category": "Data",
})
# Stakeholder risks
if len(stakeholders) > 5:
risks.append({
"risk": "Too many stakeholders may slow decision-making",
"probability": "medium",
"impact": "medium",
"mitigation": "Identify decision-maker and champion; schedule focused reviews per stakeholder group",
"category": "Stakeholder",
})
if not stakeholders:
risks.append({
"risk": "Undefined stakeholder map may lead to misaligned evaluation",
"probability": "high",
"impact": "high",
"mitigation": "Confirm stakeholder list, roles, and evaluation criteria before setup phase",
"category": "Stakeholder",
})
# Resource risks
if complexity == "high":
risks.append({
"risk": "High complexity may require additional engineering resources",
"probability": "medium",
"impact": "high",
"mitigation": "Secure engineering commitment upfront; identify escalation path for blockers",
"category": "Resource",
})
# Competitive risk
risks.append({
"risk": "Competitor POC running in parallel may shift evaluation criteria",
"probability": "medium",
"impact": "medium",
"mitigation": "Stay close to champion; align success criteria early; differentiate on unique strengths",
"category": "Competitive",
})
return risks
def generate_go_no_go_framework(data: dict[str, Any]) -> dict[str, Any]:
"""Generate the go/no-go decision framework.
Args:
data: POC data.
Returns:
Go/no-go framework with criteria and thresholds.
"""
return {
"decision_criteria": [
{
"criterion": "Overall scorecard score",
"go_threshold": ">=3.5 weighted average",
"no_go_threshold": "<3.0 weighted average",
"conditional_range": "3.0 - 3.5",
},
{
"criterion": "Must-have success criteria met",
"go_threshold": "100% of must-have criteria pass",
"no_go_threshold": "<80% of must-have criteria pass",
"conditional_range": "80-99% with mitigation plan",
},
{
"criterion": "Stakeholder satisfaction",
"go_threshold": "Champion and decision-maker both positive",
"no_go_threshold": "Decision-maker negative",
"conditional_range": "Mixed signals - needs follow-up",
},
{
"criterion": "Technical blockers",
"go_threshold": "No unresolved critical blockers",
"no_go_threshold": ">2 unresolved critical blockers",
"conditional_range": "1-2 blockers with clear resolution path",
},
],
"recommendation_logic": {
"GO": "All criteria meet go thresholds, or majority go with no no-go triggers",
"CONDITIONAL_GO": "Some criteria in conditional range, but no no-go triggers and clear resolution plan",
"NO_GO": "Any criterion triggers no-go threshold without clear mitigation",
},
}
def plan_poc(data: dict[str, Any]) -> dict[str, Any]:
"""Run the complete POC planning pipeline.
Args:
data: Parsed POC data dictionary.
Returns:
Complete POC plan dictionary.
"""
poc_info = {
"poc_name": data.get("poc_name", "Unnamed POC"),
"customer": data.get("customer", "Unknown Customer"),
"opportunity_value": data.get("opportunity_value", "Not specified"),
"complexity": data.get("complexity", "medium"),
"start_date": data.get("start_date", "TBD"),
"champion": data.get("champion", "Not identified"),
"decision_maker": data.get("decision_maker", "Not identified"),
}
# Use custom phases if provided, otherwise defaults
phases = data.get("phases", DEFAULT_PHASES)
# Resource estimation
resources = estimate_resources(data, phases)
# Success criteria
success_criteria = generate_success_criteria(data)
# Evaluation scorecard
scorecard = generate_evaluation_scorecard(data)
# Risk identification
risks = identify_risks(data, resources)
# Go/No-Go framework
go_no_go = generate_go_no_go_framework(data)
# Timeline with phase details
timeline = []
current_week = 1
for phase in phases:
end_week = current_week + phase["duration_weeks"] - 1
timeline.append({
"phase": phase["name"],
"start_week": current_week,
"end_week": end_week,
"duration_weeks": phase["duration_weeks"],
"description": phase["description"],
"activities": phase["activities"],
})
current_week = end_week + 1
# Stakeholder plan
stakeholders = data.get("stakeholders", [])
stakeholder_plan = []
for s in stakeholders:
if isinstance(s, str):
stakeholder_plan.append({
"name": s,
"role": "Evaluator",
"engagement": "Weekly updates, phase reviews",
})
elif isinstance(s, dict):
stakeholder_plan.append({
"name": s.get("name", "Unknown"),
"role": s.get("role", "Evaluator"),
"engagement": s.get("engagement", "Weekly updates, phase reviews"),
})
return {
"poc_info": poc_info,
"timeline": timeline,
"resource_allocation": resources,
"success_criteria": success_criteria,
"evaluation_scorecard": scorecard,
"risk_register": risks,
"go_no_go_framework": go_no_go,
"stakeholder_plan": stakeholder_plan,
}
def format_text(result: dict[str, Any]) -> str:
"""Format POC plan as human-readable text.
Args:
result: Complete POC plan dictionary.
Returns:
Formatted text string.
"""
lines = []
info = result["poc_info"]
lines.append("=" * 70)
lines.append("PROOF OF CONCEPT PLAN")
lines.append("=" * 70)
lines.append(f"POC Name: {info['poc_name']}")
lines.append(f"Customer: {info['customer']}")
lines.append(f"Opportunity Value: {info['opportunity_value']}")
lines.append(f"Complexity: {info['complexity'].upper()}")
lines.append(f"Start Date: {info['start_date']}")
lines.append(f"Champion: {info['champion']}")
lines.append(f"Decision Maker: {info['decision_maker']}")
lines.append("")
# Timeline
lines.append("-" * 70)
lines.append("TIMELINE")
lines.append("-" * 70)
for phase in result["timeline"]:
week_range = (
f"Week {phase['start_week']}"
if phase["start_week"] == phase["end_week"]
else f"Weeks {phase['start_week']}-{phase['end_week']}"
)
lines.append(f"\n Phase: {phase['phase']} ({week_range})")
lines.append(f" {phase['description']}")
lines.append(" Activities:")
for activity in phase["activities"]:
lines.append(f" - {activity}")
lines.append("")
# Resource allocation
res = result["resource_allocation"]
lines.append("-" * 70)
lines.append("RESOURCE ALLOCATION")
lines.append("-" * 70)
lines.append(f"Total Duration: {res['total_duration_weeks']} weeks")
lines.append(f"Complexity: {res['complexity'].upper()}")
lines.append("")
lines.append(" Totals:")
lines.append(f" SE Hours: {res['totals']['se_hours']}")
lines.append(f" Engineering Hours: {res['totals']['engineering_hours']}")
lines.append(f" Customer Hours: {res['totals']['customer_hours']}")
lines.append(f" Total Hours: {res['totals']['total_hours']}")
lines.append("")
lines.append(" Phase Breakdown:")
lines.append(f" {'Phase':<20} {'Weeks':>5} {'SE':>6} {'Eng':>6} {'Cust':>6}")
lines.append(" " + "-" * 45)
for pr in res["phase_breakdown"]:
lines.append(
f" {pr['phase']:<20} {pr['duration_weeks']:>5} "
f"{pr['se_hours']:>5}h {pr['engineering_hours']:>5}h {pr['customer_hours']:>5}h"
)
lines.append("")
# Success criteria
criteria = result["success_criteria"]
lines.append("-" * 70)
lines.append("SUCCESS CRITERIA")
lines.append("-" * 70)
for i, sc in enumerate(criteria, 1):
priority_marker = "[MUST]" if sc["priority"] == "must-have" else (
"[SHOULD]" if sc["priority"] == "should-have" else "[NICE]"
)
lines.append(f" {i}. {priority_marker} {sc['criterion']}")
lines.append(f" Metric: {sc['metric']}")
lines.append(f" Target: {sc['target']}")
lines.append(f" Category: {sc['category']}")
lines.append("")
# Evaluation scorecard
scorecard = result["evaluation_scorecard"]
lines.append("-" * 70)
lines.append("EVALUATION SCORECARD")
lines.append("-" * 70)
lines.append(f" Pass Threshold: {scorecard['pass_threshold']}/5.0")
lines.append(f" Strong Pass Threshold: {scorecard['strong_pass_threshold']}/5.0")
lines.append("")
lines.append(" Scoring Scale:")
for score, desc in scorecard["scoring_scale"].items():
lines.append(f" {score} = {desc}")
lines.append("")
lines.append(" Categories:")
for cat_name, cat_data in scorecard["categories"].items():
lines.append(f"\n {cat_name} (weight: {cat_data['weight']:.0%})")
for criterion in cat_data["criteria"]:
lines.append(f" [ ] {criterion}")
lines.append("")
# Risk register
risks = result["risk_register"]
lines.append("-" * 70)
lines.append("RISK REGISTER")
lines.append("-" * 70)
for risk in risks:
lines.append(f" [{risk['impact'].upper()}] {risk['risk']}")
lines.append(f" Probability: {risk['probability']} | Impact: {risk['impact']}")
lines.append(f" Category: {risk['category']}")
lines.append(f" Mitigation: {risk['mitigation']}")
lines.append("")
# Go/No-Go framework
framework = result["go_no_go_framework"]
lines.append("-" * 70)
lines.append("GO / NO-GO DECISION FRAMEWORK")
lines.append("-" * 70)
for dc in framework["decision_criteria"]:
lines.append(f" {dc['criterion']}:")
lines.append(f" GO: {dc['go_threshold']}")
lines.append(f" CONDITIONAL: {dc['conditional_range']}")
lines.append(f" NO-GO: {dc['no_go_threshold']}")
lines.append("")
lines.append(" Recommendation Logic:")
for decision, logic in framework["recommendation_logic"].items():
lines.append(f" {decision}: {logic}")
lines.append("")
# Stakeholder plan
stakeholders = result["stakeholder_plan"]
if stakeholders:
lines.append("-" * 70)
lines.append("STAKEHOLDER PLAN")
lines.append("-" * 70)
for s in stakeholders:
lines.append(f" {s['name']} ({s['role']})")
lines.append(f" Engagement: {s['engagement']}")
lines.append("")
lines.append("=" * 70)
return "\n".join(lines)
def main() -> None:
"""Main entry point for the POC Planner."""
parser = argparse.ArgumentParser(
description="Plan proof-of-concept engagements with timeline, resources, and evaluation scorecards.",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=(
"Default Phases:\n"
" Week 1: Setup - Environment provisioning, configuration\n"
" Weeks 2-3: Core Testing - Primary use cases, integrations\n"
" Week 4: Advanced Testing - Edge cases, performance, security\n"
" Week 5: Evaluation - Scorecard, stakeholder review, go/no-go\n"
"\n"
"Example:\n"
" python poc_planner.py poc_data.json --format json\n"
),
)
parser.add_argument(
"input_file",
help="Path to JSON file containing POC scope and requirements",
)
parser.add_argument(
"--format",
choices=["json", "text"],
default="text",
dest="output_format",
help="Output format: json or text (default: text)",
)
args = parser.parse_args()
data = load_poc_data(args.input_file)
result = plan_poc(data)
if args.output_format == "json":
print(json.dumps(result, indent=2))
else:
print(format_text(result))
if __name__ == "__main__":
main()
#!/usr/bin/env python3
"""RFP/RFI Response Analyzer - Score coverage, identify gaps, and recommend bid/no-bid.
Parses RFP/RFI requirements and scores coverage using Full/Partial/Planned/Gap
categories. Generates weighted coverage scores, gap analysis with mitigation
strategies, effort estimation, and bid/no-bid recommendations.
Usage:
python rfp_response_analyzer.py rfp_data.json
python rfp_response_analyzer.py rfp_data.json --format json
python rfp_response_analyzer.py rfp_data.json --format text
"""
import argparse
import json
import sys
from typing import Any
# Coverage status to score mapping
COVERAGE_SCORES: dict[str, float] = {
"full": 1.0,
"partial": 0.5,
"planned": 0.25,
"gap": 0.0,
}
# Priority to weight mapping
PRIORITY_WEIGHTS: dict[str, float] = {
"must-have": 3.0,
"should-have": 2.0,
"nice-to-have": 1.0,
}
# Bid thresholds
BID_THRESHOLD = 0.70
CONDITIONAL_THRESHOLD = 0.50
MAX_MUST_HAVE_GAPS_FOR_BID = 3
def safe_divide(numerator: float, denominator: float, default: float = 0.0) -> float:
"""Safely divide two numbers, returning default if denominator is zero."""
if denominator == 0:
return default
return numerator / denominator
def load_rfp_data(filepath: str) -> dict[str, Any]:
"""Load and validate RFP data from a JSON file.
Args:
filepath: Path to the JSON file containing RFP data.
Returns:
Parsed RFP data dictionary.
Raises:
SystemExit: If the file cannot be read or parsed.
"""
try:
with open(filepath, "r", encoding="utf-8") as f:
data = json.load(f)
except FileNotFoundError:
print(f"Error: File not found: {filepath}", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in {filepath}: {e}", file=sys.stderr)
sys.exit(1)
if "requirements" not in data:
print("Error: JSON must contain a 'requirements' array.", file=sys.stderr)
sys.exit(1)
return data
def analyze_requirement(req: dict[str, Any]) -> dict[str, Any]:
"""Analyze a single requirement and compute its score.
Args:
req: Requirement dictionary with category, priority, coverage_status, etc.
Returns:
Enriched requirement with computed score and weight.
"""
coverage_status = req.get("coverage_status", "gap").lower()
priority = req.get("priority", "nice-to-have").lower()
coverage_score = COVERAGE_SCORES.get(coverage_status, 0.0)
weight = PRIORITY_WEIGHTS.get(priority, 1.0)
weighted_score = coverage_score * weight
max_weighted = weight
effort_hours = req.get("effort_hours", 0)
result = {
"id": req.get("id", "unknown"),
"requirement": req.get("requirement", "Unnamed requirement"),
"category": req.get("category", "Uncategorized"),
"priority": priority,
"coverage_status": coverage_status,
"coverage_score": coverage_score,
"weight": weight,
"weighted_score": weighted_score,
"max_weighted": max_weighted,
"effort_hours": effort_hours,
"notes": req.get("notes", ""),
"mitigation": req.get("mitigation", ""),
}
return result
def generate_gap_analysis(analyzed_reqs: list[dict[str, Any]]) -> list[dict[str, Any]]:
"""Generate gap analysis for requirements not fully covered.
Args:
analyzed_reqs: List of analyzed requirement dictionaries.
Returns:
List of gap entries with mitigation strategies.
"""
gaps = []
for req in analyzed_reqs:
if req["coverage_status"] in ("gap", "partial", "planned"):
severity = "critical" if req["priority"] == "must-have" else (
"high" if req["priority"] == "should-have" else "low"
)
mitigation = req["mitigation"]
if not mitigation:
if req["coverage_status"] == "partial":
mitigation = "Enhance existing capability to achieve full coverage"
elif req["coverage_status"] == "planned":
mitigation = "Communicate roadmap timeline and interim workaround"
else:
mitigation = "Evaluate build vs. partner vs. no-bid for this requirement"
gaps.append({
"id": req["id"],
"requirement": req["requirement"],
"category": req["category"],
"priority": req["priority"],
"coverage_status": req["coverage_status"],
"severity": severity,
"effort_hours": req["effort_hours"],
"mitigation": mitigation,
})
# Sort by severity: critical > high > low
severity_order = {"critical": 0, "high": 1, "low": 2}
gaps.sort(key=lambda g: severity_order.get(g["severity"], 3))
return gaps
def compute_category_scores(analyzed_reqs: list[dict[str, Any]]) -> dict[str, dict[str, Any]]:
"""Compute coverage scores grouped by requirement category.
Args:
analyzed_reqs: List of analyzed requirement dictionaries.
Returns:
Dictionary of category names to score summaries.
"""
categories: dict[str, dict[str, float]] = {}
for req in analyzed_reqs:
cat = req["category"]
if cat not in categories:
categories[cat] = {
"weighted_score": 0.0,
"max_weighted": 0.0,
"count": 0,
"full_count": 0,
"partial_count": 0,
"planned_count": 0,
"gap_count": 0,
"effort_hours": 0,
}
categories[cat]["weighted_score"] += req["weighted_score"]
categories[cat]["max_weighted"] += req["max_weighted"]
categories[cat]["count"] += 1
categories[cat]["effort_hours"] += req["effort_hours"]
status_key = f"{req['coverage_status']}_count"
if status_key in categories[cat]:
categories[cat][status_key] += 1
result = {}
for cat, scores in categories.items():
coverage_pct = safe_divide(scores["weighted_score"], scores["max_weighted"]) * 100
result[cat] = {
"coverage_percentage": round(coverage_pct, 1),
"requirements_count": int(scores["count"]),
"full": int(scores["full_count"]),
"partial": int(scores["partial_count"]),
"planned": int(scores["planned_count"]),
"gap": int(scores["gap_count"]),
"effort_hours": int(scores["effort_hours"]),
}
return result
def determine_bid_recommendation(
overall_coverage: float,
must_have_gaps: int,
strategic_value: str,
) -> dict[str, Any]:
"""Determine bid/no-bid recommendation based on coverage and gaps.
Args:
overall_coverage: Overall weighted coverage percentage (0-100).
must_have_gaps: Number of must-have requirements with gap status.
strategic_value: Strategic value assessment (high, medium, low).
Returns:
Recommendation dictionary with decision and rationale.
"""
coverage_ratio = overall_coverage / 100.0
reasons = []
# Primary decision logic
if coverage_ratio >= BID_THRESHOLD and must_have_gaps <= MAX_MUST_HAVE_GAPS_FOR_BID:
decision = "BID"
reasons.append(f"Coverage score {overall_coverage:.1f}% exceeds {BID_THRESHOLD*100:.0f}% threshold")
if must_have_gaps > 0:
reasons.append(f"{must_have_gaps} must-have gap(s) within acceptable range (max {MAX_MUST_HAVE_GAPS_FOR_BID})")
elif coverage_ratio >= CONDITIONAL_THRESHOLD or (
must_have_gaps <= MAX_MUST_HAVE_GAPS_FOR_BID and coverage_ratio >= 0.4
):
decision = "CONDITIONAL BID"
reasons.append(f"Coverage score {overall_coverage:.1f}% in conditional range ({CONDITIONAL_THRESHOLD*100:.0f}%-{BID_THRESHOLD*100:.0f}%)")
if must_have_gaps > 0:
reasons.append(f"{must_have_gaps} must-have gap(s) require mitigation plan")
else:
decision = "NO-BID"
if coverage_ratio < CONDITIONAL_THRESHOLD:
reasons.append(f"Coverage score {overall_coverage:.1f}% below {CONDITIONAL_THRESHOLD*100:.0f}% minimum")
if must_have_gaps > MAX_MUST_HAVE_GAPS_FOR_BID:
reasons.append(f"{must_have_gaps} must-have gaps exceed maximum of {MAX_MUST_HAVE_GAPS_FOR_BID}")
# Strategic value adjustment
if strategic_value.lower() == "high" and decision == "CONDITIONAL BID":
reasons.append("High strategic value supports pursuing despite coverage gaps")
elif strategic_value.lower() == "low" and decision == "CONDITIONAL BID":
decision = "NO-BID"
reasons.append("Low strategic value does not justify investment for conditional coverage")
confidence = "high" if coverage_ratio >= 0.80 else (
"medium" if coverage_ratio >= 0.60 else "low"
)
return {
"decision": decision,
"confidence": confidence,
"overall_coverage_percentage": round(overall_coverage, 1),
"must_have_gaps": must_have_gaps,
"strategic_value": strategic_value,
"reasons": reasons,
}
def generate_risk_assessment(
analyzed_reqs: list[dict[str, Any]],
gaps: list[dict[str, Any]],
) -> list[dict[str, str]]:
"""Generate risk assessment based on gaps and coverage patterns.
Args:
analyzed_reqs: List of analyzed requirement dictionaries.
gaps: List of gap analysis entries.
Returns:
List of risk entries with impact and mitigation.
"""
risks = []
critical_gaps = [g for g in gaps if g["severity"] == "critical"]
if critical_gaps:
risks.append({
"risk": "Critical requirement gaps",
"impact": "high",
"description": f"{len(critical_gaps)} must-have requirements not fully met",
"mitigation": "Prioritize engineering effort or partner integration for gap closure",
})
total_effort = sum(r["effort_hours"] for r in analyzed_reqs if r["coverage_status"] != "full")
if total_effort > 200:
risks.append({
"risk": "High customization effort",
"impact": "high",
"description": f"{total_effort} hours estimated for non-full requirements",
"mitigation": "Evaluate resource availability and timeline feasibility before committing",
})
elif total_effort > 80:
risks.append({
"risk": "Moderate customization effort",
"impact": "medium",
"description": f"{total_effort} hours estimated for non-full requirements",
"mitigation": "Phase implementation and set clear expectations on delivery timeline",
})
planned_count = sum(1 for r in analyzed_reqs if r["coverage_status"] == "planned")
if planned_count > 3:
risks.append({
"risk": "Roadmap dependency",
"impact": "medium",
"description": f"{planned_count} requirements depend on planned product features",
"mitigation": "Confirm roadmap timelines with product team; include contractual commitments if needed",
})
partial_count = sum(1 for r in analyzed_reqs if r["coverage_status"] == "partial")
if partial_count > 5:
risks.append({
"risk": "Workaround complexity",
"impact": "medium",
"description": f"{partial_count} requirements need workarounds or configuration",
"mitigation": "Document workarounds clearly; plan for native support in future releases",
})
if not risks:
risks.append({
"risk": "No significant risks identified",
"impact": "low",
"description": "Strong coverage across all requirement categories",
"mitigation": "Maintain standard engagement process",
})
return risks
def analyze_rfp(data: dict[str, Any]) -> dict[str, Any]:
"""Run the complete RFP analysis pipeline.
Args:
data: Parsed RFP data with requirements array.
Returns:
Complete analysis results dictionary.
"""
rfp_info = {
"rfp_name": data.get("rfp_name", "Unnamed RFP"),
"customer": data.get("customer", "Unknown Customer"),
"due_date": data.get("due_date", "Not specified"),
"strategic_value": data.get("strategic_value", "medium"),
"deal_value": data.get("deal_value", "Not specified"),
}
# Analyze each requirement
analyzed_reqs = [analyze_requirement(req) for req in data["requirements"]]
# Compute overall scores
total_weighted = sum(r["weighted_score"] for r in analyzed_reqs)
total_max = sum(r["max_weighted"] for r in analyzed_reqs)
overall_coverage = safe_divide(total_weighted, total_max) * 100
# Coverage summary
total_count = len(analyzed_reqs)
full_count = sum(1 for r in analyzed_reqs if r["coverage_status"] == "full")
partial_count = sum(1 for r in analyzed_reqs if r["coverage_status"] == "partial")
planned_count = sum(1 for r in analyzed_reqs if r["coverage_status"] == "planned")
gap_count = sum(1 for r in analyzed_reqs if r["coverage_status"] == "gap")
# Must-have gap count
must_have_gaps = sum(
1 for r in analyzed_reqs
if r["priority"] == "must-have" and r["coverage_status"] == "gap"
)
# Category breakdown
category_scores = compute_category_scores(analyzed_reqs)
# Gap analysis
gaps = generate_gap_analysis(analyzed_reqs)
# Bid recommendation
bid_recommendation = determine_bid_recommendation(
overall_coverage,
must_have_gaps,
rfp_info["strategic_value"],
)
# Risk assessment
risks = generate_risk_assessment(analyzed_reqs, gaps)
# Effort summary
total_effort = sum(r["effort_hours"] for r in analyzed_reqs)
gap_effort = sum(r["effort_hours"] for r in analyzed_reqs if r["coverage_status"] != "full")
return {
"rfp_info": rfp_info,
"coverage_summary": {
"overall_coverage_percentage": round(overall_coverage, 1),
"total_requirements": total_count,
"full": full_count,
"partial": partial_count,
"planned": planned_count,
"gap": gap_count,
"must_have_gaps": must_have_gaps,
},
"category_scores": category_scores,
"bid_recommendation": bid_recommendation,
"gap_analysis": gaps,
"risk_assessment": risks,
"effort_estimate": {
"total_hours": total_effort,
"gap_closure_hours": gap_effort,
"full_coverage_hours": total_effort - gap_effort,
},
"requirements_detail": analyzed_reqs,
}
def format_text(result: dict[str, Any]) -> str:
"""Format analysis results as human-readable text.
Args:
result: Complete analysis results dictionary.
Returns:
Formatted text string.
"""
lines = []
info = result["rfp_info"]
lines.append("=" * 70)
lines.append("RFP RESPONSE ANALYSIS")
lines.append("=" * 70)
lines.append(f"RFP: {info['rfp_name']}")
lines.append(f"Customer: {info['customer']}")
lines.append(f"Due Date: {info['due_date']}")
lines.append(f"Deal Value: {info['deal_value']}")
lines.append(f"Strategic Value: {info['strategic_value'].upper()}")
lines.append("")
# Coverage summary
cs = result["coverage_summary"]
lines.append("-" * 70)
lines.append("COVERAGE SUMMARY")
lines.append("-" * 70)
lines.append(f"Overall Coverage: {cs['overall_coverage_percentage']}%")
lines.append(f"Total Requirements: {cs['total_requirements']}")
lines.append(f" Full: {cs['full']} | Partial: {cs['partial']} | Planned: {cs['planned']} | Gap: {cs['gap']}")
lines.append(f"Must-Have Gaps: {cs['must_have_gaps']}")
lines.append("")
# Bid recommendation
bid = result["bid_recommendation"]
lines.append("-" * 70)
lines.append(f"BID RECOMMENDATION: {bid['decision']}")
lines.append(f"Confidence: {bid['confidence'].upper()}")
lines.append("-" * 70)
for reason in bid["reasons"]:
lines.append(f" - {reason}")
lines.append("")
# Category scores
lines.append("-" * 70)
lines.append("CATEGORY BREAKDOWN")
lines.append("-" * 70)
lines.append(f"{'Category':<25} {'Coverage':>8} {'Full':>5} {'Part':>5} {'Plan':>5} {'Gap':>5} {'Effort':>7}")
lines.append("-" * 70)
for cat, scores in result["category_scores"].items():
lines.append(
f"{cat:<25} {scores['coverage_percentage']:>7.1f}% "
f"{scores['full']:>5} {scores['partial']:>5} "
f"{scores['planned']:>5} {scores['gap']:>5} "
f"{scores['effort_hours']:>6}h"
)
lines.append("")
# Gap analysis
gaps = result["gap_analysis"]
if gaps:
lines.append("-" * 70)
lines.append("GAP ANALYSIS")
lines.append("-" * 70)
for gap in gaps:
severity_marker = "!!!" if gap["severity"] == "critical" else (
"!!" if gap["severity"] == "high" else "!"
)
lines.append(f" [{severity_marker}] {gap['id']}: {gap['requirement']}")
lines.append(f" Category: {gap['category']} | Priority: {gap['priority']} | Status: {gap['coverage_status']}")
lines.append(f" Effort: {gap['effort_hours']}h | Mitigation: {gap['mitigation']}")
lines.append("")
# Risk assessment
risks = result["risk_assessment"]
lines.append("-" * 70)
lines.append("RISK ASSESSMENT")
lines.append("-" * 70)
for risk in risks:
lines.append(f" [{risk['impact'].upper()}] {risk['risk']}")
lines.append(f" {risk['description']}")
lines.append(f" Mitigation: {risk['mitigation']}")
lines.append("")
# Effort estimate
effort = result["effort_estimate"]
lines.append("-" * 70)
lines.append("EFFORT ESTIMATE")
lines.append("-" * 70)
lines.append(f" Total Effort: {effort['total_hours']} hours")
lines.append(f" Gap Closure Effort: {effort['gap_closure_hours']} hours")
lines.append(f" Supported Effort: {effort['full_coverage_hours']} hours")
lines.append("")
lines.append("=" * 70)
return "\n".join(lines)
def main() -> None:
"""Main entry point for the RFP Response Analyzer."""
parser = argparse.ArgumentParser(
description="Analyze RFP/RFI requirements for coverage, gaps, and bid recommendation.",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=(
"Coverage Categories:\n"
" Full (100%) - Requirement fully met\n"
" Partial (50%) - Partially met, workaround needed\n"
" Planned (25%) - On roadmap, not yet available\n"
" Gap (0%) - Not supported\n"
"\n"
"Priority Weights:\n"
" Must-Have (3x) | Should-Have (2x) | Nice-to-Have (1x)\n"
"\n"
"Example:\n"
" python rfp_response_analyzer.py rfp_data.json --format json\n"
),
)
parser.add_argument(
"input_file",
help="Path to JSON file containing RFP requirements data",
)
parser.add_argument(
"--format",
choices=["json", "text"],
default="text",
dest="output_format",
help="Output format: json or text (default: text)",
)
args = parser.parse_args()
data = load_rfp_data(args.input_file)
result = analyze_rfp(data)
if args.output_format == "json":
print(json.dumps(result, indent=2))
else:
print(format_text(result))
if __name__ == "__main__":
main()
Install this Skill
Skills give your AI agent a consistent, structured approach to this task — better output than a one-off prompt.
npx skills add alirezarezvani/claude-skills --skill business-growth/sales-engineer
Community skill by @alirezarezvani.
Details
- Category: Marketing
- License: MIT
- Author: @alirezarezvani
- Source: GitHub
- Source file: business-growth/sales-engineer/SKILL.md