Take topics from blank page to publish-ready blog posts and guides with a complete workflow that handles research, writing, and SEO optimization. You get fully drafted articles that are brand-aligned and structured to rank without juggling multiple steps to finish a piece. Use this whenever you need to execute on a specific content idea rather than plan a broader strategy.
---
name: content-production
description: "Full content production pipeline: takes a topic from blank page to publish-ready piece. Use when you need to execute content: write a blog post, article, or guide end-to-end. Triggers: 'write a post about', 'draft an article', 'create content for', 'help me write', 'I need a blog post'. NOT for content strategy or calendar planning (use content-strategy). NOT for repurposing existing content (use content-repurposing). NOT for social captions only."
license: MIT
metadata:
  version: 1.0.0
  author: Alireza Rezvani
  category: marketing
  updated: 2026-03-06
---
Content Production
You are an expert content producer with deep experience across B2B SaaS, developer tools, and technical audiences. Your goal is to take a topic from zero to a finished, optimized piece that ranks, converts, and actually gets read.
This is the execution engine — not the strategy layer. You’re here to build, not plan.
Before Starting
Check for context first:
If marketing-context.md exists, read it before asking questions. It contains brand voice, target audience, keyword targets, and writing examples. Use what’s there — only ask for what’s missing.
Gather this context (ask in one shot, don’t drip):
What you need
Topic / working title — what are we writing about?
Target keyword — primary search term (if SEO matters)
Audience — who reads this and what do they already know?
Existing content — do we have pieces this should link to?
If the topic is vague (“write about AI”), push back: “Give me the specific angle — who’s the reader, what problem are they solving?”
How This Skill Works
Three modes. Start at whichever fits:
Mode 1: Research & Brief
You have a topic but no content yet. Do the research, map the competitive landscape, define the angle, and produce a content brief before writing a word.
Mode 2: Draft
Brief exists (either provided or from Mode 1). Write the full piece — intro, body, conclusion, headers — following the brief’s structure and targeting parameters.
Mode 3: Optimize & Polish
Draft exists. Run the full optimization pass: SEO signals, readability, structure audit, meta tags, internal links, quality gates. Output a publish-ready version.
You can run all 3 in sequence or jump directly to any mode.
Mode 1: Research & Brief
Step 1 — Competitive Content Analysis
Before writing, understand what already ranks. For the target keyword:
Identify the top 5-10 ranking pieces
Map their angles: Are they listicles? How-tos? Opinion pieces? Comparisons?
Find the gap: What’s missing from the existing content? What angle is underserved?
Check search intent: Is the person trying to learn, compare, buy, or solve a specific problem?
Intent signals:
| SERP Pattern | Intent | What to write |
|---|---|---|
| "What is / How to" results dominate | Informational | Comprehensive guide or explainer |
| Product pages, reviews | Commercial | Comparison or buyer's guide |
| News, updates | Navigational / news | Skip unless you have a unique angle |
| Forum results (Reddit, Quora) | Discovery | Opinionated piece with real perspective |
Step 2 — Source Gathering
Collect 3-5 credible, citable sources before drafting. Prioritize:
Original research (studies, surveys, reports)
Official documentation
Expert quotes you can attribute
Data with specific numbers (not vague claims)
Rule: If you can’t cite a specific number, don’t make a vague claim. “Studies show” is a red flag. Find the actual study.
Quality bar for every piece before it ships:
- Every factual claim has a source or is clearly labeled as opinion
- At least one image, table, or visual element breaks up text
- Intro doesn't start with a cliché
- All internal links work
- Readability score ≥ 70
- Word count is within 10% of target
Proactive Triggers
Flag these without being asked:
Thin content risk — If the target keyword has high-authority competitors with 2,000+ word pieces, a 600-word post won’t rank. Surface this upfront, before drafting starts.
Keyword cannibalization — If existing content already targets this keyword, flag it. Publishing a second piece splits authority instead of building it. (A spot-check sketch follows this list.)
Intent mismatch — If the requested angle doesn’t match search intent (e.g., writing a brand awareness piece for a transactional keyword), call it out. The piece will get traffic that doesn’t convert.
Missing sources — If the draft contains claims like “many companies” or “studies show” without citation, flag each one before the piece ships.
CTA/goal disconnect — If the piece’s goal is “drive trial signups” but there’s no CTA, or the CTA is buried at paragraph 12, flag it.
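The cannibalization trigger is the easiest of these to spot-check programmatically. A minimal sketch, assuming you can export existing pieces to a CSV with url and target_keyword columns; the content-inventory.csv filename and column names are hypothetical, not part of this skill:

```python
import csv

def find_cannibalization(inventory_path: str, new_keyword: str) -> list:
    """Return URLs of existing pieces that already target the same or an overlapping keyword."""
    new_kw = new_keyword.strip().lower()
    conflicts = []
    with open(inventory_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            existing = row.get("target_keyword", "").strip().lower()
            # Flag exact matches and containment ("reduce churn" vs "how to reduce churn").
            if existing and (existing == new_kw or existing in new_kw or new_kw in existing):
                conflicts.append(row.get("url", ""))
    return conflicts

if __name__ == "__main__":
    for url in find_cannibalization("content-inventory.csv", "reduce churn"):
        print(f"Possible cannibalization: {url}")
```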
When reviewing drafts: flag issues → explain impact → give specific fix. Don’t just say “improve readability.” Say: “Paragraph 3 averages 32 words per sentence. Break the second sentence into two.”
Related Skills
content-strategy: Use when deciding what to write — topics, calendar, pillar structure. NOT for writing the actual piece (that’s this skill).
content-humanizer: Use after drafting when the piece sounds robotic or AI-generated. Run this before the optimization pass.
ai-seo: Use when optimizing specifically for AI search citation (ChatGPT, Perplexity, AI Overviews) in addition to traditional SEO.
copywriting: Use for landing pages, CTAs, and conversion copy. NOT for long-form content (that’s this skill).
seo-audit: Use when auditing an existing content library for SEO gaps. NOT for single-piece production.
Content Brief Guide
A brief isn't a writing assignment. It's a contract between the strategist and the writer — and when you're both the same person, it's still the contract between your thinking brain and your writing brain. Skip it and you'll rewrite. Do it right and the draft almost writes itself.
Why Briefs Fail (and How to Fix Them)
Most briefs are too vague. They say "write about email marketing" without telling the writer:
What's the specific angle?
Who's the reader?
What keywords matter?
What tone?
What should the reader do after?
The result: a draft that misses the mark on every axis.
The fix: Every field in the brief should be specific enough that two different writers would produce the same piece.
The Most Important Field: Angle
The angle is the single most critical field in the brief. It's your differentiated take — not just "write about email marketing" but "why most email open rate benchmarks are useless and what to measure instead."
A good angle is:
One sentence
Opinionated (takes a position)
Different from what already ranks
Grounded in something you actually know or have data on
A weak angle is:
"Comprehensive guide to email marketing"
"Everything you need to know about..."
"Best practices for..."
If your angle sounds like a Wikipedia article, it's not an angle.
Keyword Targeting: What Actually Matters
You don't need to stuff the brief with 20 keywords. You need:
One primary keyword — the main search term this piece is targeting. Every SEO decision flows from this.
2-4 secondary keywords — related phrases that appear naturally. These expand coverage without forcing it.
How to find secondary keywords:
Look at "People Also Ask" on the SERP for your primary term
Check what related terms appear in the top-ranking articles
Use autocomplete on the search bar for your primary term
Common mistake: Targeting a keyword that's informational (someone learning) with a piece that's commercial (someone buying). Match intent or waste the effort.
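Once a draft (or a competitor piece you're studying) exists, a quick count shows whether the primary and secondary terms actually appear. A minimal sketch; the draft.md path and the keywords shown are placeholders, and the bundled seo_optimizer.py script later in this document does a fuller version of this check:

```python
import re

def keyword_coverage(text: str, primary: str, secondary: list) -> dict:
    """Count how often the primary and each secondary keyword appear in the text."""
    lower = text.lower()
    counts = {primary: len(re.findall(re.escape(primary.lower()), lower))}
    for kw in secondary:
        counts[kw] = len(re.findall(re.escape(kw.lower()), lower))
    return counts

if __name__ == "__main__":
    draft = open("draft.md", encoding="utf-8").read()
    print(keyword_coverage(draft, "email marketing", ["open rate", "email benchmarks", "b2b email"]))
```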
The Competitive Gap: Finding the Opening
Before writing the brief's angle, look at what's ranking. You're not copying them — you're finding what they missed.
Gap patterns to look for:
| Pattern | What It Means | Opportunity |
|---|---|---|
| All top pieces are listicles | No deep explanation exists | Write the definitive guide |
| Top pieces are outdated (2+ years old) | Information is stale | Write the current-year version with updated data |
| Top pieces are generic (no real examples) | Theory without practice | Write a practitioner piece with real cases |
| Top pieces are all from the same POV | Perspective monopoly | Write the contrarian or underdog angle |
| Top pieces are very long but shallow | Word count without depth | Write shorter but genuinely useful |
The gap is your angle. Find it before you brief.
Structure: H2s Are the Real Deliverable
Writers often write vague outlines. "Introduction. Section 1. Conclusion." That's not a structure — that's a placeholder.
Good H2s do three things:
Promise value — the reader knows what they'll learn
Follow logic — each section flows from the previous
Hit keywords — secondary terms appear naturally
Building the H2 structure:
Write the outline as if you're writing the table of contents for a useful reference book
Each H2 should be a complete thought ("How to Write Subject Lines That Get Opened" not "Subject Lines")
Sequence matters: don't put "Advanced Tactics" before "Why This Matters"
The rule of 5: Most blog posts need 4-6 H2s. Fewer and it's shallow. More and it's scattered.
Sources: Non-Negotiable Before Drafting
Writers who draft without sources invent claims. Then those claims go live on your website.
Minimum brief requirement: 3 sources with specific data points or quotes identified.
Source quality hierarchy:
Original research (surveys, studies, experiments you conducted)
Credible third-party research (academic papers, industry reports from named organizations)
Expert quotes (attributed, verifiable)
Strong case studies with specific metrics
Official documentation or standards
Red flag sources: Anything that cites "a study" without naming it. Anything more than 5 years old in a fast-moving category. Competitor blog posts (they're also making stuff up).
Internal Linking: Plan It Before, Not After
Most writers add internal links as an afterthought. This produces one problem: they link to whatever they remember, not what's most valuable.
The brief should specify:
2-3 existing pieces this article should link to (and what anchor text to use)
1-2 existing pages that should link back to this article once published
This prevents both orphaned content and missed link equity.
Success Criteria: If You Don't Define It, You Can't Measure It
Every brief should answer: how will we know this piece worked?
Not vague ("gets traffic") — specific:
Ranks in top 5 for [keyword] within [timeframe]
Drives X leads per month
Achieves X% conversion rate on the CTA
Gets cited / linked by [type of site]
Define it now so you don't change the definition later.
Brief Anti-Patterns
| Anti-pattern | Problem | Fix |
|---|---|---|
| "Write a comprehensive guide" | No angle, no differentiation | Define the specific take |
| Missing audience definition | Writer guesses; often wrong | Name the exact reader job title and pain |
| No sources listed | Writer invents facts | Find 3 sources before briefing |
| Vague keyword ("marketing") | No SEO targeting | Get specific: "email marketing for B2B SaaS" |
| H2s that are just topic labels | No promise, no structure | Rewrite as complete-thought headers |
| No internal links specified | Orphaned content | List 2-3 links before writing |
| No success criteria | Can't evaluate performance | Define at least one measurable outcome |
Pre-Publish Optimization Checklist
Run this before every piece goes live. Each section is a gate — fail a gate, fix it before moving on.
Gate 1: SEO Signals
Title & Headers
H1 contains primary keyword (naturally, not forced)
H1 is ≤70 characters
At least 2 H2s contain secondary keywords or related phrases
No two H2s are duplicates or near-duplicates
H1 differs from the title tag (they can overlap but shouldn't be identical)
Keyword Presence
Primary keyword appears in the first 100 words
Primary keyword appears 3-5 times total (not more — stuffing tanks rankings)
Keyword variations appear naturally throughout
No keyword stuffing (reading it aloud sounds natural)
Meta & Technical
Title tag: 50-60 characters, keyword-first where possible
Meta description: 150-160 characters, includes keyword, ends with hook or action
URL slug: short, keyword-first, lowercase, hyphens not underscores
Canonical URL is set
OG title and OG description written for social sharing
Images & Media
At least one image present
All images have descriptive alt text (keyword included where natural)
Images are compressed (under 200KB each)
Image filenames are descriptive (not IMG_4832.jpg)
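The length and format rules in this gate are mechanical, so they are easy to pre-check before a human pass. A minimal sketch of the meta and technical checks; the title, description, slug, and keyword passed in are example values only, and this is not a substitute for the full scripts/seo_optimizer.py run:

```python
import re

def check_meta(title_tag: str, meta_description: str, slug: str, keyword: str) -> list:
    """Return a list of Gate 1 meta/technical failures (an empty list means the gate passes)."""
    failures = []
    if not 50 <= len(title_tag) <= 60:
        failures.append(f"Title tag is {len(title_tag)} chars (target 50-60)")
    if not 150 <= len(meta_description) <= 160:
        failures.append(f"Meta description is {len(meta_description)} chars (target 150-160)")
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", slug):
        failures.append("URL slug should be lowercase, hyphen-separated, no underscores")
    if keyword.lower() not in title_tag.lower():
        failures.append("Primary keyword missing from title tag")
    return failures

if __name__ == "__main__":
    print(check_meta(
        "How to Reduce Churn in SaaS: 7 Proven Tactics That Work",
        "Learn seven practical, data-backed tactics to reduce churn in SaaS, from instrumenting your activation funnel to building a customer health score that works.",
        "reduce-churn-saas",
        "reduce churn",
    ))
```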
Gate 2: Readability
Score
Flesch Reading Ease score ≥ 60 (aim for 60-70 for professional audience; 70+ for general)
Run scripts/content_scorer.py — overall score ≥ 70
Sentence & Paragraph Structure
Average sentence length ≤ 20 words
No single paragraph exceeds 4 sentences
No sentence exceeds 35 words (check and break if found)
Sentence length varies — not all short, not all long
Voice & Clarity
Active voice dominant (passive voice < 20% of sentences)
No weasel words ("very," "really," "quite," "somewhat")
No jargon without explanation (for non-expert audiences)
All acronyms spelled out on first use
Contractions used where natural (improves readability)
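The sentence-level rules above can also be pre-checked before reading aloud. A minimal sketch using a rough sentence splitter, not the full scripts/content_scorer.py pass; the draft.md path is a placeholder:

```python
import re

def sentence_check(text: str, avg_limit: int = 20, hard_limit: int = 35) -> dict:
    """Flag drafts whose average sentence length exceeds avg_limit or that contain sentences over hard_limit words."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    avg = sum(lengths) / len(lengths) if lengths else 0
    too_long = [s for s, n in zip(sentences, lengths) if n > hard_limit]
    return {"average_length": round(avg, 1), "over_35_words": too_long}

if __name__ == "__main__":
    report = sentence_check(open("draft.md", encoding="utf-8").read())
    print(f"Average sentence length: {report['average_length']} (target <= 20)")
    for s in report["over_35_words"]:
        print(f"Break this up: {s[:80]}...")
```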
Gate 3: Structure & Content Quality
Opening
Intro is ≤150 words
Intro does not start with "In today's..." or "Welcome to..."
Intro names the reader's problem or situation within the first 2 sentences
Intro clearly signals what the reader will get from this piece
No false promise in the intro (piece delivers what it hints at)
Body
Every H2 section leads with its main point (buried leads = reader drop-off)
At least 2 concrete examples, case studies, or data points
All statistics and specific claims have citations or are labeled as estimates
No fluff paragraphs (every paragraph earns its place — if removing it changes nothing, cut it)
Visual break (table, list, callout, image) at least every 400 words
Conclusion
Conclusion ≤150 words
Summarizes the core argument (not just "in conclusion...")
Includes one clear next step or CTA
Doesn't introduce new arguments or ideas
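A few of the opening and conclusion rules in this gate can be machine-checked too. A minimal sketch, assuming the draft is markdown with blank-line paragraph breaks; the cliché list is illustrative, not exhaustive, and draft.md is a placeholder path:

```python
def check_opening_and_conclusion(markdown_text: str) -> list:
    """Apply the mechanical parts of Gate 3 to the first and last body paragraphs."""
    paragraphs = [p.strip() for p in markdown_text.split("\n\n")
                  if p.strip() and not p.strip().startswith("#")]
    if not paragraphs:
        return ["No body paragraphs found"]
    issues = []
    intro, conclusion = paragraphs[0], paragraphs[-1]
    if len(intro.split()) > 150:
        issues.append(f"Intro is {len(intro.split())} words (target <= 150)")
    for cliche in ("in today's", "welcome to"):
        if intro.lower().startswith(cliche):
            issues.append(f"Intro opens with a cliché: '{cliche}'")
    if len(conclusion.split()) > 150:
        issues.append(f"Conclusion is {len(conclusion.split())} words (target <= 150)")
    return issues

if __name__ == "__main__":
    print(check_opening_and_conclusion(open("draft.md", encoding="utf-8").read()))
```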
Gate 4: Internal Linking
2-4 internal links to existing content on the site
Anchor text describes the destination (not "click here" or "this article")
Links tested and confirmed working (no 404s)
No excessive linking to the same page multiple times
At least one high-traffic page links to this piece (plan this before publishing)
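Link testing is tedious by hand, so it is worth scripting. A minimal sketch using only the standard library; it only catches absolute links, some servers reject HEAD requests, and draft.md is a placeholder path, so treat failures as "verify manually" rather than confirmed 404s:

```python
import re
import urllib.request

def check_links(markdown_text: str) -> None:
    """HEAD-request every absolute markdown link in a draft and print its status."""
    urls = re.findall(r"\[[^\]]+\]\((https?://[^)\s]+)\)", markdown_text)
    for url in urls:
        try:
            request = urllib.request.Request(url, method="HEAD")
            status = urllib.request.urlopen(request, timeout=10).status
        except Exception as exc:  # 4xx/5xx raise HTTPError; network problems raise URLError
            status = exc
        print(f"{url} -> {status}")

if __name__ == "__main__":
    check_links(open("draft.md", encoding="utf-8").read())
```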
Gate 5: Factual Accuracy
Every statistic cited with source (year + organization)
All external links go to credible sources (not competitors, not thin content)
No claims made without evidence or without "in my experience" qualifier
All product/feature mentions are accurate (check with product team if needed)
Quotes are attributed correctly and not paraphrased beyond recognition
No outdated information — check date-sensitive claims (pricing, regulations, stats)
Gate 6: Brand & Voice
Matches brand voice (check marketing-context.md if available)
Consistent POV throughout (first person, second person, or third — pick one)
Consistent tense (present or past — don't mix)
No off-brand claims (anything that overpromises, contradicts other content, or sounds unlike us)
CTA aligns with piece goal (don't pitch demo on an informational piece for beginners)
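The bundled brand voice analyzer further down in this document can give a rough voice profile before the human read. A minimal sketch, assuming you run it from the skill root with the analyzer in scripts/ (as the checklist's scripts/content_scorer.py reference suggests) and that the draft lives at drafts/post.md, a placeholder path:

```python
import sys

sys.path.append("scripts")  # make the bundled analyzer importable from the skill root
from brand_voice_analyzer import analyze_content

with open("drafts/post.md", encoding="utf-8") as f:
    draft = f.read()

# Human-readable voice profile: formality, tone, perspective, plus readability.
print(analyze_content(draft, output_format="text"))
```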
Gate 7: Final Readthrough
Run a final read-aloud. Catch what scanning misses.
Read the full piece aloud — anything that makes you stumble, fix it
The piece flows — section to section makes sense without re-reading
The headline still feels earned after reading the piece
You'd share this piece yourself (if not, it's not done)
No placeholder text, formatting artifacts, or draft notes left in
Scoring Summary
| Gate | Status | Notes |
|---|---|---|
| SEO Signals | ✅ / ❌ | |
| Readability | ✅ / ❌ | |
| Structure & Quality | ✅ / ❌ | |
| Internal Linking | ✅ / ❌ | |
| Factual Accuracy | ✅ / ❌ | |
| Brand & Voice | ✅ / ❌ | |
| Final Readthrough | ✅ / ❌ | |
Publish only when all 7 gates are green.
If you're skipping a gate, document why. Conscious tradeoffs are fine. Unconscious shortcuts aren't.
#!/usr/bin/env python3"""Brand Voice Analyzer - Analyzes content to establish and maintain brand voice consistency"""import refrom typing import Dict, List, Tupleimport jsonclass BrandVoiceAnalyzer: def __init__(self): self.voice_dimensions = { 'formality': { 'formal': ['hereby', 'therefore', 'furthermore', 'pursuant', 'regarding'], 'casual': ['hey', 'cool', 'awesome', 'stuff', 'yeah', 'gonna'] }, 'tone': { 'professional': ['expertise', 'solution', 'optimize', 'leverage', 'strategic'], 'friendly': ['happy', 'excited', 'love', 'enjoy', 'together', 'share'] }, 'perspective': { 'authoritative': ['proven', 'research shows', 'experts agree', 'data indicates'], 'conversational': ['you might', 'let\'s explore', 'we think', 'imagine if'] } } def analyze_text(self, text: str) -> Dict: """Analyze text for brand voice characteristics""" text_lower = text.lower() word_count = len(text.split()) results = { 'word_count': word_count, 'readability_score': self._calculate_readability(text), 'voice_profile': {}, 'sentence_analysis': self._analyze_sentences(text), 'recommendations': [] } # Analyze voice dimensions for dimension, categories in self.voice_dimensions.items(): dim_scores = {} for category, keywords in categories.items(): score = sum(1 for keyword in keywords if keyword in text_lower) dim_scores[category] = score # Determine dominant voice if sum(dim_scores.values()) > 0: dominant = max(dim_scores, key=dim_scores.get) results['voice_profile'][dimension] = { 'dominant': dominant, 'scores': dim_scores } # Generate recommendations results['recommendations'] = self._generate_recommendations(results) return results def _calculate_readability(self, text: str) -> float: """Calculate Flesch Reading Ease score""" sentences = re.split(r'[.!?]+', text) words = text.split() syllables = sum(self._count_syllables(word) for word in words) if len(sentences) == 0 or len(words) == 0: return 0 avg_sentence_length = len(words) / len(sentences) avg_syllables_per_word = syllables / len(words) # Flesch Reading Ease formula score = 206.835 - 1.015 * avg_sentence_length - 84.6 * avg_syllables_per_word return max(0, min(100, score)) def _count_syllables(self, word: str) -> int: """Count syllables in a word (simplified)""" word = word.lower() vowels = 'aeiou' syllable_count = 0 previous_was_vowel = False for char in word: is_vowel = char in vowels if is_vowel and not previous_was_vowel: syllable_count += 1 previous_was_vowel = is_vowel # Adjust for silent e if word.endswith('e'): syllable_count -= 1 return max(1, syllable_count) def _analyze_sentences(self, text: str) -> Dict: """Analyze sentence structure""" sentences = re.split(r'[.!?]+', text) sentences = [s.strip() for s in sentences if s.strip()] if not sentences: return {'average_length': 0, 'variety': 'low'} lengths = [len(s.split()) for s in sentences] avg_length = sum(lengths) / len(lengths) if lengths else 0 # Calculate variety if len(set(lengths)) < 3: variety = 'low' elif len(set(lengths)) < 5: variety = 'medium' else: variety = 'high' return { 'average_length': round(avg_length, 1), 'variety': variety, 'count': len(sentences) } def _generate_recommendations(self, analysis: Dict) -> List[str]: """Generate recommendations based on analysis""" recommendations = [] # Readability recommendations if analysis['readability_score'] < 30: recommendations.append("Consider simplifying language for better readability") elif analysis['readability_score'] > 70: recommendations.append("Content is very easy to read - consider if this matches your audience") # Sentence variety 
if analysis['sentence_analysis']['variety'] == 'low': recommendations.append("Vary sentence length for better flow and engagement") # Voice consistency if analysis['voice_profile']: recommendations.append("Maintain consistent voice across all content") return recommendationsdef analyze_content(content: str, output_format: str = 'json') -> str: """Main function to analyze content""" analyzer = BrandVoiceAnalyzer() results = analyzer.analyze_text(content) if output_format == 'json': return json.dumps(results, indent=2) else: # Human-readable format output = [ f"=== Brand Voice Analysis ===", f"Word Count: {results['word_count']}", f"Readability Score: {results['readability_score']:.1f}/100", f"", f"Voice Profile:" ] for dimension, profile in results['voice_profile'].items(): output.append(f" {dimension.title()}: {profile['dominant']}") output.extend([ f"", f"Sentence Analysis:", f" Average Length: {results['sentence_analysis']['average_length']} words", f" Variety: {results['sentence_analysis']['variety']}", f" Total Sentences: {results['sentence_analysis']['count']}", f"", f"Recommendations:" ]) for rec in results['recommendations']: output.append(f" • {rec}") return '\n'.join(output)if __name__ == "__main__": import sys import argparse parser = argparse.ArgumentParser( description="Brand Voice Analyzer - Analyzes content to establish and maintain brand voice consistency" ) parser.add_argument( "file", nargs="?", default=None, help="Text file to analyze" ) parser.add_argument( "--format", choices=["json", "text"], default="text", help="Output format (default: text)" ) args = parser.parse_args() if args.file: with open(args.file, 'r') as f: content = f.read() print(analyze_content(content, args.format)) else: print("Usage: python brand_voice_analyzer.py <file> [--format json|text]")
#!/usr/bin/env python3"""content_scorer.py — scores content 0-100 on readability, SEO, structure, and engagement."""import sysimport reimport jsonimport mathfrom collections import Counter# ── Sample content for zero-config demo run ──────────────────────────────────SAMPLE_CONTENT = """Title: How to Reduce Churn in SaaS: 7 Proven Tactics That Actually WorkIntroductionMost SaaS companies discover their churn problem too late — after the customer has already left. By then, the damage is done. In this guide, you'll learn seven tactics to reduce churn before it happens, backed by data from 200+ SaaS companies.## Why Customers Churn (It's Not What You Think)Customers don't churn because your product is bad. They churn because they never saw value. A study by Mixpanel found that 60% of users who churn never completed onboarding. That's a product adoption problem, not a satisfaction problem.Fix the adoption gap first. Everything else is downstream.## Tactic 1: Instrument Your Activation FunnelYou can't fix what you can't see. Start by identifying your activation event — the moment users first experience your product's core value. For Slack, it's sending 2,000 messages. For Dropbox, it's saving a first file.Map the funnel from signup to activation. Find where users drop off. That's your highest-leverage intervention point.## Tactic 2: Segment Your Churn by CohortNot all churn is equal. A user who churns in week one is a different problem than a user who churns in month six. Cohort analysis breaks this apart.Compare cohorts by: acquisition channel, onboarding path, company size, and feature usage. You'll find that certain cohorts churn 3-4x more than others. Focus retention efforts on your best cohorts first — don't try to save everyone.## Tactic 3: Build a Customer Health ScoreA health score is a composite signal that predicts churn before it happens. Common inputs include: login frequency, feature adoption rate, support ticket volume, and NPS response.Weight each signal by its correlation with retention in your historical data. A score below 40 should trigger a customer success outreach. Don't wait for the cancellation request.## ConclusionChurn is a lagging indicator. By the time you see it, the problem happened weeks ago. Build systems that surface early signals — activation gaps, usage drops, health score declines — and act on them before customers decide to leave.Start with one tactic. Instrument your activation funnel this week."""SAMPLE_KEYWORD = "reduce churn"SAMPLE_TITLE = "How to Reduce Churn in SaaS: 7 Proven Tactics That Actually Work"# ── Scoring functions ─────────────────────────────────────────────────────────def count_syllables(word: str) -> int: """Approximate syllable count using vowel-group heuristic.""" word = word.lower().strip(".,!?;:") if not word: return 0 vowels = "aeiouy" count = 0 prev_vowel = False for ch in word: is_vowel = ch in vowels if is_vowel and not prev_vowel: count += 1 prev_vowel = is_vowel # Adjust for silent e if word.endswith("e") and len(word) > 2: count = max(1, count - 1) return max(1, count)def flesch_reading_ease(text: str) -> float: """ Flesch Reading Ease score. 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words) Higher = easier. Target: 60-70 for professional content. 
""" sentences = re.split(r'[.!?]+', text) sentences = [s.strip() for s in sentences if s.strip()] n_sentences = max(1, len(sentences)) words = re.findall(r'\b[a-zA-Z]+\b', text) n_words = max(1, len(words)) n_syllables = sum(count_syllables(w) for w in words) asl = n_words / n_sentences # avg sentence length asw = n_syllables / n_words # avg syllables per word score = 206.835 - (1.015 * asl) - (84.6 * asw) return round(max(0.0, min(100.0, score)), 1)def score_readability(text: str) -> dict: """Score readability 0-25 (25% of total).""" fre = flesch_reading_ease(text) # FRE → points (target 60-70 for B2B) if fre >= 65: fre_points = 15 elif fre >= 55: fre_points = 12 elif fre >= 45: fre_points = 8 elif fre >= 35: fre_points = 4 else: fre_points = 0 # Sentence length variance sentences = re.split(r'[.!?]+', text) sentences = [s.strip() for s in sentences if len(s.split()) > 2] lengths = [len(s.split()) for s in sentences] if len(lengths) > 1: mean_len = sum(lengths) / len(lengths) variance = sum((l - mean_len) ** 2 for l in lengths) / len(lengths) std_dev = math.sqrt(variance) variance_points = min(10, int(std_dev)) # good variance = high std dev else: variance_points = 0 total = min(25, fre_points + variance_points) return { "score": total, "max": 25, "flesch_reading_ease": fre, "sentence_length_std_dev": round(math.sqrt(variance) if len(lengths) > 1 else 0, 1) }def score_seo(text: str, title: str = "", keyword: str = "") -> dict: """Score SEO signals 0-25 (25% of total).""" text_lower = text.lower() title_lower = title.lower() keyword_lower = keyword.lower() points = 0 signals = {} # Title contains keyword if keyword_lower and keyword_lower in title_lower: points += 7 signals["keyword_in_title"] = True else: signals["keyword_in_title"] = False # Keyword in first 100 words first_100 = " ".join(re.findall(r'\b\w+\b', text_lower)[:100]) if keyword_lower and keyword_lower in first_100: points += 5 signals["keyword_in_intro"] = True else: signals["keyword_in_intro"] = False # Keyword density (target 0.5-2%) words = re.findall(r'\b\w+\b', text_lower) n_words = max(1, len(words)) kw_words = keyword_lower.split() kw_count = 0 for i in range(len(words) - len(kw_words) + 1): if words[i:i+len(kw_words)] == kw_words: kw_count += 1 density = (kw_count * len(kw_words)) / n_words * 100 signals["keyword_density_pct"] = round(density, 2) signals["keyword_occurrences"] = kw_count if 0.5 <= density <= 2.5: points += 5 elif kw_count > 0: points += 2 # H2 headings present h2_count = len(re.findall(r'^## .+', text, re.MULTILINE)) signals["h2_count"] = h2_count if h2_count >= 3: points += 5 elif h2_count >= 1: points += 2 # Title length signals["title_length"] = len(title) if 30 <= len(title) <= 65: points += 3 total = min(25, points) return {"score": total, "max": 25, **signals}def score_structure(text: str) -> dict: """Score structure 0-25 (25% of total).""" points = 0 signals = {} lines = text.strip().split('\n') # Intro: first non-empty paragraph paragraphs = [p.strip() for p in text.split('\n\n') if p.strip()] signals["paragraph_count"] = len(paragraphs) # Has intro (first paragraph isn't a heading) if paragraphs and not paragraphs[0].startswith('#'): intro_words = len(paragraphs[0].split()) signals["intro_word_count"] = intro_words if 30 <= intro_words <= 200: points += 7 elif intro_words > 0: points += 3 else: signals["intro_word_count"] = 0 # Has H2 sections h2s = [l for l in lines if l.startswith('## ')] signals["h2_count"] = len(h2s) if len(h2s) >= 4: points += 8 elif len(h2s) >= 2: points += 5 elif 
len(h2s) >= 1: points += 2 # Has conclusion (last substantial paragraph) last_para = paragraphs[-1] if paragraphs else "" conclusion_words = len(last_para.split()) signals["conclusion_word_count"] = conclusion_words conclusion_signals = ['conclusion', 'summary', 'final', 'start ', 'next step', 'action'] if any(sig in last_para.lower() for sig in conclusion_signals) and conclusion_words >= 20: points += 7 elif conclusion_words >= 30: points += 4 # Average paragraph length (web = shorter is better) para_lengths = [len(p.split()) for p in paragraphs if not p.startswith('#')] if para_lengths: avg_para_len = sum(para_lengths) / len(para_lengths) signals["avg_paragraph_word_count"] = round(avg_para_len, 1) if avg_para_len <= 80: points += 3 else: signals["avg_paragraph_word_count"] = 0 total = min(25, points) return {"score": total, "max": 25, **signals}def score_engagement(text: str) -> dict: """Score engagement signals 0-25 (25% of total).""" points = 0 signals = {} text_lower = text.lower() # Questions (engage readers, prompt thought) question_count = len(re.findall(r'\?', text)) signals["question_count"] = question_count if question_count >= 3: points += 6 elif question_count >= 1: points += 3 # Specific numbers / data points number_count = len(re.findall(r'\b\d+(?:\.\d+)?%?\b', text)) signals["numbers_and_stats"] = number_count if number_count >= 5: points += 7 elif number_count >= 2: points += 4 elif number_count >= 1: points += 2 # Example signals example_phrases = ['for example', 'for instance', 'such as', 'like ', 'e.g.', 'case study', 'imagine', 'consider', 'let\'s say', 'here\'s', 'specifically'] example_count = sum(text_lower.count(p) for p in example_phrases) signals["example_signals"] = example_count if example_count >= 3: points += 6 elif example_count >= 1: points += 3 # Lists (bulleted or numbered) list_items = len(re.findall(r'^\s*[-*•]\s+.+|^\s*\d+\.\s+.+', text, re.MULTILINE)) signals["list_items"] = list_items if list_items >= 5: points += 6 elif list_items >= 2: points += 3 total = min(25, points) return {"score": total, "max": 25, **signals}# ── Main ──────────────────────────────────────────────────────────────────────def score_content(text: str, title: str = "", keyword: str = "") -> dict: readability = score_readability(text) seo = score_seo(text, title, keyword) structure = score_structure(text) engagement = score_engagement(text) total = readability["score"] + seo["score"] + structure["score"] + engagement["score"] grade = "D" if total >= 90: grade = "A+" elif total >= 80: grade = "A" elif total >= 70: grade = "B" elif total >= 60: grade = "C" return { "total_score": total, "grade": grade, "sections": { "readability": readability, "seo": seo, "structure": structure, "engagement": engagement, } }def print_report(result: dict, title: str, keyword: str) -> None: total = result["total_score"] grade = result["grade"] s = result["sections"] bar_filled = int(total / 5) bar = "█" * bar_filled + "░" * (20 - bar_filled) print() print("╔══════════════════════════════════════════╗") print("║ CONTENT SCORER — REPORT ║") print("╚══════════════════════════════════════════╝") print(f" Title: {title[:55] or '(not provided)'}") print(f" Keyword: {keyword or '(not provided)'}") print() print(f" TOTAL SCORE: {total}/100 [{grade}]") print(f" [{bar}]") print() print(" ── Section Breakdown ──────────────────────") sections = [ ("Readability", s["readability"]), ("SEO Signals", s["seo"]), ("Structure", s["structure"]), ("Engagement", s["engagement"]), ] for label, section in sections: sc = 
section["score"] mx = section["max"] bar2_filled = int(sc / mx * 10) bar2 = "█" * bar2_filled + "░" * (10 - bar2_filled) print(f" {label:<14} {sc:>2}/{mx} [{bar2}]") print() print(" ── Key Signals ────────────────────────────") r = s["readability"] print(f" Flesch Reading Ease: {r['flesch_reading_ease']} (target: 60-70)") print(f" Sentence length StDev: {r['sentence_length_std_dev']} (higher = more varied)") seo_d = s["seo"] print(f" Keyword in title: {'✅' if seo_d.get('keyword_in_title') else '❌'}") print(f" Keyword in intro: {'✅' if seo_d.get('keyword_in_intro') else '❌'}") print(f" Keyword density: {seo_d.get('keyword_density_pct', 0)}% (target: 0.5-2.5%)") print(f" H2 sections: {seo_d.get('h2_count', 0)}") st = s["structure"] print(f" Intro word count: {st.get('intro_word_count', 0)} (target: 30-200)") print(f" Avg paragraph length: {st.get('avg_paragraph_word_count', 0)} words (target: ≤80)") en = s["engagement"] print(f" Questions: {en.get('question_count', 0)}") print(f" Stats/numbers: {en.get('numbers_and_stats', 0)}") print(f" Examples: {en.get('example_signals', 0)}") print() print(" ── Recommendations ────────────────────────") if r["flesch_reading_ease"] < 55: print(" ⚠ Readability is low — shorten sentences and use simpler words") if not seo_d.get("keyword_in_title"): print(" ⚠ Primary keyword missing from title — add it naturally") if not seo_d.get("keyword_in_intro"): print(" ⚠ Primary keyword missing from first 100 words") if seo_d.get("h2_count", 0) < 3: print(" ⚠ Add more H2 sections — aim for at least 4") if st.get("avg_paragraph_word_count", 0) > 100: print(" ⚠ Paragraphs too long for web — break them up") if en.get("question_count", 0) == 0: print(" ⚠ Add at least one question to engage readers") if en.get("numbers_and_stats", 0) < 2: print(" ⚠ Thin on data — add specific numbers or stats") if total >= 70: print(" ✅ Content is publish-ready (score ≥ 70)") else: print(f" ❌ Score below 70 — address recommendations before publishing") print()def main(): import argparse parser = argparse.ArgumentParser( description="Scores content 0-100 on readability, SEO, structure, and engagement." ) parser.add_argument( "file", nargs="?", default=None, help="Path to a text/markdown file to analyze. If omitted, runs demo " "with embedded sample content." ) parser.add_argument( "keyword", nargs="?", default="", help="Target SEO keyword to check density and placement." ) parser.add_argument( "--json", action="store_true", help="Also output results as JSON." ) args = parser.parse_args() title = "" keyword = args.keyword text = "" if args.file is None: # Demo mode — use embedded sample print("[Demo mode — using embedded sample content]") text = SAMPLE_CONTENT title = SAMPLE_TITLE keyword = SAMPLE_KEYWORD else: # Read from file try: with open(args.file, 'r', encoding='utf-8') as f: text = f.read() except FileNotFoundError: print(f"Error: file not found: {args.file}", file=sys.stderr) sys.exit(1) # Extract title from first H1 or first line for line in text.split('\n'): line = line.strip() if line.startswith('# '): title = line[2:].strip() break elif line.startswith('Title:'): title = line[6:].strip() break if not title and text: title = text.split('\n')[0][:80] result = score_content(text, title, keyword) print_report(result, title, keyword) # JSON output for programmatic use if args.json: print(json.dumps(result, indent=2))if __name__ == "__main__": main()
#!/usr/bin/env python3"""SEO Content Optimizer - Analyzes and optimizes content for SEO"""import refrom typing import Dict, List, Setimport jsonclass SEOOptimizer: def __init__(self): # Common stop words to filter self.stop_words = { 'the', 'a', 'an', 'and', 'or', 'but', 'in', 'on', 'at', 'to', 'for', 'of', 'with', 'by', 'from', 'as', 'is', 'was', 'are', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'do', 'does', 'did', 'will', 'would', 'could', 'should', 'may', 'might', 'must', 'can', 'shall' } # SEO best practices self.best_practices = { 'title_length': (50, 60), 'meta_description_length': (150, 160), 'url_length': (50, 60), 'paragraph_length': (40, 150), 'heading_keyword_placement': True, 'keyword_density': (0.01, 0.03) # 1-3% } def analyze(self, content: str, target_keyword: str = None, secondary_keywords: List[str] = None) -> Dict: """Analyze content for SEO optimization""" analysis = { 'content_length': len(content.split()), 'keyword_analysis': {}, 'structure_analysis': self._analyze_structure(content), 'readability': self._analyze_readability(content), 'meta_suggestions': {}, 'optimization_score': 0, 'recommendations': [] } # Keyword analysis if target_keyword: analysis['keyword_analysis'] = self._analyze_keywords( content, target_keyword, secondary_keywords or [] ) # Generate meta suggestions analysis['meta_suggestions'] = self._generate_meta_suggestions( content, target_keyword ) # Calculate optimization score analysis['optimization_score'] = self._calculate_seo_score(analysis) # Generate recommendations analysis['recommendations'] = self._generate_recommendations(analysis) return analysis def _analyze_keywords(self, content: str, primary: str, secondary: List[str]) -> Dict: """Analyze keyword usage and density""" content_lower = content.lower() word_count = len(content.split()) results = { 'primary_keyword': { 'keyword': primary, 'count': content_lower.count(primary.lower()), 'density': 0, 'in_title': False, 'in_headings': False, 'in_first_paragraph': False }, 'secondary_keywords': [], 'lsi_keywords': [] } # Calculate primary keyword metrics if word_count > 0: results['primary_keyword']['density'] = ( results['primary_keyword']['count'] / word_count ) # Check keyword placement first_para = content.split('\n\n')[0] if '\n\n' in content else content[:200] results['primary_keyword']['in_first_paragraph'] = ( primary.lower() in first_para.lower() ) # Analyze secondary keywords for keyword in secondary: count = content_lower.count(keyword.lower()) results['secondary_keywords'].append({ 'keyword': keyword, 'count': count, 'density': count / word_count if word_count > 0 else 0 }) # Extract potential LSI keywords results['lsi_keywords'] = self._extract_lsi_keywords(content, primary) return results def _analyze_structure(self, content: str) -> Dict: """Analyze content structure for SEO""" lines = content.split('\n') structure = { 'headings': {'h1': 0, 'h2': 0, 'h3': 0, 'total': 0}, 'paragraphs': 0, 'lists': 0, 'images': 0, 'links': {'internal': 0, 'external': 0}, 'avg_paragraph_length': 0 } paragraphs = [] current_para = [] for line in lines: # Count headings if line.startswith('# '): structure['headings']['h1'] += 1 structure['headings']['total'] += 1 elif line.startswith('## '): structure['headings']['h2'] += 1 structure['headings']['total'] += 1 elif line.startswith('### '): structure['headings']['h3'] += 1 structure['headings']['total'] += 1 # Count lists if line.strip().startswith(('- ', '* ', '1. 
')): structure['lists'] += 1 # Count links internal_links = len(re.findall(r'\[.*?\]\(/.*?\)', line)) external_links = len(re.findall(r'\[.*?\]\(https?://.*?\)', line)) structure['links']['internal'] += internal_links structure['links']['external'] += external_links # Track paragraphs if line.strip() and not line.startswith('#'): current_para.append(line) elif current_para: paragraphs.append(' '.join(current_para)) current_para = [] if current_para: paragraphs.append(' '.join(current_para)) structure['paragraphs'] = len(paragraphs) if paragraphs: avg_length = sum(len(p.split()) for p in paragraphs) / len(paragraphs) structure['avg_paragraph_length'] = round(avg_length, 1) return structure def _analyze_readability(self, content: str) -> Dict: """Analyze content readability""" sentences = re.split(r'[.!?]+', content) words = content.split() if not sentences or not words: return {'score': 0, 'level': 'Unknown'} avg_sentence_length = len(words) / len(sentences) # Simple readability scoring if avg_sentence_length < 15: level = 'Easy' score = 90 elif avg_sentence_length < 20: level = 'Moderate' score = 70 elif avg_sentence_length < 25: level = 'Difficult' score = 50 else: level = 'Very Difficult' score = 30 return { 'score': score, 'level': level, 'avg_sentence_length': round(avg_sentence_length, 1) } def _extract_lsi_keywords(self, content: str, primary_keyword: str) -> List[str]: """Extract potential LSI (semantically related) keywords""" words = re.findall(r'\b[a-z]+\b', content.lower()) word_freq = {} # Count word frequencies for word in words: if word not in self.stop_words and len(word) > 3: word_freq[word] = word_freq.get(word, 0) + 1 # Sort by frequency and return top related terms sorted_words = sorted(word_freq.items(), key=lambda x: x[1], reverse=True) # Filter out the primary keyword and return top 10 lsi_keywords = [] for word, count in sorted_words: if word != primary_keyword.lower() and count > 1: lsi_keywords.append(word) if len(lsi_keywords) >= 10: break return lsi_keywords def _generate_meta_suggestions(self, content: str, keyword: str = None) -> Dict: """Generate SEO meta tag suggestions""" # Extract first sentence for description base sentences = re.split(r'[.!?]+', content) first_sentence = sentences[0] if sentences else content[:160] suggestions = { 'title': '', 'meta_description': '', 'url_slug': '', 'og_title': '', 'og_description': '' } if keyword: # Title suggestion suggestions['title'] = f"{keyword.title()} - Complete Guide" if len(suggestions['title']) > 60: suggestions['title'] = keyword.title()[:57] + "..." # Meta description desc_base = f"Learn everything about {keyword}. {first_sentence}" if len(desc_base) > 160: desc_base = desc_base[:157] + "..." 
suggestions['meta_description'] = desc_base # URL slug suggestions['url_slug'] = re.sub(r'[^a-z0-9-]+', '-', keyword.lower()).strip('-') # Open Graph tags suggestions['og_title'] = suggestions['title'] suggestions['og_description'] = suggestions['meta_description'] return suggestions def _calculate_seo_score(self, analysis: Dict) -> int: """Calculate overall SEO optimization score""" score = 0 max_score = 100 # Content length scoring (20 points) if 300 <= analysis['content_length'] <= 2500: score += 20 elif 200 <= analysis['content_length'] < 300: score += 10 elif analysis['content_length'] > 2500: score += 15 # Keyword optimization (30 points) if analysis['keyword_analysis']: kw_data = analysis['keyword_analysis']['primary_keyword'] # Density scoring if 0.01 <= kw_data['density'] <= 0.03: score += 15 elif 0.005 <= kw_data['density'] < 0.01: score += 8 # Placement scoring if kw_data['in_first_paragraph']: score += 10 if kw_data.get('in_headings'): score += 5 # Structure scoring (25 points) struct = analysis['structure_analysis'] if struct['headings']['total'] > 0: score += 10 if struct['paragraphs'] >= 3: score += 10 if struct['links']['internal'] > 0 or struct['links']['external'] > 0: score += 5 # Readability scoring (25 points) readability_score = analysis['readability']['score'] score += int(readability_score * 0.25) return min(score, max_score) def _generate_recommendations(self, analysis: Dict) -> List[str]: """Generate SEO improvement recommendations""" recommendations = [] # Content length recommendations if analysis['content_length'] < 300: recommendations.append( f"Increase content length to at least 300 words (currently {analysis['content_length']})" ) elif analysis['content_length'] > 3000: recommendations.append( "Consider breaking long content into multiple pages or adding a table of contents" ) # Keyword recommendations if analysis['keyword_analysis']: kw_data = analysis['keyword_analysis']['primary_keyword'] if kw_data['density'] < 0.01: recommendations.append( f"Increase keyword density for '{kw_data['keyword']}' (currently {kw_data['density']:.2%})" ) elif kw_data['density'] > 0.03: recommendations.append( f"Reduce keyword density to avoid over-optimization (currently {kw_data['density']:.2%})" ) if not kw_data['in_first_paragraph']: recommendations.append( "Include primary keyword in the first paragraph" ) # Structure recommendations struct = analysis['structure_analysis'] if struct['headings']['total'] == 0: recommendations.append("Add headings (H1, H2, H3) to improve content structure") if struct['links']['internal'] == 0: recommendations.append("Add internal links to related content") if struct['avg_paragraph_length'] > 150: recommendations.append("Break up long paragraphs for better readability") # Readability recommendations if analysis['readability']['avg_sentence_length'] > 20: recommendations.append("Simplify sentences for better readability") return recommendationsdef optimize_content(content: str, keyword: str = None, secondary_keywords: List[str] = None) -> str: """Main function to optimize content""" optimizer = SEOOptimizer() # Parse secondary keywords from comma-separated string if provided if secondary_keywords and isinstance(secondary_keywords, str): secondary_keywords = [kw.strip() for kw in secondary_keywords.split(',')] results = optimizer.analyze(content, keyword, secondary_keywords) # Format output output = [ "=== SEO Content Analysis ===", f"Overall SEO Score: {results['optimization_score']}/100", f"Content Length: {results['content_length']} 
words", f"", "Content Structure:", f" Headings: {results['structure_analysis']['headings']['total']}", f" Paragraphs: {results['structure_analysis']['paragraphs']}", f" Avg Paragraph Length: {results['structure_analysis']['avg_paragraph_length']} words", f" Internal Links: {results['structure_analysis']['links']['internal']}", f" External Links: {results['structure_analysis']['links']['external']}", f"", f"Readability: {results['readability']['level']} (Score: {results['readability']['score']})", f"" ] if results['keyword_analysis']: kw = results['keyword_analysis']['primary_keyword'] output.extend([ "Keyword Analysis:", f" Primary Keyword: {kw['keyword']}", f" Count: {kw['count']}", f" Density: {kw['density']:.2%}", f" In First Paragraph: {'Yes' if kw['in_first_paragraph'] else 'No'}", f"" ]) if results['keyword_analysis']['lsi_keywords']: output.append(" Related Keywords Found:") for lsi in results['keyword_analysis']['lsi_keywords'][:5]: output.append(f" • {lsi}") output.append("") if results['meta_suggestions']: output.extend([ "Meta Tag Suggestions:", f" Title: {results['meta_suggestions']['title']}", f" Description: {results['meta_suggestions']['meta_description']}", f" URL Slug: {results['meta_suggestions']['url_slug']}", f"" ]) output.extend([ "Recommendations:", ]) for rec in results['recommendations']: output.append(f" • {rec}") return '\n'.join(output)if __name__ == "__main__": import sys import argparse parser = argparse.ArgumentParser( description="SEO Content Optimizer - Analyzes and optimizes content for SEO" ) parser.add_argument( "file", nargs="?", default=None, help="Text file to analyze" ) parser.add_argument( "--keyword", "-k", default=None, help="Primary keyword to optimize for" ) parser.add_argument( "--secondary", "-s", default=None, help="Comma-separated secondary keywords" ) args = parser.parse_args() if args.file: with open(args.file, 'r') as f: content = f.read() print(optimize_content(content, args.keyword, args.secondary)) else: print("Usage: python seo_optimizer.py <file> [--keyword primary] [--secondary kw1,kw2]")
Content Brief Template
Fill in every field before writing starts. Blank fields mean assumptions. Assumptions mean rewrites.
Basic Info
| Field | Value |
|---|---|
| Working Title | |
| Target Publish Date | |
| Author / Owner | |
| Content Type | Blog post / Guide / Case study / Comparison / Listicle |
| Target Length | ~___ words |
| Goal | Awareness / Lead gen / SEO / Thought leadership / Product education |
SEO Targeting
| Field | Value |
|---|---|
| Primary Keyword | |
| Monthly Search Volume | |
| Keyword Difficulty | |
| Secondary Keywords (2-4) | |
| Search Intent | Informational / Commercial / Navigational |
| SERP Features to Target | Featured snippet / FAQ / People Also Ask |
Audience
Who is reading this?
(Job title, company stage, pain point they're searching from)
What do they already know?
(Level: beginner / intermediate / expert)
What do they want to walk away with?
(The specific outcome or answer)
What's their biggest objection or doubt?
(What might make them click away?)
Angle & POV
The core argument / unique angle:
(One sentence — what's our take that's different from the competition?)
Why should they trust us on this?
(Our authority, experience, or data that backs the angle)
What's the competition missing?
(Specific gap in top-ranking content we're exploiting)
Structure
H1 (draft):
Intro approach:
(Hook type: stat / story / counterintuitive claim / problem statement)
H2 Outline:
1.
2.
3.
4.
5.
(Add more as needed)
Conclusion approach:
(Summary + CTA or next step)
Sources & Research
| Source | Type | Key Claim / Data Point |
|---|---|---|
| | Study / Report | |
| | Expert quote | |
| | Official docs | |
| | Data / survey | |
Minimum 3 sources required before drafting.
Internal Linking
Links FROM this piece (to existing content):
[Anchor text] → [URL / page title]
[Anchor text] → [URL / page title]
Links TO this piece (from existing content):
[Existing page] → link to this once published
Competitive Pieces to Beat
| URL | Word Count | What They Do Well | What They Miss |
|---|---|---|---|
| | | | |
| | | | |
Success Criteria
Ranks on page 1 for primary keyword within 6 months
Achieves target engagement (avg time on page > ___ min)
Generates ___ leads / clicks to product within 30 days
Other: ___
Notes / Special Instructions
(Brand voice requirements, topics to avoid, tone calibration, product mentions)