Grow your X/Twitter audience through profile optimization, compelling hooks, and engagement tactics tailored to the platform. You'll receive specific thread frameworks and competitor insights designed to maximize reach and accelerate follower growth. Use this whenever you need to audit your presence or plan a high-impact posting strategy.
---
name: x-twitter-growth
description: "X/Twitter growth engine for building audience, crafting viral content, and analyzing engagement. Use when the user wants to grow on X/Twitter, write tweets or threads, analyze their X profile, research competitors on X, plan a posting strategy, or optimize engagement. Complements social-content (generic multi-platform) with X-specific depth: algorithm mechanics, thread engineering, reply strategy, profile optimization, and competitive intelligence via web search."
license: MIT
metadata:
  version: 1.0.0
  author: Alireza Rezvani
  category: marketing
  updated: 2026-03-10
---
# X/Twitter Growth Engine
X-specific growth skill. For general social media content across platforms, see social-content. For social strategy and calendar planning, see social-media-manager. This skill goes deep on X.
## When to Use This vs Other Skills

| Need | Use |
|------|-----|
| Write a tweet or thread | This skill |
| Plan content across LinkedIn + X + Instagram | social-content |
| Analyze engagement metrics across platforms | social-media-analyzer |
| Build overall social strategy | social-media-manager |
| X-specific growth, algorithm, competitive intel | This skill |
## Step 1 — Profile Audit

Before any growth work, audit the current X presence. Run `scripts/profile_auditor.py` with the handle, or manually assess:

### Bio Checklist

- Clear value proposition in first line (who you help + how)
- Specific niche — not "entrepreneur | thinker | builder"
- Social proof element (followers, title, metric, brand)
### 1. Threads

Structure:
- Tweet 1: Hook — must stop the scroll in <7 words
- Tweet 2: Context or promise ("Here's what I learned:")
- Tweets 3-N: One idea per tweet, each standalone-worthy
- Final tweet: Summary + explicit CTA ("Follow @handle for more")
- Reply to tweet 1: Restate hook + "Follow for more [topic]"

Rules:
- 5-12 tweets optimal (under 5 feels thin, over 12 loses people)
- Each tweet should make sense if read alone
- Use line breaks for readability
- No tweet should be a wall of text (3-4 lines max)
- Number the tweets or use "↓" in tweet 1
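The structural rules above can be sketched as a quick pre-publish check. This is a minimal illustration, not part of the bundled scripts; the 280-character cap is X's standard limit for non-premium accounts.

```python
MAX_TWEET_CHARS = 280  # X's standard per-tweet limit


def check_thread(tweets: list[str]) -> list[str]:
    """Return warnings for a draft thread, per the rules above."""
    warnings = []
    # 5-12 tweets optimal
    if not 5 <= len(tweets) <= 12:
        warnings.append(f"{len(tweets)} tweets — aim for 5-12")
    # Hook should stop the scroll in under 7 words
    if tweets and len(tweets[0].split()) >= 7:
        warnings.append("hook is 7+ words — tighten it")
    for i, t in enumerate(tweets, 1):
        if len(t) > MAX_TWEET_CHARS:
            warnings.append(f"tweet {i} exceeds {MAX_TWEET_CHARS} chars")
        # No walls of text: 3-4 lines max per tweet
        if t.count("\n") > 3:
            warnings.append(f"tweet {i} is a wall of text (4+ lines)")
    return warnings
```

Run it over a draft before posting; an empty list means the thread passes the structural checks (the content still has to earn the read).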
### 2. Atomic Tweets (breadth, impression farming)

Formats that work:
- Observation: "[Thing] is underrated. Here's why:"
- Listicle: "10 tools I use daily:\n\n1. X — for Y"
- Contrarian: "Unpopular opinion: [statement]"
- Lesson: "I [did X] for [time]. Biggest lesson:"
- Framework: "[Concept] explained in 30 seconds:"

Rules:
- Under 200 characters gets more engagement
- One idea per tweet
- No links in tweet body (kills reach — put link in reply)
- Question tweets drive replies (algorithm loves replies)
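A similar sketch for the single-tweet rules — illustrative only, and the link detection here is a naive substring check rather than X's actual URL parsing:

```python
def check_atomic_tweet(text: str) -> list[str]:
    """Flag violations of the atomic-tweet rules above."""
    warnings = []
    # Under 200 characters tends to get more engagement
    if len(text) > 200:
        warnings.append(f"{len(text)} chars — under 200 performs better")
    # Links in the body kill reach; put them in the first reply
    if "http://" in text or "https://" in text:
        warnings.append("link in body kills reach — move it to the first reply")
    return warnings
```
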
### 3. Quote Tweets (authority building)

Formula: Original tweet + your unique take
- Add data the original missed
- Provide counterpoint or nuance
- Share personal experience that validates/contradicts
- Never just say "This" or "So true"
### 4. Replies (network growth, fastest path to visibility)

Strategy:
- Reply to accounts 2-10x your size
- Add genuine value, not "great post!"
- Be first to reply on accounts with large audiences
- Your reply IS your content — make it tweet-worthy
- Controversial/insightful replies get quote-tweeted (free reach)
## scripts/competitor_analyzer.py

```python
#!/usr/bin/env python3
"""X/Twitter Competitor Analyzer — Analyze competitor profiles for content strategy insights.

Takes competitor handles and available data, produces a competitive
intelligence report with content patterns, engagement strategies, and gaps.

Usage:
    python3 competitor_analyzer.py --handles @user1 @user2 @user3
    python3 competitor_analyzer.py --import data.json
"""
import argparse
import json
import sys
from collections import Counter
from dataclasses import dataclass, field, asdict


@dataclass
class CompetitorProfile:
    handle: str
    followers: int = 0
    following: int = 0
    posts_per_week: float = 0
    avg_likes: float = 0
    avg_replies: float = 0
    avg_retweets: float = 0
    thread_frequency: str = ""  # daily, weekly, rarely
    top_topics: list = field(default_factory=list)
    content_mix: dict = field(default_factory=dict)  # format: percentage
    posting_times: list = field(default_factory=list)
    bio: str = ""
    notes: str = ""


@dataclass
class CompetitiveInsight:
    category: str
    finding: str
    opportunity: str
    priority: str  # HIGH, MEDIUM, LOW


def calculate_engagement_rate(profile: CompetitorProfile) -> float:
    if profile.followers <= 0:
        return 0
    total_engagement = profile.avg_likes + profile.avg_replies + profile.avg_retweets
    return (total_engagement / profile.followers) * 100


def analyze_competitors(competitors: list) -> list:
    insights = []

    # Engagement comparison
    engagement_rates = []
    for c in competitors:
        er = calculate_engagement_rate(c)
        engagement_rates.append((c.handle, er))
    if engagement_rates:
        top = max(engagement_rates, key=lambda x: x[1])
        if top[1] > 0:
            insights.append(CompetitiveInsight(
                "Engagement",
                f"Highest engagement: {top[0]} ({top[1]:.2f}%)",
                "Study their top posts — what format and topics drive replies?",
                "HIGH"))

    # Posting frequency
    frequencies = [(c.handle, c.posts_per_week) for c in competitors if c.posts_per_week > 0]
    if frequencies:
        avg_freq = sum(f for _, f in frequencies) / len(frequencies)
        insights.append(CompetitiveInsight(
            "Frequency",
            f"Average posting: {avg_freq:.0f}/week across competitors",
            f"Match or exceed {avg_freq:.0f} posts/week to compete for mindshare",
            "HIGH"))

    # Thread usage
    thread_users = [c.handle for c in competitors if c.thread_frequency in ("daily", "weekly")]
    if thread_users:
        insights.append(CompetitiveInsight(
            "Format",
            f"Active thread users: {', '.join(thread_users)}",
            "Threads are a proven growth lever in your niche. Publish 2-3/week minimum.",
            "HIGH"))

    # Reply engagement
    reply_heavy = [(c.handle, c.avg_replies) for c in competitors if c.avg_replies > c.avg_likes * 0.3]
    if reply_heavy:
        names = [h for h, _ in reply_heavy]
        insights.append(CompetitiveInsight(
            "Community",
            f"High reply ratios: {', '.join(names)}",
            "These accounts build community through conversation. Ask more questions in your tweets.",
            "MEDIUM"))

    # Follower/following ratio
    for c in competitors:
        if c.followers > 0 and c.following > 0:
            ratio = c.followers / c.following
            if ratio > 10:
                insights.append(CompetitiveInsight(
                    "Authority",
                    f"{c.handle} has {ratio:.0f}x follower/following ratio",
                    "Strong authority signal — they attract followers without follow-backs",
                    "LOW"))

    # Topic gaps
    all_topics = []
    for c in competitors:
        all_topics.extend(c.top_topics)
    if all_topics:
        common = Counter(all_topics).most_common(5)
        insights.append(CompetitiveInsight(
            "Topics",
            f"Most covered topics: {', '.join(t for t, _ in common)}",
            "Cover these topics to compete, but find unique angles. What are they NOT covering?",
            "MEDIUM"))

    return insights


def print_report(competitors: list, insights: list):
    print(f"\n{'='*70}")
    print(" COMPETITIVE ANALYSIS REPORT")
    print(f"{'='*70}")

    # Profile summary table
    print(f"\n {'Handle':<20} {'Followers':>10} {'Posts/wk':>10} {'Eng Rate':>10}")
    print(f" {'─'*20} {'─'*10} {'─'*10} {'─'*10}")
    for c in competitors:
        er = calculate_engagement_rate(c)
        print(f" {c.handle:<20} {c.followers:>10,} {c.posts_per_week:>10.0f} {er:>9.2f}%")

    # Insights
    if insights:
        print(f"\n {'─'*66}")
        print(" KEY INSIGHTS\n")
        priority_order = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}
        sorted_insights = sorted(insights, key=lambda x: priority_order.get(x.priority, 3))
        for i in sorted_insights:
            icon = {"HIGH": "🔴", "MEDIUM": "🟡", "LOW": "⚪"}.get(i.priority, "❓")
            print(f" {icon} [{i.category}] {i.finding}")
            print(f"    → {i.opportunity}")
            print()

    # Action items
    print(f" {'─'*66}")
    print(" NEXT STEPS\n")
    print(" 1. Search each competitor's profile on X — note their pinned tweet and bio")
    print(" 2. Read their last 20 posts — categorize by format and topic")
    print(" 3. Identify their top 3 performing posts — what made them work?")
    print(" 4. Find gaps — what topics do they NOT cover that you can own?")
    print(" 5. Set engagement targets based on their metrics as benchmarks")
    print(f"\n{'='*70}\n")


def main():
    parser = argparse.ArgumentParser(
        description="Analyze X/Twitter competitors for content strategy insights",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""Examples:
  %(prog)s --handles @user1 @user2
  %(prog)s --import competitors.json

JSON format for --import:
  [{"handle": "@user1", "followers": 50000, "posts_per_week": 14, ...}]
""")
    parser.add_argument("--handles", nargs="+", default=[], help="Competitor handles")
    parser.add_argument("--import", dest="import_file", help="Import from JSON file")
    parser.add_argument("--json", action="store_true", help="Output JSON")
    args = parser.parse_args()

    competitors = []
    if args.import_file:
        with open(args.import_file) as f:
            data = json.load(f)
        for item in data:
            competitors.append(CompetitorProfile(**item))
    elif args.handles:
        for handle in args.handles:
            if not handle.startswith("@"):
                handle = f"@{handle}"
            competitors.append(CompetitorProfile(handle=handle))
        if all(c.followers == 0 for c in competitors):
            print(f"\n ℹ️ Handles registered: {', '.join(c.handle for c in competitors)}")
            print(" To get full analysis, provide data via JSON import:")
            print(" 1. Research each profile on X")
            print(" 2. Create a JSON file with follower counts, posting frequency, etc.")
            print(f" 3. Run: {sys.argv[0]} --import data.json")
            print("\n Example JSON:")
            example = [asdict(CompetitorProfile(
                handle="@example", followers=25000, following=1200,
                posts_per_week=14, avg_likes=150, avg_replies=30, avg_retweets=20,
                thread_frequency="weekly",
                top_topics=["AI", "startups", "engineering"],
            ))]
            print(f" {json.dumps(example, indent=2)}")
            print()
            return

    if not competitors:
        print("Error: provide --handles or --import", file=sys.stderr)
        sys.exit(1)

    insights = analyze_competitors(competitors)
    if args.json:
        print(json.dumps({
            "competitors": [asdict(c) for c in competitors],
            "insights": [asdict(i) for i in insights],
        }, indent=2))
    else:
        print_report(competitors, insights)


if __name__ == "__main__":
    main()
```
## scripts/content_planner.py

```python
#!/usr/bin/env python3
"""X/Twitter Content Planner — Generate weekly posting calendars.

Creates structured content plans with topic suggestions, format mix,
optimal posting times, and engagement targets.

Usage:
    python3 content_planner.py --niche "AI engineering" --frequency 5 --weeks 2
    python3 content_planner.py --niche "SaaS growth" --frequency 3 --weeks 1 --json
"""
import argparse
import json
from datetime import datetime, timedelta
from dataclasses import dataclass, field, asdict

CONTENT_FORMATS = {
    "atomic_tweet": {"growth_weight": 0.3, "effort": "low",
                     "description": "Single tweet — observation, tip, or hot take"},
    "thread": {"growth_weight": 0.35, "effort": "high",
               "description": "5-12 tweet deep dive — highest reach potential"},
    "question": {"growth_weight": 0.15, "effort": "low",
                 "description": "Engagement bait — drives replies"},
    "quote_tweet": {"growth_weight": 0.10, "effort": "low",
                    "description": "Add value to someone else's content"},
    "reply_session": {"growth_weight": 0.10, "effort": "medium",
                      "description": "30 min focused engagement on target accounts"},
}

OPTIMAL_TIMES = {
    "weekday": ["07:00-08:00", "12:00-13:00", "17:00-18:00", "20:00-21:00"],
    "weekend": ["09:00-10:00", "14:00-15:00", "19:00-20:00"],
}

TOPIC_ANGLES = [
    "Lessons learned (personal experience)",
    "Framework/system breakdown",
    "Tool recommendation (with honest take)",
    "Myth busting (challenge common belief)",
    "Behind the scenes (process, workflow)",
    "Industry trend analysis",
    "Beginner guide (explain like I'm 5)",
    "Comparison (X vs Y — which is better?)",
    "Prediction (what's coming next)",
    "Case study (real example with numbers)",
    "Mistake I made (vulnerability + lesson)",
    "Quick tip (tactical, immediately useful)",
    "Controversial take (spicy but defensible)",
    "Curated list (best resources, tools, accounts)",
]


@dataclass
class DayPlan:
    date: str
    day_of_week: str
    posts: list = field(default_factory=list)
    engagement_target: str = ""


@dataclass
class PostSlot:
    time: str
    format: str
    topic_angle: str
    topic_suggestion: str
    notes: str = ""


@dataclass
class WeekPlan:
    week_number: int
    start_date: str
    end_date: str
    days: list = field(default_factory=list)
    thread_count: int = 0
    total_posts: int = 0
    focus_theme: str = ""


def generate_plan(niche: str, posts_per_day: int, weeks: int, start_date: datetime) -> list:
    plans = []
    angle_idx = 0
    for week in range(weeks):
        week_start = start_date + timedelta(weeks=week)
        week_end = week_start + timedelta(days=6)
        week_plan = WeekPlan(
            week_number=week + 1,
            start_date=week_start.strftime("%Y-%m-%d"),
            end_date=week_end.strftime("%Y-%m-%d"),
            focus_theme=TOPIC_ANGLES[week % len(TOPIC_ANGLES)],
        )
        for day in range(7):
            current = week_start + timedelta(days=day)
            day_name = current.strftime("%A")
            is_weekend = day >= 5
            times = OPTIMAL_TIMES["weekend" if is_weekend else "weekday"]
            actual_posts = max(1, posts_per_day - (1 if is_weekend else 0))
            day_plan = DayPlan(
                date=current.strftime("%Y-%m-%d"),
                day_of_week=day_name,
                engagement_target="15 min reply session" if is_weekend else "30 min reply session",
            )
            for p in range(actual_posts):
                # Determine format based on day position
                if day in [1, 3] and p == 0:  # Tue/Thu first slot = thread
                    fmt = "thread"
                elif p == actual_posts - 1 and not is_weekend:
                    fmt = "question"  # Last post = engagement driver
                elif day == 4 and p == 0:  # Friday first = quote tweet
                    fmt = "quote_tweet"
                else:
                    fmt = "atomic_tweet"
                angle = TOPIC_ANGLES[angle_idx % len(TOPIC_ANGLES)]
                angle_idx += 1
                slot = PostSlot(
                    time=times[p % len(times)],
                    format=fmt,
                    topic_angle=angle,
                    topic_suggestion=f"{angle} about {niche}",
                    notes="Pin if performs well" if fmt == "thread" else "",
                )
                day_plan.posts.append(asdict(slot))
                if fmt == "thread":
                    week_plan.thread_count += 1
                week_plan.total_posts += 1
            week_plan.days.append(asdict(day_plan))
        plans.append(asdict(week_plan))
    return plans


def print_plan(plans: list, niche: str):
    print(f"\n{'='*70}")
    print(f" X/TWITTER CONTENT PLAN — {niche.upper()}")
    print(f"{'='*70}")
    for week in plans:
        print(f"\n WEEK {week['week_number']} ({week['start_date']} to {week['end_date']})")
        print(f" Theme: {week['focus_theme']}")
        print(f" Posts: {week['total_posts']} | Threads: {week['thread_count']}")
        print(f" {'─'*66}")
        for day in week['days']:
            print(f"\n {day['day_of_week']:9} {day['date']}")
            for post in day['posts']:
                fmt_icon = {
                    "thread": "🧵",
                    "atomic_tweet": "💬",
                    "question": "❓",
                    "quote_tweet": "🔄",
                    "reply_session": "💬",
                }.get(post['format'], "📝")
                print(f"   {fmt_icon} {post['time']:12} [{post['format']:<14}] {post['topic_angle']}")
                if post['notes']:
                    print(f"      ℹ️ {post['notes']}")
            print(f"   📊 Engagement: {day['engagement_target']}")
    print(f"\n{'='*70}")
    print(" WEEKLY TARGETS")
    print(" • Reply to 10+ accounts in your niche daily")
    print(" • Quote tweet 2-3 relevant posts per week")
    print(" • Update pinned tweet if a thread outperforms current pin")
    print(" • Review analytics every Sunday — double down on what works")
    print(f"{'='*70}\n")


def main():
    parser = argparse.ArgumentParser(
        description="Generate X/Twitter content calendars",
        formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument("--niche", required=True, help="Your content niche")
    parser.add_argument("--frequency", type=int, default=3, help="Posts per day (default: 3)")
    parser.add_argument("--weeks", type=int, default=2, help="Weeks to plan (default: 2)")
    parser.add_argument("--start", default="", help="Start date YYYY-MM-DD (default: next Monday)")
    parser.add_argument("--json", action="store_true", help="Output JSON")
    args = parser.parse_args()

    if args.start:
        start = datetime.strptime(args.start, "%Y-%m-%d")
    else:
        today = datetime.now()
        days_until_monday = (7 - today.weekday()) % 7
        if days_until_monday == 0:
            days_until_monday = 7
        start = today + timedelta(days=days_until_monday)

    plans = generate_plan(args.niche, args.frequency, args.weeks, start)
    if args.json:
        print(json.dumps(plans, indent=2))
    else:
        print_plan(plans, args.niche)


if __name__ == "__main__":
    main()
```
## scripts/growth_tracker.py

```python
#!/usr/bin/env python3
"""X/Twitter Growth Tracker — Track and analyze account growth over time.

Stores periodic snapshots of account metrics and calculates growth trends,
engagement patterns, and milestone projections.

Usage:
    python3 growth_tracker.py --record --handle @user --followers 5200 --eng-rate 2.1
    python3 growth_tracker.py --report --handle @user
    python3 growth_tracker.py --report --handle @user --period 30d --json
    python3 growth_tracker.py --milestone --handle @user --target 10000
"""
import argparse
import json
import os
import sys
from datetime import datetime, timedelta

DATA_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", ".growth-data")


def get_data_file(handle: str) -> str:
    clean = handle.lstrip("@").lower()
    os.makedirs(DATA_DIR, exist_ok=True)
    return os.path.join(DATA_DIR, f"{clean}.jsonl")


def record_snapshot(handle: str, followers: int, following: int = 0,
                    eng_rate: float = 0, posts_week: float = 0, notes: str = ""):
    entry = {
        "timestamp": datetime.now().isoformat(),
        "handle": handle,
        "followers": followers,
        "following": following,
        "engagement_rate": eng_rate,
        "posts_per_week": posts_week,
        "notes": notes,
    }
    filepath = get_data_file(handle)
    with open(filepath, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry


def load_snapshots(handle: str, period_days: int = 0) -> list:
    filepath = get_data_file(handle)
    if not os.path.exists(filepath):
        return []
    entries = []
    cutoff = None
    if period_days > 0:
        cutoff = datetime.now() - timedelta(days=period_days)
    with open(filepath) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            entry = json.loads(line)
            if cutoff:
                ts = datetime.fromisoformat(entry["timestamp"])
                if ts < cutoff:
                    continue
            entries.append(entry)
    return entries


def generate_report(handle: str, entries: list) -> dict:
    if not entries:
        return {"handle": handle, "error": "No data found"}
    report = {
        "handle": handle,
        "data_points": len(entries),
        "first_record": entries[0]["timestamp"],
        "last_record": entries[-1]["timestamp"],
        "current_followers": entries[-1]["followers"],
    }
    if len(entries) >= 2:
        first = entries[0]
        last = entries[-1]
        follower_change = last["followers"] - first["followers"]
        days_span = (datetime.fromisoformat(last["timestamp"])
                     - datetime.fromisoformat(first["timestamp"])).days
        days_span = max(days_span, 1)
        report["follower_change"] = follower_change
        report["days_tracked"] = days_span
        report["daily_growth"] = round(follower_change / days_span, 1)
        report["weekly_growth"] = round((follower_change / days_span) * 7, 1)
        report["monthly_projection"] = round((follower_change / days_span) * 30)
        if first["followers"] > 0:
            pct_change = ((last["followers"] - first["followers"]) / first["followers"]) * 100
            report["growth_percent"] = round(pct_change, 1)
    # Engagement trend
    eng_rates = [e["engagement_rate"] for e in entries if e.get("engagement_rate", 0) > 0]
    if len(eng_rates) >= 2:
        mid = len(eng_rates) // 2
        first_half_avg = sum(eng_rates[:mid]) / mid
        second_half_avg = sum(eng_rates[mid:]) / (len(eng_rates) - mid)
        report["engagement_trend"] = "improving" if second_half_avg > first_half_avg else "declining"
        report["avg_engagement_rate"] = round(sum(eng_rates) / len(eng_rates), 2)
    return report


def project_milestone(handle: str, entries: list, target: int) -> dict:
    if len(entries) < 2:
        return {"error": "Need at least 2 data points for projection"}
    current = entries[-1]["followers"]
    if current >= target:
        return {"handle": handle, "target": target, "status": "Already reached!"}
    first = entries[0]
    last = entries[-1]
    days_span = (datetime.fromisoformat(last["timestamp"])
                 - datetime.fromisoformat(first["timestamp"])).days
    days_span = max(days_span, 1)
    daily_growth = (last["followers"] - first["followers"]) / days_span
    if daily_growth <= 0:
        return {"handle": handle, "target": target,
                "status": "Not growing — can't project",
                "daily_growth": round(daily_growth, 1)}
    remaining = target - current
    days_needed = remaining / daily_growth
    target_date = datetime.now() + timedelta(days=days_needed)
    return {
        "handle": handle,
        "current": current,
        "target": target,
        "remaining": remaining,
        "daily_growth": round(daily_growth, 1),
        "days_needed": round(days_needed),
        "projected_date": target_date.strftime("%Y-%m-%d"),
    }


def print_report(report: dict):
    print(f"\n{'='*60}")
    print(f" GROWTH REPORT — {report['handle']}")
    print(f"{'='*60}")
    if "error" in report:
        print(f"\n ⚠️ {report['error']}")
        print(f" Record data first: python3 growth_tracker.py --record --handle {report['handle']} --followers N")
        print()
        return
    print(f"\n Current followers: {report['current_followers']:,}")
    print(f" Data points: {report['data_points']}")
    print(f" Tracking since: {report['first_record'][:10]}")
    if "follower_change" in report:
        change_icon = "📈" if report["follower_change"] > 0 else "📉" if report["follower_change"] < 0 else "➡️"
        print(f"\n {change_icon} Change: {report['follower_change']:+,} followers over {report['days_tracked']} days")
        print(f" Daily avg: {report.get('daily_growth', 0):+.1f}/day")
        print(f" Weekly avg: {report.get('weekly_growth', 0):+.1f}/week")
        print(f" 30-day projection: {report.get('monthly_projection', 0):+,}")
        if "growth_percent" in report:
            print(f" Growth rate: {report['growth_percent']:+.1f}%")
    if "engagement_trend" in report:
        trend_icon = "📈" if report["engagement_trend"] == "improving" else "📉"
        print(f" Engagement: {trend_icon} {report['engagement_trend']} (avg {report['avg_engagement_rate']}%)")
    print(f"\n{'='*60}\n")


def main():
    parser = argparse.ArgumentParser(
        description="Track X/Twitter account growth over time",
        formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument("--record", action="store_true", help="Record a new snapshot")
    parser.add_argument("--report", action="store_true", help="Generate growth report")
    parser.add_argument("--milestone", action="store_true", help="Project when target will be reached")
    parser.add_argument("--handle", required=True, help="X handle")
    parser.add_argument("--followers", type=int, default=0, help="Current follower count")
    parser.add_argument("--following", type=int, default=0, help="Current following count")
    parser.add_argument("--eng-rate", type=float, default=0, help="Current engagement rate (pct)")
    parser.add_argument("--posts-week", type=float, default=0, help="Posts per week")
    parser.add_argument("--notes", default="", help="Notes for this snapshot")
    parser.add_argument("--period", default="all", help="Report period: 7d, 30d, 90d, all")
    parser.add_argument("--target", type=int, default=0, help="Follower milestone target")
    parser.add_argument("--json", action="store_true", help="Output JSON")
    args = parser.parse_args()

    if not args.handle.startswith("@"):
        args.handle = f"@{args.handle}"

    if args.record:
        if args.followers <= 0:
            print("Error: --followers required for recording", file=sys.stderr)
            sys.exit(1)
        entry = record_snapshot(args.handle, args.followers, args.following,
                                args.eng_rate, args.posts_week, args.notes)
        if args.json:
            print(json.dumps(entry, indent=2))
        else:
            print(f" ✅ Recorded: {args.handle} — {args.followers:,} followers")
            print(f" File: {get_data_file(args.handle)}")
    elif args.report:
        period_days = 0
        if args.period != "all":
            period_days = int(args.period.rstrip("d"))
        entries = load_snapshots(args.handle, period_days)
        report = generate_report(args.handle, entries)
        if args.json:
            print(json.dumps(report, indent=2))
        else:
            print_report(report)
    elif args.milestone:
        if args.target <= 0:
            print("Error: --target required for milestone projection", file=sys.stderr)
            sys.exit(1)
        entries = load_snapshots(args.handle)
        result = project_milestone(args.handle, entries, args.target)
        if args.json:
            print(json.dumps(result, indent=2))
        else:
            if "error" in result:
                print(f" ⚠️ {result['error']}")
            elif "status" in result and "days_needed" not in result:
                print(f" 🎉 {result['status']}")
            else:
                print(f"\n 🎯 Milestone Projection: {result['handle']}")
                print(f" Current: {result['current']:,}")
                print(f" Target: {result['target']:,}")
                print(f" Gap: {result['remaining']:,}")
                print(f" Growth: {result['daily_growth']:+.1f}/day")
                print(f" ETA: {result['projected_date']} (~{result['days_needed']} days)")
                print()
    else:
        parser.print_help()


if __name__ == "__main__":
    main()
```
#!/usr/bin/env python3"""X/Twitter Profile Auditor — Audit any X profile for growth readiness.Checks bio quality, pinned tweet, posting patterns, and providesactionable recommendations. Works without API access by analyzingprofile data you provide or scraping public info via web search.Usage: python3 profile_auditor.py --handle @username python3 profile_auditor.py --handle @username --json python3 profile_auditor.py --bio "current bio text" --followers 5000 --posts-per-week 10"""import argparseimport jsonimport reimport sysfrom dataclasses import dataclass, field, asdictfrom typing import Optional@dataclassclass ProfileData: handle: str = "" bio: str = "" followers: int = 0 following: int = 0 posts_per_week: float = 0 reply_ratio: float = 0 # % of posts that are replies thread_ratio: float = 0 # % of posts that are threads has_pinned: bool = False pinned_age_days: int = 0 has_link: bool = False has_newsletter: bool = False avg_engagement_rate: float = 0 # likes+replies+rts / followers@dataclassclass AuditFinding: area: str status: str # GOOD, WARN, CRITICAL message: str fix: str = ""@dataclassclass AuditReport: handle: str score: int = 0 max_score: int = 100 grade: str = "" findings: list = field(default_factory=list) recommendations: list = field(default_factory=list)def audit_bio(profile: ProfileData) -> list: findings = [] bio = profile.bio.strip() if not bio: findings.append(AuditFinding("Bio", "CRITICAL", "No bio provided for audit", "Provide bio text with --bio flag")) return findings # Length check if len(bio) < 30: findings.append(AuditFinding("Bio", "WARN", f"Bio too short ({len(bio)} chars)", "Aim for 100-160 characters with clear value prop")) elif len(bio) > 160: findings.append(AuditFinding("Bio", "WARN", f"Bio may be too long ({len(bio)} chars)", "Keep under 160 chars for readability")) else: findings.append(AuditFinding("Bio", "GOOD", f"Bio length OK ({len(bio)} chars)")) # Hashtag check hashtags = re.findall(r'#\w+', bio) if hashtags: 
findings.append(AuditFinding("Bio", "WARN", f"Hashtags in bio ({', '.join(hashtags)})", "Remove hashtags — signals amateur. Use plain text.")) else: findings.append(AuditFinding("Bio", "GOOD", "No hashtags in bio")) # Buzzword check buzzwords = ['entrepreneur', 'guru', 'ninja', 'rockstar', 'visionary', 'hustler', 'thought leader', 'serial entrepreneur', 'dreamer', 'doer'] found = [bw for bw in buzzwords if bw.lower() in bio.lower()] if found: findings.append(AuditFinding("Bio", "WARN", f"Buzzwords detected: {', '.join(found)}", "Replace with specific, concrete descriptions of what you do")) # Specificity check — pipes and slashes often signal unfocused bios if bio.count('|') >= 3 or bio.count('/') >= 3: findings.append(AuditFinding("Bio", "WARN", "Bio may lack focus (too many roles/identities)", "Lead with ONE clear identity. What's the #1 thing you want to be known for?")) # Social proof check proof_patterns = [r'\d+[kKmM]?\+?\s*(followers|subscribers|readers|users|customers)', r'(founder|ceo|cto|vp|head|director|lead)\s+(of|at|@)', r'(author|writer)\s+of', r'featured\s+in', r'ex-\w+'] has_proof = any(re.search(p, bio, re.IGNORECASE) for p in proof_patterns) if has_proof: findings.append(AuditFinding("Bio", "GOOD", "Social proof detected")) else: findings.append(AuditFinding("Bio", "WARN", "No obvious social proof in bio", "Add a credential: title, metric, brand association, or achievement")) # CTA/Link check if profile.has_link: findings.append(AuditFinding("Bio", "GOOD", "Profile has a link")) else: findings.append(AuditFinding("Bio", "WARN", "No link in profile", "Add a link to newsletter, product, or portfolio")) return findingsdef audit_activity(profile: ProfileData) -> list: findings = [] # Posting frequency if profile.posts_per_week <= 0: findings.append(AuditFinding("Activity", "CRITICAL", "No posting data provided", "Provide --posts-per-week estimate")) elif profile.posts_per_week < 3: findings.append(AuditFinding("Activity", "CRITICAL", f"Very low 
posting ({profile.posts_per_week:.0f}/week)",
                                     "Minimum 7 posts/week (1/day). Aim for 14-21."))
    elif profile.posts_per_week < 7:
        findings.append(AuditFinding("Activity", "WARN",
                                     f"Low posting ({profile.posts_per_week:.0f}/week)",
                                     "Aim for 2-3 posts per day for consistent growth"))
    elif profile.posts_per_week < 21:
        findings.append(AuditFinding("Activity", "GOOD",
                                     f"Good posting cadence ({profile.posts_per_week:.0f}/week)"))
    else:
        findings.append(AuditFinding("Activity", "GOOD",
                                     f"High posting cadence ({profile.posts_per_week:.0f}/week)"))

    # Reply ratio
    if profile.reply_ratio > 0:
        if profile.reply_ratio < 0.2:
            findings.append(AuditFinding("Activity", "WARN",
                                         f"Low reply ratio ({profile.reply_ratio:.0%})",
                                         "Aim for 30%+ replies. Engage with others, don't just broadcast."))
        elif profile.reply_ratio >= 0.3:
            findings.append(AuditFinding("Activity", "GOOD",
                                         f"Healthy reply ratio ({profile.reply_ratio:.0%})"))

    # Follower/following ratio
    if profile.followers > 0 and profile.following > 0:
        ratio = profile.followers / profile.following
        if ratio < 0.5:
            findings.append(AuditFinding("Profile", "WARN",
                                         f"Low follower/following ratio ({ratio:.1f}x)",
                                         "Unfollow inactive accounts. Ratio should trend toward 2:1+"))
        elif ratio >= 2:
            findings.append(AuditFinding("Profile", "GOOD",
                                         f"Healthy follower/following ratio ({ratio:.1f}x)"))

    # Pinned tweet
    if profile.has_pinned:
        if profile.pinned_age_days > 30:
            findings.append(AuditFinding("Profile", "WARN",
                                         f"Pinned tweet is {profile.pinned_age_days} days old",
                                         "Update pinned tweet monthly with your latest best content"))
        else:
            findings.append(AuditFinding("Profile", "GOOD", "Pinned tweet is recent"))
    else:
        findings.append(AuditFinding("Profile", "WARN", "No pinned tweet",
                                     "Pin your best-performing tweet or thread. It's your landing page."))

    return findings


def calculate_score(findings: list) -> tuple:
    total = len(findings)
    if total == 0:
        return 0, "F"
    good = sum(1 for f in findings if f.status == "GOOD")
    score = int((good / total) * 100)
    if score >= 90:
        grade = "A"
    elif score >= 75:
        grade = "B"
    elif score >= 60:
        grade = "C"
    elif score >= 40:
        grade = "D"
    else:
        grade = "F"
    return score, grade


def generate_recommendations(findings: list, profile: ProfileData) -> list:
    recs = []
    criticals = [f for f in findings if f.status == "CRITICAL"]
    warns = [f for f in findings if f.status == "WARN"]
    for f in criticals:
        if f.fix:
            recs.append(f"🔴 {f.fix}")
    for f in warns[:3]:  # Top 3 warnings
        if f.fix:
            recs.append(f"🟡 {f.fix}")
    # Stage-specific advice
    if profile.followers < 1000:
        recs.append("📈 Growth phase: Focus 70% on replies to larger accounts, 30% on your own posts")
    elif profile.followers < 10000:
        recs.append("📈 Momentum phase: 2-3 threads/week + daily engagement. Start a recurring series.")
    else:
        recs.append("📈 Scale phase: Leverage audience with cross-platform repurposing + newsletter growth")
    return recs


def main():
    parser = argparse.ArgumentParser(
        description="Audit an X/Twitter profile for growth readiness",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""Examples:
  %(prog)s --handle @rezarezvani --bio "CTO building AI products" --followers 5000
  %(prog)s --bio "Entrepreneur | Dreamer | Hustle" --followers 200 --posts-per-week 3
  %(prog)s --handle @example --followers 50000 --posts-per-week 21 --reply-ratio 0.4 --json
""")
    parser.add_argument("--handle", default="@unknown", help="X handle")
    parser.add_argument("--bio", default="", help="Current bio text")
    parser.add_argument("--followers", type=int, default=0, help="Follower count")
    parser.add_argument("--following", type=int, default=0, help="Following count")
    parser.add_argument("--posts-per-week", type=float, default=0, help="Average posts per week")
    parser.add_argument("--reply-ratio", type=float, default=0,
                        help="Fraction of posts that are replies (0-1)")
    parser.add_argument("--has-pinned", action="store_true", help="Has a pinned tweet")
    parser.add_argument("--pinned-age-days", type=int, default=0, help="Age of pinned tweet in days")
    parser.add_argument("--has-link", action="store_true", help="Has link in profile")
    parser.add_argument("--json", action="store_true", help="Output JSON")
    args = parser.parse_args()

    profile = ProfileData(
        handle=args.handle,
        bio=args.bio,
        followers=args.followers,
        following=args.following,
        posts_per_week=args.posts_per_week,
        reply_ratio=args.reply_ratio,
        has_pinned=args.has_pinned,
        pinned_age_days=args.pinned_age_days,
        has_link=args.has_link,
    )

    findings = audit_bio(profile) + audit_activity(profile)
    score, grade = calculate_score(findings)
    recs = generate_recommendations(findings, profile)

    report = AuditReport(
        handle=profile.handle,
        score=score,
        grade=grade,
        findings=[asdict(f) for f in findings],
        recommendations=recs,
    )

    if args.json:
        print(json.dumps(asdict(report), indent=2))
    else:
        print(f"\n{'='*60}")
        print(f"  X PROFILE AUDIT — {report.handle}")
        print(f"{'='*60}")
        print(f"\n  Score: {report.score}/100 (Grade: {report.grade})\n")
        for f in findings:
            icon = {"GOOD": "✅", "WARN": "⚠️", "CRITICAL": "🔴"}.get(f.status, "❓")
            print(f"  {icon} [{f.area}] {f.message}")
            if f.fix and f.status != "GOOD":
                print(f"     → {f.fix}")
        if recs:
            print(f"\n  {'─'*56}")
            print(f"  TOP RECOMMENDATIONS\n")
            for i, r in enumerate(recs, 1):
                print(f"  {i}. {r}")
        print(f"\n{'='*60}\n")


if __name__ == "__main__":
    main()
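For reference, the score-to-grade mapping used by `calculate_score` above can be exercised on its own. A minimal standalone sketch with the same thresholds (the name `grade_from_score` is illustrative, not part of the script):

```python
def grade_from_score(score: int) -> str:
    # Same thresholds as the auditor's calculate_score
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    if score >= 60:
        return "C"
    if score >= 40:
        return "D"
    return "F"

# e.g. 7 GOOD findings out of 9 total: int(7 / 9 * 100) = 77
print(grade_from_score(77))  # prints "B"
```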
#!/usr/bin/env python3
"""Tweet Composer — Generate structured tweets and threads with proven hook patterns.

Provides templates, character counting, thread formatting, and hook generation
for different content types. No API required — pure content scaffolding.

Usage:
    python3 tweet_composer.py --type tweet --topic "AI in healthcare"
    python3 tweet_composer.py --type thread --topic "lessons from scaling" --tweets 8
    python3 tweet_composer.py --type hooks --topic "startup mistakes" --count 10
    python3 tweet_composer.py --validate "your tweet text here"
"""

import argparse
import json
import re
import sys
from dataclasses import dataclass, field, asdict

MAX_TWEET_CHARS = 280

HOOK_PATTERNS = {
    "listicle": [
        "{n} {topic} that changed how I {verb}:",
        "The {n} biggest mistakes in {topic}:",
        "{n} {topic} most people don't know about:",
        "I spent {time} studying {topic}. Here are {n} lessons:",
        "{n} signs your {topic} needs work:",
    ],
    "contrarian": [
        "Unpopular opinion: {claim}",
        "Hot take: {claim}",
        "Everyone says {common_belief}. They're wrong.",
        "Stop {common_action}. Here's what to do instead:",
        "The {topic} advice you keep hearing is backwards.",
    ],
    "story": [
        "I {did_thing} and it completely changed my {outcome}.",
        "Last {timeframe}, I made a mistake with {topic}. Here's what happened:",
        "3 years ago I was {before_state}. Now I'm {after_state}. Here's the playbook:",
        "I almost {near_miss}. Then I discovered {topic}.",
        "The best {topic} advice I ever got came from {unexpected_source}.",
    ],
    "observation": [
        "{topic} is underrated. Here's why:",
        "Nobody talks about this part of {topic}:",
        "The gap between {thing_a} and {thing_b} is where the money is.",
        "If you're struggling with {topic}, you're probably {mistake}.",
        "The secret to {topic} isn't what you think.",
    ],
    "framework": [
        "The {name} framework for {topic} (save this):",
        "How to {outcome} in {timeframe} (step by step):",
        "{topic} explained in 60 seconds:",
        "The only {n} things that matter for {topic}:",
        "A simple system for {topic} that actually works:",
    ],
    "question": [
        "What's the most underrated {topic}?",
        "If you could only {do_one_thing} for {topic}, what would it be?",
        "What {topic} advice would you give your younger self?",
        "Real question: why do most people {common_mistake}?",
        "What's one {topic} that completely changed your perspective?",
    ],
}

THREAD_STRUCTURE = """Thread Outline: {topic}
{'='*50}

Tweet 1 (HOOK — most important):
  Pattern: {hook_pattern}
  Draft: {hook_draft}
  Chars: {hook_chars}/280

Tweet 2 (CONTEXT):
  Purpose: Set up why this matters
  Suggestion: "Here's what most people get wrong about {topic}:"
  OR: "I spent [time] learning this. Here's the breakdown:"

Tweets 3-{n} (BODY — one idea per tweet):
{body_suggestions}

Tweet {n_plus_1} (CLOSE):
  Purpose: Summarize + CTA
  Suggestion: "TL;DR:\\n\\n[3 bullet summary]\\n\\nFollow @handle for more on {topic}"

Reply to Tweet 1 (ENGAGEMENT BAIT):
  Purpose: Resurface the thread
  Suggestion: "What's your experience with {topic}? Drop it below 👇"
"""


@dataclass
class TweetDraft:
    text: str
    char_count: int
    over_limit: bool
    warnings: list = field(default_factory=list)


def validate_tweet(text: str) -> TweetDraft:
    """Validate a tweet and return analysis."""
    char_count = len(text)
    over_limit = char_count > MAX_TWEET_CHARS
    warnings = []
    if over_limit:
        warnings.append(f"Over limit by {char_count - MAX_TWEET_CHARS} characters")
    # Check for links in body
    if re.search(r'https?://\S+', text):
        warnings.append("Contains URL — consider moving link to reply (hurts reach)")
    # Check for hashtags
    hashtags = re.findall(r'#\w+', text)
    if len(hashtags) > 2:
        warnings.append(f"Too many hashtags ({len(hashtags)}) — max 1-2, ideally 0")
    elif len(hashtags) > 0:
        warnings.append(f"Has {len(hashtags)} hashtag(s) — consider removing for cleaner look")
    # Check for @mentions at start
    if text.startswith('@'):
        warnings.append("Starts with @ — will be treated as reply, not shown in timeline")
    # Readability
    lines = text.strip().split('\n')
    long_lines = [l for l in lines if len(l) > 70]
    if long_lines:
        warnings.append("Long unbroken lines — add line breaks for mobile readability")
    return TweetDraft(text=text, char_count=char_count, over_limit=over_limit, warnings=warnings)


def generate_hooks(topic: str, count: int = 10) -> list:
    """Generate hook variations for a topic."""
    # Generic fill-ins for the template placeholders
    fills = {
        "{topic}": topic,
        "{n}": "7",
        "{time}": "6 months",
        "{timeframe}": "month",
        "{claim}": f"{topic} is overrated",
        "{common_belief}": f"{topic} is simple",
        "{common_action}": f"overthinking {topic}",
        "{outcome}": "approach",
        "{verb}": "think",
        "{name}": "3-Step",
        "{did_thing}": f"changed my {topic} strategy",
        "{before_state}": "stuck",
        "{after_state}": "thriving",
        "{near_miss}": f"gave up on {topic}",
        "{unexpected_source}": "a complete beginner",
        "{thing_a}": "theory",
        "{thing_b}": "execution",
        "{mistake}": "overcomplicating it",
        "{common_mistake}": f"ignore {topic}",
        "{do_one_thing}": "change one thing",
    }
    hooks = []
    for pattern_type, patterns in HOOK_PATTERNS.items():
        for p in patterns:
            hook = p
            for placeholder, value in fills.items():
                hook = hook.replace(placeholder, value)
            hooks.append({"type": pattern_type, "hook": hook, "chars": len(hook)})
            if len(hooks) >= count:
                return hooks
    return hooks[:count]


def generate_thread_outline(topic: str, num_tweets: int = 8) -> str:
    """Generate a thread structure outline."""
    hooks = generate_hooks(topic, 3)
    best_hook = hooks[0]["hook"] if hooks else f"Everything I know about {topic}:"
    suggestions = [
        "Key insight or surprising fact",
        "Common mistake people make",
        "The counterintuitive truth",
        "A practical example or case study",
        "The framework or system",
        "Implementation steps",
        "Results or evidence",
        "The nuance most people miss",
    ]
    body = []
    for i, s in enumerate(suggestions[:num_tweets - 3], 3):
        body.append(f"  Tweet {i}: [{s}]")
    body_text = "\n".join(body)
    return f"""{'='*60}
  THREAD OUTLINE: {topic}
{'='*60}

  Tweet 1 (HOOK):
    "{best_hook}"
    Chars: {len(best_hook)}/280

  Tweet 2 (CONTEXT):
    "Here's what most people get wrong about {topic}:"

{body_text}

  Tweet {num_tweets - 1} (CLOSE):
    "TL;DR:
     • [Key takeaway 1]
     • [Key takeaway 2]
     • [Key takeaway 3]

     Follow for more on {topic}"

  Reply to Tweet 1 (BOOST):
    "What's your biggest challenge with {topic}? 👇"
{'='*60}
  RULES:
  - Each tweet must stand alone (people read out of order)
  - Max 3-4 lines per tweet (mobile readability)
  - No filler tweets — cut anything that doesn't add value
  - Hook tweet determines 90% of thread performance
{'='*60}
"""


def main():
    parser = argparse.ArgumentParser(
        description="Generate tweets, threads, and hooks with proven patterns",
        formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument("--type", choices=["tweet", "thread", "hooks", "validate"],
                        default="hooks", help="Content type to generate")
    parser.add_argument("--topic", default="", help="Topic for content generation")
    parser.add_argument("--tweets", type=int, default=8, help="Number of tweets in thread")
    parser.add_argument("--count", type=int, default=10, help="Number of hooks to generate")
    parser.add_argument("--validate", nargs="?", const="", help="Tweet text to validate")
    parser.add_argument("--json", action="store_true", help="Output JSON")
    args = parser.parse_args()

    if args.type == "validate" or args.validate is not None:
        text = args.validate or args.topic
        if not text:
            print("Error: provide tweet text to validate", file=sys.stderr)
            sys.exit(1)
        result = validate_tweet(text)
        if args.json:
            print(json.dumps(asdict(result), indent=2))
        else:
            icon = "🔴" if result.over_limit else "✅"
            print(f"\n  {icon} {result.char_count}/{MAX_TWEET_CHARS} characters")
            if result.warnings:
                for w in result.warnings:
                    print(f"  ⚠️  {w}")
            else:
                print("  No issues found.")
            print()
    elif args.type == "hooks":
        if not args.topic:
            print("Error: --topic required for hook generation", file=sys.stderr)
            sys.exit(1)
        hooks = generate_hooks(args.topic, args.count)
        if args.json:
            print(json.dumps(hooks, indent=2))
        else:
            print(f"\n{'='*60}")
            print(f"  HOOK IDEAS: {args.topic}")
            print(f"{'='*60}\n")
            for i, h in enumerate(hooks, 1):
                print(f"  {i:2d}. [{h['type']:<12}] {h['hook']}")
                print(f"      ({h['chars']} chars)")
            print()
    elif args.type == "thread":
        if not args.topic:
            print("Error: --topic required for thread generation", file=sys.stderr)
            sys.exit(1)
        print(generate_thread_outline(args.topic, args.tweets))
    elif args.type == "tweet":
        if not args.topic:
            print("Error: --topic required", file=sys.stderr)
            sys.exit(1)
        hooks = generate_hooks(args.topic, 5)
        print(f"\n  5 tweet drafts for: {args.topic}\n")
        for i, h in enumerate(hooks, 1):
            print(f"  {i}. {h['hook']}")
            print(f"     ({h['chars']} chars)\n")


if __name__ == "__main__":
    main()
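As a self-contained illustration of the validation rules implemented in `validate_tweet` (character limit, leading @, hashtag count), here is a minimal sketch; `quick_validate` is a hypothetical helper for demonstration, not part of the script:

```python
import re

MAX_TWEET_CHARS = 280

def quick_validate(text: str) -> list:
    """Return warnings for a draft tweet (subset of validate_tweet's checks)."""
    warnings = []
    if len(text) > MAX_TWEET_CHARS:
        # Over the hard character limit
        warnings.append(f"Over limit by {len(text) - MAX_TWEET_CHARS} characters")
    if text.startswith("@"):
        # Leading @ makes the post a reply, hiding it from followers' timelines
        warnings.append("Starts with @ (treated as a reply)")
    if len(re.findall(r"#\w+", text)) > 2:
        warnings.append("Too many hashtags (keep to 1-2, ideally 0)")
    return warnings

print(quick_validate("@someone great point!"))  # flags the leading @
```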