Transform your content to become the preferred source for AI search engines like ChatGPT, Perplexity, and Google AI Overviews. By restructuring information for clarity, you ensure your brand gets quoted directly in AI-generated answers instead of just appearing as a link in traditional results. Reach for this tool when you want to capture visibility where users get answers instantly without clicking through to a website.
name: "ai-seo"
description: "Optimize content to get cited by AI search engines — ChatGPT, Perplexity, Google AI Overviews, Claude, Gemini, Copilot. Use when you want your content to appear in AI-generated answers, not just ranked in blue links. Triggers: 'optimize for AI search', 'get cited by ChatGPT', 'AI Overviews', 'Perplexity citations', 'AI SEO', 'generative search', 'LLM visibility', 'GEO' (generative engine optimization). NOT for traditional SEO ranking (use seo-audit). NOT for content creation (use content-production)."
license: MIT
metadata:
version: 1.0.0
author: Alireza Rezvani
category: marketing
updated: 2026-03-06
AI SEO
You are an expert in generative engine optimization (GEO) — the discipline of making content citeable by AI search platforms. Your goal is to help content get extracted, quoted, and cited by ChatGPT, Perplexity, Google AI Overviews, Claude, Gemini, and Microsoft Copilot.
This is not traditional SEO. Traditional SEO gets you ranked. AI SEO gets you cited. Those are different games with different rules.
Before Starting
Check for context first:
If marketing-context.md exists, read it. It contains existing keyword targets, content inventory, and competitor information — all of which inform where to start.
Gather what you need:
URL or content to audit — specific page, or a topic area to assess
Target queries — what questions do you want AI systems to answer using your content?
Current visibility — are you already appearing in any AI search results for your targets?
Content inventory — do you have existing pieces to optimize, or are you starting from scratch?
If the user doesn’t know their target queries: “What questions would your ideal customer ask an AI assistant that you’d want your brand to answer?”
How This Skill Works
Three modes. Each builds on the previous, but you can start anywhere:
Mode 1: AI Visibility Audit
Map your current presence (or absence) across AI search platforms. Understand what’s getting cited, what’s getting ignored, and why.
Mode 2: Content Optimization
Restructure and enhance content to match what AI systems extract. This is the execution mode — specific patterns, specific changes.
Mode 3: Monitoring
Set up systems to track AI citations over time — so you know when you appear, when you disappear, and when a competitor takes your spot.
How AI Search Works (and Why It’s Different)
Traditional SEO: Google ranks your page. User clicks through. You get traffic.
AI search: The AI reads your page (or has already indexed it), extracts the answer, and presents it to the user — often without a click. You get cited, not ranked.
The fundamental shift:
Ranked = user sees your link and decides whether to click
Cited = AI decides your content answers the question; user may never visit your site
This changes everything:
Keyword density matters less than answer clarity
Page authority matters less than answer extractability
Click-through rate is irrelevant — the AI has already decided you’re the answer
But here’s what traditional SEO and AI SEO share: authority still matters. AI systems prefer sources they consider credible — established domains, cited works, expert authorship. You still need backlinks and domain trust. You just also need structure.
See references/ai-search-landscape.md for how each platform (Google AI Overviews, ChatGPT, Perplexity, Claude, Gemini, Copilot) selects and cites sources.
The 3 Pillars of AI Citability
Every AI SEO decision flows from these three:
Pillar 1: Structure (Extractable)
AI systems pull content in chunks. They don’t read your whole article and then paraphrase it — they find the paragraph, list, or definition that directly answers the query and lift it.
Your content needs to be structured so that answers are self-contained and extractable:
Definition block for “what is X”
Numbered steps for “how to do X”
Comparison table for “X vs Y”
FAQ block for “questions about X”
Statistics with attribution for “data on X”
Content that buries the answer deep in a 4,000-word essay is not extractable. The AI won't find it.
Pillar 2: Authority (Citable)
AI systems don’t just pull the most relevant answer — they pull the most credible one. Authority signals in the AI era:
Domain authority: High-DA domains get preferential treatment (traditional SEO signal still applies)
Author attribution: Named authors with credentials beat anonymous pages
Citation chain: Your content cites credible sources → you’re seen as credible in turn
Recency: AI systems prefer current information for time-sensitive queries
Original data: Pages with proprietary research, surveys, or studies get cited more — AI systems value unique data they can’t get elsewhere
Pillar 3: Presence (Discoverable)
AI systems need to be able to find and index your content. This is the technical layer:
Bot access: AI crawlers must be allowed in robots.txt (GPTBot, PerplexityBot, ClaudeBot, etc.)
Crawlability: Fast page load, clean HTML, no JavaScript-only content
Schema markup: Structured data (Article, FAQPage, HowTo, Product) helps AI systems understand your content type
Canonical signals: Duplicate content confuses AI systems even more than traditional search
HTTPS and security: AI crawlers won’t index pages with security warnings
Mode 1: AI Visibility Audit
Step 1 — Bot Access Check
First: confirm AI crawlers can access your site.
Check robots.txt at yourdomain.com/robots.txt. Verify these bots are NOT blocked:
```
# Should NOT be blocked (allow AI indexing):
GPTBot             # OpenAI / ChatGPT
PerplexityBot      # Perplexity
ClaudeBot          # Anthropic / Claude
Google-Extended    # Google AI Overviews
anthropic-ai       # Anthropic (alternate identifier)
Applebot-Extended  # Apple Intelligence
cohere-ai          # Cohere
```
If any AI bot is blocked, flag it. That’s an immediate visibility killer for that platform.
To block AI training while still allowing search, you can use selective Disallow: rules per user agent. Understand the limitation: blocking training is not cleanly separable from blocking citation, because several platforms use the same crawler for both.
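The bot-access check can be scripted with Python's standard library. A minimal sketch using `urllib.robotparser` (the bot list is an illustrative subset, and the sample robots.txt is invented for the demo):

```python
from urllib import robotparser

# AI crawler user agents to verify (illustrative subset)
AI_BOTS = ["GPTBot", "PerplexityBot", "ClaudeBot", "Google-Extended"]

def blocked_ai_bots(robots_txt: str, url: str = "https://example.com/") -> list[str]:
    """Return the AI bots that this robots.txt blocks for the given URL."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return [bot for bot in AI_BOTS if not rp.can_fetch(bot, url)]

sample = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(blocked_ai_bots(sample))  # ['GPTBot']
```

In practice you would fetch `yourdomain.com/robots.txt` and pass its text in; any bot the function returns is an immediate visibility killer for that platform.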
Step 2 — Current Citation Audit
Manually test your target queries on each platform:
| Platform | How to test |
|---|---|
| Perplexity | Search your target query at perplexity.ai — check Sources panel |
| ChatGPT | Search with web browsing enabled — check citations |
| Google AI Overviews | Google your query — check if AI Overview appears, who's cited |
| Microsoft Copilot | Search at copilot.microsoft.com — check source cards |
For each query, document:
Are you cited? (yes/no)
Which competitors are cited?
What content type gets cited? (definition? list? stats?)
How is the answer structured?
This tells you the pattern that’s currently winning. Build toward it.
Step 3 — Content Structure Audit
Review your key pages against the Extractability Checklist:
Does the page have a clear, answerable definition of its core concept in the first 200 words?
Are there numbered lists or step-by-step sections for process-oriented queries?
Does the page have a FAQ section with direct Q&A pairs?
Are statistics and data points cited with source name and year?
Are comparisons done in table format (not narrative)?
Is the page’s H1 phrased as the answer to a question, or as a statement?
Does schema markup exist? (FAQPage, HowTo, Article, etc.)
Score: 0-3 checks = needs major restructuring. 4-5 = good baseline. 6-7 = strong.
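A first-pass version of this audit can be automated against a page's raw HTML. The sketch below is a crude heuristic, not a standard: the regexes only approximate a few checklist items, and the signal names are my own labels.

```python
import re

# Crude signals for a few checklist items (heuristic, not definitive)
CHECKS = {
    "numbered steps": re.compile(r"<ol\b", re.I),
    "comparison table": re.compile(r"<table\b", re.I),
    "FAQ schema": re.compile(r'"@type"\s*:\s*"FAQPage"'),
    "HowTo schema": re.compile(r'"@type"\s*:\s*"HowTo"'),
    "question heading": re.compile(r"<h[12][^>]*>[^<]*\?", re.I),
}

def extractability_score(html: str) -> tuple[int, list[str]]:
    """Count which extractability signals appear anywhere in the HTML."""
    hits = [name for name, rx in CHECKS.items() if rx.search(html)]
    return len(hits), hits

page = "<h1>What is churn?</h1><ol><li>Step one</li></ol>"
score, hits = extractability_score(page)
print(score, hits)  # 2 ['numbered steps', 'question heading']
```

Use it only to triage which pages deserve a full manual review; the definition-placement and attribution checks still need human judgment.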
Mode 2: Content Optimization
The Content Patterns That Get Cited
These are the block types AI systems reliably extract. Add at least 2-3 per key page.
Pattern 1: Definition Block
The AI’s answer to “what is X” almost always comes from a tight, self-contained definition. Format:
[Term] is [concise definition in 1-2 sentences]. [One sentence of context or why it matters].
Placed within the first 300 words of the page. No hedging, no preamble. Just the definition.
Pattern 2: Numbered Steps (How-To)
For process queries (“how do I X”), AI systems pull numbered steps almost universally. Requirements:
Steps are numbered
Each step is actionable (verb-first)
Each step is self-contained (could be quoted alone and still make sense)
5-10 steps maximum (AI truncates longer lists)
Pattern 3: Comparison Table
“X vs Y” queries almost always result in table citations. Two-column tables comparing features, costs, pros/cons — these get extracted verbatim. Format matters: clean markdown table with headers wins.
Pattern 4: FAQ Block
Explicit Q&A pairs signal to AI: “this is the question, this is the answer.” Mark up with FAQPage schema. Questions should exactly match how people phrase queries (voice search, question-style).
Pattern 5: Statistics With Attribution
“According to [Source Name] ([Year]), X% of [population] [finding].” This format is extractable because it has a complete citation. Naked statistics without attribution get deprioritized — the AI can’t verify the source.
Pattern 6: Expert Quote Block
Attributed quotes from named experts get cited. The AI picks up: “According to [Name], [Role at Organization]: ‘[quote]’” as a citable unit. Build in a few of these per key piece.
Rewriting for Extractability
When optimizing existing content:
Lead with the answer — The first paragraph should contain the core answer to the target query. Don’t save it for the conclusion.
Self-contained sections — Every H2 section should be answerable as a standalone excerpt. If you have to read the introduction to understand a section, it’s not self-contained.
Specific over vague — “Response time improved by 40%” beats “significant improvement.” AI systems prefer citable specifics.
Plain language summaries — After complex explanations, add a 1-2 sentence plain language summary. This is what AI often lifts.
Named sources — Replace “experts say” with “[Researcher Name], [Year].” Replace “studies show” with “[Organization] found in their [Year] survey.”
Schema Markup for AI Discoverability
Schema doesn’t directly make you appear in AI results — but it helps AI systems understand your content type and structure. Priority schemas:
| Schema Type | Use When | Impact |
|---|---|---|
| Article | Any editorial content | Establishes content as authoritative information |
| FAQPage | You have a FAQ section | High — AI extracts Q&A pairs directly |
| HowTo | Step-by-step guides | High — AI uses step structure for process queries |
| Product | Product pages | Medium — appears in product comparison queries |
| Organization | Company pages | Medium — establishes entity authority |
| Person | Author pages | Medium — author credibility signal |
Implement via JSON-LD in the page `<head>`. Validate with the Schema.org validator at validator.schema.org.
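As a sketch of what the JSON-LD looks like, the snippet below generates a FAQPage block from question/answer pairs. The field names follow Schema.org's FAQPage type; the helper function and the example Q&A are my own.

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Build a FAQPage JSON-LD script tag from (question, answer) pairs."""
    schema = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return f'<script type="application/ld+json">{json.dumps(schema)}</script>'

tag = faq_jsonld([
    ("What is churn rate?",
     "The percentage of customers who cancel within a given period."),
])
print(tag[:60])
```

Paste the resulting tag into the page head (or template it server-side), then run it through the validator before shipping.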
Mode 3: Monitoring
AI search is volatile. Citations change. Track them.
Manual Citation Tracking
Weekly: test your top 10 target queries on Perplexity and ChatGPT. Log:
Were you cited? (yes/no)
Rank in citations (1st source, 2nd, etc.)
What text was used?
This takes ~20 minutes/week. Do it manually for now: reliable automated solutions don't exist yet.
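A minimal way to keep that weekly log in a spreadsheet-friendly file. The file name and column set are my choices, mirroring the fields above:

```python
import csv
import datetime
from pathlib import Path

LOG = Path("ai_citation_log.csv")
FIELDS = ["date", "platform", "query", "cited", "position", "excerpt"]

def log_citation(platform: str, query: str, cited: bool,
                 position: str = "", excerpt: str = "") -> None:
    """Append one manual test result; writes the header row on first use."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": datetime.date.today().isoformat(),
            "platform": platform,
            "query": query,
            "cited": cited,
            "position": position,
            "excerpt": excerpt,
        })

log_citation("perplexity", "how to reduce SaaS churn", True, position="2")
```

One row per query per platform per week is enough to see trends after a month.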
Google Search Console for AI Overviews
Google Search Console now shows impressions in AI Overviews under “Search type: AI Overviews” filter. Check:
Which queries trigger AI Overview impressions for your site
Click-through rate from AI Overviews (typically 50-70% lower than organic)
If AI Overview impressions drop sharply for a query, diagnose in this order:
Check if competitors published something more extractable on the same topic
Check if your robots.txt changed (blocking AI bots means instant disappearance)
Check if your page structure changed significantly (restructuring can break citation patterns)
Check if your domain authority dropped (backlink loss affects AI citation too)
Proactive Triggers
Flag these without being asked:
AI bots blocked in robots.txt — If GPTBot, PerplexityBot, or ClaudeBot are blocked, flag it immediately. The site has zero AI visibility on those platforms until it's fixed, and it's a 5-minute fix. This trumps everything else.
No definition block on target pages — If the page targets informational queries but has no self-contained definition in the first 300 words, it won’t win definitional AI Overviews. Flag before doing anything else.
Unattributed statistics — If key pages contain statistics without named sources and years, they’re less citable than competitor pages that do. Flag all naked stats.
Schema markup absent — If the site has no FAQPage or HowTo schema on relevant pages, flag it as a quick structural win with asymmetric impact for process and FAQ queries.
JavaScript-rendered content — If important content only appears after JavaScript execution, AI crawlers may not see it at all. Flag content that’s hidden behind JS rendering.
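The naked-statistics trigger can be partially automated. The sketch below is a rough heuristic: the sentence splitter is naive and the attribution cues (a source phrase or a year) are simplistic assumptions, so treat its output as candidates to review, not verdicts.

```python
import re

# A percentage figure, e.g. "40%" or "69.8%"
STAT = re.compile(r"\b\d+(?:\.\d+)?%")
# Signals that a source is named: "according to", study words, or a year
ATTRIBUTION = re.compile(
    r"\baccording to\b|\b(?:survey|study|report)\b|\b(?:19|20)\d{2}\b", re.I
)

def naked_stats(text: str) -> list[str]:
    """Return sentences containing a percentage but no attribution cue."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences
            if STAT.search(s) and not ATTRIBUTION.search(s)]

copy = ("Cart abandonment costs revenue, and 70% of carts are abandoned. "
        "According to the Baymard Institute (2024), 69.8% of carts are abandoned.")
print(naked_stats(copy))
# ['Cart abandonment costs revenue, and 70% of carts are abandoned.']
```

Every sentence it flags either needs a named source and year added, or should be cut.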
Output standards for every audit and recommendation:
What + Why + How — every finding includes all three
Actions have owners and deadlines — no “consider reviewing…”
Confidence tagging — 🟢 verified (confirmed by citation test) / 🟡 medium (pattern-based) / 🔴 assumed (extrapolated from limited data)
AI SEO is still a young field. Be honest about confidence levels. What gets cited can change as platforms evolve. State what’s proven vs. what’s pattern-matching.
Related Skills
content-production: Use to create the underlying content before optimizing for AI citation. Good AI SEO requires good content first.
content-humanizer: Use after writing for AI SEO. AI-sounding content ironically performs worse in AI citation — AI systems prefer content that reads credibly, which usually means human-sounding.
seo-audit: Use for traditional search ranking optimization. Run both — AI SEO and traditional SEO are complementary, not competing. Many signals overlap.
content-strategy: Use when deciding which topics and queries to target for AI visibility. Strategy first, then optimize.
AI Search Landscape
How each major AI search platform selects, weights, and cites sources. Use this to calibrate your optimization strategy per platform.
Last updated: 2026-03 — this landscape changes fast. Verify platform behavior with manual testing before making major decisions.
The Fundamental Model
Every AI search platform follows the same broad pipeline:
Index — Crawl and store web content (or use a third-party index)
Retrieve — For a given query, retrieve candidate documents
Extract — Pull the most relevant passages from those documents
Generate — Synthesize an answer, often citing the sources
Present — Show the answer to the user, with or without sources visible
Your leverage points are steps 1-3. By the time generation happens, you've either been selected or you haven't.
Platform-by-Platform Breakdown
Google AI Overviews
What it is: AI-generated answer boxes appearing above organic search results. Rollout expanded globally in 2024-2025.
How it selects sources:
Uses Google's own index (you must rank in traditional Google search first — this is NOT optional)
Strongly prefers pages that already rank in the top 10 for the query
Favors content with structured data (FAQPage, HowTo schemas)
The featured passage is typically lifted from a page's most extractable paragraph — usually a definition or a direct answer near the top
Recency matters more here than elsewhere for news-adjacent queries
Citation behavior:
Shows 3-7 source links typically
Cited sources don't always correlate with position 1-3 in organic results
Pages that had featured snippets before AI Overviews launched tend to appear in AI Overviews
What to prioritize for Google AI Overviews:
Rank in traditional search first (prerequisite)
Add FAQPage schema
Put a direct answer in the first 200 words
Get backlinks from high-authority sites (still matters)
Set Google-Extended to Allow in robots.txt
Monitoring: Google Search Console → Performance → Search type: AI Overviews
ChatGPT (with Browsing / Search)
What it is: OpenAI's ChatGPT has web browsing capability (via Bing) plus its own live search product. When users ask factual questions or enable browsing, it retrieves and cites web sources.
How it selects sources:
Uses Bing's index (Microsoft partnership) — Bing crawl and indexing quality matters
GPTBot also crawls independently for training data (distinct from search citations)
For search-backed answers: pulls several sources, synthesizes, cites inline
Prefers authoritative domains — news outlets, Wikipedia, academic sources, established company blogs
Content with clear, extractable answers wins over dense narrative
Citation behavior:
Inline citations in the answer ("according to [Source]")
Sources panel at the bottom
Not all cited sources get equal weight in the synthesis
What to prioritize for ChatGPT:
Ensure Bing has indexed your pages (submit to Bing Webmaster Tools)
Allow GPTBot in robots.txt
Structure content with explicit definition and step patterns
Author attribution with credentials helps — include author bylines
Original data and research get preferential citation
Perplexity
What it is: AI-native search engine built on real-time web retrieval. Every answer cites sources with a numbered reference panel. Among the most transparent about citation.
How it selects sources:
Has its own crawler (PerplexityBot) plus access to third-party indexes
Real-time retrieval for every query — very current
For B2B companies: professional tone and industry-specific expertise matters more here
FAQ and definition patterns work well for business query types
Cross-Platform Summary
| Signal | Google AI Overviews | ChatGPT | Perplexity | Claude | Copilot |
|---|---|---|---|---|---|
| Must rank in traditional search | ✅ Yes | Bing only | No | No | Bing only |
| Bot to allow | Google-Extended | GPTBot | PerplexityBot | ClaudeBot | (via Bing) |
| Schema markup impact | High | Medium | Low | Medium | Medium |
| Content recency weight | High | Medium | Very high | Medium | Medium |
| Original data advantage | High | High | High | High | High |
| FAQ pattern extraction | Very high | High | High | Medium | High |
| Numbered steps extraction | High | High | Very high | High | High |
| Author attribution impact | Medium | High | Low | High | Medium |
What No Platform Does (Yet)
Things that are widely assumed but not confirmed:
Direct "opt-in to citations" programs: None of the major platforms have a verified publisher program that guarantees citation
Predictable citation ranking: Even with perfect structure, citations are non-deterministic — the same query on the same platform can produce different citations on consecutive days
Real-time citation tracking: No platform offers publishers a dashboard showing when they're cited and for which queries (Google Search Console for AI Overviews is the closest, and it's limited)
Plan your AI SEO strategy for influence, not for guaranteed outcomes. Maximize your signal quality, then track and iterate.
Content Patterns for AI Citability
Ready-to-use block templates for each content pattern that AI search engines reliably extract and cite. Copy, adapt, and embed in your pages.
Why Patterns Matter
AI systems don't read pages the way humans do. They scan for extractable chunks — self-contained passages that can be pulled out and quoted without losing meaning.
The patterns below are structured to be self-contained by design. If the AI pulls paragraph 3 without paragraph 2, the citation should still make sense.
Pattern 1: Definition Block
Used for: "What is X" queries — the most common AI Overview trigger.
Requirements:
First sentence: direct definition
Second sentence: why it matters or how it works
Third sentence (optional): example or context
Placed in first 300 words of the page
Template:
**[Term]** is [precise definition — what it is, what it does, who uses it]. [One sentence on why it matters or what problem it solves]. [Optional: one sentence example — "For example, a SaaS company might use X to..."].
Example:
**Churn rate** is the percentage of customers who cancel or stop using a service within a given period, typically measured monthly or annually. It directly impacts recurring revenue — a 5% monthly churn means losing over half your customer base each year. For subscription SaaS, a healthy monthly churn rate is typically below 2%.
Tips:
Bold the term on its first use
Don't start with "In the world of..." or "When it comes to..."
The definition should work even if the reader knows nothing about the topic
Pattern 2: Numbered Steps (How-To)
Used for: "How to X" and "How do I X" queries.
Requirements:
Numbered list (not bulleted)
Each step starts with an action verb
Each step is self-contained (can be cited alone)
5-10 steps maximum
Pair with HowTo schema markup
Template:
```markdown
## How to [Task]

1. **[Verb phrase]** — [1-2 sentence explanation of this specific step]
2. **[Verb phrase]** — [1-2 sentence explanation]
3. **[Verb phrase]** — [1-2 sentence explanation]
4. **[Verb phrase]** — [1-2 sentence explanation]
5. **[Verb phrase]** — [1-2 sentence explanation]
```
Example:
```markdown
## How to Reduce SaaS Churn

1. **Define your activation event** — Identify the specific action that signals a user has experienced core product value. For Slack, it's 2,000 messages sent. For Dropbox, it's saving the first file.
2. **Instrument the activation funnel** — Add event tracking from signup to activation. Find the step where most users drop off — that's your highest-leverage point.
3. **Build a customer health score** — Combine login frequency, feature adoption, and support ticket volume into a single score. Customers below 40 get proactive outreach.
4. **Segment churn by cohort** — Not all churn looks the same. Compare churn rates by acquisition channel, onboarding path, and company size to find patterns.
5. **Interview churned customers** — The customers who left quietly are more valuable than the ones who complained. Call 10 churned accounts per month and ask what they were trying to accomplish.
```
Pattern 3: Comparison Table
Used for: "X vs Y" and other head-to-head comparison queries.
Tips:
Use simple values — "Yes / No / Partial" beats long prose in cells
Include a "Best for" row — AI systems use this for recommendation queries
Add a sentence below the table summarizing the verdict: "X is best for teams that need A; Y is better when B matters more."
Pattern 4: FAQ Block
Used for: Question-style queries, People Also Ask queries, voice search.
Requirements:
Question phrased exactly as someone would ask it (natural language)
Answer is complete in 2-4 sentences (no "read more in section 3")
5-10 FAQs per block
Pair with FAQPage schema markup
Template:
```markdown
## Frequently Asked Questions

**What is [X]?**
[2-4 sentence complete answer]

**How does [X] work?**
[2-4 sentence complete answer]

**What's the difference between [X] and [Y]?**
[2-4 sentence complete answer]

**How much does [X] cost?**
[2-4 sentence complete answer]

**Is [X] right for [audience]?**
[2-4 sentence complete answer]
```
Tips:
Write questions the way users actually type or speak them — use Google's "People Also Ask" as a source
Answers should be complete without needing context from anywhere else on the page
Don't start answers with "Great question" or "That's a common question" — just answer
Pattern 5: Statistic with Attribution
Used for: Data queries, "how many" queries, research-backed claims.
Requirements:
Named source (not "a study" — the actual organization name)
Year of the data
Specific number (not "many" or "most")
Context (what the number means)
Template:
According to [Organization Name]'s [Report Name] ([Year]), [specific statistic with units]. [One sentence on what this means or why it matters].
Example:
According to the Baymard Institute's 2024 UX benchmarking study, 69.8% of online shopping carts are abandoned before purchase. For a $1M/month ecommerce store, recovering just 5% of abandoned carts represents $35,000 in monthly revenue.
Tips:
Link to the original source (AI systems and readers both benefit)
If data is from your own research, say so: "In our 2025 survey of 500 SaaS founders..."
Proprietary data is the highest-value citation target — AI systems actively seek original research
Pattern 6: Expert Quote Block
Used for: Authority building, "what do experts say" queries.
Requirements:
Full name of the person quoted
Their title and organization
A quote that's substantive (not a generic endorsement)
Brief context sentence before the quote
Template:
```markdown
[Context sentence explaining why this person's view matters.]

"[Direct quote — specific, substantive, something only they would say]," says [Full Name], [Title] at [Organization].
```
Example:
```markdown
Patrick Campbell, founder of ProfitWell (acquired by Paddle), studied pricing data from over 30,000 SaaS companies before reaching a counterintuitive conclusion about churn.

"Most churn that looks like pricing dissatisfaction is actually failed onboarding," says Campbell. "The customer never saw the value that justified the price. That's a different problem than being too expensive."
```
Tips:
Don't use generic quotes ("innovation is key to success") — they add nothing
Quotes should contain a specific claim, data point, or perspective
If quoting your own team: "[Name], [Title] at [Company Name]" is still valid
Live quotes (from interviews or primary research) outperform secondary quotes from other articles
Pattern 7: Quick-Scan Summary Box
Used for: Queries where users want the TL;DR before committing to the full article.
Requirements:
Placed near the top of the article (after the intro)
3-7 key takeaways
Each bullet stands alone — no context required
Labeled clearly ("Key Takeaways" or "Quick Summary")
Template:
```markdown
**Key Takeaways**

- [Specific, complete takeaway — could be read as a tweet]
- [Specific, complete takeaway]
- [Specific, complete takeaway]
- [Specific, complete takeaway]
- [Specific, complete takeaway]
```
Tips:
This is often the block AI systems extract for "summary" type queries
Make each bullet specific: "Monthly churn below 2% is considered healthy for most SaaS" beats "Churn should be low"
Don't repeat the article intro verbatim — these should be the most actionable insights
Combining Patterns
The most citable pages combine multiple patterns throughout the piece:
Recommended page structure for maximum AI extractability:
Definition block (first 300 words)
Quick summary box (right after intro)
Body sections with numbered steps or subsections
Data points with full attribution throughout
Comparison table (if competitive topic)
FAQ block (before conclusion)
Expert quote (to add authority)
A page with all 7 patterns has significantly more extractable surface area than a page with prose only. The AI has more options to pull from and a higher probability of finding something that perfectly matches the query.
AI Visibility Monitoring Guide
How to track whether your content is getting cited by AI search engines — and what to do when citations change.
The honest truth: AI citation monitoring is immature. There's no Google Search Console equivalent for Perplexity or ChatGPT. Most tracking is manual today. This guide covers what works now and what to watch for as tooling matures.
What You're Tracking
Goal: Know when you appear in AI answers, for which queries, on which platforms — and detect changes before your traffic is affected.
The challenge: Most AI search platforms don't give publishers visibility into their citation data. You're reverse-engineering your presence through manual testing and indirect signals.
Four things to track:
Citation presence — are you appearing at all?
Citation consistency — do you appear most of the time or occasionally?
Competitor citations — who else is cited for your target queries?
Traffic signals — is AI-driven traffic changing?
Platform-by-Platform Monitoring
Google AI Overviews — Best Current Tooling
Google Search Console is the best data source available for any AI platform:
Setup:
Open Google Search Console → Performance → Search results
Add filter: "Search type" → "AI Overviews"
Set date range to last 90 days minimum
What you see:
Queries where your pages appeared in AI Overviews
Impressions from AI Overviews
Clicks from AI Overviews (usually much lower than organic — users get the answer in the AI box)
CTR from AI Overviews
What to do with it:
Sort by impressions: these are your current AI Overview presences
Sort by clicks: these are the queries where users still clicked through (high-value)
Identify queries where you have impressions but zero clicks — consider whether that's acceptable or if you need to gate more value behind the click
Watch for queries where impressions drop sharply — you may have lost an AI Overview position
Frequency: Weekly check. Pull a CSV monthly for trend analysis.
Perplexity — Manual Testing Protocol
Perplexity has no publisher dashboard. Manual testing is the only reliable method.
Weekly test protocol:
Identify your 10-20 highest-priority target queries
Search each query on perplexity.ai in an incognito window
Check the Sources panel on the right side
Record: cited (yes/no), position in sources (1st, 2nd, 3rd...), which page was cited
What to record in your tracking log:

| Date | Query | Cited? | Position | Cited URL | Top Competitor |
|---|---|---|---|---|---|
| 2026-03-06 | "how to reduce SaaS churn" | Yes | 2 | /blog/churn-reduction | competitor.com |
| 2026-03-06 | "SaaS churn rate benchmark" | No | — | — | competitor.com |
Patterns to watch for:
Same query cited 4/4 weeks → stable citation (protect it)
Citation appearing intermittently (2 out of 4 weeks) → fragile position (strengthen the page)
Consistent non-citation → gap to fill (page missing extractable patterns)
Frequency: Weekly for top 10 queries. Monthly for the full list.
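The stable/fragile/gap thresholds above can be encoded directly so the weekly log classifies itself. The function name and label wording are mine; the logic follows the patterns listed above for a four-week window:

```python
def citation_status(last_four_weeks: list[bool]) -> str:
    """Classify a query's citation pattern over the last 4 weekly tests."""
    hits = sum(last_four_weeks)
    if hits == len(last_four_weeks):
        return "stable: protect it"
    if hits == 0:
        return "gap: page missing extractable patterns"
    return "fragile: strengthen the page"

print(citation_status([True, True, True, True]))    # stable: protect it
print(citation_status([True, False, True, False]))  # fragile: strengthen the page
```

Run it per query after each week's testing and prioritize the fragile ones first, since those positions are actively at risk.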
ChatGPT — Manual Testing Protocol
Requirements: ChatGPT Plus (for web browsing) or ChatGPT with Search enabled.
Test protocol:
Start a new conversation (fresh context window)
Enable browsing / search mode
Ask your target query as a natural question
Check citations in the response
Click through to verify which pages are cited
Note: ChatGPT citations vary by session. The same query may cite different sources on consecutive days. This is by design — treat it as probabilistic. Your goal is to appear in the citation set, not to appear every time.
Query types to test:
Natural question queries ("what's the best email marketing software for small teams?")
Comparison queries ("mailchimp vs klaviyo")
Frequency: Monthly (due to variability, weekly is too noisy to be useful).
Microsoft Copilot — Manual Testing Protocol
Access at copilot.microsoft.com or via Edge sidebar.
Same protocol as ChatGPT. Look for source cards that appear with citations. Copilot integrates Bing's index, so if your Bing presence is strong, Copilot citations follow.
Bing indexing check:
Submit sitemap to Bing Webmaster Tools
Run URL inspection to verify pages are indexed
Check Bing Webmaster Tools for crawl errors on key pages
Frequency: Monthly.
Traffic Analysis for AI Citation Signals
Even without direct citation data, traffic patterns can signal AI search activity:
Zero-Click Traffic Signals
When AI answers queries, fewer users click through. Watch for:
Impression growth + traffic decline: If Google Search Console shows impressions growing for a keyword but organic clicks dropping, an AI Overview may be answering the query. You're being cited but not visited.
Query pattern in GSC: If informational queries show impression growth but navigational/commercial queries stay flat, AI Overviews are likely answering the informational queries.
Direct Traffic Anomalies
Some AI platforms (Claude, Gemini) show traffic as "direct" since users often copy/paste URLs rather than clicking. An increase in direct traffic to specific content pages (not your homepage) can signal AI-driven attention.
Referral Traffic from AI Platforms
Perplexity, ChatGPT, and Claude all send some referral traffic when users click cited sources. Set up in Google Analytics 4:
Create a custom dimension tracking referral source
When a citation disappears, match the likely cause to its fix:

| Likely cause | Fix |
|---|---|
| Competitor published more extractable content | Strengthen the page: more specific data, better structure, schema markup |
| Authority drop | Rebuild backlinks; also check for a manual penalty in Google Search Console |
| Page speed regressed | Fix Core Web Vitals — AI crawlers deprioritize slow pages |
| Content became outdated | Update with current data and year |
Emerging Tools to Watch
The AI citation monitoring space is early-stage. Tools being developed as of early 2026:
Semrush AI toolkit — Testing AI Overview tracking features
Ahrefs AI Overviews — Added to their rank tracker
Perplexity publisher analytics — Announced but not launched at time of writing
OpenAI publisher program — Rumored; no confirmed release date
Track announcements from these vendors. First-mover advantage on publisher analytics will be significant.
Until then: Manual testing + Google Search Console is the most reliable stack available. Don't let perfect be the enemy of done — weekly manual testing surfaces 80% of what you need to know.