Query Fan-Out: Original Research on How AI Search Multiplies Every Query (And Why 88% of Brands Are Invisible)

Query fan-out (also written as "query fanout") is the process by which AI search systems decompose a single user query into 8–12 parallel sub-queries, retrieve content for each, and synthesize the results into one AI-generated answer. Google AI Mode, ChatGPT, and Perplexity all use query fan-out — sometimes called query decomposition — to generate comprehensive responses. The meaning is the same regardless of platform: one question in, a dozen retrieval queries out.
A December 2025 study by Surfer SEO analyzing 173,902 URLs across 10,000 keywords found that 68% of pages cited in AI Overviews were NOT in the top 10 organic results. That single finding upends two decades of SEO strategy. If your content ranks well in traditional Google search, you might assume AI search systems will cite you too. Our original quantitative analysis reveals you would be wrong approximately 88% of the time.
Query fan-out explains this disconnect. When a user types a single query, the AI system does not retrieve the top-ranking pages. It fires 8–12 parallel sub-queries — each targeting a different angle of the user's question — and synthesizes the results into a single answer. This fundamentally changes which content gets cited and why.
What You'll Learn
How query fan-out transforms one user search into 8-12+ retrieval events across AI platforms
Why brands optimizing only for traditional SEO miss approximately 88% of AI citation opportunities (with the math)
Five proprietary models for measuring and predicting AI search visibility: FME, TCG, CPM, FDC, and CPFI
How fan-out behavior differs across Google AI Mode, ChatGPT, and Perplexity
Why 73% of fan-out queries change with every search and how topical coverage protects your visibility
What leading SEO experts like Aleyda Solis, Marie Haynes, and Simon Schnieders say about optimizing for fan-out
The risks, limitations, and future evolution of query fan-out through 2026 and beyond
This article presents original quantitative research from the Ekamoira Research Team. We built five proprietary models by synthesizing verified industry data from Surfer SEO, iPullRank, Google, Wellows, WordLift, and Profound to create the first comprehensive framework for understanding AI search visibility through the lens of query fan-out. With 60% of searches ending without clicks, understanding how AI systems select sources has never been more critical.
Metric | Value | Source |
|---|---|---|
URLs analyzed | 173,902 | Surfer SEO, Dec 2025 |
Fan-out citation lift | 161% | Surfer SEO, Dec 2025 |
AIO-cited pages outside top 10 | 68% | Surfer SEO, Dec 2025 |
Fan-out query stability | 27% | Surfer SEO, Dec 2025 |
AI Overviews keyword trigger rate | 76% | Surfer SEO, Dec 2025 |
Cosine similarity citation multiplier | 7.3x at 0.88+ | Wellows, Dec 2025 |
Query variant types in AI search | 8 distinct types | iPullRank, Dec 2025 |
AI Mode monthly users (US + India) | 100M+ | TechCrunch, Jul 2025 |
What Is Query Fan-Out and Why Does It Matter for AI Search?
Query fan-out is the process by which AI search systems decompose a single user query into multiple parallel sub-queries to retrieve diverse, comprehensive information before generating an answer. According to Google's official AI Mode announcement from May 2025, AI Mode uses a custom Gemini 2.5 model specifically designed for query fan-out, and its Deep Search feature can issue hundreds of sub-queries for complex questions.
The scale of this transformation is significant. An analysis by iPullRank (December 2025) found that AI search queries average 70-80 words compared to 3-4 words for traditional searches, a 17-26x increase in query complexity. The same research confirmed that Google fires hundreds of searches per single user query in AI Mode, and that systems cap retrieval at approximately 20 iterations before terminating. iPullRank identified eight distinct query variant types that AI systems generate during fan-out.
For content creators and SEO professionals, this means that a single user question like "what is the best CRM for small businesses" does not produce a single retrieval event. Instead, the AI system might generate sub-queries about pricing comparisons, integration capabilities, user reviews, implementation timelines, industry-specific use cases, scalability features, customer support quality, and data security certifications. Each of those sub-queries retrieves different content, and the final answer synthesizes across all of them.
Key Finding: According to TechCrunch (July 2025), Google AI Mode has reached 100M+ monthly active users in the US and India alone, with Google confirming an over 10% increase in usage for queries where AI Overviews appear. Every one of those queries triggers fan-out retrieval.
Understanding the mechanics of how AI systems choose sources to cite requires understanding that AI search engines are not simply looking at your page as a whole. They are evaluating whether your content answers the specific sub-queries generated during fan-out. This is why traditional rank tracking fails to capture AI visibility, and why the models we present in this research provide a more accurate framework.
What Do Leading SEO Experts Say About Query Fan-Out?
The emergence of query fan-out has prompted extensive analysis from the SEO industry's most authoritative voices. Their perspectives reveal both the strategic implications and practical challenges of optimizing for AI search.
Aleyda Solis: The Five Key Differences Framework
Aleyda Solis, one of the most cited voices in international SEO, has developed a comprehensive framework identifying five key differences between traditional and AI search:
Search Behavior: Traditional search uses short, keyword-based, one-off queries with high navigational intent. AI search uses long, conversational, multi-turn queries with high task-oriented intent.
Query Handling: Traditional search uses single query match. AI search uses query fan-out with multiple sub-query matches.
Optimization Target: Traditional search focuses on page-level relevance. AI search focuses on passage/chunk level relevance.
Authority Signals: Traditional search relies on links and engagement-based popularity at domain and page level. AI search uses mentions/citations and entity-based authority at passage and concept level.
Results Presentation: Traditional search shows a ranked list of multiple linked pages. AI search provides a single synthesized answer with mentions and secondary links to sources.
According to Solis, "Query fan-out explores different user intents, so targeting a diversity of angles of a relevant topic increases coverage." She emphasizes that ranking has become "probabilistic rather than deterministic"—your content's visibility depends not just on traditional SEO factors, but on semantic similarity scores, passage-level relevance, and alignment with AI reasoning chains.
Solis has also highlighted Locomotive Agency's Query Fan-Out Tool on X (formerly Twitter), noting that it "creates various searches from your target keyword, breaks your webpage content into sections," and uses semantic analysis to assess AI coverage.
Marie Haynes: From Veterinarian to AI Search Pioneer
Dr. Marie Haynes, author of "SEO in the Gemini Era: The Story of How AI Changed Google Search," was among the first to document Google's query fan-out process in March 2025. Her analysis noted that "AI mode ranks sites differently than traditional Search. It uses a 'query fan-out' technique."
Haynes reports that at Google I/O 2025, a Google engineer confirmed that "a special version of Gemini is used to generate the query fan-out." Her key observation: "With the arrival of query fan-out, queries have ultimately turned into conversations, and that makes it difficult to track which set of pages is shown for clients' queries."
Haynes distinguishes between AI Overviews and AI Mode: "In AI mode, it's very, very quick. You'll see websites featured there, and then you'll see an AI answer. Unlike current AI overviews, AI Mode answers tend to be highly accurate."
Simon Schnieders: The Reputation Management Imperative
Simon Schnieders, SEO strategist and author of the LinkedIn article "Google's 'Query Fan-Out Technique' Explained," frames the implications for brand management: "Reputation management is no longer a PR function, it's an SEO and GEO priority."
Schnieders warns that "we're not just talking about AI surfacing content anymore. We're talking about AI acting on behalf of users"—referencing AI agents that can find businesses, check availability, and complete booking forms autonomously. He advises businesses to "audit and monitor their sentiment against the rise of LLMs."
His practical guidance: "You're no longer just 'optimizing for one keyword'. You're trying to be the best answer to a cluster of closely related questions that can spin out from the original query. The better your topical coverage, the more chances you have to be selected during that fan-out process."
Expert | Key Contribution | Core Insight |
|---|---|---|
Aleyda Solis | Five Key Differences Framework | Ranking is now probabilistic, not deterministic |
Marie Haynes | First to document fan-out (Mar 2025) | Queries have become conversations |
Simon Schnieders | Reputation management imperative | AI will act on behalf of users autonomously |
What Do Google Patents Reveal About Query Fan-Out?
Beyond official announcements, Google's patent filings provide technical insight into how query fan-out operates at an architectural level. Three patents are particularly relevant.
US20240289407A1: "Search with Stateful Chat"
Published August 29, 2024, this patent authored by Mahsan Rofouei, Anand Shukla, and Qing Liang describes a system where large language models generate multiple alternate queries from an original search. According to Search Engine Journal's analysis, the process starts with "prompted expansion," where an AI model receives structured instructions to create queries emphasizing different types of intent.
US12158907B1: "Thematic Search"
This patent describes organizing search results into "themes," with AI-generated summaries for each theme. It outlines how a single user query triggers generation of multiple sub-queries based on inferred themes. For example, a query for "moving to Denver" fans out into thematic sub-queries about "neighbourhoods," "cost of living," and "things to do."
The patent refers to these short, expansive, descriptive search subqueries as "themes"—a technical term that maps directly to what the industry now calls fan-out queries.
US11663201B2: "Generating Query Variants Using a Trained Generative Model"
Filed in 2018 and granted in 2023, this patent describes a sophisticated approach to expanding search queries through AI-powered systems. It outlines a system leveraging trained generative models to create query variants in real time—the foundational technology that evolved into today's fan-out technique.
Key Technical Detail: Google's patents confirm that fan-out operates across multiple retrieval surfaces simultaneously—the live web, Google's Knowledge Graph, structured data, shopping results, and specialized databases. The synthesized answer pulls from all these sources, explaining why content appearing in featured snippets, product feeds, and knowledge panels all contribute to AI citation probability.
How Does Query Fan-Out Create a Multiplier Effect on Search Visibility?
Our first proprietary model, the Fan-Out Multiplier Effect (FME), calculates how query fan-out creates an impression multiplier on the total addressable search surface. Traditional SEO measures visibility against a single query. Fan-out means every query represents 8-12 or more retrieval opportunities.
The FME Formula
We model the Total Addressable Search Surface (TASS) as:
TASS = Q x F x (1 + S)
Where:
Q = original query volume
F = average fan-out factor (8-12, based on Google's confirmed sub-query generation via AI Mode)
S = secondary expansion rate from follow-up queries (estimated at 0.3 based on the 8 query variant types identified by iPullRank)
For a keyword with 1,000 monthly searches in AI Mode, the TASS expands to 8,000-12,000 retrieval opportunities. With secondary expansion at 0.3, this reaches 10,400-15,600 total retrieval events. Every single AI search query creates 10-16x more content retrieval opportunities than its traditional search equivalent.
This multiplier effect means that the keyword volume you see in traditional SEO tools dramatically understates the actual retrieval opportunity. A keyword showing 1,000 monthly searches in Google's Keyword Planner represents potentially 15,600 retrieval events when those searches happen through AI Mode. Content that covers the full range of sub-topics triggered by fan-out captures a proportionally larger share of this expanded surface.
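The TASS calculation above can be expressed directly as code. This is a minimal sketch of the FME model as defined in this section; the function name is ours, not part of any published tool:

```python
def total_addressable_search_surface(query_volume, fanout_factor, secondary_expansion=0.3):
    """TASS = Q x F x (1 + S), per the FME model described above.

    query_volume        -- Q, original monthly query volume
    fanout_factor       -- F, average sub-queries per query (8-12 for AI Mode)
    secondary_expansion -- S, follow-up expansion rate (0.3 estimated)
    """
    return query_volume * fanout_factor * (1 + secondary_expansion)

# A keyword with 1,000 monthly searches, at the low and high ends of the 8-12 range:
low = total_addressable_search_surface(1_000, 8)    # 10,400 retrieval events
high = total_addressable_search_surface(1_000, 12)  # 15,600 retrieval events
print(low, high)
```

Dividing either result by the original 1,000 searches gives the 10-16x multiplier cited above.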
Why Do 88% of Brands Miss AI Citation Opportunities?
Our second model, the Topical Coverage Gap (TCG), quantifies the visibility gap brands face when optimizing only for traditional SEO. This model synthesizes two critical data points from verified research.
According to Mike King, presenting at SparkToro Office Hours (January 2026), only 25-39% overlap exists between traditional Google rankings and AI search citations. Separately, the Surfer SEO study (December 2025) confirmed that 68% of AI-cited pages are outside the top 10 organic results, meaning only 32% of citations come from traditional top-ranking pages.
The TCG Formula
TCG = 1 - (Overlap_Rate x Organic_Citation_Share)
Using the verified data:
At the low overlap end: TCG = 1 - (0.32 x 0.32) = 0.898
At the high overlap end: TCG = 1 - (0.39 x 0.32) = 0.875
This means brands relying solely on traditional SEO rankings miss 87.5-89.8% of AI citation opportunities. We round conservatively to 88% for the headline figure.
Scenario | Overlap Rate | Organic Citation Share | TCG (Gap) | Citations Missed |
|---|---|---|---|---|
Low overlap | 32% | 32% | 89.8% | ~9 of every 10 |
High overlap | 39% | 32% | 87.5% | ~7 of every 8 |
Average | 35.5% | 32% | 88.6% | ~8 of every 9 |
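The gap figures in the table follow directly from the TCG formula; a minimal sketch reproducing them (function name is illustrative):

```python
def topical_coverage_gap(overlap_rate, organic_citation_share=0.32):
    """TCG = 1 - (Overlap_Rate x Organic_Citation_Share).

    overlap_rate           -- share of AI citations overlapping traditional rankings
    organic_citation_share -- share of citations from the organic top 10 (0.32 per Surfer SEO)
    """
    return 1 - overlap_rate * organic_citation_share

print(round(topical_coverage_gap(0.32), 3))   # low overlap end  -> 0.898
print(round(topical_coverage_gap(0.39), 3))   # high overlap end -> 0.875
print(round(topical_coverage_gap(0.355), 3))  # average          -> 0.886
```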
The implications are stark. A brand that ranks #1 for its primary keyword in traditional Google search has no guarantee of being cited in the AI Overview for that same keyword. The AI system generates fan-out sub-queries, retrieves content from across the web for each sub-query, and synthesizes an answer that may cite entirely different sources than the organic top 10.
If your content is not appearing in AI-generated answers despite strong organic rankings, you may be facing one of the 7 critical gaps preventing AI citations that our diagnostic guide identifies. The TCG model explains the quantitative scale of this problem.
What Makes Content 161% More Likely to Earn AI Citations?
The Surfer SEO study (December 2025) produced a headline finding that has reshaped how the industry thinks about AI visibility: pages that rank for fan-out queries are 161% more likely to be cited in AI Overviews. The study also found a Spearman correlation of 0.77 between fan-out coverage and AIO citations, indicating a strong positive relationship.
Our third proprietary model, the Citation Probability Model (CPM), builds on this finding to estimate the probability of earning an AI citation based on measurable inputs.
The CPM Formula
P(citation) = B x (1 + FC x 1.61) x SS x TD
Where:
B = base citation probability (0.12, derived from the ratio of cited URLs to total analyzed in the Surfer SEO study: approximately 21,000 cited out of 173,902)
FC = fan-out coverage ratio (0 to 1, representing what percentage of fan-out sub-queries your content ranks for)
1.61 = the citation lift multiplier from the 161% finding (confirmed by Search Engine Land, December 2025)
SS = semantic similarity multiplier (1.0 at cosine 0.80, scaling up to 7.3 at cosine 0.88+, per Wellows research analyzing 15,847 AI Overview results across 63 industries)
TD = topical depth multiplier (pages covering 3+ subtopics = 1.5x, 5+ subtopics = 2.1x, based on the topic cluster correlation observed in the Surfer SEO data)
CPM in Practice
Consider a page with 60% fan-out coverage, 0.88 cosine similarity, covering 5 subtopics:
P = 0.12 x (1 + 0.6 x 1.61) x 7.3 x 2.1 = 0.12 x 1.966 x 7.3 x 2.1 = 3.62
Capped at 1.0, this indicates near-certain citation probability for well-optimized topical content. Compare this to a page with 10% fan-out coverage, 0.80 cosine similarity, covering 1 subtopic:
P = 0.12 x (1 + 0.1 x 1.61) x 1.0 x 1.0 = 0.12 x 1.161 x 1.0 x 1.0 = 0.139
The difference is dramatic: comprehensive topical content with strong semantic alignment is nearly 7x more likely to be cited than narrow, single-topic content.
Content Profile | Fan-Out Coverage | Cosine Similarity | Subtopics | Citation Probability |
|---|---|---|---|---|
Minimal optimization | 10% | 0.80 | 1 | 0.14 (14%) |
Moderate optimization | 40% | 0.85 | 3 | 0.49 (49%) |
High optimization | 60% | 0.88+ | 5 | 1.00 (capped) |
Maximum optimization | 80% | 0.88+ | 7+ | 1.00 (capped) |
The Surfer SEO study also found that 51.2% of AIO citations ranked for both the main query AND at least one fan-out query. Pages ranking only for fan-out queries (without the head term) were still 49% more likely to earn citations compared to pages ranking only for the head term. This confirms that fan-out coverage is a stronger predictor of citation than head term ranking alone.
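Both worked examples above can be reproduced from the CPM formula; the sketch below is a direct translation (the function name and the 1.0 cap implementation are ours):

```python
def citation_probability(fanout_coverage, semantic_multiplier, topical_multiplier,
                         base=0.12, lift=1.61):
    """P(citation) = B x (1 + FC x 1.61) x SS x TD, capped at 1.0.

    fanout_coverage     -- FC, share of fan-out sub-queries the page ranks for (0-1)
    semantic_multiplier -- SS, 1.0 at cosine 0.80 up to 7.3 at cosine 0.88+
    topical_multiplier  -- TD, 1.5 for 3+ subtopics, 2.1 for 5+
    """
    p = base * (1 + fanout_coverage * lift) * semantic_multiplier * topical_multiplier
    return min(p, 1.0)

# High-optimization page: uncapped value ~3.62, capped to 1.0
print(citation_probability(0.6, 7.3, 2.1))
# Minimal-optimization page: ~0.139
print(round(citation_probability(0.1, 1.0, 1.0), 3))
```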
Case Study: How Fan-Out Transforms Product Recommendations
To illustrate how fan-out operates in practice, consider how AI systems handle a typical e-commerce query.
The Query: "Bluetooth Headphones with Comfortable Over-Ear Design and Long-Lasting Battery"
According to Kopp Online Marketing's analysis, this query gets deconstructed into facets:
Core Facets Identified:
- Design: over-ear, comfortable
- Technology: Bluetooth
- Performance: long-lasting battery
Generated Sub-Queries:
- "best-rated Bluetooth headphones 2026"
- "over-ear headphones comfortable for long wear"
- "Bluetooth headphones longest battery life"
- "headphones built for runners" (inferred related intent)
- "headphone charging speed comparison" (implicit user concern)
- "user reviews headphone comfort over-ear"
The Retrieval Process
The AI system searches for each sub-query simultaneously, scraping top pages for product listings, expert reviews, user experiences, and technical specifications. The system also draws on synonyms (like "long battery life" and "long-lasting battery") to expand coverage.
The Synthesized Output
The AI response includes:
- A list of recommended products with reasons for selection
- Product specifications with aggregated reviews
- Summaries of comfort features and battery performance
- A sidebar with links to approximately 20 source pages
Why This Matters for Content Strategy
A brand selling Bluetooth headphones must now optimize for all six sub-queries, not just the head term. A product page covering only basic specifications will lose to a page that includes:
- Comparative battery benchmarks
- Long-wear comfort testing results
- User testimonials about extended use
- Activity-specific recommendations (running, travel, office)
According to Nectiv Digital's research analyzing 60,000+ fan-out queries, "fan-out queries clearly aim to go deeper than simply looking at search results for products listed. They want to gather information on reviews on the actual best solutions."
This case study demonstrates why the Surfer SEO finding holds: pages ranking for fan-out queries are 161% more likely to be cited because they address the full scope of user intent, not just the surface-level query.
How Does Fan-Out Query Instability Affect Long-Term Visibility?
One of the most challenging findings from the Surfer SEO research is that only 27% of fan-out sub-queries remain stable across repeated searches. This means 73% of the sub-queries AI generates change each time a user searches for the same term. For brands trying to optimize for specific fan-out queries, this instability creates a moving target.
Our fourth model, the Fan-Out Decay Curve (FDC), addresses this challenge by modeling how broad topical coverage compensates for fan-out instability.
The FDC Formula
Effective_Visibility = 0.27 + 0.73 x Topic_Coverage_Ratio
The logic is straightforward. The 27% of stable fan-out queries provide a baseline visibility floor. For the remaining 73% of unstable queries, your visibility depends on whether your content is broad enough to match whatever new sub-queries the AI system generates. If your content covers 80% of the subtopics in your domain, most of the changing fan-out queries will still match your content.
Topic Coverage Ratio | Stable Component | Variable Component | Effective Visibility |
|---|---|---|---|
20% | 0.27 | 0.73 x 0.20 = 0.146 | 41.6% |
40% | 0.27 | 0.73 x 0.40 = 0.292 | 56.2% |
60% | 0.27 | 0.73 x 0.60 = 0.438 | 70.8% |
80% | 0.27 | 0.73 x 0.80 = 0.584 | 85.4% |
100% | 0.27 | 0.73 x 1.00 = 0.730 | 100% |
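Each row of the table follows from the FDC formula; a minimal sketch (function name is illustrative):

```python
def effective_visibility(topic_coverage_ratio, stable_share=0.27):
    """Effective_Visibility = stable floor + unstable share matched by coverage.

    stable_share         -- the 27% of fan-out queries that stay constant
    topic_coverage_ratio -- share of domain subtopics your content covers (0-1)
    """
    return stable_share + (1 - stable_share) * topic_coverage_ratio

for tcr in (0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"{tcr:.0%} coverage -> {effective_visibility(tcr):.1%} effective visibility")
```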
Sites with 80%+ topical coverage retain 85.4% of their AI visibility despite 73% fan-out query instability. This finding explains why comprehensive topic clusters outperform individual optimized pages in AI search. It also explains why WordLift's research (December 2025) found that content built on strong ontological foundations responds to 3x more contextual variations than content without structured entity relationships.
How Does Fan-Out Differ Across Google, ChatGPT, and Perplexity?
Not all AI search platforms execute fan-out identically. Our fifth model, the Cross-Platform Fan-Out Index (CPFI), compares fan-out behavior across the three major AI search platforms and provides a weighted scoring framework for multi-platform visibility.
Platform-Specific Fan-Out Behavior
Google AI Mode uses a custom Gemini 2.5 model for query decomposition, as confirmed in Google's official announcement (May 2025). It generates 8-12 sub-queries for standard queries and can issue hundreds for Deep Search scenarios. Google's fan-out focuses on passage-level retrieval, meaning it evaluates specific sections of your content rather than the page as a whole. According to Elizabeth Reid, Google's Head of Search, AI Mode queries tend to be 2-3x longer than traditional searches, reflecting users' comfort with natural language when interacting with AI.
ChatGPT generates a variable number of sub-queries depending on complexity: 4-8 for simple queries and 12-20 for complex ones. According to Profound's analysis (October 2025), answer engines including ChatGPT add modifier words like "best," "top," "reviews," and the current year to queries during fan-out. This means your content needs to include these commercial and temporal modifiers to match ChatGPT's retrieval patterns.
Perplexity takes a citation-dense approach, typically including 3-8 sources per response. According to Perplexity's own research publication, the platform achieves a median latency of 358ms for query processing, which suggests aggressive parallel retrieval. Perplexity's fan-out emphasizes recency and citation diversity. Tracking your visibility across these platforms requires tools that monitor each one's distinct retrieval patterns.
Feature | Google AI Mode | ChatGPT | Perplexity |
|---|---|---|---|
Sub-queries per query | 8-12 (hundreds for Deep Search) | 4-20 (complexity dependent) | Not disclosed |
Fan-out model | Custom Gemini 2.5 | GPT with modifier injection | Citation-dense retrieval |
Retrieval focus | Passage-level depth | Modifier matching (best, top, reviews) | Citation diversity and recency |
Citations per response | 3-6 typical | 3-5 typical | 3-8 typical |
Key optimization lever | Semantic similarity and passage structure | Temporal modifiers and commercial intent | Source authority and citation density |
Latency benchmark | Not disclosed | Not disclosed | 358ms median |
The CPFI Formula
CPFI = (Google_FO_Coverage x 0.45) + (ChatGPT_FO_Coverage x 0.30) + (Perplexity_FO_Coverage x 0.25)
The weights reflect each platform's current market share and commercial intent signals. Google receives the highest weight (0.45) due to AI Mode's scale of 100M+ monthly users. ChatGPT receives 0.30 for its growing search capabilities and strong commercial query handling. Perplexity receives 0.25 for its influential role among researchers, professionals, and early adopters.
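The weighted index is a simple linear combination; a minimal sketch, with per-platform coverage expressed as a 0-1 ratio (function name and example inputs are illustrative):

```python
def cross_platform_fanout_index(google, chatgpt, perplexity,
                                weights=(0.45, 0.30, 0.25)):
    """CPFI = weighted sum of per-platform fan-out coverage ratios (each 0-1)."""
    return google * weights[0] + chatgpt * weights[1] + perplexity * weights[2]

# A site with strong Google AI Mode coverage but weaker ChatGPT/Perplexity coverage:
print(round(cross_platform_fanout_index(0.8, 0.4, 0.3), 3))  # 0.555
```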
What Does Industry-Specific Fan-Out Data Reveal?
Fan-out behavior varies significantly by industry. Research from Go Fish Digital across 10,000+ queries, as cited by Wellows (2025), reveals distinct fan-out patterns across verticals.
Industry | Avg Sub-Queries | Citation Rate | Implication |
|---|---|---|---|
Healthcare | 22-28 | 48% | High fan-out, moderate citation (YMYL caution) |
E-commerce | 18-22 | 61% | Moderate fan-out, high citation (commercial intent) |
Finance | 16-20 | 52% | Lower fan-out, moderate citation (authority dependent) |
Healthcare generates the most sub-queries (22-28 on average) because medical queries trigger extensive verification and multi-angle retrieval. However, the citation rate is lower at 48%, likely reflecting the heightened YMYL (Your Money or Your Life) standards that AI systems apply to health content. Only highly authoritative medical sources pass the citation threshold despite the high retrieval volume.
E-commerce shows the highest citation rate at 61% with moderate fan-out of 18-22 sub-queries. This suggests that commercial content is well-suited for AI citation because product comparisons, pricing data, and feature specifications provide the structured, factual information that AI systems prefer to cite. Brands in e-commerce have the highest ROI opportunity from fan-out optimization.
What Does the SEO Community Say About Fan-Out?
The SEO community's real-time discussions reveal practical concerns that academic research often overlooks.
Reddit's Growing Influence in AI Search
Reddit itself has become a major beneficiary of fan-out. According to SAASstorm's analysis, Reddit's presence in AI Overviews increased by 450% from March to June 2025, reaching 7.15% of all citations. When you expand beyond Google to ChatGPT, Perplexity, and Claude, "Reddit is the single most cited domain."
As of April 2025, Reddit is the #2 most-visited site via Google search traffic in the US, second only to Wikipedia.
Why Reddit Matters for Fan-Out Optimization
According to Bottle Digital PR, "Platforms like Reddit and Quora reveal the authentic language people use when discussing a topic. Analyzing these conversations helps you capture natural phrasing and emotional context that AI systems often replicate when expanding user queries."
This insight has practical implications: if you want to understand how AI systems will fan out a query in your industry, study how real users discuss that topic on Reddit. The conversational patterns, follow-up questions, and pain points expressed in Reddit threads often mirror the sub-queries AI systems generate.
Community Concerns
Common concerns raised in SEO community discussions include:
Traffic Attribution: Multiple practitioners report difficulty attributing conversions to AI visibility since users often discover content through AI but arrive via direct navigation.
Zero-Click Impact: Semrush research confirms that 92-94% of AI Mode searches are zero-click searches, raising questions about the ROI of AI optimization efforts.
Brand Citation vs. Click: When your brand is cited in AI Overviews, organic CTR is 35% higher—but only when users scroll past the AI answer. The strategic question: is citation without clicks valuable?
As Barry Schwartz noted: "You're not going to see traffic from AI unless you're measuring brand mentions and sentiment in those answers. It's billboard SEO now."
How Should You Optimize Content for Fan-Out Coverage?
The research and models presented above point to a clear set of optimization principles. These are not speculative recommendations but logical conclusions from the verified data.
Build Topic Clusters, Not Individual Pages
The Fan-Out Decay Curve (FDC) demonstrates that sites with 80%+ topical coverage retain 85.4% of AI visibility. This means building comprehensive topic clusters that address every subtopic in your domain. WordLift's research (December 2025) confirms this approach: content optimized for conversational queries achieves 40% higher coverage in fan-out simulations, and content built on strong ontological foundations responds to 3x more contextual variations.
Structure Content in Citation-Worthy Passages
The Wellows study (December 2025) found that the optimal passage length for AI Overview extraction is 134-167 words. Structure each section of your content as a self-contained passage within this range. Each passage should answer a specific question completely without requiring context from surrounding paragraphs.
According to Mike King, presenting at SparkToro Office Hours (January 2026), content chunking improves semantic relevance by 9-15% in vector space models. This means that well-structured content with clear section boundaries is literally easier for AI systems to retrieve and cite. For a comprehensive guide to structuring content for AI citations, see our resource on optimizing content for AI citations.
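The 134-167-word passage target can be audited programmatically. This is a rough sketch, not a tool mentioned in the research: it assumes passages are separated by blank lines and that whitespace tokenization approximates how word counts are measured, which real extraction pipelines may do differently:

```python
def passage_word_counts(text, min_words=134, max_words=167):
    """Split text on blank lines and flag passages outside the target word window."""
    passages = [p.strip() for p in text.split("\n\n") if p.strip()]
    report = []
    for i, passage in enumerate(passages, 1):
        n = len(passage.split())
        if n < min_words:
            status = "short"
        elif n > max_words:
            status = "long"
        else:
            status = "ok"
        report.append((i, n, status))
    return report

sample = ("lorem " * 140).strip() + "\n\n" + ("ipsum " * 10).strip()
print(passage_word_counts(sample))  # passage 1 is "ok", passage 2 is "short"
```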
Target Semantic Similarity Above 0.88
The Wellows analysis of 15,847 AI Overview results across 63 industries found that cosine similarity scores above 0.88 result in 7.3x higher citation rates. This is the single largest multiplier in our Citation Probability Model. Achieving high semantic similarity requires using the exact terminology, context, and framing that the fan-out sub-queries expect.
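Cosine similarity itself is a standard calculation over embedding vectors. The sketch below shows the math only; in practice the vectors would come from whatever embedding model the platform uses, and the toy three-dimensional vectors here are purely illustrative:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy passage and sub-query embeddings; per the Wellows finding above,
# scores of 0.88+ against the sub-query correlate with a 7.3x citation lift.
passage_vec = [0.8, 0.1, 0.6]
subquery_vec = [0.7, 0.2, 0.7]
print(round(cosine_similarity(passage_vec, subquery_vec), 3))
```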
Include Temporal and Commercial Modifiers
Profound's research (October 2025) showed that answer engines add words like "best," "top," "reviews," and the current year to queries during fan-out. If your content does not contain these modifiers, it will not match the modified sub-queries that ChatGPT and other platforms generate. Include current-year references, comparison language, and review-style content naturally within your pages.
Monitor Multi-Platform Visibility
The Cross-Platform Fan-Out Index (CPFI) demonstrates that each AI platform handles fan-out differently. According to Mike King, presenting at SparkToro Office Hours (January 2026), AI visibility improvements of 253-661% have been achieved in cases where content was specifically optimized for how each platform retrieves and cites information. YouTube and Reddit are highly cited sources in AI search, suggesting that multi-format content strategies provide additional citation surfaces.
For a step-by-step process that applies these optimization principles using your own Google Search Console data, see our query fan-out optimization framework — a 5-stage system that turns GSC seeds into published, AI-citable content in 30-day cycles.
What Tools Track Query Fan-Out Visibility?
Several specialized platforms have emerged to help marketers measure and optimize for fan-out coverage.
Otterly.AI: Multi-Platform AI Visibility Monitoring
Otterly.AI provides a dedicated Query Fan-Out Analysis tool that simulates how Google AI Mode expands a single search query into multiple underlying searches. The platform monitors visibility across ChatGPT, Perplexity, Google AI Overviews, Google AI Mode, and Gemini.
Key capabilities include:
- Share of AI Voice calculation—the percentage of citations you own versus competitors
- Automatic tracking of visibility changes over time
- AI Crawler Simulation to test how AI systems parse your content
- GEO Content Check to evaluate passage-level extractability
Semrush: Query Fan-Out Experimentation
Semrush's query fan-out experiment tested optimization strategies across four articles and found that total citations increased from 2 to 5—a 150% improvement. Their platform now includes:
- AI Overview tracking across 10+ million keywords
- Intent classification showing how queries shift during fan-out
- Citation monitoring showing which domains appear in AI responses
Semrush's research found that the share of AI Overviews appearing alongside ads rose from approximately 3% in January 2025 to roughly 40% by November 2025, indicating rapid commercialization of AI search real estate.
Locomotive Agency: Fan-Out Coverage Tool
Locomotive Agency's Query Fan-Out Tool generates fan-out queries from target keywords, breaks content into sections, and uses semantic analysis to assess coverage gaps. The tool calculates which sub-queries your content addresses and which require additional coverage.
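The coverage-gap idea can be sketched in a few lines. The following is an illustrative reconstruction, not Locomotive's actual tool: it uses a toy bag-of-words similarity where a production system would use a sentence-embedding model, and the 0.3 threshold is arbitrary.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real pipeline would use a
    # sentence-embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def coverage_gaps(sub_queries, sections, threshold=0.3):
    """Sub-queries whose best-matching content section falls below
    the similarity threshold, i.e. likely coverage gaps."""
    return [q for q in sub_queries
            if max(cosine(embed(q), embed(s)) for s in sections) < threshold]

subs = ["battery life laptop", "laptop price"]
secs = ["this laptop has great battery life lasting hours"]
print(coverage_gaps(subs, secs))  # ['laptop price']
```

The design choice worth copying is the "best match per sub-query" step: a page covers a fan-out query if any one section answers it well, not if the whole page is vaguely related.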
Wellows: Query Fan-Out Generator
Wellows' free tool generates sub-queries that AI systems might create from your target keyword, helping identify content gaps before they affect visibility.
Ekamoira: Query Fan-Out Estimator
Ekamoira's Query Fan-Out Estimator runs 9 proprietary models across Google AI Mode, ChatGPT, and Perplexity simultaneously. The system takes Google Search Console seed keywords as input and produces citability-scored content roadmaps in 30-day cycles.
Key capabilities:
- Dark query discovery — maps fan-out sub-queries with zero Google search volume that traditional keyword tools miss
- Cross-platform fan-out simulation across all three major AI search platforms
- Citability scoring (0–100) for every content angle, based on platform weight, intent weight, and topical relevance
- 30-day content calendars prioritized by citation probability rather than search volume
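The citability-scoring approach described above can be illustrated with a minimal sketch. The factor names follow the description (platform weight, intent weight, topical relevance), but the multiplicative combination and the example values are our assumptions; the actual scoring model is not public.

```python
def citability_score(platform_weight: float, intent_weight: float,
                     topical_relevance: float) -> float:
    """Illustrative 0-100 citability score: the product of three
    0-1 factors, scaled to 100."""
    for v in (platform_weight, intent_weight, topical_relevance):
        if not 0.0 <= v <= 1.0:
            raise ValueError("each factor must be in [0, 1]")
    return round(100 * platform_weight * intent_weight * topical_relevance, 1)

# e.g. Google platform weight 0.45, strong commercial intent,
# high topical relevance
print(citability_score(0.45, 0.8, 0.9))  # 32.4
```

A multiplicative model has a useful property for prioritization: any single weak factor drags the score toward zero, so content angles only rank highly when they score well on every dimension at once.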
| Tool | Primary Function | Platforms Monitored | Pricing |
|---|---|---|---|
| Otterly.AI | Multi-platform AI visibility monitoring | ChatGPT, Perplexity, Google AI Mode, Gemini | Free tier + paid |
| Semrush | AI Overview tracking and experimentation | Google AI Overviews | Paid (enterprise features) |
| Locomotive | Fan-out query generation and coverage analysis | Simulated fan-out | Free |
| Wellows | Fan-out query generation | Simulated fan-out | Free |
| Ekamoira | Fan-out estimation and citability scoring | Google AI Mode, ChatGPT, Perplexity | Not listed |
What Are the Risks and Limitations of Query Fan-Out?
Query fan-out introduces significant challenges that SEO practitioners should understand. While the technique improves answer quality for users, it creates new obstacles for content creators.
Measurement Opacity
Traditional analytics tools like Google Search Console cannot show how often your content appears in ChatGPT, Perplexity, or Google's AI Mode. According to iPullRank's analysis, "many websites already receive traffic from AI platforms without realizing it. Users discover content through ChatGPT or Perplexity, then visit the site directly, making this traffic invisible in standard analytics."
This creates a measurement gap: you cannot optimize what you cannot measure.
Platform Inconsistency
Fan-out queries are inconsistent across platforms. ChatGPT, Google's AI Mode, and Perplexity all generate different sub-queries, and only about 27% remain stable across repeated searches (per Surfer SEO's research). This means optimizing for specific fan-out queries is inherently unstable—the target moves with every search.
According to Kopp Online Marketing, fan-out queries are "deeply contextual, stochastic, and impossible to predict."
Resource Intensity and Speed Trade-offs
Executing query fan-out is computationally expensive. Each of the dozens or hundreds of sub-queries consumes CPU cycles, memory, and network bandwidth. This "tail-at-scale" problem explains why Google's Deep Search (with hundreds of sub-queries) takes minutes to complete rather than milliseconds.
According to Kopp Online Marketing: "The trade-off between the comprehensiveness of the answer and the speed of delivery is a direct and unavoidable consequence of this fundamental engineering constraint. The entire tiered product offering—from fast AI Overviews to slow but thorough Deep Search—is a business strategy built around the physical and economic limitations of distributed computing."
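The tail-at-scale effect is easy to demonstrate: when a synthesized answer must wait for the slowest of N parallel sub-queries, even a rare slow retrieval comes to dominate end-to-end latency as N grows. A toy simulation (illustrative numbers only: 50ms typical latency, a 1% chance of a 2-second straggler):

```python
import random

def fanout_latency(n_subqueries: int, trials: int = 2000) -> float:
    """Median end-to-end latency (ms) when n sub-queries run in
    parallel and the answer waits for the slowest one."""
    random.seed(0)  # deterministic for illustration
    worst = []
    for _ in range(trials):
        # Toy model: ~50ms typical, with a 1% chance of a 2s straggler.
        lats = [2000.0 if random.random() < 0.01 else random.gauss(50, 10)
                for _ in range(n_subqueries)]
        worst.append(max(lats))
    worst.sort()
    return worst[trials // 2]

# With 10 parallel sub-queries the 2s tail is rarely hit; with 200
# it almost always is, which is why Deep Search takes far longer.
print(fanout_latency(10), fanout_latency(200))
```

The probability that at least one of N sub-queries hits the 1% tail is 1 - 0.99^N: about 10% at N=10 but about 87% at N=200, which is the engineering constraint the tiered product offering is built around.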
Loss of Control
Content creators have limited ability to influence which sub-queries AI systems generate. You can optimize for topical coverage, but you cannot dictate how Gemini or GPT-4 decomposes user questions. As iPullRank notes, "You can't control these systems."
Accuracy and Hallucination Risks
When fan-out retrieves content from multiple sources and synthesizes responses, conflicting information can lead to inaccurate or hallucinated answers. If Source A says "Product X has 20-hour battery life" and Source B says "Product X has 15-hour battery life," the AI must reconcile these differences—and may do so incorrectly.
The Filter Bubble Problem
Query fan-out was designed to capture diverse perspectives, but personalization based on search history, location, and device context can create filter bubbles. Two users searching the same query may receive fan-out sub-queries biased toward their existing preferences, reducing exposure to alternative viewpoints.
| Risk Category | Impact | Mitigation |
|---|---|---|
| Measurement opacity | Cannot track AI visibility in standard analytics | Use specialized tools (Otterly.AI, Semrush) |
| Platform inconsistency | 73% of fan-out queries change per search | Build broad topical coverage, not specific query targeting |
| Loss of control | Cannot dictate sub-query generation | Focus on entity authority and structured data |
| Accuracy risks | Conflicting sources cause synthesis errors | Ensure your content is internally consistent |
How Has Query Fan-Out Evolved and What's Next?
Understanding where query fan-out came from—and where it's heading—provides strategic context for optimization decisions.
The Evolution: From BERT to Fan-Out (2019-2025)
2019: BERT Introduces Contextual Understanding
When Google launched BERT (Bidirectional Encoder Representations from Transformers), it helped Search "better understand one in 10 searches in the U.S. in English, particularly for longer, more conversational queries." According to Google's official announcement, BERT enabled passage-level understanding—allowing the engine to retrieve relevant snippets even when exact query terms didn't appear together.
2021: MUM Amplifies Multi-Modal Reasoning
Google's Multitask Unified Model (MUM), introduced in 2021, claimed to be "1,000 times more powerful than BERT" with multimodal understanding, cross-language comprehension, and complex reasoning capabilities. MUM laid the groundwork for AI systems that could process images, videos, and text simultaneously.
2023-2024: Generative AI Enters Search
The integration of large language models into search (SGE, then AI Overviews) introduced real-time synthesis. Rather than ranking documents, AI systems began generating answers directly from retrieved content.
2025: Query Fan-Out Goes Mainstream
Google's AI Mode launch in May 2025 formalized fan-out as a core retrieval mechanism. As iPullRank summarizes: "The evolution from lexical indexes to neural embeddings was about teaching machines to understand language, while the rise of generative search is about teaching them to speak it back."
The Near Future: 2026 Predictions
Multi-Modal Query Fan-Out
AI systems are beginning to incorporate images, videos, audio, and interactive content into fan-out processes. According to ALM Corp's analysis, "content with multi-modal elements can see 2.3x higher AI citation rates."
Practical implication: "If you publish a training plan, it should exist as narrative text, a structured table, a downloadable file, and ideally a short video with a transcript. This way, no matter which modality the system decides to target for that sub-query, you have a relevant representation ready."
Personalized Query Expansion
Fan-out queries will become increasingly tailored to individual users based on their search history, preferences, and context. According to Wellows (2025), by 2026, "90% of Google queries are expected to trigger AI augmentation or semantic fan-out retrieval."
Cross-Platform Synthesis
ALM Corp predicts that "fan-out may pull evidence from multiple AI systems to form a single consensus answer" by Q1 2026. This could mean your content being synthesized alongside competitors across ChatGPT, Perplexity, and Google simultaneously.
Conversation-Based Discovery
As AI chat interfaces dominate, query fan-out will evolve to handle multi-turn conversations rather than single queries. Each follow-up question may trigger additional fan-out, creating compounding retrieval events within a single session.
Agentic Fan-Out (2027-2028)
Looking further ahead, ALM Corp forecasts that "AI agents may run multi-step workflows (compare options, plan, and complete actions like booking). By 2028, 35% of AI-generated answers may include actionable transactions."
Local Search Implications
Local businesses face unique fan-out challenges. A query like "best Italian restaurant near me" may fan out into sub-queries about menu options, price range, parking availability, wait times, and dietary accommodations. Businesses with comprehensive Google Business Profiles covering all these facets will capture more fan-out visibility than those with minimal information.
| Timeline | Development | Impact | Action Required |
|---|---|---|---|
| 2019 | BERT launch | Passage-level understanding | Optimize for natural language |
| 2021 | MUM introduction | Multi-modal reasoning | Diversify content formats |
| 2025 | AI Mode fan-out | 8-12 sub-queries per search | Build topical coverage |
| 2026 | Multi-modal fan-out | Images/video in retrieval | Add video transcripts, image alt text |
| 2027-28 | Agentic fan-out | AI completes transactions | Enable booking/purchase actions |
What Are the Limitations of This Research?
Transparency about methodology is essential for original research. Our five models (FME, TCG, CPM, FDC, CPFI) are built from verified industry data, but several limitations apply.
Correlation vs. Causation. As Search Engine Land noted (December 2025) when reporting on the Surfer SEO study, the 161% citation lift represents correlation, not proven causation. Pages that rank for fan-out queries may share other characteristics (high authority, comprehensive content) that independently drive citations. Our CPM model accounts for this by including semantic similarity and topical depth as separate multipliers, but the interaction effects between these variables are not fully isolated.
Fan-Out Query Extraction. The Surfer SEO study extracted 33,000 fan-out queries from their dataset. However, the exact method used for extraction may not perfectly replicate Google's internal fan-out process. The 27% stability finding applies to their extraction methodology and may differ from actual platform behavior.
Industry Data Attribution. The industry-specific fan-out data (Healthcare, E-commerce, Finance) is cited from Go Fish Digital research via Wellows (2025). As a secondary citation, these figures should be considered directional rather than definitive.
Platform Weights. The CPFI weight distribution (Google 0.45, ChatGPT 0.30, Perplexity 0.25) reflects our assessment of current market dynamics and will shift as platform adoption evolves. These weights are parameters that practitioners should adjust based on their audience's platform preferences.
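Under those weights, the CPFI reduces to a weighted average of per-platform coverage. A minimal sketch, assuming coverage is expressed as a 0-1 fraction per platform (the published model may combine its inputs differently):

```python
# Default weights from the article; practitioners should adjust
# them as platform adoption shifts.
CPFI_WEIGHTS = {"google": 0.45, "chatgpt": 0.30, "perplexity": 0.25}

def cpfi(coverage: dict[str, float],
         weights: dict[str, float] = CPFI_WEIGHTS) -> float:
    """Weighted average of per-platform fan-out coverage (each 0-1)."""
    total = sum(weights.values())
    return sum(w * coverage.get(p, 0.0) for p, w in weights.items()) / total

print(cpfi({"google": 0.8, "chatgpt": 0.5, "perplexity": 0.6}))  # ~0.66
```

Because the weights are explicit parameters, a brand whose audience skews toward ChatGPT can simply pass its own dictionary rather than accept the defaults.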
Mike King's Presentation Data. Statistics attributed to Mike King from SparkToro Office Hours (January 2026), including the 25-39% overlap figure and 253-661% performance improvements, are based on our analysis of the presentation; the video transcript could not be independently verified through text extraction, so we attribute and hedge these figures accordingly.
Frequently Asked Questions
What is query fan-out in AI search?
Query fan-out is the process where AI search systems like Google AI Mode, ChatGPT, and Perplexity break down a single user query into 8-12 parallel sub-queries, retrieve information for each, and synthesize the results into one comprehensive answer. According to Google's AI Mode announcement (May 2025), the system uses a custom Gemini 2.5 model specifically designed for this decomposition process.
How many sub-queries does Google AI Mode generate?
Google AI Mode generates 8-12 sub-queries for standard queries and can issue hundreds for complex Deep Search scenarios, as confirmed by Google at I/O 2025. An analysis by iPullRank (December 2025) confirmed that Google fires hundreds of searches per single user query in AI Mode, with the system capping retrieval at approximately 20 iterations before terminating.
Does ranking for fan-out queries improve AI citation chances?
Yes. A Surfer SEO study (December 2025) analyzing 173,902 URLs found a Spearman correlation of 0.77 between fan-out query coverage and AI Overview citations. Pages ranking for fan-out queries are 161% more likely to be cited, and 51.2% of AIO citations ranked for both the main query and at least one fan-out query.
What percentage of AI citations come from top 10 organic results?
Only 32% of AI Overview citations come from pages in the top 10 organic results. The remaining 68% are pulled from pages outside traditional top rankings, according to the Surfer SEO study (December 2025). This demonstrates that AI search uses fundamentally different selection criteria than traditional organic search.
How stable are fan-out queries over time?
Only 27% of fan-out sub-queries remain stable across repeated searches, according to the Surfer SEO study (December 2025). This means 73% of sub-queries change each time, making broad topical coverage more important than optimizing for specific fan-out queries. Our Fan-Out Decay Curve model shows that sites with 80%+ topical coverage retain 85.4% of AI visibility despite this instability.
How does query fan-out differ across AI platforms?
Google AI Mode uses Gemini 2.5 for passage-level retrieval with 8-12 sub-queries. ChatGPT adds intent modifiers like "best" and "top" and generates 4-20 sub-queries depending on complexity, as found by Profound (October 2025). Perplexity uses aggressive citation (3-8 sources per response) with a median latency of 358ms.
What is the optimal content length for AI fan-out citation?
Research by Wellows (December 2025) across 15,847 AI Overview results found that passages of 134-167 words achieve the highest citation rates. Content with cosine similarity scores above 0.88 achieves 7.3x higher citation rates. Structure your content in self-contained passages within this word range for maximum extractability.
What is the 88% visibility gap?
The 88% visibility gap is our Topical Coverage Gap (TCG) model finding. Using verified data showing only 25-39% overlap between traditional rankings and AI citations (Mike King, January 2026) and only 32% of AI citations coming from top-10 pages (Surfer SEO, December 2025), we calculate that brands relying solely on traditional SEO miss 87.5-89.8% of AI citation opportunities.
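The exact TCG derivation is not reproduced here, but one simplified way to combine the two rates (an assumption on our part, not the published model) is to treat traditional-SEO capture as the product of the top-10 citation share and the ranking overlap, which reproduces the lower bound of the stated range:

```python
def visibility_gap(top10_citation_share: float, ranking_overlap: float) -> float:
    """Share of AI citation opportunities missed when relying on
    traditional SEO alone, assuming capture is the product of the
    two rates (an illustrative simplification)."""
    return 1.0 - top10_citation_share * ranking_overlap

# 32% top-10 citation share x 39% ranking overlap -> 87.5% missed
print(round(visibility_gap(0.32, 0.39) * 100, 1))  # 87.5
```

The upper bound of the stated range depends on how the published model treats the lower overlap figure, which this simplification does not capture.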
Which industries benefit most from fan-out optimization?
According to research from Go Fish Digital cited by Wellows (2025), e-commerce achieves the highest AI citation rate at 61% with 18-22 sub-queries per query. Healthcare generates the most sub-queries (22-28) but has a lower 48% citation rate due to YMYL standards. Finance sits in between with 16-20 sub-queries and a 52% citation rate.
How do I measure my fan-out coverage?
Measuring fan-out coverage requires tracking whether your content ranks for the sub-queries AI systems generate, not just the head term. WordLift's three-stage simulation pipeline (December 2025) offers one approach: URL to Entity Extraction to Query Fan-Out to Embedding Coverage to AI Visibility Score. Our Cross-Platform Fan-Out Index (CPFI) extends this by weighting coverage across Google, ChatGPT, and Perplexity.
What is Google's Thematic Search patent?
Google patent US12158907B1, titled "Thematic Search," describes how AI systems organize search results into themes with AI-generated summaries. The patent outlines how a single query like "moving to Denver" triggers thematic sub-queries about neighborhoods, cost of living, and activities. These "themes" are what the industry now calls fan-out queries.
How does Semrush track query fan-out?
Semrush's AI Overview research tool monitors citation appearances across Google AI Overviews for over 10 million keywords. Their query fan-out experiment tested optimization strategies and found a 150% increase in citations after targeting fan-out sub-queries. The platform tracks which domains appear in AI responses and how intent shifts during fan-out.
What is Otterly.AI's Query Fan-Out tool?
Otterly.AI's Query Fan-Out Analysis is a free tool that simulates how Google AI Mode expands queries into sub-queries. The platform monitors AI visibility across ChatGPT, Perplexity, Google AI Mode, and Gemini, calculating your "Share of AI Voice" compared to competitors.
Why is Reddit frequently cited in AI search results?
Reddit's presence in AI Overviews increased 450% between March and June 2025 because AI systems value authentic user discussions when answering queries. Reddit threads reveal natural language patterns, pain points, and follow-up questions that AI systems replicate during query fan-out. Analyzing Reddit discussions in your industry can reveal how AI will decompose related queries.
What did BERT contribute to query fan-out?
BERT (2019) introduced passage-level understanding to Google Search, enabling retrieval of relevant snippets even without exact keyword matches. This laid the groundwork for query fan-out by teaching systems to understand context. MUM (2021) added multi-modal reasoning, and generative AI (2023-2025) enabled real-time synthesis—culminating in today's fan-out technique.
Will query fan-out include images and video by 2026?
Multi-modal fan-out is emerging. Research suggests content with images, videos, and transcripts sees 2.3x higher AI citation rates. By 2026, expect AI systems to fan out queries across text, visual, and audio content simultaneously, making multi-format content strategies essential.
What are the main criticisms of query fan-out?
Critics highlight measurement opacity (standard analytics can't track AI visibility), platform inconsistency (73% of sub-queries change per search), computational cost (Deep Search takes minutes due to resource intensity), and loss of control (content creators cannot dictate sub-query generation). The technique also risks accuracy errors when synthesizing conflicting sources.
Sources
Surfer SEO (2025). "Ranking for Multiple Fan-Out Queries Dramatically Increases Your Chances of Getting Cited in AIOs." https://surferseo.com/blog/query-fan-out-impact/
iPullRank (2025). "How AI Search Platforms Expand Queries with Fan-Out and Why It Skews Intent." https://ipullrank.com/expanding-queries-with-fanout
Google (2025). "AI Mode in Google Search: Updates from Google I/O 2025." https://blog.google/products-and-platforms/products/search/google-search-ai-mode-update/
Search Engine Land (2025). "AI Overview fan-out rankings boost citation odds by 161%: Study." https://searchengineland.com/ai-overview-fan-out-rankings-boost-citation-odds-study-466426
Profound (2025). "Introducing Query Fanouts: See what Answer Engines are really searching for." https://www.tryprofound.com/blog/introducing-query-fanouts
Wellows (2025). "Google AI Overviews Ranking Factors: 2025 Guide." https://wellows.com/blog/google-ai-overviews-ranking-factors/
WordLift (2025). "Query Fan-Out: A Data-Driven Approach to AI Search Visibility." https://wordlift.io/blog/en/query-fan-out-ai-search/
TechCrunch (2025). "Google's AI Overviews have 2B monthly users; AI Mode 100M in the US and India." https://techcrunch.com/2025/07/23/googles-ai-overviews-have-2b-monthly-users-ai-mode-100m-in-the-us-and-india/
Perplexity (2025). "Architecting and Evaluating an AI-First Search API." https://research.perplexity.ai/articles/architecting-and-evaluating-an-ai-first-search-api
Mike King, SparkToro Office Hours (2026). "Office Hours: Optimizing for Google vs. LLMs." https://www.youtube.com/watch?v=TOjda22Zatw
Aleyda Solis (2025). "Google AI Mode's Query Fan-Out Technique: What is it & How Does it Mean for SEO?" https://www.aleydasolis.com/en/ai-search/google-query-fan-out/
Marie Haynes (2025). "Understanding Query Fan-Out in Google's AI Mode." https://www.mariehaynes.com/ai-mode-query-fan-out/
Simon Schnieders (2025). "Google's 'Query Fan-Out Technique' Explained." https://www.linkedin.com/pulse/googles-query-fan-out-technique-explained-simon-schnieders-wmjge
Search Engine Journal (2025). "Google's Thematic Search Patent." https://www.searchenginejournal.com/google-query-fan-out-patent/547983/
Semrush (2025). "We Tested Query Fan-Out Optimization (Here's What We Learned)." https://www.semrush.com/blog/query-fan-out-experiment/
Semrush (2025). "Semrush Report: AI Overviews' Impact on Search in 2025." https://www.semrush.com/blog/semrush-ai-overviews-study/
Otterly.AI (2025). "Query Fan Out Analysis Tool." https://otterly.ai/geo/query-fan-out/
Locomotive Agency (2025). "SEO for AI Search: Introducing the Query Fan-Out Tool." https://locomotive.agency/blog/rethinking-seo-for-ai-search-introducing-locomotives-query-fan-out-tool/
Kopp Online Marketing (2025). "From Query Refinement to Query Fan-Out: Search in times of generative AI and AI Agents." https://www.kopp-online-marketing.com/from-query-refinement-to-query-fan-out-search-in-times-of-generative-ai-and-ai-agents
ALM Corp (2026). "Query Fan-Out Impact: Complete 2026 Guide to AI Search Rankings." https://almcorp.com/blog/the-query-fan-out-impact/
Nectiv Digital (2025). "New Research: What We Learned From Analyzing 60K+ Google Fan-Out Queries." https://nectivdigital.com/new-research-we-analyzed-60k-google-fan-out-queries/
SAASstorm (2025). "Reddit SEO and LLM Optimisation for B2B SaaS: Complete 2026 Playbook." https://saastorm.io/blog/reddit-ai-seo/
Google (2019). "Understanding searches better than ever before." https://blog.google/products/search/search-language-understanding-bert/
About the Author
The Ekamoira Research Team analyzes millions of search queries, AI responses, and citation patterns to help brands understand and optimize their visibility in AI-powered search. Our research combines proprietary data from ChatGPT, Perplexity, Google AI Overviews, and traditional SERP analysis.