David Quaid is joining the podcast once again, and David, you're going to be talking about how to rank in LLMs.
>> Absolutely, and thanks for having me back. I think there's a lot of information, or rather misinformation, about how people get visibility in search engines and in LLM search engines. This is something that is really close to my heart, because I'm a big believer in SEO and in that democratization of access to people. And I think you covered a story a while back about how GEO companies were paying SEOs to spread AI-generated overview
SEO expert David Quaid explains how large language models (LLMs) like ChatGPT and Perplexity actually rank content through a process called 'query fan-out' rather than trained knowledge. Quaid demonstrates that LLMs break down user prompts into simplified search queries sent to traditional search engines like Google, then synthesize results in real-time. He proves that ranking in LLMs requires traditional SEO optimization, as LLM citations directly correlate with Google rankings. Quaid debunks common myths about LLM visibility, including that schema markup, E-E-A-T signals, or Reddit preferences influence LLM citations. Using Google Search Console data, he shows how to identify LLM-driven traffic through zero-click queries with high impressions and top-10 rankings. He demonstrates that content can rank in LLMs within minutes of Google indexing, and provides practical methods for tracking LLM referral traffic through Google Analytics 4 and Looker Studio dashboards.
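The Search Console method described above (finding zero-click queries with high impressions and top-10 rankings) can be sketched as a short filter over a GSC performance export. This is a minimal sketch: the sample data is invented, and the column names assume the standard GSC CSV export format (Query, Clicks, Impressions, Position).

```python
import csv
import io

# Hypothetical Google Search Console performance export for illustration;
# column names mirror the standard GSC CSV export.
SAMPLE = """Query,Clicks,Impressions,Position
best crm for startups,0,1200,4.2
crm pricing comparison,35,900,2.1
what is query fan-out,0,45,8.0
crm reviews 2024,0,2500,6.7
"""

def likely_llm_queries(csv_text, min_impressions=100, max_position=10):
    """Flag zero-click queries with high impressions and a top-10 average
    position -- the pattern the episode attributes to LLM-driven demand,
    where the LLM reads the result but the user never clicks through."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [
        r["Query"]
        for r in rows
        if int(r["Clicks"]) == 0
        and int(r["Impressions"]) >= min_impressions
        and float(r["Position"]) <= max_position
    ]

print(likely_llm_queries(SAMPLE))
# ['best crm for startups', 'crm reviews 2024']
```

The thresholds are judgment calls, not figures from the episode; tune them to your site's typical impression volumes.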
LLMs use 'query fan-out' to break down complex prompts into simplified search queries sent to traditional search engines like Google or Bing, then synthesize results in real-time rather than relying on trained knowledge.
Ranking in Google directly determines LLM visibility: content indexed in Google can appear in LLM results within 2-10 minutes, with no separate optimization required beyond traditional SEO.
Schema markup provides no meaningful advantage for LLM visibility in 99% of cases, as LLMs require the actual content and schema cannot replace or summarize page information effectively.
LLM traffic appears in Google Analytics as referral traffic from sources like 'chatgpt.com' or 'perplexity.ai' with approximately 0.01-0.1% click-through rates compared to traditional search's 1-40%.
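Separating that referral traffic out in GA4 reduces to matching session sources against a list of LLM hostnames. A minimal sketch, assuming the `chatgpt.com` and `perplexity.ai` sources mentioned above; the rest of the hostname list is an assumption, so extend it with whatever sources actually appear in your GA4 reports.

```python
# Hostnames that show up as referral sources for LLM traffic. Only
# chatgpt.com and perplexity.ai come from the episode; the others are
# assumed additions -- check your own GA4 source/medium report.
LLM_REFERRERS = {
    "chatgpt.com", "chat.openai.com", "perplexity.ai",
    "gemini.google.com", "claude.ai", "copilot.microsoft.com",
}

def is_llm_referral(source: str) -> bool:
    """Classify a GA4 session source as LLM-driven referral traffic."""
    host = source.lower().removeprefix("www.")
    return host in LLM_REFERRERS

# Toy (source, session_count) pairs standing in for a GA4 export.
sessions = [("chatgpt.com", 40), ("google", 12000), ("perplexity.ai", 15)]
llm_sessions = sum(n for src, n in sessions if is_llm_referral(src))
print(llm_sessions)  # 55
```

The same predicate can drive a calculated field or filter in a Looker Studio dashboard fed from GA4 data.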
How LLMs retrieve info in real time (not stored). PageRank still influences LLM citations. Query fan-outs explained. Drift in LLM results. Looker Studio reporting for LLM visibility
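The query fan-out process covered in the episode can be sketched as a pipeline: decompose the prompt into simplified queries, run each through a traditional search engine, and synthesize the retrieved results in real time. This is a toy illustration only; the connective-splitting heuristic and the `search` stub are assumptions, not how any production LLM implements retrieval.

```python
def fan_out(prompt: str) -> list[str]:
    """Toy decomposition of a complex prompt into simplified search
    queries. Real systems have the model generate the sub-queries;
    splitting on 'and' just illustrates the shape of the process."""
    parts = [p.strip() for p in prompt.replace("?", "").split(" and ")]
    return [p for p in parts if p]

def search(query: str) -> str:
    # Stub for a traditional search-engine call (Google/Bing). An LLM
    # would fetch the top-ranked pages here -- which is why Google
    # rankings drive LLM citations.
    return f"top results for: {query}"

def answer(prompt: str) -> str:
    # Synthesize retrieved snippets in real time rather than recalling
    # trained knowledge -- the core claim of the episode.
    snippets = [search(q) for q in fan_out(prompt)]
    return " | ".join(snippets)

print(fan_out("What is the best CRM for startups and how much does it cost?"))
# ['What is the best CRM for startups', 'how much does it cost']
```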