A recent study highlights differences between how Google ranks pages and how large language models cite sources. The report compares citations from OpenAI’s ChatGPT, Google’s Gemini, and Perplexity with traditional Google search results, revealing gaps in alignment.

The research, conducted by SEO software company Search Atlas, analysed 18,377 matched queries to examine the relationship between search visibility and AI citations. The findings show that ranking highly on Google does not automatically translate to being cited by AI platforms.

Perplexity’s live web retrieval gives it citation patterns that more closely resemble traditional search results. According to the study, Perplexity demonstrated a median domain overlap of around 25–30% with Google, and a median URL overlap near 20%. In total, it shared approximately 43% of cited domains with Google.

ChatGPT, by contrast, showed much lower overlap with Google results. Its median domain overlap hovered around 10–15%, while URL-level matches generally remained below 10%. Overall, only about 21% of ChatGPT’s cited domains were shared with Google.

Gemini produced less consistent results. Some responses overlapped very little with search results, while others aligned more closely. Overall, just 4% of the distinct domains Gemini cited also appeared in Google's results, although those shared domains accounted for roughly 28% of all citations in Gemini's responses.
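The overlap figures above are, at heart, simple set comparisons between the sources an AI platform cites and the domains Google ranks. A minimal sketch of how such a percentage can be computed (the domain lists below are hypothetical, not the study's data):

```python
def overlap_pct(ai_cited, google_ranked):
    """Percentage of AI-cited items that also appear in Google's results."""
    ai, google = set(ai_cited), set(google_ranked)
    if not ai:
        return 0.0
    return 100 * len(ai & google) / len(ai)

# Hypothetical example: domains cited in one AI answer vs. Google's top results
ai_domains = ["example.com", "wikipedia.org", "nytimes.com"]
google_domains = ["wikipedia.org", "nytimes.com", "forbes.com", "bbc.co.uk"]

print(round(overlap_pct(ai_domains, google_domains), 1))  # two of three domains shared
```

The same calculation applies at the URL level, which is why URL overlap is always the stricter of the two measures: two platforms can cite the same domain while linking to entirely different pages on it.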

These results indicate that visibility on Google does not guarantee prominence in AI-generated citations. While Perplexity actively searches the web and aligns more closely with current rankings, ChatGPT and Gemini rely heavily on pre-trained knowledge and selective retrieval of sources.

Perplexity’s approach means that sites performing well in Google search are more likely to appear in its citations. This suggests that traditional SEO efforts can still influence visibility for retrieval-based AI platforms.

In contrast, ChatGPT and Gemini operate more selectively. They tend to cite a narrower set of sources, often independent of current Google rankings. As a result, URL-level matches between these models and Google remain low.

The study notes several limitations. Perplexity dominated the dataset, representing 89% of matched queries, while ChatGPT accounted for 8% and Gemini just 3%. This imbalance could influence how representative the results are across models.

Queries were matched using semantic similarity scoring with OpenAI's embedding model, applying an 82% similarity threshold. Paired queries therefore expressed similar information needs but were not identical.
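Semantic matching of this kind typically works by embedding both queries as vectors and comparing their cosine similarity against a threshold. A minimal sketch, assuming pre-computed embedding vectors (the three-dimensional vectors below are illustrative only; real embedding models produce vectors with hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(emb_a, emb_b, threshold=0.82):
    """Treat two queries as 'matched' when similarity meets the threshold."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Illustrative embeddings for two similarly worded queries
q1 = [0.9, 0.1, 0.2]
q2 = [0.85, 0.15, 0.25]
print(is_match(q1, q2))
```

The practical consequence for the study is that "matched" query pairs are near-paraphrases rather than exact duplicates, which is one reason the authors flag the matching method as a limitation.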

Additionally, the analysis covers only a two-month period, providing a snapshot rather than a long-term view. Longer studies would be needed to determine whether these overlap patterns persist over time.

For retrieval-focused systems like Perplexity, traditional SEO signals and domain strength are likely to remain significant factors for visibility. High-ranking pages are more likely to be cited, reinforcing the importance of optimised content.

For reasoning-oriented models such as ChatGPT and Gemini, these signals may have a weaker effect. These platforms draw on pre-trained knowledge and selective citation, meaning current search rankings have less influence on what is referenced.

Overall, the report demonstrates that AI citations and Google rankings operate according to different principles. Understanding these differences is crucial for marketers aiming to maintain visibility across both traditional search and AI-driven platforms.


More Digital Marketing BLOGS here: 

Local SEO 2024 – How To Get More Local Business Calls

3 Strategies To Grow Your Business

Is Google Effective for Lead Generation?

What is SEO and How It Works?

How To Get More Customers On Facebook Without Spending Money

How Do I Get Clients Fast On Facebook?

How Do I Retarget Customers?

How Do You Use Retargeting In Marketing?

How To Get Clients From Facebook Groups

What Is The Best Way To Generate Leads On Facebook?

How Do I Get Leads From A Facebook Group?
