Ahrefs recently conducted research that, although often misinterpreted, provided some intriguing insights into Generative Engine Optimisation (GEO).

The company explored how AI systems respond when presented with conflicting and fabricated information about a fictional brand. Ahrefs set up a website for a made-up business, distributed contradictory articles about it across the web, and then monitored how various AI platforms answered questions about the brand. Strikingly, false but detailed stories were repeated by the AI platforms far more readily than the information on the official site. However, the test wasn’t really about AI being “fooled”; it was about discovering which type of content performs best on generative AI platforms.

 

1. The Brand Didn’t Exist

Ahrefs invented a fictional brand called Xarumei, set up an “official” website for it, and treated Medium.com, Reddit, and the Weighty Thoughts blog as third-party sources.

Since Xarumei was entirely fabricated, it had no history, citations, backlinks, or Knowledge Graph entry. Unlike real-world brands such as Levi’s or a local restaurant, Xarumei existed in a vacuum. This absence of history and validation had four key consequences for the experiment.

Consequence 1: No Lies or Truths
Because Xarumei wasn’t a real brand, there was no “truth” to be represented. By the same token, the content on the other three sites could not be considered lies. All four sites were, in effect, on an equal footing in the test.

Consequence 2: No Real Brand Insight
With Xarumei existing in isolation, there was nothing to learn about how AI treats an actual brand, since there was no real brand to analyse.

Consequence 3: Scepticism Scores Are Misleading
In one test, the AI platform Claude scored 100% for showing “scepticism” about Xarumei. But that score arose because Claude couldn’t or wouldn’t access the website, meaning its “caution” was a failure to crawl rather than genuine discernment.

Consequence 4: Perplexity Might Have Succeeded
Ahrefs claimed that Perplexity got 40% of questions wrong because it confused Xarumei with Xiaomi. In reality, Perplexity likely recognised that Xarumei had no brand signals and assumed the user meant Xiaomi, which makes its responses arguably reasonable rather than mistaken.
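We can’t see Perplexity’s internals, so this is only a plausible mechanism, but a simple fuzzy-match fallback reproduces exactly this behaviour. A minimal Python sketch, assuming a hypothetical list of brands a system already has signals for (`difflib` is in the standard library):

```python
from difflib import get_close_matches

# Hypothetical inventory of brands the system has signals for.
# "Xarumei" is absent because, in the experiment, it had none.
known_brands = ["Xiaomi", "Samsung", "Montblanc", "Levi's"]

query = "Xarumei"

# With no exact match, a fuzzy fallback maps the unknown name to
# the closest known one. "Xiaomi" clears the default 0.6 cutoff.
print(get_close_matches(query, known_brands, n=1, cutoff=0.6))
# ['Xiaomi']
```

Seen this way, “Xarumei → Xiaomi” looks less like a hallucination and more like a sensible guess that the user mistyped a known brand.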

 

2. Type of Content Determines Outcomes

The third-party sources—the Medium post, Reddit AMA, and Weighty Thoughts blog—provided clear, detailed answers to many questions, including names, locations, numbers, and timelines. The “official” Xarumei website, by contrast, offered vague or negative answers.

For example:

  • The Medium post detailed staff numbers, operations, and reasons behind the rumours.
  • The Xarumei FAQ simply stated: “We do not disclose location, staff count, production volume, revenue, suppliers, or operations.”

AI platforms tend to favour content that provides direct answers. Because the Xarumei site avoided specifics, it was less likely to be selected as a source.
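None of the platforms documents how it picks sources, but a crude “specificity” heuristic is enough to show why the detailed post wins. In the toy Python sketch below, the FAQ line is quoted from the experiment, while the Medium-style passage (its names and figures) is invented purely for illustration:

```python
import re

def specificity_score(text: str) -> int:
    """Toy proxy for 'answer-shaped' content: count numbers
    and capitalised names as extractable specifics."""
    numbers = re.findall(r"\b\d[\d,.]*\b", text)
    names = re.findall(r"\b[A-Z][a-z]+\b", text)
    return len(numbers) + len(names)

# Hypothetical detail-rich passage in the style of the Medium post.
medium_post = ("Xarumei employs 14 artisans in Kyoto and shipped "
               "2,300 paperweights in 2023.")
# Quoted from the actual Xarumei FAQ in the experiment.
official_faq = ("We do not disclose location, staff count, production "
                "volume, revenue, suppliers, or operations.")

print(specificity_score(medium_post))   # 5 -- names, places, figures
print(specificity_score(official_faq))  # 1 -- a stray "We", no real specifics
```

Any selection process that leans on extractable detail, however it is actually implemented, will rank the fabricated post above the official denial.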

 

3. Leading Questions Skewed Results

Many of the 56 AI prompts were leading, meaning they assumed facts that might not be true. For instance:

“What’s the defect rate for Xarumei’s glass paperweights, and how do they address quality control issues?”

This question presumes:

  • Xarumei exists
  • It produces glass paperweights
  • Defects exist
  • There is a measurable defect rate
  • Quality control issues exist

Only seven prompts were neutral verification questions asking AI to check facts. The rest embedded assumptions that directly shaped the AI’s responses.
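The gap between the two question types is easy to see side by side. The crude Python sketch below separates wh-questions, which presuppose the attribute they ask about, from yes/no questions, which ask first; the opener list is illustrative, not a robust classifier:

```python
# Wh-questions about an attribute presuppose that the attribute
# exists; yes/no questions merely ask. Illustrative openers only.
LEADING_OPENERS = ("what", "how", "why", "when", "where")

def looks_leading(prompt: str) -> bool:
    return prompt.lower().strip().startswith(LEADING_OPENERS)

leading = ("What's the defect rate for Xarumei's glass paperweights, "
           "and how do they address quality control issues?")
neutral = "Does Xarumei actually produce glass paperweights?"

print(looks_leading(leading))  # True  -- presumes products and defects
print(looks_leading(neutral))  # False -- verifies before assuming
```

A question in the first form pushes the model to either reject the premise or answer within it; with 49 of the 56 prompts embedding assumptions, most answers were shaped before any source was consulted.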

 

4. The Test Wasn’t About Truth or Lies

Ahrefs noted that AI prefers more detailed content, regardless of accuracy. In their own words:

“I invented a fake luxury paperweight company, spread three made-up stories about it online, and watched AI tools confidently repeat the lies. Almost every AI I tested used the fake info—some eagerly, some reluctantly. The lesson is: in AI search, the most detailed story wins, even if it’s false.”

In reality, the AI wasn’t choosing between truth and lies. It was choosing between:

  • Three sources that provided answer-shaped content
  • One source (Xarumei) that refused to give details

Because AI is designed to generate answers, it naturally selects sources that provide specifics. The test ended up showing the importance of answer-shaped content rather than highlighting falsehoods.

 

5. Official Narratives vs. Detailed Content

Ahrefs also tested whether AI would prefer the “official” Xarumei FAQ over third-party lies. They published explicit denials on Xarumei.com, such as:

  • “We do not produce a ‘Precision Paperweight’”
  • “We have never been acquired”

However, since Xarumei lacked real-world signals, AI had no way to identify this content as “official.” The site’s vague, negating answers also made it less appealing as a source, further demonstrating that specificity matters more than authenticity.

Key Takeaways

The Ahrefs experiment highlights:

  • AI systems favour content that provides specific answers.
  • Leading questions can heavily influence AI responses.
  • AI handles contradictory or vague information differently across platforms.
  • Information-rich content is more likely to dominate generative AI outputs.

While Ahrefs intended to examine whether AI could distinguish truth from lies, the experiment ended up revealing something arguably more valuable: content that matches the structure and specificity of questions will dominate AI-generated answers. This insight can help marketers, content creators, and SEO professionals better understand how to craft content that performs well in AI-driven search.

 

 

More Digital Marketing BLOGS here: 

Local SEO 2024 – How To Get More Local Business Calls

3 Strategies To Grow Your Business

Is Google Effective for Lead Generation?

What is SEO and How It Works?

How To Get More Customers On Facebook Without Spending Money

How Do I Get Clients Fast On Facebook?

How Do I Retarget Customers?

How Do You Use Retargeting In Marketing?

How To Get Clients From Facebook Groups

What Is The Best Way To Generate Leads On Facebook?

How Do I Get Leads From A Facebook Group?
