An SEO professional has claimed on social media that removing meta descriptions from live websites led to an increase in traffic. According to the test, the absence of meta descriptions allowed Google to generate its own snippets, which, in this particular case, appeared to perform better.

The announcement sparked debate within the SEO community, as the idea challenges long-standing advice that every page should include a carefully written meta description to help attract clicks.

At the same time, another experienced SEO published an article arguing that such tests are more like “performative theatre” than true scientific analysis. They suggested that isolated experiments like these often fail to capture how Google’s ranking systems really work on a broader scale.

Their point was that real-world search results involve countless variables, including search intent, competition, algorithm updates and user behaviour—all of which are impossible to fully replicate in a limited test.

This disagreement raises questions about how SEOs should approach optimisation strategies: by strictly following best practices, or by experimenting to see what actually moves the needle.

Some argue that testing on live sites is valuable because it reflects genuine user reactions and changing algorithms. Others caution that without proper controls, these experiments risk drawing misleading conclusions that could waste time and resources.

The original tester insisted that their findings weren’t meant to be universal advice, but rather a prompt for SEOs to question assumptions and investigate what works for their own sites.

Meanwhile, the critic maintained that while testing is useful, it should be balanced against an understanding of how search engines actually process and display content.

The discussion also touches on the role of meta descriptions today, as Google often rewrites snippets based on the user’s query, rather than sticking to the provided text.

For some sites, especially those with thousands of pages, writing unique meta descriptions can be a major task—and might not even be necessary if Google rarely shows them.

Others believe that even if Google rewrites meta descriptions, having strong, relevant copy helps guide what the snippet might look like, while also reinforcing brand messaging.
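For context, the "provided text" being discussed here is simply the page's description tag. On large sites that copy is often generated from page data rather than written by hand, which is why the workload point above matters. Below is a minimal Python sketch of that templated approach; the field names and wording are illustrative assumptions, not a recommendation.

```python
import html

# Hypothetical page records; the field names are illustrative assumptions.
pages = [
    {"title": "Blue Widgets", "category": "Widgets", "count": 42},
    {"title": "Red Widgets", "category": "Widgets", "count": 17},
]

def meta_description(page: dict) -> str:
    """Build a <meta name="description"> tag from templated page data."""
    text = (
        f"Browse {page['count']} {page['title'].lower()} in our "
        f"{page['category'].lower()} range, with specs and prices."
    )
    # Keep the copy short; overly long descriptions tend to be truncated in snippets.
    text = text[:155]
    return f'<meta name="description" content="{html.escape(text)}">'

for page in pages:
    print(meta_description(page))
```

Whether Google uses that text, rewrites it, or ignores it entirely is exactly the point under debate.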

It’s clear this debate won’t be settled any time soon, but it does highlight the tension between traditional SEO best practices and data-driven experimentation.

Ultimately, both sides seem to agree on one thing: what works on one site may not work on another, and SEOs should remain flexible and curious.

Keeping an eye on traffic data, user behaviour and rankings can reveal which changes are actually having an impact.

While SEO trends and algorithms shift over time, thoughtful analysis and testing—balanced with a critical mindset—remain at the heart of good optimisation.


Are SEO Tests Performative Theatre?

Coincidentally, Jono Alderson, a respected technical SEO consultant, recently shared his thoughts on SEO testing in an article titled “Stop testing. Start shipping.” In it, he describes much of what passes as SEO testing as little more than “performative theatre.”

Alderson points out that while the concept of SEO testing sounds appealing and scientific, it doesn’t quite translate as neatly as it does in paid advertising. “You tweak one thing, you measure the outcome, you learn, you scale. It works for paid media, so why not here?” he writes.

The problem, he explains, is that SEO isn’t a closed system. It involves architecture, semantics, signals and algorithms – all operating within a constantly changing landscape. Trying to apply controlled testing methods to this environment misunderstands how the web, and Google itself, truly work.

Alderson highlights that search results can be highly volatile. Even external factors like the weather can affect how users behave and what they click on. This makes it extremely difficult to isolate the impact of any single change.

He adds, “Trying to isolate the impact of a single change in that chaos isn’t scientific. It’s theatre.” In other words, while testing might feel rigorous, it often fails to account for countless outside variables that could skew the results.

Alderson further notes that A/B testing, as commonly understood, doesn’t really fit within SEO because of this complexity. Instead, most SEO tests end up being “a best-effort simulation” riddled with assumptions and subject to unexpected variables.

Even the cleanest, most carefully planned tests can only suggest a possible relationship between a change and a result – they can’t fully prove cause and effect, especially in an environment as unpredictable as organic search.

He makes a strong case about the limitations of tests where neither inputs nor outputs can truly be controlled. In typical scientific studies, comparisons happen in closed systems where every factor can be managed and kept consistent.

However, when it comes to SEO, pages might target different types of search terms – from long-tail keywords to highly competitive phrases – which naturally have different potentials to perform. Daily fluctuations in user behaviour, seasonality, and even seemingly trivial events can all influence results.

Although Williams-Cook, the SEO professional who ran the original test, argued that he used a control group, Alderson points out just how hard it is to truly create identical conditions for comparison on live sites.

For example, Google’s algorithms themselves are always changing in ways that can’t be fully observed. These hidden shifts can heavily affect outcomes, meaning any test result might reflect factors that weren’t actually part of the test itself.

Williams-Cook claimed the 3% increase he observed was statistically meaningful and consistent across different sets of pages. But as Alderson argues, because Google’s processes are largely a “black box,” it’s impossible to know with certainty why that change occurred.
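For readers unfamiliar with the vocabulary, "statistically meaningful" here means the observed lift is unlikely to be explained by random sampling noise alone. The Python sketch below shows the general shape of that kind of check using invented numbers purely for illustration; it is not Williams-Cook's methodology or data. Note that a low p-value only rules out chance variation, not the hidden confounders Alderson describes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented per-page click counts, for illustration only.
control = rng.poisson(lam=100, size=500)   # pages that kept their meta descriptions
variant = rng.poisson(lam=103, size=500)   # pages with descriptions removed (~3% lift)

# Welch's t-test: is the difference in means unlikely under chance alone?
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
lift = variant.mean() / control.mean() - 1

print(f"observed lift: {lift:.1%}, p-value: {p_value:.3f}")
# A small p-value addresses sampling noise only; it says nothing about
# algorithm updates, seasonality, or other uncontrolled factors.
```

This is why the two sides can look at the same result and disagree: the arithmetic of significance is straightforward, but what the number actually means in an open, shifting system is not.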

Without being able to see inside the algorithms or fully control the environment, treating such test results as definitive evidence becomes problematic.

Ultimately, if it’s not possible to isolate one change in a complex, open system like organic search, it’s extremely challenging to make reliable claims based on the outcome. And while testing can still offer useful insights, Alderson’s argument is a reminder to view those results with a healthy dose of caution.


Focus On Meaningful SEO Improvements

Jono’s article highlights the limitations of SEO testing, but the heart of his piece is actually about something broader. He argues that by concentrating too heavily on what can be tested and measured, many in the industry end up neglecting deeper, “meaningful” improvements.

These meaningful changes, according to Jono, often relate to content quality and enhancing user experience—areas that can’t always be easily quantified or split-tested. His point is that an overemphasis on data and testing risks overshadowing the creative and strategic decisions that truly shape how people engage with a website.

It’s here that the discussion loops back to Williams-Cook. Even if, as Jono suggests, A/B SEO tests fall short of genuine statistical validity and amount to a kind of “theatre,” that doesn’t automatically mean Williams-Cook’s recommendation to skip meta descriptions is incorrect.

There is a genuine possibility that in some cases, allowing Google to generate its own snippets could lead to better outcomes, especially if those dynamically generated descriptions match user intent more closely than static, written ones.

At the same time, it’s worth recognising that SEO remains highly subjective. What proves effective for one website or audience might not matter as much for another. Different industries, content types and user behaviours can all shape what works best.

So it brings us back to the central question: is removing meta descriptions from every page really a meaningful change? Or is it a tactic that, while possibly offering a short-term lift in some situations, may not truly improve the user’s experience or add long-term value?

Ultimately, both perspectives have merit. Testing and data can reveal insights that challenge assumptions, but strategy shouldn’t be driven by numbers alone. There’s still room for thoughtful decisions rooted in understanding what content best serves visitors.

In today’s constantly evolving search landscape, balancing data-driven experiments with bigger-picture thinking might be the most sensible approach. And in the end, whether or not to remove meta descriptions should probably depend on each site’s unique context rather than a blanket rule.


More Digital Marketing BLOGS here: 

Local SEO 2024 – How To Get More Local Business Calls

3 Strategies To Grow Your Business

Is Google Effective for Lead Generation?

What is SEO and How It Works?

How To Get More Customers On Facebook Without Spending Money

How Do I Get Clients Fast On Facebook?

How Do I Retarget Customers?

How Do You Use Retargeting In Marketing?

How To Get Clients From Facebook Groups

What Is The Best Way To Generate Leads On Facebook?

How Do I Get Leads From A Facebook Group?

How To Generate Leads On Facebook For FREE
