A recently published testimony from a Google engineer has shed some light on how the tech giant scores page quality and ranks content in its search engine results.

The testimony, which has been partially redacted and made public by the U.S. Department of Justice, reveals that Google does indeed use a popularity signal derived from Chrome browser data. This is a rare confirmation of how user interaction can influence search visibility.

While the document avoids revealing in-depth technical details, it does provide a broad overview of the signals Google uses in its ranking systems. It outlines how the algorithms function in general terms, offering insight into the types of data and measurements involved without disclosing their precise workings.

 

Hand-Crafted Signals

The document starts by explaining how signals used in Google’s ranking systems are developed. It refers to a process known as “hand-crafting”, which involves using data from sources such as quality raters and user clicks.

This data is then processed using mathematical and statistical models to produce ranking scores. The term “hand-crafted” in this context refers to algorithms that have been carefully fine-tuned by Google’s search engineers.

It’s important to note that this doesn’t mean websites are being ranked manually. Rather, engineers adjust the scale and behaviour of algorithms based on observed data and testing to improve overall search results.
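
To make the idea of “hand crafting” more concrete, here is a minimal, hypothetical sketch in Python. Nothing below comes from Google’s actual code; the function name, thresholds and curve are invented purely to illustrate what it means for engineers to tune a signal’s scale and behaviour by hand.

# Hypothetical illustration only: the name, thresholds and curve shape are
# invented to show what an engineer-tuned ("hand-crafted") signal might look like.

def handcrafted_click_signal(long_clicks: int, impressions: int,
                             min_impressions: int = 20,
                             saturation: float = 0.35) -> float:
    """Turn raw click data into a 0-to-1 score using an engineer-chosen curve."""
    if impressions < min_impressions:
        return 0.0  # below this threshold the data is treated as too noisy to use
    rate = long_clicks / impressions
    # A saturating curve: gains flatten once the long-click rate passes `saturation`,
    # so a single very popular page cannot dominate the score.
    return min(rate / saturation, 1.0)

# Engineers would adjust min_impressions and saturation after reviewing
# quality-rater data and experiment results, then re-evaluate the change.
print(handcrafted_click_signal(long_clicks=40, impressions=100))  # 1.0 (capped)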

 

Google’s ABC Signals

According to the U.S. Department of Justice document, Google uses three key types of ranking signals known as the ABC Signals. These stand for Anchors (links from other pages to the target page), Body (the presence of search query terms within the page’s content), and Clicks (specifically how long a user stays on a page before returning to the search results).

The ABC signals are only a simplified component of Google’s broader ranking process. In reality, search result rankings are shaped by a highly intricate system involving hundreds—if not thousands—of algorithms. These systems play a role in every phase, from indexing and analysing links, to spam detection, personalisation, and even re-ranking of search results. Google representatives such as Liz Reid have previously referenced the “Core Topicality Systems” as part of this process, while Martin Splitt has spoken about “annotations” used to help understand webpages.

The document explains that these ABC signals form a foundation for measuring how relevant a webpage is to a user’s query. This process, referred to as Topicality (T*), blends the three types of signals in a method that is mostly hand-crafted by Google engineers.

It further elaborates that topicality scoring is technically complex, involving teams of engineers who work on difficult mathematical problems to refine how content relevance is determined.
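
The testimony confirms that the A, B and C signals feed into the topicality score, but not how they are combined. As a rough mental model only, assuming made-up weights and a simple weighted blend, the combination could look something like this:

# Rough mental model only: the weights and the linear blend are assumptions,
# not Google's actual topicality (T*) formula.

def topicality_score(anchors: float, body: float, clicks: float,
                     weights: tuple = (0.3, 0.5, 0.2)) -> float:
    """Blend three 0-to-1 signal scores into a single 0-to-1 topicality estimate."""
    wa, wb, wc = weights
    return wa * anchors + wb * body + wc * clicks

# Example: strong on-page match, moderate link evidence, little click history.
print(topicality_score(anchors=0.4, body=0.9, clicks=0.1))  # 0.59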

Google reportedly prefers hand-crafted signals because they offer greater transparency. If something goes wrong, the development team knows exactly where to look to make improvements or fixes. This is contrasted with Microsoft’s automated system at Bing, which, according to the document, is harder to troubleshoot when issues arise due to the less transparent nature of its approach.

This insight underscores not only the complexity behind Google’s search engine but also the company’s strategy of keeping its ranking signals understandable and manageable through direct human input.

 

Interplay Between Page Quality And Relevance

One key insight shared by a Google search engineer is that a webpage’s quality score is largely independent of specific search queries. If a page is identified as high-quality and trustworthy, it is generally considered reliable across a broad range of relevant searches. This is what is meant by the term “static” — the quality score isn’t recalculated for each individual search query.

That said, there are still relevance-related signals linked to the specific query that contribute to how pages are ultimately ranked. This means while page quality plays a foundational role, relevance remains essential in determining which pages appear in the top search results.

The document further clarifies this by noting that although quality is generally consistent across queries, there are instances where query-specific information is taken into account. For example, a page may be of high quality but offer general information. If a user’s query is highly technical or specific, the system may prioritise another page of similar quality that better matches the technical depth required.
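
One way to picture how a static quality score and a query-specific relevance score might interact is the small sketch below. The page names, scores, weights and final combination are all assumptions made for illustration; the testimony only establishes that quality is computed independently of the query while relevance is not.

# Illustration only: page names, scores and the combination are invented.
# The point is that quality is precomputed per page (static), while topicality
# is calculated fresh for each query.

PAGE_QUALITY = {
    "example.com/overview": 0.90,   # high quality, general coverage
    "example.com/deep-dive": 0.85,  # similar quality, more technical depth
}

def rank_score(page: str, topicality: float, quality_weight: float = 0.4) -> float:
    """Combine a static per-page quality score with a query-specific topicality score."""
    quality = PAGE_QUALITY.get(page, 0.0)
    return quality_weight * quality + (1 - quality_weight) * topicality

# For a highly technical query, the deeper page can outrank the general one
# on topicality even though its static quality score is no higher.
print(rank_score("example.com/overview", topicality=0.5))   # 0.66
print(rank_score("example.com/deep-dive", topicality=0.8))  # 0.82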

The engineer emphasises the significance of quality signals, stating:
“Q* (page quality – meaning the trustworthiness of the content) is incredibly important. If competitors were to see the internal data logs, they would have a clear sense of a website’s authority in the eyes of Google.”

They also noted that the quality score remains a critical factor even today, and it is often the element that website owners raise the most concerns about. This underscores how essential trust and perceived authority are in how Google ranks and evaluates content across the web.

 

Cryptic Chrome-Based Popularity Signal

Another ranking signal was mentioned, although its exact name has been redacted. It appears to be connected to measuring popularity, and was vaguely described as:

“[redacted] (popularity) signal that uses Chrome data.”

This has led some to speculate that it could validate claims that the recent Chrome API leak revealed actual ranking factors. However, many in the SEO community, including myself, believe that those APIs are intended for developers. They are likely used to display performance metrics—such as Core Web Vitals—within the Chrome Dev Tools interface, rather than being direct ranking components.

That said, it’s entirely possible this reference points to a separate popularity signal we haven’t yet identified or understood fully.

The engineer also referred to a different set of leaked documents that listed certain parts of Google’s ranking system. However, they stressed that the documents on their own don’t provide enough detail to reverse-engineer how the algorithms work.

As the engineer explained:
“There was a leak of Google documents which named certain components of Google’s ranking system, but the documents don’t go into specifics of the curves and thresholds. For example, the documents alone do not give you enough details to figure it out, but the data likely does.”

This suggests that while the leaks reveal broad structures, the real insight would come from access to internal data—something not made available in the leaks.

 

Takeaway

A newly published document summarises a deposition given by a Google engineer to the U.S. Justice Department. It outlines certain aspects of how Google’s search ranking systems operate, offering insights into the way specific signals are designed and used.

The engineer’s testimony touches on manually developed signals, the use of fixed page quality scores, and a lesser-known popularity signal that appears to be based on data collected through Chrome.

This document gives a rare glimpse into the workings behind Google’s algorithms, shedding light on how elements such as topical relevance, trust indicators, user click patterns and the transparency of hand-crafted signals contribute to the ranking of websites.

 

More Digital Marketing BLOGS here: 

Local SEO 2024 – How To Get More Local Business Calls

3 Strategies To Grow Your Business

Is Google Effective for Lead Generation?

What is SEO and How It Works?

How To Get More Customers On Facebook Without Spending Money

How Do I Get Clients Fast On Facebook?

How Do I Retarget Customers?

How Do You Use Retargeting In Marketing?

How To Get Clients From Facebook Groups

What Is The Best Way To Generate Leads On Facebook?

How Do I Get Leads From A Facebook Group?

How To Generate Leads On Facebook For FREE
