Google now looks to be conducting global testing of AI-overviews for non-logged-in users in a limited capacity.

This comes after Google announced that they would be opening up the Search Generative Experience to non-logged-in users in the US for a small subset of queries.

While I have been keeping a close eye out for any testing since this announcement, I had been unable to find any evidence of public testing for non-logged-in users that could be verified beyond a handful of forum posts.

This is the first time I’ve come across AI-overviews within normal search results, with the test appearing across various regions from what I can see, including Australia, the US, the UK and India when checked using a VPN.

The global AI-overview limited test

So far, I’ve discovered that the AI-overviews are showing only on mobile and within knowledge panels for English searches. The focus of this testing reminds me of the early testing of the celebrity cards on desktop, which later rolled out beyond just celebrities.

When comparing the AI-overview test in normal search results against SGE and the description provided by Wikipedia, the answer to the query differs noticeably across each.

Comparing the standard knowledge panel description against SGE and the AI-overview test in normal search results.

When reviewing the description in each screenshot, I would lean toward the test in normal search results as the strongest description of the three, which should be considered a win from Google’s perspective at this phase of the testing and the looming rollout.

It is also interesting to note that the text highlighting colour within the test matches the knowledge panel, which feels more visually appealing and seamlessly integrated than the SGE answer, where the highlighting differs. This may, however, be because the features themselves are different (one sits within a knowledge panel and the other outside of it).

It makes sense that Google would be more assertive with their testing of AI-overviews within a feature like knowledge panels, considering knowledge panels tend to show for broader queries (less so for specific ones) and may carry less risk in terms of accuracy.

How the AI-overview test works

From the way the AI-overview mobile test in knowledge panels works, it appears that the AI-overview answers themselves are cached for users, meaning there is no loading delay while waiting for the answer to appear. That delay has been a major criticism of AI answers in general.
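To illustrate the idea, here is a minimal cache-first sketch in TypeScript. All names (overviewCache, generateOverview, getOverview) are hypothetical, and this is in no way Google’s actual pipeline; it only shows why pre-generated, cached answers can appear instantly, with only a cache miss paying the generation cost.

```typescript
// Minimal sketch: serving pre-generated answers from a cache so the
// user never waits on generation. Hypothetical names throughout;
// this is not Google's implementation.

interface Overview {
  answer: string;
  sources: string[];
}

// Fast lookup keyed by normalised query text.
const overviewCache = new Map<string, Overview>();

// Placeholder for the expensive model call that causes the loading
// delay criticised in live-generated AI answers.
async function generateOverview(query: string): Promise<Overview> {
  return { answer: `Summary for "${query}"`, sources: ["wikipedia.org"] };
}

async function getOverview(query: string): Promise<Overview> {
  const key = query.trim().toLowerCase();
  const cached = overviewCache.get(key);
  if (cached) {
    return cached; // Cache hit: shown instantly, no loading state.
  }
  const fresh = await generateOverview(key); // Cache miss: pays the delay once.
  overviewCache.set(key, fresh);
  return fresh;
}
```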

It also appears that only the most prominent web page used in generating the answer is rewarded with the visible reference and an impression within Google Search Console. All other sources are given the “+x more” treatment, which doesn’t list the URL.

It is only upon interacting with the AI-overview itself to expand the answer that the other sources are shown, which would yield an impression for britannica.com in this instance.
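As a rough mental model of that behaviour (a sketch under my own assumptions, not a real Search Console mechanism or API), impressions here act as though they are tied to what is actually rendered: the collapsed state lists only the top source, and the remaining URLs only become visible once the overview is expanded.

```typescript
// Rough sketch of the attribution behaviour described above: only
// URLs actually rendered to the user count as impressions. This is
// a hypothetical model, not a real Search Console API.

interface Source {
  url: string;
  prominence: number; // higher = more prominent in the answer
}

function visibleSources(sources: Source[], expanded: boolean): Source[] {
  const ranked = [...sources].sort((a, b) => b.prominence - a.prominence);
  // Collapsed: only the top source is listed; the rest sit behind
  // the "+x more" label with no URL shown.
  return expanded ? ranked : ranked.slice(0, 1);
}

function recordImpressions(sources: Source[], expanded: boolean): string[] {
  // An impression is logged only for URLs the user can actually see.
  return visibleSources(sources, expanded).map((s) => s.url);
}

// Matching the observation: britannica.com only earns an impression
// once the overview is expanded.
const sources: Source[] = [
  { url: "wikipedia.org", prominence: 0.9 },
  { url: "britannica.com", prominence: 0.6 },
];
console.log(recordImpressions(sources, false)); // ["wikipedia.org"]
console.log(recordImpressions(sources, true));  // ["wikipedia.org", "britannica.com"]
```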

The different attributes of the mobile AI-overview test.

Another aspect worth noting is that the “AI overviews are experimental” notice is less prominent in the knowledge panel test in normal search results, whereas it is much more visible at the top when replacing featured snippets in SGE. This could, however, simply be how the notice is displayed during this testing phase in normal search results.

Overall, I don’t mind this experience as a whole: the way the test is shown doesn’t differ much from normal search results with respect to attribution, and it doesn’t have the delay that generating a new AI-overview tends to involve.

Final thoughts

Since discovering the mobile testing of AI-overviews within knowledge panels, I’ve been unable to replicate it for other features (such as featured snippets) under the same limited test.

The reason this test is important to document, compared to the limited testing in the US, is that it is global and far more widespread than the reports I’ve seen in forums over the past few weeks – and I’m now able to consistently replicate the same test.

I’ll be continuing to update this article as I discover more testing of AI-overviews in the wild. For the moment, the broader testing looks to be confined in a way that is unlikely to expose Google to too much scrutiny.