Undoubtedly Google’s most anticipated release of 2023, Bard has arrived in a limited beta. While it’s only available to users in the US and the UK, I was able to gain access to an early preview.

While I’ll be reviewing Google’s Bard from an SEO professional’s point of view, I’ll also be covering how Bard’s generative AI performs from a user’s perspective.

Bard has now been available to the public for over a week, and I’ve had enough time to understand the ins and outs of Google’s chatbot and how it compares to Bing’s Sydney.

Much like my review of Bing’s AI Chat, I’ll try to provide a balanced review of Bard. But first, here’s how I was able to access Bard, while located in Australia.

Accessing Bard outside of the US and the UK

Like many who are eager to test out Bard, you might not be located in either the US or the UK. That’s fine, and something you can easily work around.

The first step is signing up for the waitlist. An annoying part of this process is that you can’t use a Google Workspace email, so you must have an @gmail.com address.

Once you’ve submitted your details, you’ll need to wait for an email from Google confirming that access has been granted. If you’re located outside the US or the UK, a notice like the one below will then appear.

Use a VPN to access Bard if you’re located outside of the UK or the US

To get around this, you’ll need a VPN installed. The simplest VPN to use is probably ExpressVPN, which you’ll need to install on your computer. Use the trial for the duration of your testing if you don’t want to keep it indefinitely.

Once you’ve enabled the VPN while on the screen shown above, refresh your browser on bard.google.com and the AI chat will appear. Keep the VPN set to a US or UK location to continue using Bard.

Now, what does my Twitter community think of Bard? I ran an early poll, and here’s what users thought of Bard in its early stages.

Almost 70% of users think Bing’s chatbot beta is better

Bing must have done something right, because it’s beating Google’s AI chatbot offering in the beta phase. That’s a big deal, and quite a resounding preference for Bing.

A big part of this preference for Sydney comes down to its use of citations, which I’ll get into in a moment. But more than anything, users simply felt that Bard was unexciting.

Google’s PR response to this feedback seems to be that they were avoiding outcomes considered “dangerous” with their AI, a direct jab at Bing.

While this certainly carries some merit, it feels like quite an easy fallback for Google, giving the impression that they are in complete control of Bard. But why not make the beta more exciting, then?

Moving forward, I’ll be reviewing Google’s Bard across four categories: Accuracy, Succinctness, Creator Attribution, and User Interface. Each contributes to my overall opinion of Bard’s experimental phase.

How accurate is Bard?

The early preview of Bard was a PR disaster for Google, with incorrect information shown in one of its AI-generated results. This was something Google needed to avoid happening again at all costs, and likely the reason for the delay in Bard’s release.

After testing out Bard, I found its accuracy to be reasonable. It didn’t seem to be hallucinating at the same level as Bing’s AI did in its early stages. While this is a net positive for the AI Chat, it still left a lot to be desired.

There are many instances of Bard answering users with very corporate responses: admitting it didn’t have enough information, or saying it was unable to answer a question because it is a Large Language Model (LLM).

Now, I’m far from an LLM or AI expert, but that feels like far too safe a fallback response, and not something that seems difficult to program: simply say “we don’t have an answer” whenever there isn’t enough confidence in accuracy. At least Bing was having a go at taking difficult queries head-on.
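To show what I mean by “not difficult to program”, here’s a toy sketch of that kind of confidence-threshold guardrail. The threshold, fallback text, and example values are all made up for illustration, and this is certainly not how Bard actually works under the hood:

```python
# Toy sketch of a confidence-threshold fallback, the kind of 'safe'
# behaviour described above. Not Google's actual implementation.

CONFIDENCE_THRESHOLD = 0.7  # arbitrary cut-off for this illustration

FALLBACK = ("I'm a large language model and don't have enough "
            "information to answer that.")

def answer_or_fallback(answer: str, confidence: float) -> str:
    """Return the model's answer only if it clears the threshold."""
    return answer if confidence >= CONFIDENCE_THRESHOLD else FALLBACK

print(answer_or_fallback("A well-supported answer.", 0.9))  # answer shown
print(answer_or_fallback("A shaky guess.", 0.3))            # fallback shown
```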

A good example of this can be seen in a LinkedIn article written by Aleyda Solis early last week. Aleyda asked Bing who she was and compared the answer to Bard’s, and here’s what each came back with.

Bard apparently doesn’t have enough information for a simple query

So Bard somehow doesn’t have access to enough information about one of the most well-known people in the SEO industry? Bing attempts to answer the question and passes with flying colours.

I would say that Bard is accurate overall for an LLM, but when it declines to answer a large portion of submitted queries, its accuracy becomes harder to judge. Until Bard attempts to answer more queries, comparing the two fairly is difficult.

How succinct are Bard’s answers?

One of the core reasons Google’s featured snippets have contributed to their success in Search in such a big way is their succinctness. Answers need to fit within a limit of roughly 40-50 words while highlighting the most important information.

This was a reason why I felt AI Chat was reinventing the wheel in its early stages. Both traditional search engines and AI Chat are trying to provide answers in a very similar text-based format, with the supposed benefit of AI being better answers.

When comparing Bard against Bing for answer succinctness, Bard provides longer answers by default, which generally feel more drawn out than they need to be. Bing Chat, by contrast, is more likely to provide a shorter answer when a longer response isn’t required.

Bard’s answers feel unnecessarily long-winded in many instances
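As a rough way to quantify that difference yourself, here’s a minimal sketch that scores answers against a featured-snippet-style word budget. The 50-word ceiling is my own assumption borrowed from the guideline above, and the placeholder strings are ones you’d swap for real Bard and Bing responses:

```python
# Score chatbot answers against a featured-snippet-style word budget.
# The budget below is an assumption based on the ~40-50 word guideline.

WORD_BUDGET = 50

def verbosity_report(name: str, answer: str) -> str:
    """Report an answer's word count relative to the budget."""
    words = len(answer.split())
    over = words - WORD_BUDGET
    status = f"{over} words over budget" if over > 0 else "within budget"
    return f"{name}: {words} words ({status})"

bard_answer = "..."  # paste a real Bard response here
bing_answer = "..."  # paste a real Bing response here

print(verbosity_report("Bard", bard_answer))
print(verbosity_report("Bing", bing_answer))
```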

This goes back to Google’s latest messaging around Bard: that it isn’t designed as a replacement for Search. That is, however, drastically different from the messaging Google presented earlier in the year, when Bard was positioned as a direct replacement for some featured snippets.

This is a strategic move by Google, because it enables them to keep funnelling users into traditional Search through the ‘Google it’ icon, allowing them to serve ads to users as they always have.

Because Bing isn’t pushing users to its Search offering (rightfully so, given its many issues), it instead prioritises content creators, whose pages are the only external links presented.

Does Bard support content creators?

The short answer is no. Citations are rare in Bard’s answers, which is a major downside and one of the core reasons the SEO community hasn’t supported the initial beta offering.

In instances where Bard does provide an answer taken directly from a website, the reference appears in a tiny section at the bottom titled ‘Sources – Learn more’.

Unlike Bing, Bard doesn’t indicate through footnotes which parts of an answer were taken from which websites. This is a major difference between the two AI Chat tools.

Bing does a much better job with content creator attribution
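To make the structural difference concrete, here’s a toy rendering of both attribution styles. The sentences and URLs are invented placeholders, and neither function reflects how either product is actually built:

```python
# Toy rendering of the two attribution styles discussed above.
# Sentences and URLs are invented placeholders.

sentences = [
    ("The example answer's first claim.", "https://example.com/page-one"),
    ("Its second claim, from elsewhere.", "https://example.com/page-two"),
]

def footnote_style(pairs):
    """Bing-style: numbered inline markers with a numbered source list."""
    body = " ".join(f"{text} [{i}]" for i, (text, _) in enumerate(pairs, 1))
    refs = "\n".join(f"[{i}] {url}" for i, (_, url) in enumerate(pairs, 1))
    return f"{body}\n{refs}"

def trailing_sources_style(pairs):
    """Bard-style: plain answer, one unnumbered source block at the end."""
    body = " ".join(text for text, _ in pairs)
    refs = "\n".join(url for _, url in pairs)
    return f"{body}\nSources - Learn more:\n{refs}"

print(footnote_style(sentences))
print()
print(trailing_sources_style(sentences))
```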

It is said that Neeva was the first to present AI results in the footnote-referencing format, although I found this not to be true: Microsoft was experimenting with a similar format as far back as 2018. This is an important consideration for my Google Bard review, as Bing appears to be the OG creator of footnote references with AI.

Even if Bard is summarising various pages of content found on the web and making the answer its own, that still doesn’t give Google the right to withhold sources. No matter how Google positions Bard, providing sources is integral to a healthy web ecosystem.

It sounds like Google intends to get better at linking out of Bard to creators, but for the moment, I’m giving them a ‘fail’ with respect to supporting creators.

Is Bard’s UI better than Bing’s?

The User Interface (UI) is important for the success of AI Chat. ChatGPT started the trend with a very simple interface that had results slowly transcribed onto the screen.

Bing represented a big shift in UI from ChatGPT, presenting what felt like a groundbreaking experience for AI Chat users. While the text continued to be transcribed on screen (recently becoming a lot faster), the interface was engaging and fun to use.

Fast-forward several months from ChatGPT’s launch, and Bard is again employing a very simple, clean interface, very much in line with other Google products.

I quite like both Bard’s and Bing’s UIs for different reasons. Bard incorporates the blue and purple stars (which inspired my feature image, created in Midjourney), along with additional functions for previewing alternative draft responses.

Bing, on the other hand, shows users some of the actions it’s computing with green ticks, along with an engaging interface. Bard is faster to generate results than Bing, likely because it moved away from the appearance of typing out results.
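That ‘appearance of typing’ is easy to simulate. Here’s a minimal sketch, not representative of either product’s actual implementation, showing why streaming characters with an artificial delay feels slower than rendering an answer in one block:

```python
import sys
import time

RESPONSE = "This is a sample chatbot answer rendered two different ways."

def render_instantly(text: str) -> None:
    """Bard-style: the whole answer appears at once."""
    print(text)

def render_with_typing_effect(text: str, delay: float = 0.03) -> None:
    """ChatGPT/Bing-style: characters trickle onto the screen."""
    for char in text:
        sys.stdout.write(char)
        sys.stdout.flush()
        time.sleep(delay)  # the artificial pause creates the 'typing' feel
    print()

render_instantly(RESPONSE)           # perceived as immediate
render_with_typing_effect(RESPONSE)  # adds len(RESPONSE) * delay seconds
```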

Key takeaways

Overall, I’m impressed with Google’s Bard. While it’s not quite at the same level as Bing’s AI Chat, I could see it overtaking Bing in the near future if progress continues at this rate. Bard just needs to cite its sources more effectively!

Here are some of the key takeaways from my Google Bard review:

  • If you would like to access Bard and you’re not located in either the UK or the US, you’ll need to use a VPN. And don’t forget that a Google Workspace email won’t work (for some ridiculous reason).
  • According to a recent poll, almost 70% of users think that Bing’s AI Chat is currently better than Google’s Bard. This was based on an immediate reaction, and could easily change over the coming months.
  • Bard appears to have a reasonable level of accuracy when generating results and does seem to hallucinate to a lesser extent than Bing’s AI Chat did in the early stages. But it often does not provide answers to questions, making it hard to judge.
  • Bard’s answers are generally more long-winded and more taxing to read than Bing’s. Bing’s answers are purposely kept shorter as a whole, which makes them easier to consume.
  • Bard’s biggest downfall is its lack of referencing for content creators. If Bing is able to do this effectively, then why can’t Google? Bard has a long way to go to support the creators that are powering its AI Chat.
  • Bard’s UI takes a different approach to Bing’s but doesn’t feel like anything particularly new has been included. I prefer Bing’s UI currently, but Bard’s result generation is faster, which is core to the success of AI Chat.

I will continue to test out both Google’s Bard and Bing’s Sydney over the coming weeks and will provide updates as necessary. If you would like to stay in the loop, make sure to bookmark my SERP timeline and follow my personal and SERP updates account on Twitter.

Update:

Bard seems to think that I think highly of it, based on my review. While this is true in part, the lack of referencing is a glaring downside to Bard, and something Google will need to address immediately in order to catch up to Bing.