Screenshot of Google

Google’s reputation is built on its algorithms, which are increasingly being used to pull answers directly out of the search engine’s index of results. But what used to be a simple list — the first page of results — has changed over the past couple of years: Sometimes, Google extracts one result that it thinks will best answer whatever it is you’re asking, and puts that answer in a featured section right at the top of your results. That section is called a “snippet,” and it doesn’t always get the answer right.

Here are a few of the questions that the Outline’s weekend report on Google snippets identified, along with the answers Google provided at the time:

  • Is Obama Planning a Coup? “According to details exposed in Western Center for Journalism’s exclusive video, not only could Obama be in bed with communist Chinese, but Obama may in fact be planning a communist coup d’etat at the end of his term in 2016!”
  • Who is the King of the United States? “Barack Obama.” Which, interestingly, was sourced to an article criticizing Google for initially pulling this answer from Breitbart.
  • Presidents in the Klan. For this snippet, Google listed four presidents, citing a dubious article whose headline indicated there were five. To be clear, there’s no credible evidence to suggest that any of the listed presidents were Klan members.

The first of these three examples has since been corrected. However, we were able to replicate the results for “Who is king of the United States?” on Tuesday evening, and “Presidents in the Klan” pulled a similar answer but from a different source.

These snippets are Google’s attempt to directly answer whatever question you might be asking, without making you search through results. They’re mostly great for, say, the date of a holiday or other basic information — although sometimes even that can get really messed up. Overall, though, Google’s algorithms have a reputation for being reliable; that reliability is what made Google successful as a search engine. Which is perhaps why it feels so jarring when would-be researchers discover that Google is capable of being very wrong.

One place where Google has a lot of trouble is when bad information on a topic is more widely discussed and shared than good information. For instance: In the last weeks of the election, the idea that Hillary Clinton was secretly a drunk became an extremely popular right-wing meme. Here is what Google tells you if you ask, “is hillary clinton an alcoholic”:

Screenshot Google

The link goes to a Gateway Pundit article that quotes a tweet that reads, “Sick Hillary Clinton is an Alcoholic. Source of health problems and falls?” To be clear, the claim that Clinton has a drinking problem is completely unproven. The evidence cited in support of this meme merely indicates that Clinton likes to have a drink, which is a very different thing.

Google’s search results contain plenty of articles about Clinton’s relationship to alcohol that either nod to or explicitly spread the unproven claim of a problem drinking habit. There are relatively few articles — we wrote one, for instance — that examine the question critically. So when Google looks for answers to this question, it finds a lot of bad ones.

If you’re wondering why these snippets are so crucial, imagine that you’re accessing Google not through a browser search but through a voice assistant. Following the Outline’s reporting, a BBC journalist asked Google to verbally answer the question “Is Obama planning a coup?” It read the inaccurate snippet we quoted above, verbatim.

As useful as these snippets may often be, the journalist’s video of that exchange makes it easy to see just how important it is for Google not to get them wrong.

Google’s search results have long been subject to quirks and manipulation. But like Facebook’s long-standing hoax problem, Google’s role in spreading misinformation became a bigger story after the 2016 election, based on speculation that “fake news” played a role in influencing how Americans voted.

The company told Quartz that it removes incorrect snippet results manually, when they “feature a site with inappropriate or misleading content.” But the process of selecting these snippets in the first place is “automatic and algorithmic.”

“Automatic and algorithmic” is supposed to convey neutrality: It’s out of the hands of flawed humans, in other words. Except, as The Intersect has written before, that’s not exactly true. Algorithms are written by humans, and they depend on input from humans to get better. In other words, they can have a bias, too.
