To be really clear, this is 100 percent theoretical: It’s a research paper, not a product announcement or anything equally exciting. (Google publishes hundreds of research papers a year.) Still, the fact that a search engine could effectively evaluate truth, and that Google is actively contemplating that technology, should boggle the brain. After all, truth is a slippery, malleable thing — and grappling with it has traditionally been an exclusively human domain.
Per this recent paper, however, it’s not too difficult for computers to determine whether a given statement is true or false. Basically, to evaluate a stated fact, you only need two things: the fact and a reference work to compare it to. Google already has the beginnings of that reference work, in the form of its Knowledge Graph — the thing that displays “August 15, 1990” when you search “Jennifer Lawrence birthday,” or “American” when you search “Obama nationality.”
Google culls those details largely from services like Freebase, Wikipedia and the CIA World Factbook; a separate, internal research database, called Knowledge Vault, can also automatically extract facts from the text on Web pages. Whichever database we’re talking about, Google structures these li’l factoids as things called “knowledge triples”: subject, relationship, attribute. Like so:
(Jennifer Lawrence, birthday, August 15, 1990)
(Barack Obama, nationality, American)
(Somalia, capital, Mogadishu)
… so to check whether a fact found in the wild is accurate, all Google has to do is check it against the knowledge triples in its giant internal database. And to check whether a Web page or a Web site is accurate, Google would just look at all the site’s knowledge triples and see how many disagree with its established body of facts.
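To make that process concrete, here is a minimal sketch of the triple-matching idea in Python. The triple store, function names, and scoring method are illustrative assumptions for this article, not Google’s actual Knowledge Vault implementation:

```python
# A tiny "reference work": known facts keyed by (subject, relationship).
# Entries mirror the triples quoted above; everything else is hypothetical.
KNOWLEDGE_TRIPLES = {
    ("Jennifer Lawrence", "birthday"): "August 15, 1990",
    ("Barack Obama", "nationality"): "American",
    ("Somalia", "capital"): "Mogadishu",
}

def check_fact(subject, relationship, attribute):
    """Return True if a claimed triple agrees with the reference store."""
    known = KNOWLEDGE_TRIPLES.get((subject, relationship))
    return known is not None and known == attribute

def page_accuracy(extracted_triples):
    """Score a page: the fraction of its checkable triples that agree.

    Triples whose (subject, relationship) pair isn't in the store are
    skipped -- we can't call a claim wrong if we have no fact to compare.
    """
    checkable = [t for t in extracted_triples
                 if (t[0], t[1]) in KNOWLEDGE_TRIPLES]
    if not checkable:
        return None  # nothing to compare against
    correct = sum(check_fact(*t) for t in checkable)
    return correct / len(checkable)

# A hypothetical page asserting one true fact and one false one:
page = [
    ("Barack Obama", "nationality", "American"),  # agrees with the store
    ("Somalia", "capital", "Hargeisa"),           # disagrees
]
print(page_accuracy(page))  # 0.5
```

The real research is far more statistical than this, of course; the paper models extraction errors and uncertainty rather than doing exact string matches. But the basic shape, extract triples from a page and compare them to a trusted store, is the same.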
The distant suggestion, these researchers write, is that Google’s version of the truth would iterate over time. At some point, perhaps even Google’s hotly debated and much-studied ranking algorithm — the creator and destroyer of a million Web sites! — could begin including accuracy among the factors it uses to choose the search results you see.
That could be huge, frankly: In one trial with a random sampling of pages, researchers found that only 20 of 85 factually correct sites were ranked highly under Google’s current scheme. A switch could, theoretically, put better and more reliable information in the path of the millions of people who use Google every day. And in that regard, it could have implications not only for SEO — but for civil society and media literacy.
It’s worth noting, in fact, that the Barack-Obama-nationality example comes straight from the Google report, which would seem to imply that the technology’s creators envision it as a tool against stubborn misconceptions and conspiracy theories.
“How do you correct people’s misconceptions?” Matt Stempeck, the guy behind LazyTruth, asked New Scientist recently. “People get very defensive. [But] if they’re searching for the answer on Google they might be in a much more receptive state.”
Increasingly, information intermediaries like Google have begun to take that suggestion seriously. Just three weeks ago, Google began displaying physician-vetted health information directly in search results, even commissioning diagrams from medical illustrators and consulting with the Mayo Clinic “for accuracy.” Meanwhile, Facebook recently launched a new initiative to append a warning to hoaxes and scams in News Feed, the better to keep them from spreading.
It’s unclear exactly what Google plans to do with this new technology, if anything at all. Still, even the possibility of a search engine that evaluates truth is a pretty incredible breakthrough. And it definitely gives new meaning to the phrase “let me Google that for you.”