Everyone now knows the Web is filled with lies. So then how do fake Facebook posts, YouTube videos and tweets keep making suckers of us?
To find out, I conducted a forensic investigation of the fake that fooled my social network. I found the original creator of that CG plane clip. I spoke to the Facebook executive charged with curbing misinformation. And I confronted my friend who shared it.
The motives for a crazy plane report may be different from posts misdirecting American voters or fueling genocide in Myanmar. Yet some of the questions are the same: What makes fake news effective? Why did I end up seeing it? And what can we do about it?
Fake news creators “aren’t loyal to any one ideology or geography,” said Tessa Lyons, the product manager for Facebook’s News Feed tasked with reducing misinformation. “They are seizing on whatever the conversation is” — usually to make money.
This year, Facebook will double the number of humans involved in fighting constantly morphing “integrity” problems on its network, to 20,000. Thanks in part to those efforts, independent fact-checkers and some new technologies, Facebook user interaction with known fake news sites has declined by 50 percent since the 2016 election, according to a study by Stanford and New York University.
But if you think you’re immune to this stuff, you’re wrong. Detecting what’s fake in images and video is only getting harder. Misinformation is part of an online economy that weaponizes social media to profit from our clicks and attention. And with the right tools to stop it still a long way off, we all need to get smarter about it.
Seeing is believing
The crazy plane video first appeared Sept. 13 on a Facebook page called Time News International. Its caption reads: “A Capital Airlines Beijing-Macao flight, carrying 166 people’s, made an emergency landing in Shenzhen on 28 August 2018, after aborting a landing attempt in Macao due to mechanical failure, the airline said.”
No real commercial plane did a 360-degree roll so close to the ground, but an emergency landing really did happen that August day in Macau.
Four days later, in Los Angeles, film director Aristomenis Tsirbas started getting messages from his friends. A year earlier, the computer graphics whiz had created and posted to YouTube a video he’d made showing a plane doing a 360. Someone had taken his work and used it at the beginning of a fake news report.
“I realized, oh, my God, I’m part of the problem,” Tsirbas told me. The artist, who has worked on “Titanic” and “Star Trek,” has a hobby of creating realistic but implausible videos, often involving aliens. He posts them on YouTube, he said, in part to demonstrate CG and in part to make a little money from YouTube ads.
The photorealism of Tsirbas’s clip played a big role in making the fake story go viral. And that makes it typical: Misinformation featuring manipulated photos and videos is among the most likely to go viral, Facebook’s Lyons said. Sometimes, as in this case, it employs shots from real news reports to make it seem just credible enough. “The really crazy things tend to get less distribution than the things that hit the sweet spot where they could be believable,” Lyons said.
Even after decades of Photoshop and CG films, most of us are still not very good at challenging the authenticity of images, or at telling the real from the fake. That includes me: In an online test made by software maker Autodesk called Fake or Foto, I correctly identified the authenticity of just 22 percent of its images.
Another lesson: Fake news often changes the context of photos and videos in ways their creators might never imagine. Tsirbas sees his work as pranks or satire, but he hasn’t explicitly labeled them that way. “They are clearly fakes,” he said. After we spoke, he wrote to say he’d now add a disclaimer to his CG videos: “This is a narrative work.”
Satire, in particular, can lose important context unless it’s baked into an image itself. Another doctored fake news image, first posted to Twitter in 2017, appears to show President Trump touring a flooded area of Houston, handing a red hat to a victim. Artist Jessica Savage Broer, a Trump critic, told me she Photoshopped it to make a point about how people need to “use critical thinking skills.” But then earlier this year, supporters of the president started sharing it on Facebook — by the hundreds of thousands — as evidence of the president’s humanitarian work.
The outrage algorithm
Why would someone turn Tsirbas’s airplane video into a fake news report?
There’s no clear answer, but there are clues. Time News International, the page that published it, did not respond to requests I sent via Facebook, an email address or a U.K. phone number listed on its page.
Facebook’s Lyons said pages posting misinformation most often have an economic motive. They post links to articles on sites with just-believable-enough names that are filled with advertisements or spyware, which might attempt to invade our online privacy.
Lyons’s team shared with me a half-dozen samples of fake news. But the links to money aren’t always immediately clear. The Time News International page doesn’t regularly link to outside articles, though it posts a lot of outrageous photos and videos about topics in the news. That has attracted a following of 225,000 people on Facebook, a base it could direct to content it might capitalize on in the future.
Facebook and other social media companies deserve some of the blame. It’s easy to grow an audience for outlandish stories when publishing doesn’t require vetting, and algorithms are tuned to share the stuff that garners the greatest outrage. I saw that crazy video because Facebook decided I should.
Fake news producers also use our friends to lend themselves credibility. When I saw the plane video, my suspicions weren’t on high alert because it came from my friend, whom I trust as a smart guy. He told me he realized later the video was a fake but thought comments on his post would alert his friends. “It’s just funny thinking about the steps by which we get duped,” he said.
Stop spreading the news
Facebook’s response to the plane video shows how far it’s come in the fight with fake news — and how far we have to go.
On Sept. 17, a few days after it was posted, the video was detected by Facebook’s machine-learning systems, programs that try to automatically detect fake news. The company won’t disclose exactly how those work, but it said the signals include what sorts of comments people leave on posts.
Once the video was detected, Facebook passed it to its network of independent fact-checkers. After Snopes labeled it as “false,” Facebook made it show up less often in News Feeds.
Why does the fake plane video remain up at a time when Facebook is making headlines for taking down other posts? Facebook said deletion is for violations of its community standards, such as pornography. “My job is to prevent misleading and false information from going viral,” Lyons said. “Even if something is false, we don’t prevent people from sharing it. We give them context.”
That comes in the form of a label. Now when the video appears in a News Feed or someone attempts to share it, up pops “Additional Reporting On This,” with a link to reports from fact-checking organizations. Facebook said it also notified people who had already shared it, though my friend doesn’t recall seeing a warning.
“I wouldn’t consider this a success from our side,” Lyons said. Typically, posts that Facebook demotes see an 80 percent reduction in total views, so it’s possible that, without Facebook’s action, the post could have been seen by hundreds of millions. (Later, Facebook’s automated systems also detected duplicates of the video being uploaded by other pages.)
It’s also an issue of new media literacy. Facebook and others have produced fliers such as “Tips for spotting false news,” but it’s hard to change a response that is both human and pretty fundamental to the social media experience. There have always been hoaxes, but perhaps we need time to internalize just how easy they’ve become to create.
Lyons is already tracking the next generation of CG fakery, dubbed “deep fakes,” which doesn’t even require the expertise of a creator like Tsirbas. Instead, it uses artificial intelligence to splice together bits from lots of existing videos to create, for example, a fake speech by a president.
Maybe we’ll eventually learn to be less trusting of our friends — at least the online ones. The people we count on for important information in the real world aren’t always the people who fill our social media feeds.
Or if you want to avoid being that friend: Before you spread the latest outrage online, stop and consider the source.
Read more tech advice and analysis from Geoffrey A. Fowler: