Researchers worry that a new feature giving Instagram users the power to flag false news on the platform won’t do much to head off efforts to use disinformation to sow political discord in 2020.
The role of Instagram in spreading political disinformation took center stage in a pair of Senate reports in December, which highlighted how Russian state operatives used fake accounts on the platform, masquerading as members of activist groups like Black Lives Matter during and well after the 2016 election. Researchers found that some Instagram posts by Russian trolls generated more than twice the "engagement" among users as they did on either Facebook or Twitter.
While Instagram and its parent company, Facebook, have cracked down on the kinds of coordinated campaigns launched by Russia, Instagram still serves as a potent source of memes and images laden with misinformation, especially for younger voters.
“Even though we don’t talk about it as much as Twitter and YouTube, it could potentially sway elections, on the local level especially,” says Joan Donovan, director of the Technology and Social Change Research Project at Harvard University’s Shorenstein Center. “Instagram is where a lot of younger audiences are, so the threat isn’t necessarily about swaying someone from one candidate to another but what kind of wedge issues are going to be impacted by posts on Instagram.”
Donovan pointed to gay rights and immigration issues as political topics that gain traction on the platform.
“There’s definitely a concern that there’s many more young people on Instagram using it as a news source, and that those groups could be targeted by disinformation.”
Yet it wasn’t until April that Instagram began a pilot to proactively send content to U.S. fact-checking partners. Facebook launched its fact-checking initiative in 2016, and CEO Mark Zuckerberg has praised the program as a powerful tool against false news. While Instagram's use of fact-checkers is still in a testing phase in the United States, the company is hoping to fast-track results with a tool it released last week that allows U.S. users to flag a post as “false information.” A flag does not guarantee a post will be seen by a fact-checker, but it is weighed alongside other signals in determining whether the company’s algorithms select the content for fact-check review, and, Instagram hopes, it will make the company's artificial intelligence better at finding similar content next time.
Content that a fact-checking partner determines to be “false” will be removed from hashtag search results and from Explore, a page that surfaces new content to Instagram users. (Facebook fact-checking partners in other countries can still access Instagram content, but that less aggressive version of the program has been criticized by at least one U.K. partner as ineffectual.)
It’s hard to say how much even that modest change will help in removing false information from the image-sharing platform.
Instagram declined to share how much content is reviewed by fact-checkers, or how much is removed from hashtag search and the Explore page as a result of the process. Researchers have long argued against treating flagging tools as a one-size-fits-all solution to content moderation, contending that they not only put the burden on users but also ignore an unsavory truth many platforms turn a blind eye to: Extremist content thrives because there’s an audience for it.
“When it comes to political disinformation or extremist content, there are enormous communities [on Instagram] that are existing in plain sight,” says Cristina López G., an extremism researcher and former deputy director for extremism at Media Matters for America. López says these communities exist through hashtags and networks of individual accounts.
A search by The Washington Post two days after the platform announced its new feature found that the hashtag #voterfraud surfaced a number of memes that have been independently debunked as false by organizations in Facebook's fact-checking partner program. One post that turns up on the first page of results, a meme from November 2018, repeats the false claim that Democratic billionaire donor George Soros owns the voting machine company Smartmatic. Another post claims that 90,000 illegal immigrants voted in the midterms, which is also false.
If Instagram's fact-checking partners rated these posts as “false,” the posts would be removed from the #voterfraud results. That they still show up on the hashtag page means fact-checkers haven't reviewed them yet. Instagram spokeswoman Stephanie Otway says that’s where the new flagging tool can help.
“The more reports there are, the more signals we have to determine the pervasiveness of fake news on Instagram,” Otway explains.
When asked if Instagram would institute proactive bans for election misinformation, similar to its attempts to de-emphasize anti-vaxxer misinformation this spring, Otway says the service would block hashtags “designed to prevent or deter people from voting in line with our voter suppression policies” (policies it shares with Facebook). Furthermore, any tag associated with “a certain amount of violating content” is automatically restricted from search until that number “drops back down.” It’s unclear how much disinformation is required for a hashtag to warrant such a penalty, but as an example Otway cited hashtags that were removed after being misused to share violating content such as nudity.
Otway says Instagram largely shares its policies with Facebook and works to best adapt programs like the fact-checking pilot to the specific needs of its platform. It also shares Facebook’s concerns about misinformation.
“In general, our misinformation efforts are focused on keeping our elections safe,” says Otway.
Instagram isn’t alone in trying to better understand how political misinformation spreads on its platform.
There isn’t much concrete research on how misinformation and disinformation spread on Instagram beyond the Senate reports. Some high-follower accounts, such as @the_typical_liberal, a meme account described by The Atlantic’s Taylor Lorenz as a popular source of conservative information for teens, are set to private. So even if researchers like López wanted to monitor their content, there’s no guarantee they’d have access to it. Compared with Twitter or even Facebook, Instagram provides researchers with extremely limited access to its internal data.
The company is exploring other approaches, such as computer vision technology that would help detect text overlaid on images, but there are still tricks that those seeking to spread fake news or political propaganda can use to avoid getting caught by Instagram.
Accounts seeking to spread misinformation could easily omit hashtags or certain text from their post captions, says López. Donovan points out there have been cases of sites paying popular meme accounts to share their content without disclosing their advertising partnership, something that could also prove dangerous in 2020.
Note to readers: The Technology 202 will not be publishing tomorrow or next week. We'll be back in your inbox after Labor Day on Tuesday, Sept. 3.
BITS: Microsoft contractors say they listened to conversations captured by Xbox consoles in an effort to improve the device’s voice command features, Motherboard's Joseph Cox reports. The revelations come as other technology companies like Apple and Facebook suspend human review of audio recordings captured by voice assistants like Siri.
Though the Xbox is intended to capture audio only when a person uses a wake word like “Xbox” or “Hey, Cortana,” Cox reports that the recording functionality was sometimes triggered by mistake, capturing recordings when people didn’t intend to activate the device. Contractors working for Microsoft said they reviewed a variety of recordings captured by Microsoft products, including Skype calls.
“Xbox commands came up first as a bit of an outlier and then became about half of what we did before becoming most of what we did,” one former contractor who worked on behalf of Microsoft told Motherboard. The former contractor also said most of the voices they heard were of children.
“We’ve long been clear that we collect voice data to improve voice-enabled services and that this data is sometimes reviewed by vendors,” a Microsoft spokesperson told Motherboard in a statement.
NIBBLES: Amazon is aggressively inviting lawmakers to tour its warehouses and posting about those visits on Twitter, my colleague Jay Greene reports. More than 560 federal, state and local policymakers have visited the company’s warehouses this year, as President Trump and 2020 presidential candidates make the company a political target.
For example, last Thursday the company’s policy arm retweeted Sen. Marsha Blackburn (R-Tenn.), who wrote she had a “great tour” at the company’s Chattanooga warehouse. A spokesperson for Blackburn did not respond to The Post’s request for comment.
The company’s tweets about policymakers’ visits come as Amazon launches a broad campaign to try to change the political debate as politicians on the right and left increasingly call for greater scrutiny of the company’s power. Amazon has also come under fire for paying warehouse workers low wages, as well as for not paying taxes last year. (Amazon raised its minimum wage for all U.S. employees last year, and the company has said that it pays “every penny we owe” in taxes.)
The company has also set up a rotating group of worker volunteers on company-approved Twitter accounts to counter the claims of harsh working conditions. “FC Ambassadors” have opposed unionization efforts, and one tweeted that he can “use a real bathroom when I want.”
Amazon spokeswoman Jodi Seth defended the company’s working conditions and pay.
“We encourage policymakers and the general public to tour our facilities because we want them to see all of this for themselves,” Seth said in a statement.
(Amazon founder and chief executive Jeff Bezos owns The Washington Post.)
BYTES: The MIT Media Lab, one of the country’s top technology research centers, is embroiled in controversy following revelations that its director had ties to Jeffrey Epstein. Epstein, the deceased financier accused of sex trafficking, had given money to the lab and the director’s venture capital funds, according to Tiffany Hsu, Marc Tracy and Emily Flitter of the New York Times.
Two academics affiliated with the lab, associate professor Ethan Zuckerman and visiting scholar J. Nathan Matias, said this week they would cut ties with the lab. The lab’s director, Joichi Ito, apologized for his connections to Epstein last week.
“In my fund-raising efforts for M.I.T. Media Lab, I invited him to the lab and visited several of his residences,” Ito said in a statement posted to the lab’s blog. “I want you to know that in all of my interactions with Epstein, I was never involved in, never heard him talk about and never saw any evidence of the horrific acts that he was accused of.”
Ito did not disclose how much Epstein contributed to the Media Lab or to his own funds, and Ito and MIT declined to comment.