Facebook’s Menlo Park, Calif., "War Room" will be in action -- with a team on the watch for threats and looking into any reports of abuse. The group will include representatives from more than a dozen of the company's internal teams, ranging from data science and engineering to policy. This is an additional line of defense on top of the 30,000 people Facebook regularly has working on safety and security measures.
“As part of our ongoing work to prevent interference in elections on our platform, we will have a dedicated team proactively monitoring for threats as well as investigating any reports of abuse in real time in the lead up to, during and following the debates,” Facebook said in a statement yesterday.
The companies have been investing more resources in election security for several years now, and as the 2020 campaign kicks off in earnest, they hope to apply lessons from the 2018 midterms and foreign elections to improve their defenses.
“As the presidential debates begin, we are building on our efforts to protect the public conversation and enforce our policies against platform manipulation,” Twitter said in a statement. “It’s always an election year on Twitter. We're a global service and we think globally. We take the learnings from every recent election — including the EU, India, and the 2018 U.S. Midterms — and are using them to improve our election integrity work for 2020.”
Twitter has met with 20 of the presidential teams directly. Facebook is also in contact with the presidential campaigns, which can flag activity they believe is suspicious. The Menlo Park, Calif., company says it has also been in touch with the Democratic National Committee and the Republican National Committee.
“The DNC has alerted the campaigns of the potential for heightened disinformation activity leading up to, during, and after the debates, and is reminding them of the steps they can take to detect and respond to these attacks,” a DNC official told me. “The DNC will be all hands on deck throughout the evening, monitoring activity, and working with social media platforms to address disinformation campaigns it uncovers.”
This is the first time tech companies will be monitoring a U.S. presidential debate since overhauling their election integrity strategies post-2016. Debates are a prime target for actors aiming to plant bogus information and amplify tensions because they’re one of the key moments during an election when many Americans are tuned into politics -- even if they don’t typically pay attention to the day-to-day news cycle.
“If you think bang for buck, it’s a natural time to influence people,” said Patrick L. Warren, a Clemson researcher who has analyzed a wide range of Russian disinformation tweets surrounding the 2016 election. “You’ve got people listening who might not be the most informed.”
Experts expect disinformation actors will evolve their playbook during the 2020 election -- potentially using advances in video technology to take their campaigns to the next level. House lawmakers have been growing increasingly concerned about the threat of deepfakes, or videos manipulated with artificial intelligence to make it appear someone is doing or saying something that never actually happened.
“As we begin the first round of debates in 2020, I am increasingly concerned that campaigns will be victims of new forms of disinformation including deepfakes, or as we saw with a recent doctored video of [House Speaker Nancy] Pelosi, ‘cheapfakes,’” Rep. Adam Schiff (D-Calif.) said in a statement to The Technology 202. Schiff is the top Democrat on the House panel that recently hosted a hearing on AI and disinformation.
Tech companies have not clarified their policies for handling deepfakes or more crudely edited and demonstrably false videos, said Michael Posner, the director of the New York University Stern Center for Business and Human Rights, who researches disinformation. The companies have taken different approaches to dealing with high-profile examples of disinformation, like the recent Pelosi videos that were slowed down to make the House speaker appear intoxicated. YouTube removed the video, while Facebook left it up with an alert to users.
“There’s clearly a greater and greater ability technologically to misrepresent what people say and make it convincing,” Posner told me.
Posner expects YouTube could see a greater onslaught of disinformation this time around. But Google, which owns YouTube, did not respond when asked if it was also taking additional precautions around the debates. A company spokeswoman shared a February 2019 report about how the company addresses disinformation generally.
Even as new threats loom, Posner expects to see some of the 2016 tactics repeat this cycle. Russia’s Internet Research Agency did target the Democratic primaries, according to the analysis of tweets that Warren conducted at Clemson. Warren and his team found spikes of activity in the days leading up to the first Democratic primary debate, and then on the day of the second Democratic primary debate with, for example, an onslaught of pro-Bernie Sanders hashtags. The researchers did not find any similar activity during the Republican primaries, he said. The IRA-linked accounts did not engage with the official debate hashtags. Their playbook was to start their own hashtags, and then amplify the most divisive tweets from people who were not linked to the Russian operation.
However, Warren said Russian actors were much more active during the general election than the primaries, and he expects that, once again, the primary season could be just a warmup for the general election.
Experts also caution that the companies alone can’t fight disinformation. Facebook and Twitter are both coordinating their efforts with government entities like the Department of Homeland Security and the Federal Bureau of Investigation.
Schiff said that government, campaigns, social media companies, media companies and voters all have a role to play in fighting disinformation.
“If a society loses the ability to discern fact from fiction, there can be nothing more corrosive to the democratic process,” he said in a statement.
BITS, NIBBLES AND BYTES
BITS: Executives from the big three social media platforms will face a grilling from Congress today about how they’re combating online extremism, this time in front of the House Homeland Security Committee. Twitter has come prepared with a warning for lawmakers: It can’t beat the fringes of the Internet alone.
“This is a long-term problem requiring a long-term response, not just the removal of content,” Nick Pickles, Twitter’s global senior strategist, writes in his testimony, shared with The Technology 202. “As our peer companies improve in their efforts, this content continues to migrate to less-governed platforms and services.”
Twitter — like Facebook and Google's YouTube — struggled to remove a flood of videos of the Christchurch, New Zealand, shooting. House Homeland Security Committee Chairman Bennie G. Thompson (D-Miss.) criticized the companies for coming up short of answers in a March briefing about what they were doing to prevent the next misuse of social media by terrorists.
“As social media becomes more ubiquitous in our daily lives, tech companies have an obligation to ensure their platforms aren’t being misused to endanger Americans or threaten our democracy,” Thompson said in a news release. “Unfortunately, time and time again, these social media companies have shown they are not up to the task.”
But Pickles isn’t alone in encouraging lawmakers to look past the three most popular platforms when it comes to terrorism online. Former Facebook chief security officer Alex Stamos pointed out at a hearing in front of the same committee yesterday that content moderators faced an uphill battle against white supremacist content “as long as sites like 8chan happily cultivate them as cherished users.”
NIBBLES: Not only are Facebook, Google, and YouTube failing to remove medical misinformation from their platforms, reports my colleague Abby Ohlheiser, in some cases their algorithms are sending users with life-threatening medical conditions right to it.
On the first page of an April YouTube search for “cure for cancer,” Abby found a video with more than 1.4 million views that cited baking soda as a cure for the disease. YouTube changed the way its algorithm handles medical content last month, it says, but that search was just one example of how easy it is for patients to find fake science. Abby also found “natural” and other dubious cures for cancer advertised on Google’s search engine — despite the content constituting a violation of Google policy. Facebook also hosts a number of groups and pages for unsound medical advice.
Campaigns pressuring tech companies to take greater accountability for their roles in public health crises have found some success in recent months. In March, Facebook agreed to downrank anti-vaccination content after a number of measles outbreaks occurred.
BYTES: The Federal Trade Commission and state lawmakers are cracking down on dozens of scammers collectively responsible for at least a billion robocalls, my colleague Tony Romm reports. The top consumer watchdog's campaign follows months of criticism from lawmakers and the public, who said the agency was not doing enough to protect people from a recent onslaught of robocalls.
The action, which took place over the past nine months and included the Federal Trade Commission, 25 state attorneys general and local law enforcement, involved a combination of fines, warning letters and legal charges against some of the biggest players in a rising epidemic of spam calls plaguing Americans. Actions announced as part of yesterday’s crackdown included a case against First Choice Horizon LLC, a firm that led a “maze of interrelated operations” that preyed on Americans in financial distress, including seniors, Tony reports.
Lawmakers are pushing legislation to crack down on the major nuisance. A House bill to stop robocalls passed out of subcommittee yesterday; a separate, similar bill passed the Senate with strong bipartisan support earlier this year.
-- The Alliance to Counter Crime Online and the ATHAR Project are warning House lawmakers that Facebook "has also become a repository for massive online criminal markets and terrorist groups," according to a letter they submitted as the House holds a counterterrorism hearing. The groups warn that the company has been hiding behind Section 230 of the Communications Decency Act, a key legal shield that gives tech platforms broad immunity for the content third parties post on their platforms. They say it's time for the government to do more to force Facebook to improve its content moderation practices. Because overhauling Section 230 may take a long time, they want lawmakers to pressure the Securities and Exchange Commission to investigate Facebook in the meantime on the grounds that the company's failure to police illegal activity on its platform is exposing investors to risk.
— News from the public sector:
— News from the private sector:
— Tech news generating buzz around the Web:
— The House Homeland Security Committee will bring in representatives from Facebook, Google, and Twitter to discuss their companies' efforts to address terror content and misinformation at 10 a.m.
Harvard Professor Cass Sunstein talks to Facebook Founder and CEO Mark Zuckerberg about government regulation, shifts to privacy, and innovation at the Aspen Ideas Festival at 4:30 p.m.