with Bastien Inzaurralde
The attention from lawmakers means deepfakes are no longer a fringe issue but a more serious front in the fight against fake news, and tech companies may soon feel pressure to get ahead of them. But any policy solution would have to balance the harm to potential victims against free-speech rights for people who use deepfakes for creative or satirical purposes.
Warner said the easily accessible technology used to make the videos could “usher in an unprecedented wave of false and defamatory content.” In his policy paper, he wrote, “Just as we’re trying to sort through the disinformation playbook used in the 2016 election and as we prepare for additional attacks in 2018, a new set of tools is being developed that are poised to exacerbate these problems.”
Software to create deepfakes is available for free online, and it doesn’t require advanced production skills to use. It works by feeding hundreds of pictures of a person’s face into a machine learning algorithm that then maps them onto video of another person’s body. Anything the person in the video does or says can be made to look like it's coming from the victim. The results are sometimes so seamless that it's difficult to tell with the naked eye that the videos are fraudulent.
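The face-swapping approach described above is commonly built around an autoencoder with one shared encoder and a separate decoder per person. As a rough conceptual sketch only — not any production deepfake tool — the toy example below trains linear versions of those components on random vectors standing in for face crops; every name, dimension, and learning rate here is illustrative.

```python
import numpy as np

# Toy sketch of the "shared encoder, two person-specific decoders" idea
# behind face-swapping. Real systems use deep convolutional networks on
# thousands of aligned face images; here "faces" are random 64-dim vectors.
rng = np.random.default_rng(0)
DIM, LATENT, N = 64, 16, 200

faces_a = rng.normal(size=(N, DIM))  # stand-in for person A's face crops
faces_b = rng.normal(size=(N, DIM))  # stand-in for person B's face crops

# One shared encoder, two person-specific decoders (all linear here).
enc = rng.normal(scale=0.1, size=(DIM, LATENT))
dec_a = rng.normal(scale=0.1, size=(LATENT, DIM))
dec_b = rng.normal(scale=0.1, size=(LATENT, DIM))

def step(X, dec, lr=1e-2):
    """One gradient step minimizing mean squared reconstruction error."""
    global enc
    Z = X @ enc                  # encode faces into the shared latent space
    R = Z @ dec - X              # reconstruction error for this person
    dec -= lr * (Z.T @ R) / len(X)
    enc -= lr * (X.T @ (R @ dec.T)) / len(X)
    return np.mean(R ** 2)

for _ in range(2000):
    loss_a = step(faces_a, dec_a)
    loss_b = step(faces_b, dec_b)

# The "swap": encode a frame of person B, then decode with A's decoder,
# yielding output in A's appearance driven by B's pose and expression.
fake = faces_b @ enc @ dec_a
print(loss_a, loss_b, fake.shape)
```

Because the encoder is shared while each decoder learns one person's appearance, feeding one person's frames through the other's decoder is what produces the swap — which is why gathering hundreds of photos of the victim is the main ingredient.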
Lawmakers caution that it's a tool that could send the fake news crisis into overdrive. Think about it: Realistic-looking videos appearing to show politicians taking bribes or uttering inflammatory statements could be used to try to sway an election. Or doctored footage purporting to show officials announcing military action could trigger a national security crisis.
“This all sounds fantastic, it all sounds exaggerated, it all sounds hyperbolic. But the capability to do all of this is real and exists now, the willingness exists now, all that's missing is the execution. And we are not ready for it,” Rubio said in a speech earlier this month at the right-leaning Heritage Foundation. “I know for a fact that the Russian Federation at the command of Vladimir Putin tried to sow instability and chaos in American politics in 2016,” he said. “They did that through Twitter bots and they did that through a couple of other measures that will increasingly come to light. But they didn’t use this. Imagine using this. Imagine injecting this in an election.”
To chip away at the problem, Warner has proposed amending the Communications Decency Act to hold social media platforms liable under state law if they don’t take down deepfakes and other manipulated content shown in court to be defamatory. Right now, the law provides immunity for platforms in such cases.
“Currently the onus is on victims to exhaustively search for, and report, this content to platforms — who frequently take months to respond and who are under no obligation thereafter to proactively prevent the same content from being re-uploaded in the future,” Warner wrote in his policy proposal. The platforms, he said, were “in the best place to identify and prevent this kind of content from being propagated.”
Legislation to do this would almost certainly run into opposition from civil liberties groups. This year, organizations such as the Electronic Frontier Foundation lobbied unsuccessfully against a similar carve-out in the Communications Decency Act that sought to hold online platforms liable for facilitating sex trafficking. The groups said the move, while well-intended, was so broadly written that it criminalized protected speech.
“Any effort on this front would need to address the challenge of distinguishing true deepfakes aimed at spreading disinformation from satire or other legitimate forms of entertainment or parody,” Warner wrote. “Attempting to distinguish between true disinformation and legitimate satire could prove difficult,” he said, but “courts already must make distinction between satire and defamation/libel.”
Deepfakes started cropping up last year on Reddit after a user superimposed the faces of Gal Gadot, Taylor Swift and other celebrities onto the faces of actors in pornographic videos. They've also been used to lampoon President Trump by pasting his face over Russian President Vladimir Putin and German Chancellor Angela Merkel. And the comedian Jordan Peele used the technology to graft President Barack Obama's face over his own in a widely circulated public service announcement warning of the dangers of deepfakes.
“It’s only a matter of time until ‘deepfake’ videos become a household term,” Rubio told me in an email.
Rubio hasn’t offered any concrete policy proposals yet. For now, he told me, he’s simply trying to sound the alarm in hopes of bringing new ideas to the table.
“I’m working to raise awareness,” he said, “and find ways to address this threat from foreign actors and criminals and defend our elections this fall and in the future.”
You are reading The Cybersecurity 202, our must-read newsletter on cybersecurity policy news.
PINGED: Warner's deepfakes proposal is one of 20 ideas he has floated to overhaul the rules that govern tech companies. In his policy paper, Warner also proposes “to give users ownership of their data and require their consent before a third party can access that information, and to commit new funding to the Federal Trade Commission and media literacy campaigns,” The Washington Post's Karoun Demirjian reported. However, it is far from certain that Warner will be able to garner support from Republican senators for his measures, especially as the midterm elections approach, my colleague reported.
“Some of Warner’s proposals reflect demands that have been voiced elsewhere around Congress, such as his calls to improve national defenses against cyber intrusions and establish a 'deterrence doctrine' to specify what steps the United States will take in response to cyber attacks,” Demirjian wrote. “But others envision a new legal conceptualization of social media companies, as entities with a fiduciary duty to their users, and only temporary custodians of content and information that users could have the right to take with them from platform to platform, much like the portability of telephone numbers from company to company. Warner imagines laws that would allow for audits of social media companies’ algorithms, as well as 'public interest' laws that would let experts and academics scrutinize how companies are using the data they collect.”
PATCHED: A man claiming to be a Latvian official emailed and called the office of Sen. Jeanne Shaheen (D-N.H.) last year to seek information on U.S. sanctions against Russia, the Daily Beast's Andrew Desiderio and Kevin Poulsen reported Monday. The man offered to set up a phone call between Shaheen and Latvia's foreign minister to discuss sanctions as well as the Russian anti-virus company Kaspersky Lab. Desiderio and Poulsen noted that Shaheen had pushed for a measure requiring the federal government to rid its networks of Kaspersky software. The attempt was thwarted after Shaheen's staff spoke with the Latvian Embassy and realized the operation was not legitimate.
“Ryan Nickel, a spokesman for Shaheen, told the Daily Beast that staffers in her Senate office frequently receive hoax emails and phishing attempts on their official email accounts,” Desiderio and Poulsen wrote. “They shared the more troubling ones, including the approach by the fake Latvian, with law enforcement officials.” However, there are no indications yet that Russian authorities are to blame for the operation against Shaheen. “No malware was attached to the emails, and the fake foreign ministry official did not try to send Shaheen’s staff to a malicious website,” Desiderio and Poulsen wrote. “An Internet IP address in the e-mail headers traces back to a hosting company in Amsterdam.”
PWNED: “One of Iowa’s main hospital and clinic systems has notified about 1.4 million patients that their personal information might have been breached,” the Des Moines Register's Tony Leys reported on Monday. “UnityPoint Health officials said hackers used 'phishing' techniques to break into the company’s email system. The company, based in West Des Moines, said the hackers could have obtained medical information, such as diagnoses and types of care, that was included in emails.” In a notice posted on its website, UnityPoint Health said it discovered the cyberattack on May 31, reported it to law enforcement and launched a forensics investigation.
The company said some employees gave away their log-in credentials after receiving the phishing emails, which were crafted to look as if a “trusted executive” of the company had sent them. “Some of the compromised accounts included emails or attachments to emails, such as standard reports related to healthcare operations, containing protected health information and/or personal information for certain patients,” according to the company's notice. “While unauthorized access to patient information may have occurred, no known or attempted misuse of patient information has been reported at this time.” The company also said it is “more likely” that hackers carried out the cyberattack to ultimately steal money rather than to seize patients' information.
— More cybersecurity news:
— The federal government’s push to rein in Chinese telecom giants ZTE and Huawei out of concern that they may threaten national security could in turn hamper efforts to develop 5G technology in the United States, according to CyberScoop’s Ryan Duffy. “The quest to upend China’s surveillance capabilities may be hurting America’s competitiveness in the race to develop and roll out 5G wireless technology,” Duffy reported Monday. “The dilemma presents the latest — and perhaps fiercest — technological showdown between Washington and Beijing to date.”
— “The U.S. Department of Defense will for the first time be using large-scale artificial intelligence systems that could automate mundane tasks and augment the work of military members as a result of an $885 million five-year contract, said Josh Sullivan, senior vice president at government consulting firm Booz Allen Hamilton,” the Wall Street Journal’s Sara Castellanos reported Monday. “The technology will allow the Defense Department to better compete with nations including China and Russia, said Mr. Sullivan, who leads the analytics business for Booz Allen.”
— More cybersecurity news about the public sector:
Amazon Promises “Unwavering” Commitment to Police, Military Clients Using AI Technology (The Intercept)
— Law enforcement authorities have caught a hacker who allegedly carried out SIM hijacking schemes against cryptocurrency investors, Motherboard’s Lorenzo Franceschi-Bicchierai reported Monday. “On July 12, police in California arrested a college student accused of being part of a group of criminals who hacked dozens of cellphone numbers to steal more than $5 million in cryptocurrency,” Franceschi-Bicchierai wrote. “Joel Ortiz, a 20-year-old from Boston, allegedly hacked around 40 victims with the help of still unnamed accomplices, according to court documents obtained by Motherboard.” Here is how the scam works, according to Motherboard: “SIM swapping consists of tricking a provider like AT&T or T-Mobile into transferring the target’s phone number to a SIM card controlled by the criminal. Once they get the phone number, fraudsters can leverage it to reset the victims’ passwords and break into their online accounts (cryptocurrency accounts are common targets.) In some cases, this works even if the accounts are protected by two-factor authentication.”
— More news about security breaches:
- The Department of Homeland Security holds a National Cybersecurity Summit in New York.
- Senate Commerce subcommittee hearing on “global Internet governance.”
- Senate Intelligence Committee hearing on foreign influence operations on social media tomorrow.
- Black Hat USA security conference, Aug. 8-9 in Las Vegas.
- DEF CON security conference, Aug. 9-12 in Las Vegas.
San Antonio shark miraculously rescued after being stolen from aquarium:
States sue government over 3-D printed guns:
How Bruce Lee changed Hollywood: