A bipartisan group of lawmakers wants to bring the intelligence community into the fight against deepfakes, realistic-looking videos that depict people doing or saying things they didn’t.
Rep. Adam B. Schiff (Calif.), the ranking Democrat on the House Intelligence Committee, and two other House members called on Director of National Intelligence Daniel Coats in a letter Thursday to assess how foreign adversaries could use deepfakes to undermine national security. The lawmakers asked Coats to prepare a report by the end of the year on what Congress might do to address “malicious use” of deepfakes and what technological tools the government and the private sector might use to identify them.
“The first step to help prepare the Intelligence Community, and the nation, to respond effectively is to understand all we can about this emerging technology and what steps we can take to protect ourselves,” Schiff said in a statement.
A growing number of lawmakers have sounded the alarm about deepfakes in recent months, but even the most outspoken acknowledge that there’s no easy way to rein them in through legislation. An authoritative assessment from the intelligence community about the dangers could help Congress zero in on technical and legal options for curbing the doctored videos, and it probably would ratchet up pressure on tech companies to do more to stop their spread.
“This is a constructive step. It’s one thing for academics and techies to say that deepfakes are a problem, another for the intelligence community to say the same. It makes the concern something that Congress can address without fear of being second-guessed on how big the problem is,” said Stewart Baker, a former Department of Homeland Security assistant secretary and former general counsel for the National Security Agency.
The lawmakers’ letter came as Facebook said Thursday that it was stepping up its efforts to scan photos and videos on its platform for evidence that they’d been manipulated, as my colleagues Tony Romm and Drew Harwell reported. The social network said it had deployed algorithms in 17 countries, including the United States, to “identify potentially false” images and videos and send them to fact-checkers for review.
Those types of efforts could increase if the intelligence community agrees to weigh in on deepfakes, said Danielle Citron, a privacy law expert at the University of Maryland and co-author of a new paper on the potential impact of deepfake technology. She said there could be a parallel in the response to Russia’s disinformation campaign during the 2016 election: Social media companies only started shutting down fraudulent accounts en masse after lawmakers and national security officials spotlighted the problem.
“Having the director of national intelligence reporting to Congress, having the threat bandied about very publicly, could get platforms to work more on these problems. This is the kind of feedback loop we need,” Citron told me. “They’re working on it, but maybe not as quickly as we might like them to. When we talk national security — and all election issues are now national security — I think they’ll pay attention.”
Deepfake technology works by using artificial intelligence to map images of one person’s face onto another person’s body. Software to make the videos is freely available online, and it’s relatively user-friendly. Sometimes the videos are so lifelike that even trained video producers have a hard time telling they’re fraudulent.
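To make the face-mapping idea above concrete, here is a toy sketch in Python. Real deepfake tools synthesize the face with neural networks (autoencoders or GANs) frame by frame; this illustration only shows the simpler compositing step, pasting a face-sized pixel region from one frame onto another, with made-up coordinates and synthetic images standing in for video stills.

```python
import numpy as np

def paste_face(source, target, src_box, dst_box, alpha=0.8):
    """Blend a source face crop into a target frame.

    src_box and dst_box are (top, left, height, width) tuples; the crop
    is resized with simple nearest-neighbor sampling to fit the
    destination box, then alpha-blended over the target region.
    """
    st, sl, sh, sw = src_box
    dt, dl, dh, dw = dst_box
    crop = source[st:st + sh, sl:sl + sw]
    # Nearest-neighbor resize: pick source rows/cols proportionally.
    rows = np.arange(dh) * sh // dh
    cols = np.arange(dw) * sw // dw
    resized = crop[rows][:, cols]
    out = target.copy()
    region = out[dt:dt + dh, dl:dl + dw]
    blended = alpha * resized + (1 - alpha) * region
    out[dt:dt + dh, dl:dl + dw] = blended.astype(target.dtype)
    return out

# Two synthetic grayscale "frames" stand in for video stills.
source = np.full((100, 100), 200, dtype=np.uint8)  # bright "face" frame
target = np.zeros((100, 100), dtype=np.uint8)      # dark "body" frame
result = paste_face(source, target, (10, 10, 40, 40), (30, 30, 50, 50))
```

The hard part of an actual deepfake is not this paste but generating a face whose lighting, expression and lip movements match the target footage, which is what the freely available neural-network tools automate.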
Lawmakers and technology experts warn that bad actors could use hyper-realistic deepfakes of public officials to try to sway an election or even trigger a national security crisis. They’ve floated ideas to tackle the problem, from detection programs that use sophisticated analysis to identify manipulated videos to legislation that would hold media platforms liable for failing to take down defamatory deepfakes. But it’s a complicated battle, and there’s no silver-bullet solution. As detection technologies advance, so do the tools to create increasingly convincing deepfakes. And any legislation would have to account for the free-speech rights of people who use deepfake software for satire and other benign purposes.
Schiff and Reps. Carlos Curbelo (R-Fla.) and Stephanie Murphy (D-Fla.) are hoping the intelligence community can now offer some guidance. As part of the report they’re requesting, they want officials to analyze the “benefits, limitations and drawbacks, including privacy concerns,” of technologies that could counter deepfakes. They’re also seeking recommendations on whether the intelligence community needs additional legal authorities or financial resources to address threats posed by deepfakes, and they want a description of any “confirmed or suspected” uses by foreign governments against the United States.
“As with any threat,” Curbelo said in a statement, “our Intelligence Community must be prepared to combat deep fakes, be vigilant against them, and stand ready to protect our nation and the American people from enemies looking to exploit this new technology.”
You are reading The Cybersecurity 202, our must-read newsletter on cybersecurity policy news.
PINGED: The House Homeland Security Committee on Thursday advanced two bills aiming to reduce cybersecurity vulnerabilities at the Department of Homeland Security. The first bill, which was introduced by House Majority Leader Kevin McCarthy (R-Calif.), would establish a vulnerability disclosure policy at DHS. “As the nation’s leading civilian cybersecurity agency, it is of paramount importance that the department lead from the front and be an example of the good cyber-hygiene practices promoted” in the bill, Rep. John Ratcliffe (R-Tex.) said during the committee's meeting.
Rep. Jim Langevin (D-R.I.) said disclosure programs are “widely regarded as a best practice” and praised the role of ethical hackers in helping identify cybersecurity weaknesses. “A vulnerability disclosure policy is an open hand of friendship to the hacker community,” Langevin said. “And it is important for all of my friends on this committee to understand that these security researchers are, for the most part, very willing to reciprocate that friendship.” Langevin also lamented that DHS did not create a policy on its own. “Unfortunately, it appears that they won’t do so unless Congress requires it of them,” he said of DHS.
The committee also adopted a bill that would direct DHS to establish a bug bounty pilot program for the agency. The bill, which was introduced by Sens. Maggie Hassan (D-N.H.) and Rob Portman (R-Ohio), passed the Senate in April. “I am pleased that our bipartisan Hack DHS bill has moved out of the House Homeland Security Committee and is one step closer to being signed into law,” Portman said in a statement. “This legislation ensures DHS will execute a bug-bounty program and reap the cost-effective benefits to the security of their networks and systems.”
PATCHED: A bipartisan group of House lawmakers on Thursday expressed “serious concerns” in a letter to Google chief executive Sundar Pichai over reports that the company is planning to launch a censored version of its search engine in China. The 16 lawmakers noted that the Chinese government exerts tight control over free speech and “routinely monitors” those it sees as opponents. “As policymakers, we have a responsibility to ensure that American companies are not perpetuating human rights abuses abroad,” the letter said. The lawmakers who signed the letter include Rep. David N. Cicilline (D-R.I.), House Homeland Security Committee Chairman Michael McCaul (R-Tex.), Rep. Dana Rohrabacher (R-Calif.) and Rep. Pramila Jayapal (D-Wash.).
NEW: “Google should not be helping China crack down on free speech and political dissent. I just sent this letter with some of my Republican and Democratic colleagues raising our serious concerns and questions about what they’re doing. pic.twitter.com/fZ0wlabzS7” — David Cicilline (@davidcicilline), September 13, 2018
Also on Thursday, the Intercept's Ryan Gallagher reported that a Google scientist resigned to protest the censored search engine project for China, which has been called Dragonfly. The scientist, Jack Poulson, said he had an “ethical responsibility to resign in protest of the forfeiture of our public human rights commitments,” as quoted by the Intercept. But his worries went beyond the implementation of a censored search engine. “He said that he was concerned not just about the censorship itself, but also the ramifications of hosting customer data on the Chinese mainland, where it would be accessible to Chinese security agencies that are well-known for targeting political activists and journalists,” Gallagher wrote. Poulson's last day at Google was Aug. 31.
PWNED: As Hurricane Florence targets the Carolinas, the Wall Street Journal's Kim S. Nash and Catherine Stupp reported that companies can be more vulnerable to cyberattacks during hurricanes. “Corporate technology managers should expect more phishing attacks and intrusion attempts as cybercriminals target companies that are moving computers to fallback sites and switching on backup networking equipment, experts said,” the Journal reported Thursday. “Employees, out of usual work routines and interacting with cybersecurity and technology staff they might not know, could be more susceptible to trickery.”
For instance, phishing attempts posing as donation campaigns followed Hurricane Harvey last year, the Journal reported. Moreover, physical damage to a company's equipment from a hurricane could also weaken its cyberdefenses. “Cybersecurity infrastructure, including large systems for managing employee identities and detecting intrusions, tends to be more centralized than general data infrastructure, said Cory M. Mazzola, an executive fellow at the Tuck School of Business at Dartmouth College,” Nash and Stupp wrote. “If floods, high winds and other hurricane effects take down a central cybersecurity system, a company’s data will be less protected, said Mr. Mazzola, who is a security operations executive at a Fortune 500 company.”
— Kevin Mandia, chief executive of the cybersecurity company FireEye, told the Senate Homeland Security and Governmental Affairs Committee on Thursday that the United States ought to develop a cyber doctrine to help deter attacks. “Every modern nation doesn't know where the border is for behavior,” Mandia said in response to a question from Sen. Thomas R. Carper (D-Del.) on preventing cyberattacks on election infrastructure. “There aren't international rules of engagement.”
Speaking at a hearing on “evolving threats to the homeland,” Mandia said cyberattacks will only “keep escalating” unless the United States draws red lines in cyberspace. “The private sector is doing what's in its realm to defend itself, and it's looking to the government to do its best to get attribution right and to impose risks and repercussions and have some predictable doctrine so that we can govern the behaviors,” Mandia said.
— Delaware is moving to adopt new voting machines that will include a paper trail. “Delaware is set to have new voting machines for the 2020 presidential election, with the goal of putting them in place by May’s school board elections,” the Delaware State News's Matt Bittle reported Thursday. “A task force given the responsibility of approving a contract with a vendor to replace the current machines unanimously approved the selection Tuesday, although the choice must still go before the Joint Committee on Capital Improvement. That committee will meet Monday, enabling lawmakers to review and vote on the selection of Election Systems & Software.”
— “Rep. Jacky Rosen (D-Nev.) on Thursday unveiled legislation to create a Department of Labor grant program for apprenticeships in cybersecurity,” the Hill's Jacqueline Thomsen reported. “The bipartisan bill, known as the ‘Cyber Ready Workforce Act,’ would establish grants to help create, implement and expand registered apprenticeship programs for cybersecurity.”
— “Military combatant commands were inadequately resourcing their cyber missions and not effectively communicating about cyber requirements as recently as 2014, according to an investigative report,” Nextgov's Joseph Marks reported Thursday. “In some cases, that included assigning cyber tasks to people who were already filling other jobs, according to the partially declassified 2014 inspector general’s report, which Nextgov obtained through the Freedom of Information Act.”
— "North Korea accused the United States on Friday of circulating 'preposterous falsehoods' and conducting a vicious smear campaign, after Washington charged an alleged hacker for the North Korean government in connection with a series of major cyberattacks, including a 2014 assault on Sony Pictures Entertainment," my colleague Simon Denyer reports. "The North Korean statement, signed by a researcher at a Foreign Ministry institute, said the charges could undermine the implementation of agreements reached between President Trump and North Korean leader Kim Jong Un in Singapore in June."
- BSides Idaho Falls conference in Idaho Falls, Idaho, tomorrow.
- Security of Things World USA conference in San Diego on Sunday through Tuesday.
- Senate Armed Services subcommittee closed hearing on “interagency coordination in the protection of critical infrastructure” on Tuesday.