with Tonya Riley

Ctrl + N

Rep. Michael Waltz (R-Fla.) called for Navy to beat Army in this year's football game. That's essentially treason for a former Army Green Beret.

And it's not something he'd ever say in real life: The statement came from a newly released political deepfake -- a video doctored with artificial intelligence. 

Waltz teamed up with Rep. Don Beyer (D-Va.) to craft a mock deepfake for the House Science subcommittee to illustrate just how realistic this kind of disinformation can be. Researchers from SUNY-Albany and the University of Chicago took a recorded video statement from Beyer and transposed it onto Waltz's image -- designed to be a jarring sight for subcommittee chair and former Navy pilot Mikie Sherrill (D-N.J.).

The resulting video is a warning for lawmakers -- and the public -- that bad actors could abuse this technology for much more nefarious purposes than a friendly joke about a sports rivalry.

“You see how dangerous and misleading it could be; I’m sure we fooled a couple of people,” Beyer said. “For instance, what if instead of ‘Go Navy, Beat Army,’ I said, 'It’s time to impeach the president'? That would be viral everywhere.”

“My friends might appreciate that, but I think he would not,” he added of his Republican colleague.

As the 2020 election looms, Washington lawmakers are increasingly concerned that bad actors will use deepfake technology to sow chaos and stoke divisions among the American public — much like Russian actors did with traditional social media posts during the 2016 election.

Waltz and Beyer are warning that the United States needs to invest in technology that can quickly detect such fakes to keep up in an arms race, as the tools to create misleading videos become even cheaper and more widely available.

“These videos and this technology have the potential to truly be a weapon for our adversaries,” Waltz said. 

Expert witnesses who specialize in computer science and disinformation gave the lawmakers sobering testimony in a hearing yesterday about the state of the country's preparedness to address deepfakes and other hoaxes. 

Siwei Lyu, who led the SUNY researchers in developing the mock deepfake, said he was able to train his software in eight hours using a minute-long video they found of Waltz on YouTube. Though Lyu doesn’t widely share the software tools he used to make the video, he told lawmakers that right now similar technology is widely available online. 

"The technical capability of making high quality deepfakes is already at the disposal of whoever wants to make it," Lyu said. 

Disinformation isn't a new problem, but it's one that is being exacerbated by social media, which can help it spread much more quickly, the witnesses said.

Hany Farid, a professor at the University of California, Berkeley, warned lawmakers that major technology platforms like Facebook and Google need to play a role in addressing deepfakes, but that the tech companies have been slow to address the problem.

"You have to understand here that we are fighting against business interests," Farid said. "In the last six months, the language coming out of the technology sector is encouraging, but I don't know there's a lot of action yet."

Technology companies are increasingly partnering with academics to address deepfakes and build better technology to detect such videos on their platforms. But the companies haven't yet publicly released policies explaining how they will address deepfakes spotted on their platforms.

Farid said the recent video of House Speaker Nancy Pelosi that was altered to make her appear to slur her words underscores the impending challenges. Facebook allowed the video to remain on its platform, and the company defended that decision, saying it did not want to be responsible for separating reality from fiction online.

"I can help with the technology problem, but I don't know what I can do with the policy problem when you say you aren't arbiters are the truth," Farid said. "They have to start getting serious about how their platforms are being weaponized to great effect and disrupting elections, inciting violence and sowing civil unrest."

Farid said it's difficult to predict exactly when a convincing deepfake will be released to disrupt a U.S. election. 

"I think it's coming, but I don't know whether it will be in 2020, 2022 or 2024," Farid said. "Largely because the cheap stuff still works. I think we'll eventually get ahead of that and then this will be the next front." 


BITS: Top Facebook executives toured Washington this week to promote the company’s election security efforts. The tour included briefings with policymakers and their staffs, according to a person familiar with the meetings, who spoke on the condition of anonymity because they were not authorized to discuss them publicly. The tour comes a week after CEO Mark Zuckerberg made the rounds to discuss a wide range of policy issues.

Facebook's delegation included Cybersecurity Director Nathaniel Gleicher, Global Elections Director Katie Harbath and Engineering Director for Civic Integrity Kaushik Iyer. They met with about 150 bipartisan staff in the House and Senate as well as presidential campaign staffers, national political campaign committees, national security experts and advocacy groups. The briefings touched on Facebook's efforts to better enforce its political advertising policies and advancements in the use of artificial intelligence to detect violating behavior.

The goodwill tour may not be enough to overcome growing concerns that Facebook simply doesn’t have enough resources — or desire — to make meaningful changes. Facebook communications head Nick Clegg drew criticism when he announced in a speech that the company would not intervene when politicians violate its content standards. Earlier this week, Facebook also took down a coordinated network of Ukrainian-run pages spreading political content targeting Americans, raising doubts about the company's ability to curb foreign influence on its platform before the 2020 elections.

Meanwhile, other top Facebook executives — including COO Sheryl Sandberg — were in Atlanta meeting with civil rights advocacy groups at an event organized by Color of Change, where the groups called on Facebook to make changes to its content moderation policies to better address hate and violence. 

“The credibility of Facebook’s efforts to create a safe and inclusive platform depends on meaningful engagement with our communities’ expertise — and an urgent, thoughtful response to the concerns and priorities we shared today,” Color of Change President Rashad Robinson said in a statement.

NIBBLES: Sen. Cory Gardner (R-Colo.) introduced legislation today that would require technology companies to disclose whether their smart home devices have cameras or microphones. Gardner cited an incident earlier this year when Google failed to disclose that its Nest Secure devices had a hidden microphone as part of the inspiration for the bill.

The Protecting Privacy in our Homes Act would charge the Federal Trade Commission with devising regulations requiring manufacturers to notify consumers whether Internet-connected devices contain cameras or microphones.

“Consumers face a number of challenges when it comes to their privacy, but they shouldn’t have a challenge figuring out if a device they buy has a camera or a microphone embedded into it,” Gardner said.

While the legislation centers on Internet-connected devices, Gardner points out that unknown consumer surveillance extends well outside the home. Earlier this year, USA Today reported that several airlines used seat-back cameras, though the airlines claimed they didn't intend to use them. Gardner hopes the new legislation will empower consumers to ask companies what data they’re collecting and where it’s being sent.

BYTES: DoorDash says an unauthorized third party accessed the personal data of nearly 5 million consumers, contractors and merchants in May, according to a company blog post. The data accessed included 100,000 contractors' driver's license numbers, and it could trigger fines under certain state data breach laws. 

Hackers also accessed names, addresses and phone numbers. The breach affected only users who joined before April 5, 2018, according to the company. The last four digits of payment cards for consumers and the last four digits of bank account numbers for contractors and merchants were exposed, but the hackers did not gain access to enough information to allow for fraudulent charges.

The company said it only noticed the “unusual activity” from a third-party service provider this month and immediately blocked the user. DoorDash said it has since added additional security protections and brought in an outside expert to assess its systems. The incident comes just a year after DoorDash denied reports of a separate breach.



— Tech news generating buzz around the Web:


  • Kate O'Connor will join the senior staff for Energy and Commerce Committee Republicans as chief counsel for the subcommittee on communications and technology. O'Connor previously served as chief of staff and deputy director of congressional and intergovernmental affairs for the National Telecommunications and Information Administration (NTIA). 


— Today:

  • The House Energy and Commerce Committee will host a hearing on securing America's wireless future and the deployment of 5G communications on Friday at 9:30 a.m.

— Coming up:

  • TechCrunch Disrupt SF will take place Oct. 2-4.
  • The House Energy and Commerce Committee will host a hearing to discuss the pros and cons of Section 230 of the Communications Decency Act on October 16.


From earbuds to eyeglasses, Amazon wants Alexa to be everywhere you are.