The race is on to develop technology that would detect “deepfakes” — videos doctored with artificial intelligence that make it appear someone is doing or saying something that never happened. 

Facebook and Microsoft are teaming up with the Partnership on AI and academics from six universities to build a “Deepfake Detection Challenge.” They hope running a formal challenge with prizes and grants will kick-start research and development in this area, and also help the industry develop better benchmarks to spot deepfakes. 

Facebook chief technology officer Mike Schroepfer said in a blog post the company is investing more than $10 million in the challenge, which will also include research collaborations. Microsoft declined to comment on how much it planned to invest in the challenge. 

“This is a constantly evolving problem, much like spam or other adversarial challenges, and our hope is that by helping the industry and AI community come together we can make faster progress,” Schroepfer wrote. 

Facebook and Microsoft are unveiling the challenge as fears are mounting that adversaries could use deepfakes to undermine trust in politicians and spread disinformation during the 2020 election. Lawmakers have been calling on the technology companies to invest more heavily in their defenses against deepfakes and to develop more transparent and explicit policies around how they would address such videos and images. 

Rep. Adam Schiff (D-Calif.), the chairman of the House Intelligence Committee, told me in a statement that the challenge is a “promising step.” His committee hosted a hearing earlier this year about the threat of manipulated media, and he has written letters to Facebook, Google and Twitter pressing them for formal policies on deepfakes. The companies acknowledged the threat deepfakes pose in their responses, but none said it had specific policies addressing deepfake content. 

“Social media platforms have a unique responsibility to identify and remove disinformation before it goes viral, and with voting in the first 2020 primaries less than six months away, the platforms must urgently prepare for increasingly sophisticated disinformation campaigns,” Schiff said in a statement. “These efforts by Facebook, Microsoft, the Partnership on AI, and their academic partners will be very important to the process, and I hope these companies will continue to pursue comprehensive measures to prevent harmful deepfakes from poisoning our national dialogue online.”

Schiff said corporate investment in the technology could complement some investments in the public sector, particularly within the intelligence community. In July, the House passed a spending bill including a provision directing the Intelligence Advanced Research Projects Agency to hold a $5 million prize competition for research into automatically detecting deepfakes. 

Fears of deepfakes have been mounting as top artificial intelligence researchers warn that too few people in the field are working on deepfake detection, and that they are far outnumbered by those advancing synthetic video technology.

The AI-generated videos haven't yet caused major political havoc, but researchers and some lawmakers fear it's only a matter of time. Earlier this year, a crudely edited video of Nancy Pelosi that falsely made the House speaker appear drunk went viral, underscoring how video and images can be powerful tools for spreading disinformation. That video was merely tweaked with conventional editing, not doctored with more advanced AI technology. 

Facebook and Microsoft are unveiling the challenge just days after a new report from New York University predicted deepfakes could pose a significant threat in the 2020 election, as we covered in The Technology 202 on Tuesday. Paul Barrett, the professor who wrote the report, praised the companies for the initiative. 

“It's a good idea for at least two reasons: It might produce improved defenses against deepfakes, and it almost certainly will heighten awareness of the danger,” he told me. “Both are important.”


BITS: A bipartisan coalition of state attorneys general is launching an investigation into Facebook for potential antitrust violations, according to a Washington Post report. The attorneys general of New York, Colorado, Florida, Iowa, Nebraska, North Carolina, Ohio, Tennessee and the District of Columbia are participating.

New York Attorney General Letitia James released a statement announcing the probe this morning, saying the investigation “focuses on Facebook’s dominance in the industry and the potential anticompetitive conduct stemming from that dominance.”

The probe comes as state attorneys general take on a greater role in tech antitrust issues, as we reported earlier this week in The Technology 202. The Facebook probe is in addition to a separate state attorneys general investigation into Google, which my colleague Tony Romm reports is expected to be announced on Monday. The Wall Street Journal reports the two groups of state attorneys general overlap. 

The Journal also reports the states' scrutiny could expand to other companies beyond Google and Facebook. 

Facebook also faces antitrust scrutiny at the federal level. The company confirmed in a recent earnings call that the Federal Trade Commission had opened an antitrust probe into its business. House lawmakers have also stepped up their scrutiny as the Judiciary Committee's antitrust subcommittee conducts a broad investigation of Silicon Valley, and the company testified at a recent antitrust hearing hosted by that panel.

BITS: A lawyer representing 8chan owner Jim Watkins said 8chan could be back online as soon as next week, according to The Verge's Makenna Kelly. The site has been down since last month, when its role in several mass shootings came under increased scrutiny and several web infrastructure providers discontinued service for the site. 

“This isn’t written in stone, but somewhere around a week, they hope to be back,” Watkins’s lawyer, Benjamin Barr, told Makenna. 

“They’ve already executed the security,” Barr also told The Verge. He said 8chan is currently offline because the owners are “working on having a stable hosting solution where they can’t be deplatformed or it will be a lot more difficult to deplatform them.”

Watkins gave closed-door testimony in front of the House Homeland Security Committee yesterday, where, according to prepared remarks, he told lawmakers that the site is offline voluntarily and plans to come back online once it “is able to develop additional tools to counter illegal content under United States law.” Watkins also told members that he “has no intent of deleting constitutionally protected hate speech.”

Rep. Bennie G. Thompson (D-Miss.), chairman of the Committee on Homeland Security, and Rep. Mike Rogers (Ala.), the ranking Republican on the committee, credited Watkins with providing "vast and helpful information to the Committee about the structure, operation, and policies of 8chan and his other companies.”

“We look forward to his continued cooperation with the Committee as he indicated his desire to do so during today’s deposition,” they wrote in a statement.

8chan's administrator, Ron Watkins, who goes by "CodeMonkeyZ" on Twitter, said in a tweet that 8chan will likely face more questions from the committee. He urged lawmakers to pose any further questions in a public hearing. 

NIBBLES: Sen. Edward J. Markey (D-Mass.) is requesting that Amazon CEO Jeff Bezos provide more details on the partnerships between doorbell camera firm Ring and more than 400 police departments across the country, as my colleague Drew Harwell first reported last week. In the letter, provided to Drew, Markey said the partnerships, which encourage police to request footage from homeowners, “raise serious privacy and civil liberties concerns.” (Bezos owns The Washington Post.)

“The integration of Ring’s network of cameras with law enforcement offices,” Markey wrote, “could easily create a surveillance network that places dangerous burdens on people of color and feeds racial anxieties in local communities.” 

Ring's partnerships with police, which are often initiated without community involvement or awareness, have gained increased attention recently for the unusual extent to which Amazon is involved with how police market the technology and the limited details on how the surveillance footage is used. Markey called the “targeted language” used by the company to convince users to opt in “troubling,” and asked Ring to provide more details on the standard agreements with police departments as well as any security safeguards it uses to protect the footage. Markey also asked about Ring's plans for facial recognition.

BYTES: More than half of U.S. adults trust law enforcement to use facial recognition technology responsibly, despite numerous media reports detailing flaws in the emerging technology, according to a new Pew study out yesterday. The share is larger among white respondents (61 percent) than among black (43 percent) or Hispanic respondents (56 percent). Respondents of all races reported lower trust in use of the technology by companies (36 percent) and advertisers (18 percent).

Researchers and advocates have expressed concerns about the dangers of police use of facial recognition technologies in the past year, especially given documented inaccuracies when identifying people of color. Police have also used the technology in questionable ways, as the Georgetown Law Center on Privacy and Technology shared with my colleague Drew Harwell in May. In some cases, investigators altered footage of suspects in hopes of getting more matches or used celebrity look-alikes, something researcher Clare Garvie compared to police deciding to color in a smudged fingerprint.

Congress has also taken notice. Earlier this year, Sens. Brian Schatz (D-Hawaii) and Roy Blunt (R-Mo.) introduced a bill to provide oversight of commercial applications of the technology. Reps. Elijah E. Cummings (D-Md.) and Jim Jordan (R-Ohio) are expected to introduce a bipartisan bill that would curb the government's use of facial recognition, though the details are still being worked out. Advocates worry the regulation might not go far enough: This week, more than 30 organizations from across the political spectrum launched a campaign calling for a federal ban on law enforcement use of facial recognition technology.

“Facial recognition is one of the most authoritarian and invasive forms of surveillance ever created, and it’s spreading like an epidemic,” Evan Greer, deputy director of Fight for the Future, one of the organizations leading the call for the ban, said in a statement. “We need to ban this technology outright, treat it like biological or nuclear weapons, and prevent it from proliferating before it’s too late.”



Facebook finally launched its much-anticipated dating app today. But Twitter users were skeptical about giving the app, which lets users create dating profiles based on details from their Facebook profiles and Instagram pics, control over their love lives and their data.

Facebook has said the data won't be used for targeted advertising, but Slate's Ashley Feinberg pointed out that the app leaking your crushes could be even worse.

Ashley Carman from The Verge wondered whether the merging of elements from Facebook and Instagram could be game-changing.



— News from the private sector:

  • “Google is aiding and abetting the promulgation of climate science misinformation.”
  • Amazon directs the destinations, deadlines and routes for its network of contract delivery drivers. But when they crash, the retail giant is shielded from responsibility. (The New York Times)
  • The removal of likes is designed to improve the lives of consumers, but influencers are starting to feel the impact of the change. (Business Insider)

— News from the public sector:

  • T-Mobile US’s pay-as-you-go wireless brand allegedly sold used phones as new devices and overcharged customers, a New York City agency said in a lawsuit. (Wall Street Journal)
  • The agency wants the public to weigh in on its plan to change its data gathering practices.
  • The House of Representatives’ antitrust panel will hold a hearing next week to discuss the effect of consumer data collection by big tech platforms, like Alphabet’s Google (GOOGL.O) and Amazon (AMZN.O), on online competition.
  • Biometric systems used by ICE to round up migrants and separate families didn’t come from nowhere. They’ve been built over decades by both parties.

— Tech news generating buzz around the Web:

  • It didn’t go well. (The Atlantic)
  • A Portland tech executive shared how the death of his 8-year-old son forced him to rethink how he has oriented his life around work. (Taylor Telford)
  • Nefarious figures were impersonating the actor on his platform.

— Today:

  • The American Enterprise Institute will host an event titled "Should we reform Section 230?" at 10 a.m. 

— Coming soon:

  • Apple will host a special iPhone event on Tuesday at 10 a.m. PT in Cupertino, Calif.
  • The Senate Judiciary Subcommittee on Intellectual Property will host a hearing on how to make the patent system stronger on Wednesday at 2:30 p.m.
  • The House Antitrust Subcommittee will hold a hearing on the role of data and privacy in competition as a part of its series of hearings about online platforms and market power on Thursday at 9:00 a.m.
  • The Senate Judiciary Committee will host an oversight hearing on the enforcement of antitrust laws on September 17 at 2:30 p.m. ET.
  • The Senate Judiciary Committee will host a hearing on September 24 to “explore issues relating to competition in technology markets and the antitrust agencies’ efforts to root out anticompetitive conduct.”