with Tonya Riley


Fake videos doctored by artificial intelligence are not just a threat to politicians in 2020. Corporate brands should be on alert for "deepfakes" too, experts are warning. 

Chief executives including Apple's Tim Cook, Tesla's Elon Musk and Facebook's Mark Zuckerberg have already been targeted in deepfakes published online, according to an analysis from CREOpoint, a firm that helps businesses filter and contain the spread of disinformation. Many deepfakes also impersonate celebrities, who frequently act as brand ambassadors for corporations and spread company messaging. 

"The lines between fake or fact are constantly being undermined, resulting in an alarming destabilization of corporate reputations and societal and political norms," said Jean-Claude Goldenstein, CREOpoint founder and chief executive. 

If business leaders step up pressure on Big Tech to make a bigger investment in detecting and limiting the spread of deepfakes, they could be powerful allies to politicians who are increasingly concerned that the spread of misleading videos could hurt their elections. 

Brands concerned about deepfakes might be able to move faster than lawmakers on Capitol Hill to address the issue. Earlier this week, House lawmakers passed legislation that aims to increase research on developing technology to improve detection of manipulated media. But the legislation awaits action in the Senate. 

Not all of the deepfakes CREOpoint identified were malicious. Many were parodies or clear fabrications. But experts say the rapid evolution of the technology could easily be abused by bad actors. For instance, deepfakes could be used to spread information meant to move a company's stock price or undermine a company's relationship with customers.

Guillaume Chaslot, a former YouTube engineer who is a Mozilla fellow, tells me that brands that customers rely on for safety are particularly vulnerable to this threat. Take self-driving carmakers, for instance: One doctored video of a fake crash could decimate consumers' trust in a brand. 

“Everyone is extremely worried about this, especially when truth matters to your business,” Chaslot said. 

Goldenstein says brands concerned about their reputation on social media should be homing in on the deepfake threat. He conducted an analysis of 50 popular doctored videos, which included deepfakes as well as some videos that were doctored without machine learning, known as "cheapfakes." He found that 65.4 percent were impersonating brand ambassadors and entertainers, and 1.2 percent were targeting company executives. Meanwhile, 16.9 percent were targeting politicians.

“Meaningful brand-safety investments are a must-have given the total market cap in trillions of impacted brands,” Goldenstein told me. “Significant market value is at risk from even a small drop in brand equity resulting from the disinformation amplification on social platforms.”

There are already signs of how damaging deepfakes could be to a company's bottom line. While much of the focus has been on videos, Axios reported earlier this year that deepfake audio has emerged as a significant threat. Symantec identified at least three successful audio attacks on companies earlier this year, in which scammers impersonated the voices of CEOs or chief financial officers to request urgent transfers of funds. Millions of dollars were stolen from each business; the companies' names were not disclosed.

Deepfakes are one of the most technically complex challenges confronting Silicon Valley social media giants as they race to ensure they've invested enough in election security ahead of 2020. But some are concerned the technology companies haven't invested enough, given the scope and complexity of the problem. 

Earlier this week, Facebook and Microsoft kicked off their deepfake challenge, a $10 million investment aimed at spurring more research on detecting the doctored videos. But critics say that's just a drop in the bucket for some of the richest companies in the world, and it doesn't match the stakes of the challenge.

“A measly $10 million? Seriously? To fix one of today’s most globally alarming issues?” Goldenstein said. “That’s truly a drop in the ocean of fake news and just half a day of Facebook profits. These companies have the responsibility to invest several orders of magnitude more to develop reliable and transparent fixes, far beyond tactical detectors scanning for signs of facial manipulation.”

The companies are also racing to develop policies to address deepfakes on their platforms. Twitter recently released a policy draft and requested feedback. 


BITS: The Federal Trade Commission has considered seeking a court order against Facebook that could block the social media giant's plans to integrate its apps, my colleague Tony Romm confirms. The legal move would mark a major new threat for Facebook as the regulatory agency forges ahead with an inquiry into competition concerns at the company.

Officials worry that allowing Facebook to follow through on its plans to further integrate its products would make it harder to eventually split up the company in an antitrust case, as John D. McKinnon and Emily Glazer at the Wall Street Journal first reported. Facebook has worried for months that the FTC would seek an injunction and halt its plans to make it easier for users to port their data between Messenger, Instagram and WhatsApp, the Journal reports. 

The FTC and its Republican chairman, Joe Simons, declined to comment, as did Facebook.

The FTC could be hamstrung if it doesn’t seek to stop Facebook from integrating its services, competition experts tell Tony.

“If the FTC thinks it has a plausible basis for challenging Facebook’s previous acquisition of Instagram or WhatsApp, it is critical to seek an injunction to prevent Facebook from mixing all the key assets from these divisions,” said Gene Kimmelman, a senior adviser to the consumer group Public Knowledge and former antitrust official at the Justice Department. “Without an injunction, winning a case in court might prove fruitless, like trying to unscramble eggs.”


NIBBLES: Facebook has told advertisers that it doesn't need to make changes to its data-collection practices to comply with California's new privacy law, Patience Haggin at the Wall Street Journal reports. The decision could result in a clash with the California government over enforcement of the law, which goes into effect Jan. 1.

Facebook argues that the way it transfers user data to third parties does not constitute “selling,” meaning it does not have to provide consumers tools to opt out of collection as required by the new law. The company claims that publishers can also adjust their settings to block its Web tracker, “pixel,” for users who opt out.

But privacy advocates disagree.

“To the extent that the pixel is sending back information to Facebook that Facebook can then access without any restrictions, that absolutely is a sale,” said Alastair Mactaggart, the real estate developer who funded initial efforts to get the landmark privacy law on the books.

Other players in digital advertising could adopt Facebook's argument, Patience reports, causing complications for California's attorney general tasked with reining in violations by the $130 billion U.S. digital-ad industry. Google, in contrast to Facebook, has introduced new tools to comply with the law's mandate to allow users to opt out of data collection.

BYTES: One of MeetMe's most prominent streamers rose to popularity just weeks after being released from prison for a sex offense involving a minor, despite the social media company's policy of screening for registered sex offenders, my colleague Reed Albergotti reports. MeetMe's ongoing struggles to keep predators off the app underscore how ineffective popular apps are at stamping out harmful content and behavior.

Deonte Fisher, 26, who streamed as “Yogi Bear,” amassed about $50,000 worth of virtual gifts on the site, was invited as a VIP to company-sponsored events and received payment from Meet Group, which owns MeetMe. But even after another user outed Fisher as a registered sex offender, the company allowed him to continue streaming.

MeetMe removed Fisher's account only after The Post contacted the company. MeetMe said it will now check names on banking records when it pays streamers such as Fisher and check whether the names in email addresses conflict with the names users provide the company when they create an account. There’s no evidence that Fisher displayed any unwanted sexual behavior on MeetMe.

A 2014 lawsuit brought by San Francisco City Attorney Dennis Herrera alleged MeetMe had been used by predators to target minors. But even after a settlement, which included updating its privacy policy, MeetMe continued to see user misconduct. In June a user pleaded guilty to molesting an 11-year-old girl he met on the app. The company is currently being sued for wrongful death and false advertising after an adult user was killed by his date.



—  Tech news generating buzz around the Web:


  • Salesforce promoted president and chief product officer Bret Taylor to president and chief operating officer.

