Ctrl + N

Fake news articles and bogus political ads are so 2016.

As the 2020 election approaches, experts plan to tell House lawmakers today that it's time to intervene against a predicted onslaught of manipulated videos that could skew voter opinion.

Artificial intelligence and legal experts will this morning school the House Intelligence Committee about the ways "deepfakes" — or videos altered with artificial intelligence to make it appear someone did or said something that never happened — could be used to undermine trust in public officials or incite violence. Witnesses will tell lawmakers they need to act now to stay ahead of the threat, which could be deployed by adversaries like Russia ahead of the next presidential election.

“The U.S. government should rapidly develop policies to promote appropriate use of artificial intelligence in media content creation and support technological development to verify the authenticity of video and audio content,” Clint Watts, a former FBI official who is now a fellow at the Foreign Policy Research Institute, will say, according to testimony reviewed by The Technology 202. 

Efforts to manipulate media are as old as media itself. But legal experts, researchers and lawmakers are increasingly concerned that recent advances in video editing technology will make it harder than ever before for people to separate facts from hoaxes online -- and there's a growing sense that no one is ready for the implications that could have.    

“I don’t think we’re well prepared at all. And I don’t think the public is aware of what’s coming,” Rep. Adam B. Schiff (D-Calif.), who chairs the committee hosting the hearing, told my colleague Drew Harwell. 

Even researchers working at the forefront of artificial intelligence feel outmatched, as Drew detailed in a story yesterday. There are far more people working on developing technology to enable deepfakes than on technology to detect them. 

“We are outgunned,” Hany Farid, a computer-science professor and digital-forensics expert at the University of California at Berkeley, told Drew. “The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1."

Deepfakes are becoming increasingly common — look no further than Instagram, where a bogus video of Mark Zuckerberg attracted widespread attention this week. And although a deepfake has yet to go viral and cause damage in the United States, a recent doctored video of House Speaker Nancy Pelosi (D-Calif.) underscored how even crudely edited videos can undermine trust in politicians.

That's why experts plan to tell Congress to step up its work on this issue. Watts will suggest several ways the government can partner with the private sector, including implementing digital verification signatures that timestamp content to help confirm its authenticity. He also will propose a public education effort so people can learn to spot the characteristics of deepfakes. Congress should also pass a law that prohibits government officials from creating and circulating manipulated content, Watts will say.

Other witnesses will delve into how existing law could be applied to deepfakes -- and where changes might need to be made to keep pace with the new technology.

Danielle Citron, a law professor at the University of Maryland Francis King Carey School of Law, will tell lawmakers that no current civil or criminal laws specifically address deepfakes. She'll warn against banning the technology outright, but urge lawmakers to consider changing Section 230 of the Communications Decency Act so platforms are only granted legal immunity if they engage in reasonable content moderation practices. Right now, companies enjoy broad immunity for the content that third parties post on their websites.

Citron will warn lawmakers that in the absence of action, a well-timed deepfake could derail an election -- or even a financial transaction like a company's initial public offering. And there will be few legal avenues for recourse.

"If you time it just right, you can do a whole lot of damage," Citron told me. 


BITS: Lawmakers want to modernize a two-decade-old law protecting children's privacy for the social media age, and a new report from my colleague Craig Timberg explains why:

“Sex, drugs, violence, hate speech, conspiracy theories and blunt talk about suicide rarely are more than a few clicks away,” Craig writes. “Even when children are viewing benign content, they face aggressive forms of data collection that allow tech companies to gather the names, locations and interests of young users.”

Tech companies have largely been able to avoid scrutiny from the FTC, the regulatory body responsible for enforcing the law, by prohibiting users under 13 in their terms of service. But advocates tell Craig that violations of the law still run rampant and that weak protections are “incentivizing companies to not know that children are on their sites.”

The biggest offenders flagged by advocates include Facebook, YouTube, and its parent company Google. For example, a 2018 study from the University of California at Berkeley found that 57 percent of the apps in Google Play store’s section for children “showed signs that they may be violating COPPA, including its limits on collecting the personal data of young users.” A coalition of advocates in December filed a complaint with the FTC alleging that Google violated COPPA, but it’s unclear what action, if any, is being taken by the agency.

Sens. Edward J. Markey (D-Mass.) and Josh Hawley (R-Mo.) are proposing an update to the dial-up-era bill that would broaden the criteria for enforcement and increase the age of protection from 13 to 16 years old. “We believe that parents in the United States want the same protection for their children as Europeans want for their children,” Markey told Craig.

NIBBLES: Facebook discovered internal emails that could link chief executive Mark Zuckerberg to questionable privacy practices, the Wall Street Journal’s John D. McKinnon, Emily Glazer, Deepa Seetharaman and Jeff Horwitz report. The emails were uncovered as the FTC probes the company’s privacy record — and there are concerns within Facebook that they could damage the company’s image if they are publicly released.

The emails are contributing to Facebook’s efforts to reach a quick settlement with the FTC after the Cambridge Analytica scandal, the Journal reports. Facebook has been operating under a privacy-related order with the agency since 2012, and emails sent around then imply that Zuckerberg and other executives weren’t prioritizing compliance.

One 2012 email suggests that Zuckerberg knew a developer claimed to have access to information that should have been protected by privacy settings on Facebook, a source tells the Journal. The Journal was unable to review the emails and it's unclear whether any direct violations of the 2012 FTC consent decree are described.

Facebook has disputed the account. “At no point did Mark or any other Facebook employee knowingly violate the company’s obligations under the FTC consent order nor do any emails exist that indicate they did,” a representative told the Journal after publication. Facebook has been trying to settle the dispute with the FTC for a proposed multibillion-dollar fine, but lawmakers are urging the agency to bring strong punishments against the company and Zuckerberg.

BYTES: Facebook in the past year failed to remove hundreds of posts containing hate speech in India, reports BuzzFeed's Megha Rajagopalan. Equality Labs, the human rights nonprofit group behind the data, says that 93 percent of the posts it reported to Facebook remain up on the site.

The new report paints a bleak picture of Facebook's efforts to improve its moderation of non-English-language posts. Content flagged by the group included posts attacking Muslims and other religious minorities, the LGBT community, and other groups. In one example, “an Indian meme-swapping Facebook group called a baseball bat an 'educational tool' for wives,” Rajagopalan writes.

Equality Labs is hoping the report forces Facebook to be more transparent about its moderation practices in South Asian and other international markets where English is not the dominant language. Facebook did not respond to a request from BuzzFeed News asking how many of its content reviewers are focused on the 22 official languages in India, a country boasting the world's largest number of Facebook users. A Facebook representative told Rajagopalan that it has “invested in staff in India” and takes the issue “extremely seriously.”

“Without urgent intervention, we fear we will see hate speech weaponized into a trigger for large-scale communal violence,” the report says. “After a year of advocacy with Facebook, we are deeply concerned that there has been little to no response from the company.”


-- Tech news from the public sector:

Google has fired several of its largest lobbying firms as part of a major overhaul of its global government affairs and policy operations amid the prospect of greater government scrutiny.
Wall Street Journal
Trump is using the advantages of incumbency and a huge pile of campaign cash to build a digital operation superior to anything Democrats have.
LA Times
The Republican chairman of the Senate's antitrust panel criticized plans by...
The state said “League of Legends” video game maker Riot Games refused to give it "adequate information" to analyze whether Riot pays women less than men.
LA Times
State attorneys general are warning Silicon Valley's biggest companies they are also planning to get in on the tech crackdown.

-- Tech news from the private sector:

Facebook obtained personal and sensitive device data on about 187,000 users of its now-defunct Research app, which Apple banned earlier this year after the app violated its rules.
What does a gaming company do after raising $1.25 billion? Acquisitions seem like a pretty good place to start. Epic Games — of Unreal Engine and the ridiculously successful Fortnite phenomenon — has just picked up Houseparty.
Two very different electric scooter companies have united.
The popular encrypted messaging service Telegram is once again being hit with a distributed denial of service (DDoS) attack in Asia as protestors in Hong Kong take to the streets. For the last several days, Hong Kong has been overrun with demonstrators protesting a new law that would put the munici…

Two female tech executives were photoshopped into a photo of 15 prominent men in the tech industry that ran in GQ last week, BuzzFeed's Ryan Mac reports. Mac spotted the suspicious photo, taken on a trip to Italy to visit famous fashion designer Brunello Cucinelli, and with the help of Twitter surfaced the original photo of just the 15 men. From Mac:

The photo has now been removed from GQ’s story, which has been updated to include: “An image provided by a Brunello Cucinelli representative that did not meet GQ’s editorial standards was removed from this story.”

Women on Twitter, including former Reddit chief executive Ellen K. Pao, used the incident to poke fun at the very real issues with diversity in Silicon Valley.

Wired's Nitasha Tiku wrote:

The female CEOs -- who actually were on the trip but just not in the picture -- also weighed in. From Sunrun chief executive Lynn Jurich:

Peek founder and chief executive Ruzwana Bashir, one of the women photoshopped into the picture, added herself to some other big events:

The real winner, however, was LinkedIn. From Mac:


—  Tech news generating buzz around the Web:

Influencer-style pictures are simply the way we document our lives now.
The Atlantic
The notorious Silk Road site was shut down in 2013. Others have followed. But the online trafficking of illegal narcotics hasn’t abated.
New York Times
Lego has spent the past seven years trying to make its blocks with plastic derived from plants, but finding the material to hit its target is proving difficult.
Wall Street Journal