Shortly after the playful photo-transforming FaceApp went viral Wednesday as the most downloaded smartphone app in America, a nationwide panic began to set in: Who was this shadowy Russian tech firm everyone had been sending their photos to? And what did they want with millions of people’s faces?
Experts said the FaceApp anxiety highlighted how quickly public attitudes about the Internet have changed amid a widespread reckoning over data privacy and election interference, with more people beginning to think twice about the personal data they freely give up — and the companies they decide to trust.
FaceApp allows anyone to morph their face into a vision of their future self, and social media feeds quickly filled with computer-generated portraits marked with wrinkles and graying hair. But the app’s development by a largely unknown Russian firm, and its widely permissive rules for how people’s photos could be used, triggered alarms in Washington and beyond.
The Democratic National Committee on Wednesday sent an alert to 2020 presidential campaigns, state parties and others in the “Democratic ecosystem” urging everyone to delete the app “immediately,” citing concerns that whatever the photo-morphing app was doing with people’s data wasn’t worth the risk.
Senate Minority Leader Charles E. Schumer (D-N.Y.) followed shortly after with a letter to the FBI and the Federal Trade Commission, calling for officials to launch a national security investigation into the app and potentially to take steps “to mitigate the risk presented by the aggregation of this data.”
“It would be deeply troubling if the sensitive personal information of U.S. citizens was provided to a hostile foreign power actively engaged in cyber hostilities against the United States,” Schumer wrote.
Burned by Russian hackers during the 2016 presidential race, the party has taken an aggressive stance toward cybersecurity, investing in nationwide education and training programs to boost people’s online defenses and prevent a damaging repeat.
But Wednesday’s alerts weren’t based on any intelligence reports of secret dangers, officials said. Instead, they were a reaction to the broader anxiety swirling across social media and news reports — and a proactive, if evidence-light, response to the possibility that another online fad could turn dangerous.
FaceApp’s terms of service grant the company a “perpetual, irrevocable, nonexclusive, royalty-free (and) worldwide” license to use people’s photos, names and likenesses — a wide-open allowance that some worried could erode people’s data privacy or control.
Experts said many other apps, from social media giants such as Facebook to pregnancy-tracking apps, carve out the same perpetual corporate rights to user data.
Joseph Jerome, policy counsel at the Center for Democracy and Technology, described the intense reaction to FaceApp as a “perfect storm” of colliding factors: a general distrust of Russian and Chinese tech companies driven by political turmoil; heightened concerns over the use of facial data; and growing worries over a lack of privacy protections online.
“This is not the exception. This is the rule,” Jerome said of the app’s terms of service. “Privacy policies are not readable. They are broad [and] they don’t actually tell you what companies do and don’t do with your information.”
Neema Singh Guliani, legislative counsel for the American Civil Liberties Union, said the panic around FaceApp reflects a broader frustration from people about how their data can be misused, in large part because federal privacy laws can do little against invasive terms of service or privacy policies.
Elizabeth Potts Weinstein, a small-business law attorney in Silicon Valley, told The Post she also worried about where that user data would go if the company’s fortunes changed.
“They could go under and all their data and all their assets could get bought by somebody that is nefarious or could get appropriated by somebody in the national government,” she said. “We in the United States don’t have jurisdiction over them.”
Before its whirlwind rise, FaceApp was started by Wireless Lab in early 2014, according to a LinkedIn post by its chief executive, Yaroslav Goncharov.
Goncharov studied computer science at one of Russia’s largest universities, St. Petersburg State University, before moving to Redmond, Wash., where he spent three years as a technical lead at Microsoft. He later co-founded a software company that was acquired in 2011 by the search firm Yandex, which many call Russia’s Google.
Goncharov told the Moscow publication Afisha.ru in 2017 that he was inspired during his time at Microsoft to design FaceApp by applying the latest artificial-intelligence and machine-learning techniques to the mass processing of digital photos. That idea is now commonplace in apps such as Snapchat and Instagram, which use AI software to instantly contort images of cats, nature scenes and people’s faces, often with convincing results.
Goncharov said he spent his evenings writing code for projects, including an automated bot with which he could play poker. He called the bot’s “neural network” — an AI term for how it processes information — “the simplified analog of the human brain implemented in computer code.”
An early version of Goncharov’s company was incorporated in Delaware in 2014 as Hotel WiFi Test Inc., referencing a separate service built to help guests judge hotels based on the speed of their Internet, company filings show. The start-up reported about $43,000 in sales for 2017.
That year, the company launched FaceApp and saw it explode across the Web — gaining attention both for its photo-realistic results and for widely criticized design choices, including “ethnicity filters” that some said were tantamount to virtual blackface. The app has since been used more than 80 million times.
Goncharov told The Washington Post that FaceApp photos are stored on servers run by U.S. tech giants Amazon and Google and that the company does not share data with or sell it to third parties. But a Post analysis found data flowing to the third-party Facebook and Google trackers that many apps use for online ads, and FaceApp’s privacy terms state the company can save a user’s uploaded photos and other data, even if a user decides to delete them.
Goncharov said the company deletes “most” photos from its servers after 48 hours but wouldn’t say which ones are stored or for how long. No user data, he said, goes back to Wireless Lab’s research-and-development team in St. Petersburg. A 2016 Delaware tax report for FaceApp listed another office about 50 miles west of its St. Petersburg headquarters, in the town of Sosnovy Bor.
“If you use #FaceApp you are giving them a license to use your photos, your name, your username, and your likeness for any purpose including commercial purposes (like on a billboard or internet ad) -- see their Terms: https://t.co/e0sTgzowoN,” Elizabeth Potts Weinstein (@ElizabethPW) wrote on Twitter on July 17, 2019.
The Russian connection was, to some experts, not as alarming as some in Washington first suspected. Russia’s universities have gained prominence for producing strong AI talent, and Google and other tech firms employ engineers and other technical staff in Moscow.
Samsung last year opened an “AI Center” in Moscow’s White Square business district, home to the American corporate giants Deloitte and McKinsey & Company. This spring, AI engineers at the Moscow lab unveiled a breakthrough: a new style of “deepfake” technology that can automatically create convincing animations of a person’s face from just a single photo.
FaceApp is far from the only popular foreign-born app with curious data practices. The viral video-sharing app TikTok is owned by one of China’s most valuable tech firms, Bytedance, worth more than $75 billion. The Beijing-based app has been downloaded more than 100 million times in the United States and more than 1 billion times worldwide.
“I wouldn’t look at a project and judge it based on the city of origin. I would judge it based on the quality of work and the particular application,” said Oren Etzioni, the chief executive of the Allen Institute for Artificial Intelligence, a research center in Seattle. “Just because it’s from St. Petersburg or Beijing does not at all mean it’s bad. And just because it was developed by the NSA or the U.S. doesn’t mean it’s good.”
The real issue, Etzioni said, was how the data was used — whether users understood how the photos might be used for different purposes or what they were giving up. “That’s a subject of concern for all of us. Not just, ‘Oh my god, it’s Russia,’” he said.
Natalia Abbakumova, Alice Crites, Magda Jean-Louis and Hamza Shaban contributed to this report.