
People know that the U.S. government has their photo when they apply for a visa -- or are arrested. But they likely don't know that the government could also be using those pictures to test facial recognition software. 

Researchers this week announced their discovery that the U.S. government has been using massive sets of photos from government agencies -- collected during typical law enforcement and immigration processes -- to test facial recognition software developed by companies and universities. Developers can even download some of the photo data sets -- such as those from people who were arrested but are now deceased. 

“The government is using any data it can get ahold of,” said Nikki Stevens, a software engineer and PhD student at Arizona State University. “If we’re not in there now, soon we will be.”

The revelations are raising new questions about whether the government is abusing its power over citizens who often can't opt out of taking these photos.

The researchers from the University of Washington, Arizona State University and Dartmouth say that people's images should not be used without their consent -- especially to improve technology that could potentially be used against them in the future for purposes like surveillance. It's time for new laws governing how the government uses facial recognition technology, they say. 

“If the government wants to use this technology, then the government should make sure that it is fair, equitable, working in a way that is consensual, has appeal mechanisms and that is not built on thousands of dead arrestees,” said Os Keyes, a PhD student at the University of Washington.

Facial recognition technology is booming -- and putting in place privacy safeguards for government testing of the technology could set the tone industry-wide, researchers say. 

After all, the government is one of the earliest and largest buyers of facial recognition software, which Keyes says powers everything from routine tasks at the Department of Motor Vehicles to entrances at high security military bases. 

“If you put conditions on what facial recognition systems can do and how they have to work for the government to buy them, you are effectively putting conditions on facial recognition systems,” said Keyes. “A company designing a facial recognition system that major governments won’t buy is a company that will be subject to an involuntary buyout pretty soon.”

The findings were publicized in a Slate op-ed published Sunday amid a broader debate brewing over whether facial recognition researchers should obtain the consent of ordinary people before using their images to refine and test facial recognition technology.

Facial recognition systems can't be built or refined without massive sets of data — and the public is just learning about the ways researchers amass them. Last week, NBC News exposed that IBM, in its efforts to create a large, diverse set of images to train facial recognition, scraped millions of photos from the photo sharing website Flickr. NBC contacted some of the photographers whose images were in the set, who were surprised and disconcerted to learn how their photos were being used by the tech giant. The photographers also said none of the people pictured in their photos knew their likenesses would be used in that way. 

“This is the dirty little secret of AI training sets. Researchers often just grab whatever images are available in the wild,” NYU School of Law professor Jason Schultz told NBC.

Stevens and Keyes are particularly worried about how racial minorities, who were overrepresented in the government data sets they found, could be affected by the use of their photos to train facial recognition technology. They say that because of racial inequities in the United States, these groups are more vulnerable to the ways facial recognition could be abused for surveillance or to limit civil liberties. 

The researchers say they uncovered the use of the data sets while working on a paper about the National Institute of Standards and Technology. The agency allows companies to submit their facial recognition software for tests that have become a standard way for the industry to benchmark accuracy. Though the researchers filed some Freedom of Information Act requests, they say much of the data they cite in their op-ed is publicly available online once you know where to look. 

“You don't have to look hard to find what we found,” Keyes said. 

The researchers also say their findings raise serious questions about whether the agency can be trusted to help shape standards governing the federal government’s use of artificial intelligence. President Trump recently signed an executive order that says NIST is responsible for planning how the federal government will develop technical standards for artificial intelligence.

NIST, however, is defending its work. Jennifer Huergo, an agency spokeswoman, says all of the images it uses from other agencies comply with Human Subject Protection review and other applicable federal regulations to preserve people's privacy rights in research.

Huergo did contest some of the claims the researchers made in the article. She pushed back on claims that some testing programs depend on images of children who have been exploited for pornography. Huergo said that the government did test algorithms against images of exploited children in an effort to help the Department of Homeland Security determine whether facial recognition could be used to combat child abuse. However, Huergo said NIST never took possession of that data — it remained housed within DHS, and NIST employees “never look at the images.”

The op-ed also said that images are drawn from documentation of people boarding aircraft, but Huergo said those images were taken from a simulation that DHS ran of people boarding aircraft in a warehouse. 

Huergo says the agency’s work is focused on making facial recognition better and fairer for everyone.

“We’re here to help these technologies to reduce errors and bias and provide technical underpinnings to make sure people can make decisions on their use,” she said. “You want these things to be working properly.”


BITS: Facebook said the initial live stream of the massacre in Christchurch, New Zealand, was viewed roughly 4,000 times before the company removed it, The Washington Post's Meagan Flynn reports. The company says it did not receive a report of the 17-minute video until 29 minutes after the broadcast started — and by then the video had already begun spreading across the Internet. 

Facebook said it was first alerted to the video after a Facebook user called New Zealand authorities. The company says it removed the video within minutes of being notified by law enforcement, but by that point, it had already been downloaded and shared on the message board 8chan. 

Facebook is taking the rare step of releasing details about its content moderation practices as it faces a barrage of criticism from policymakers around the world, who say the company needs to do more to quell the spread of violent videos online. New Zealand Prime Minister Jacinda Ardern said Tuesday she would be looking into the role social media played in amplifying the attack. 

“There is no question that ideas and language of division have existed for decades,” Ardern said Tuesday from the Parliament floor. “But the form of distribution, the tools of organization — they are new. We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published. They are the publisher, not just the postman. This cannot be a case of all profit, no responsibility.”

In addition to disclosing details about the removals, Facebook said it was collaborating with other technology giants to stop the spread of the video. 

NIBBLES: Rep. Devin Nunes (R-Calif.) told Fox News he is suing Twitter and several of its users for defamation and negligence, alleging that the social media platform “knowingly acted as a vessel for opposition research” and censors conservative voices, according to The Washington Post's Allyson Chiu.  However, as of late Monday, the suit was not listed in Virginia state court records, where Fox News reported it had been filed.

Fox published a copy of the complaint, which names Liz Mair, a Republican political consultant, and two anonymous accounts known for mocking Nunes — one claiming to be his mom and another purporting to be a cow that he owns. Nunes is seeking $250 million in compensatory damages, as well as $350,000 in punitive damages, according to the lawsuit Fox published. 

“The lawsuit — the latest one accusing a social media site of anti-conservative bias — drew criticism Monday as many, including legal experts, questioned its motivations and wondered if it could be a 'publicity stunt,'" Chiu reported. 


BYTES: Facebook wanted to start a local news service, but the social network found that 40 percent of Americans live in places where there isn't enough original local reporting to support it, the Associated Press's David Bauder reports. 

Some 1,800 newspapers have closed in the United States in the last 15 years, according to the University of North Carolina, and newsroom employment has declined by 45 percent. As the AP notes, the success of tech giants like Facebook has contributed to the broken business model facing the journalism industry. 

The company deems a community unable to support its service, known as “Today In,” if it cannot find a single day in the month with at least five news items to share. 

“It affirms the fact that we have a real lack of original local reporting,” Penelope Muse Abernathy, a University of North Carolina professor who studies the topic, told the AP.  Facebook plans to share the data it collected on local news with researchers at Duke, Harvard, Minnesota and North Carolina who are studying news deserts created by the shrinking news industry. 


— More technology news from the private sector:

The spiraling conflict could throw the $50 billion car company further into uncertainty and imperil Musk’s ability to remain as company chief.
Drew Harwell
The failure of social media companies to block videos of Friday's massacre in New Zealand highlights the difficulties of policing platforms whose very business model creates the systems that are so easily manipulated.
Craig Timberg, Drew Harwell, Elizabeth Dwoskin and Tony Romm
Privacy advocates say such searches are likely unconstitutional and have raised concerns about the data collected by such devices.
Hamza Shaban
In other news: Turns out Netflix is not a tech company, Hastings says.

— More technology news from the public sector:

National Security
According to the latest-available data, the NSA in 2017 collected more than 530 million call detail records linked to 40 “targets.”
Ellen Nakashima
Vestager spoke with Recode’s Kara Swisher in front of a live audience at South By Southwest.
The House will hold a vote on Democrats’ bill to reinstate the Obama-era net neutrality rules next month, House Majority Leader Steny Hoyer (D-Md.) announced on Monday.
The Hill

MySpace still exists -- but it might have lost the photos, videos and audio files people uploaded to the site before 2016, according to CNET's Sean Keane. The company said the incident resulted from a server migration project. 

People on social media responded to the news that the site may have lost their digital memories from the early 2000s, when the company was a top social network. 

Some were surprised the once-popular social network was still around: 

The Chicago Tribune told readers to alert their Top 8: 

Others welcomed the news: 


— Tech news generating buzz around the Web:

Home & Garden
In most cases, data can be recovered. Here’s how.
Jeff Blyskal | Washington Consumers' Checkbook