Federal immigration officials have abandoned their pursuit of a controversial machine-learning technology that was a pillar of the Trump administration’s “extreme vetting” of foreign visitors, dealing a reality check to the goal of using artificial intelligence to predict human behavior.
ICE dropped the machine-learning requirement from its contract request in recent months, opting instead to hire a contractor to provide training, management and personnel who can do the job manually. Federal documents say the contract is expected to cost more than $100 million and to be awarded by the end of the year.
After gathering “information from industry professionals and other government agencies on current technological capabilities,” ICE spokeswoman Carissa Cutrell said, the focus of what the agency now calls its Visa Lifecycle Vetting program “shifted from a technology-based contract to a labor contract.”
An ICE official briefed on the decision-making process said the agency found there was no “out-of-the-box” software that could deliver the quality of monitoring the agency wanted.
The envisioned artificial-intelligence system, proposed after Trump’s January 2017 executive order calling for strict screening rules and curtailing travel from several majority-Muslim countries, was expected to flag thousands of people a year for deportation investigations and visa denials, government filings show.
Such an application would have to be custom-designed, at significant cost, and be subject to a cumbersome internal-review process to ensure it would not trigger privacy or other legal violations or be redundant with other technologies already in use by the government, the official said.
“We’re always looking to streamline our operations, and in this particular instance a labor contract was just the best fit,” said the ICE official, who spoke on the condition of anonymity because the official had not been authorized to discuss the selection process in detail.
Civil rights, immigration and privacy groups have criticized the contract as a “digital Muslim ban” that would subject visitors to an invasive level of personal surveillance just for entering the country. Others questioned whether the agency was looking for an impossible technology: an AI sharp enough to predict human intentions based on an Internet search.
Some legal experts said they welcomed the change but worried that ICE officials, having signaled their interest to contractors, might pursue a similar automatic-screening technology later on.
“Have they realized only that it doesn’t exist now, which is important in its own right, or have they also recognized that this really was an idea that was built on a complete fantasy?” said Rachel Levinson-Waldman, senior counsel at the Brennan Center for Justice, a left-leaning policy institute. “That you can somehow take these hugely disparate sources of information, in lots of different languages, and make a prediction about what somebody’s value and worth is?”
The proposed technology was a key element of the broader screening initiative Trump has said would increase Americans’ safety. After eight people were killed in a New York City truck attack in October, Trump tweeted that he “ordered Homeland Security to step up our already Extreme Vetting Program. Being politically correct is fine, but not for this!”
ICE’s Counterterrorism and Criminal Exploitation Unit, Cutrell said, receives 1.2 million “investigative leads” per year — from visa overstays, tips and other immigration violations — and prioritizes them by potential threat. The agency, Cutrell said, believed an automated system would provide a more effective way to continuously monitor the 10,000 people determined to be the greatest potential risk to national security and public safety.
Among those are foreigners who enter the country on temporary or visitor visas and then apply for permanent residency. ICE said it believed the automated system could provide continuous monitoring of their social-media activity for “derogatory” information, including radical or extremist views, that could weigh against their applications.
Contract-request documents in June 2017 said the automated system should contribute to agents’ work and “generate a minimum of 10,000 investigative leads annually.” The ICE official said the revised labor-contract request, instead of setting that quota, will probably call for roughly 180 people to monitor the social-media posts of the 10,000 foreign visitors ICE flags as highest-risk, generating new leads in the process.
The monitoring program would look only at publicly visible social-media posts, according to ICE, and would stop once the person under review was granted legal residency in the United States.
But industry critics and Democratic lawmakers said social-media-scanning algorithms would chill free speech and were unproven in their ability to forecast a possible crime or terrorist attack. Three ranking Democrats on the House Committee on Homeland Security wrote a letter to the Department of Homeland Security last month saying the program would be “ineffective, inaccurate … and ripe for profiling and discrimination.”
ICE’s acting director, Thomas Homan, responded that the program had been intended to bolster the agency’s “analytical tools” that agents use to vet foreign visitors, including through social media. Leads “enhanced using analytical tools,” he added, were reviewed by senior analysts before being used in investigations.
Several major tech and contracting firms, including IBM, Booz Allen Hamilton and Deloitte, attended an “industry day” session in Virginia in July 2017 in which immigration officials discussed the contract. But some companies later voiced unease over the proposal: An IBM spokeswoman said the firm “would not work on any project that runs counter to our company’s values, including our long-standing opposition to discrimination.”
Though AI systems are increasingly deployed to help flag objectionable and dangerous content, tech giants such as Facebook still depend largely on their human workforces for content-moderation decisions, saying the software isn’t yet nuanced enough to comprehend speech, assess context and decide objectively on its own.
Asked whether a social-media-scanning AI could predict the difference between a good person and a terrorist, the Cambridge Analytica whistleblower Christopher Wylie told the Senate Judiciary Committee Wednesday that “the most advanced neural network made yet” still reflects the systemic prejudices of the data on which it’s been trained.
“There is no mathematical way to determine whether someone is a bad person,” Wylie said.