But at the same time, the U.S. government is targeting social media in ways that undermine Americans’ basic constitutional rights.
The Department of Homeland Security has been capitalizing on the availability of social media data, asking foreign visitors for their social media handles. This summer saw the latest development from one DHS agency, U.S. Immigration and Customs Enforcement (ICE), which is seeking to mine social media to make sweeping predictions about foreign visitors to the United States — and, by extension, to monitor the Americans in their networks.
When President Trump issued his Jan. 27 executive order curtailing travel and immigration from seven majority-Muslim countries — an order, repeatedly blocked in court, known as the Muslim ban — it included a little-noticed provision requiring the government to develop a screening procedure to vet every visitor. This is a herculean, and arguably impossible, task. The order mandates that visitors from any country, coming here for any reason, must be scrutinized to determine whether the traveler would be “a positively contributing member of society,” “make contributions to the national interest” or commit a crime or terrorist act.
Attempting to comply, ICE notified contractors in July that it was in the market for an automated system to determine who will contribute to society and who intends to do the country harm. The winning company will be required to continuously monitor a breathtakingly wide range of online data: “media, blogs, public hearings, conferences, academic websites, social media websites such as Twitter, Facebook, and LinkedIn, radio, television, press, geospatial sources, Internet sites, and specialized publications.” (Originally named the Extreme Vetting Initiative, the program has been rebranded as Visa Lifecycle Vetting, but the parameters remain the same.) These efforts are in service of a mission that is at best sketchily defined: Notably, American law has no language defining what it would mean to positively contribute to society or the national interest, leaving third-party contractors to fill in the blanks, with people’s lives in the balance.
Setting aside for a moment whether any program could accomplish what ICE envisions, the proposition is enormously dangerous. As a group of more than 50 civil society organizations pointed out to DHS in a letter released in November, it is likely to be custom-built for discrimination. In addition to the fact that the scheme was birthed from the travel ban, a watered-down version of which was found by an appeals court to “drip with religious intolerance, animus, and discrimination,” it arises in the context of Trump’s derogatory statements about a range of immigrant groups. It is easy to imagine that a system built in that environment will disadvantage groups targeted by the president, whether intentionally or through the use of ill-conceived tools to measure an individual’s contributions to society.
The initiative is also likely to chill speech and association that is protected by the U.S. Constitution and by international human rights frameworks. Any visitor who knows that her online communications will be scrutinized by the U.S. government — not just as a one-time matter, but continuously for the duration of her stay — will choose her words carefully, perhaps choosing not to post messages that are critical of government policies or that discuss views disfavored by the current administration. U.S. citizens might exercise greater caution in speaking, meeting or collaborating with foreign visitors, worried that their speech will attract attention as well. And as a practical matter, it is highly implausible that a program purporting to scrutinize the Internet would pick up only materials that are posted by or related to foreign visitors; rather, it will sweep in vast quantities of content about lawful permanent residents and citizens as well.
Finally, ICE’s vetting initiative is simply impossible as a technological matter. As authorities in engineering, math, computer science and automated decision-making, including the former deputy U.S. chief technology officer, the president of the Association for the Advancement of Artificial Intelligence and the founder of X-Lab, told DHS in a concurrently released letter, “no computational methods can provide reliable or objective assessments” of the traits the agency wants to measure. As a result, any algorithm that is built will rely on “proxies” that may “bear little or no relationship” to the characteristics that ICE hopes to calculate — a process that is likely to “arbitrarily flag groups of immigrants under a veneer of objectivity.”
What does this have to do with Russia? Even before the latest revelations, relying on social media to make predictive, machine-driven determinations about individuals would have been cause for acute concern. Social media is extraordinarily contextual and notoriously hard to interpret — just ask Leigh Van Bryan, a British would-be visitor who jocularly tweeted in early 2012 that he planned to “destroy America” (English slang for partying) and “dig Marilyn Monroe up” (which he claimed was a reference to the show “Family Guy”). Homeland Security officials saw Van Bryan’s tweets before he boarded his plane, interrogated him when he arrived at the Los Angeles airport, and shipped him and his travel companion back to Britain. The same goes for Jelani Henry, the New York teenager sent in 2012 to Rikers Island — where he spent nine months in solitary confinement — in large part for appearing in and “liking” pictures on Facebook. Indeed, algorithms have struggled even to accurately identify positive and negative Twitter posts, and the task becomes that much harder when foreign languages are involved.
The recent Russia revelations starkly illustrate yet another way that social media can be misconstrued, particularly by automated decision-making tools — and the high stakes of doing so. Imagine a Facebook user who clicks that she “likes” Blacktivist, one of the groups created by Russian intermediaries. In a world of surreptitious social media manipulation, where a Facebook group is created by a foreign power seeking to destabilize an American election, an algorithm could conclude that “liking” the group signals allegiance to that power’s aims and categorize an individual accordingly. This is an especially acute concern where the automated tool might be privy to information not available to the user — for instance, the specific location of the group’s creator, or the identity of other people in the creator’s social network. And while Facebook now officially purges information about content posted by groups that have been removed from the platform, mitigating some concerns about the use of that outdated data by algorithmic tools, evidence also suggests that a number of posts by groups that are obviously fake to an educated eye remain online and easily available to more credulous users. A user’s online activity could thus unwittingly send a message that she never intended to transmit — and she might have no way of uncovering the judgments that are made about her.
Incidents like Russian agents’ creation of a Facebook page boasting 200,000 followers that organized pro-Trump, anti-Clinton rallies in New York, Florida and Pennsylvania have gotten attention for demonstrating how easily the platforms used to share pictures and keep up with family and friends can be weaponized for political purposes. But these incidents also raise important questions about the legitimacy and effectiveness of law enforcement and intelligence agencies’ use of social media to draw conclusions about users’ beliefs, associations, political and religious leanings, risk of violence and more. If we cannot even determine the motivations or backgrounds of those behind overtly political messaging on social media, how can that messaging serve as a stand-in for truth when deciding whether someone is a potential good citizen, criminal or terrorist?
Social media is a powerful tool to communicate, share information, organize like-minded people and more. But if we’ve learned anything from the past 12 months, it is just how unreliable our online identities can be. If ICE’s automated decision-making tool goes into effect, it will produce error-riddled determinations, lead to discriminatory outcomes and thwart scientific, academic and intellectual exchange. ICE should halt the development of the Extreme Vetting Initiative now.