Microsoft chief executive Satya Nadella has said that “human-centered AI can help create a better world.” (Gerard Julien/AFP/Getty Images)
Kirsten Ostherr is a media scholar and digital health technology researcher at Rice University.

Even as tech companies have weathered scandals, many have also redirected attention toward their more socially redeeming activities by promoting the concept of humanistic technology. Tom Gruber of Apple describes Siri as “humanistic AI — artificial intelligence designed to meet human needs by collaborating [with] and augmenting people.” Microsoft chief executive Satya Nadella has said, “Human-centered AI can help create a better world.” Google’s Fei-Fei Li has called human-centered AI “AI for Good and AI for All.” Facebook chief executive Mark Zuckerberg believes the company can build “long term social infrastructure to bring humanity together.”

The word “human” crops up in conversations across the technology industry, but it’s not always clear what it means — assuming it means anything at all. Intuitively comprehensible, it sounds nonthreatening, especially in contrast to alienating jargon such as “machine learning.” It also builds on the popularity of human-centered design in recent years, a practice that is best known for its emphasis on cultivating deep empathy between developers and users. But calling the results “humanistic” is ultimately rhetorical sleight of hand that suggests much and means little. Unless these companies reconsider their underlying approach, their words will remain empty.

Among the big tech companies, Google has voiced the clearest expression of the idea of humanistic AI. In March, Li, chief scientist for AI research at Google Cloud, penned a New York Times op-ed in which she wrote, “A human-centered approach to A.I. means these machines don’t have to be our competitors, but partners in securing our well-being.” Yet even as it was promoting the idea of human-centered AI, Google was actively pursuing Project Maven, a major Department of Defense contract to develop artificial intelligence for use in drones. Effectively acknowledging the disconnect, Google announced that it would not renew the DOD contract and laid out a set of ethical guidelines in which it clarified that it would not be “developing AI for use in weapons.” Recognizing the potential negative publicity that this application of its technology could generate, Li warned in an internal company email: “Google Cloud has been building our theme on Democratizing AI in 2017, and Diane [Greene] and I have been talking about Humanistic AI for enterprise. I’d be super careful to protect these very positive images.”

These “positive images” of “humanistic AI” include … well, it’s not clear what they include. If these algorithms are humanistic, it’s mostly insofar as they tend to internalize our worst instincts.

Consider computer vision, a type of AI that was key to Project Maven (and is central to self-driving cars). Photographic images from cameras mounted on drones are widely used to gather visual evidence and provide forensic truth value for military decision-makers. But those images are not transparently legible, and it takes a huge amount of human labor to interpret the data, especially in the categories of age, sex and race. Numerous examples already exist of misinterpreted drone footage identifying the wrong target. Human beings depend on subtle contextual cues, as well as value-laden theoretical frameworks, to guide our interpretation of the world around us, and even then, competing interpretations abound (as disagreements around the Rodney King video reveal). And yet even as technology lags behind human capability when it comes to contextual sensitivity, we still hope to entrust it with life-or-death decisions.

Humanistic tech proponents such as Li acknowledge that to become “human-centered,” AI needs to develop more of the nuance of human intelligence, and human visual perception in particular. To illustrate AI’s current failings, Li cites a case in which an algorithm correctly identified a man on a horse while failing to note that it was a bronze statue. A more revealing example would be the egregious case of Google’s image-labeling algorithm that classified black people as gorillas.

It’s here, however, that we see an opportunity to get beyond the rhetoric of humanism, moving instead toward a truly human-centered approach. Doing so would involve calling out, critiquing, and correcting the legacies of racist classification that enabled this error, legacies embedded in the ways we collect and comprehend data. It would focus on the humanity of the people who were harmed, rather than simply erasing the embarrassment.

That’s not what Google did, however. When the gorilla incident made news in 2015, Google apologized, acknowledged the limitations of machine learning and removed the category of “gorilla” from the system. As of January 2018, Wired reported, the image search algorithm still excluded gorillas, along with chimps, chimpanzees and monkeys. Google also excluded the categories “African American,” “black man,” and “black woman” from its image-labeling technology.

Many researchers have pointed out how machine-learning systems tend to reproduce the biases of their programmers. A truly humanistic approach to categorizing racially sensitive images would incorporate history and aesthetics, resulting in a more accurate analysis with more transparent accountability for its classification choices. Thirty years ago, film scholar Richard Dyer showed how the invention of photography privileged whiteness at the very origin of its technological creation. Armed with these insights, a humanistic AI developer might have anticipated and prevented the gorilla incident by challenging the use of training sets for facial recognition that are primarily made up of images of white males.
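To make that kind of challenge concrete: even a simple audit of who is represented in a training set, run before any model is built, can surface the skew described above. The sketch below is purely illustrative and assumes a hypothetical CSV manifest (“training_manifest.csv”) with a “demographic_group” column; real auditing would require far richer metadata and the kind of historical and ethical judgment humanists bring.

```python
# Illustrative sketch only: tally how often each demographic group appears
# in a hypothetical labeled training manifest before a model is trained.
# The file name and column name are assumptions, not an actual Google dataset.
import csv
from collections import Counter

def audit_demographic_balance(manifest_path: str, column: str = "demographic_group") -> None:
    """Print the share of training images belonging to each labeled group."""
    counts = Counter()
    with open(manifest_path, newline="") as f:
        for row in csv.DictReader(f):
            # Rows without a group label are counted explicitly rather than ignored.
            counts[row.get(column) or "unlabeled"] += 1

    total = sum(counts.values())
    if total == 0:
        print("Manifest is empty; nothing to audit.")
        return
    for group, n in counts.most_common():
        print(f"{group}: {n} images ({n / total:.1%} of the training set)")

if __name__ == "__main__":
    audit_demographic_balance("training_manifest.csv")
```

A lopsided tally is only a starting point; deciding which groups matter, and what balance would count as fair, is exactly the interpretive work that cannot be automated.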

In addition to addressing the harms of representational exclusion, humanists could also help AI developers think differently about archival training data. African American film scholars have long documented the racist iconography that pervades entertainment and news image archives. After decades of presenting a colonial perspective on exoticized images of racial “others,” National Geographic magazine has begun to reckon with its own legacy by openly reinterpreting the meaning of its photographic archive. Scholars of postcolonialism have assembled archives of films from global sites of empire that provide deep historical context for how images were coded by both the colonizers and the colonized. These archives offer an alternative frame for interpreting the history of visual culture that could be integrated into AI programs to provide more nuanced context for computer vision algorithms. This approach would foreground the implicit assumptions embedded in the data sets we use for training, while also emphasizing the importance of perspective in making meaning out of different images.

These examples hint at what humanistic development of AI might look like if it were going to be more than a rhetorical flourish. Humanistic AI would ask what the broader social and ethical purpose is in developing a particular algorithm, rather than waiting to see what unintended consequences might arise. This approach would reframe both the kinds of questions we ask and the archives we draw on to answer them.

By working from multiple perspectives, humanistic AI might make it harder to see the world in binary terms. Li notes that achieving human-centered AI will require programmers to collaborate with experts in other fields, including the humanities. She’s right. But simply adding humanistic researchers to examine the social impact of AI after it is deployed, without also changing the development process, probably won’t get us very far. Calling AI humanistic without truly integrating experts in the humanities who can bring diverse perspectives to the ethical reasoning behind these initiatives will lead only to continued cases of bias and further erosion of public trust. The intellectual capital to solve the problem is not present in the tech sphere alone. The call for “humanistic AI” should be followed by a call for humanists to help create better AI through collaboration from the very start of the development process.