Fei-Fei Li and John Etchemendy are co-directors at the Stanford Institute for Human-Centered Artificial Intelligence. (Peter DaSilva for The Washington Post)

PALO ALTO, Calif. — A Stanford University scientist coined the term "artificial intelligence." Others at the university created some of the most significant applications of it, such as the first autonomous vehicle.

But as Silicon Valley faces a reckoning over how technology is changing society, Stanford wants to be at the forefront of a different type of innovation, one that puts humans and ethics at the center of the booming field of AI.

On Monday, the university will launch the Stanford Institute for Human-Centered Artificial Intelligence (HAI), a sprawling think tank that aims to become an interdisciplinary hub for policymakers, researchers and students who will go on to build the technologies of the future. Its founders hope to inculcate in that next generation a more worldly and humane set of values than those that have characterized the field so far, and to guide politicians toward more sophisticated decisions on the challenging social questions raised by technology.

“I could not have envisioned that the discipline I was so interested in would, a decade and a half later, become one of the driving forces of the changes that humanity will undergo,” said Fei-Fei Li, an AI pioneer and former Google vice president who is one of two directors of the new Stanford institute. “That realization became a tremendous sense of responsibility.”

The institute — backed by the field’s biggest leaders and industry players — is not the first such academic effort of its kind, but it is by far the most ambitious: It aims to raise more than $1 billion. And its advisory council is a who’s who of Silicon Valley titans, including former Google executive chairman Eric Schmidt, LinkedIn co-founder Reid Hoffman, former Yahoo chief executive Marissa Mayer and co-founder Jerry Yang, and the prominent investor Jim Breyer. Microsoft co-founder Bill Gates will keynote its inaugural symposium on Monday.

The money raised will go not only to research grants and academic gatherings but also to buying data-processing power and to luring back some of the talent that has fled academia for lucrative industry jobs in recent years. The institute will be housed in a new 200,000-square-foot building at the heart of Stanford’s campus.

“We recognize that decisions that are made early on in the development of a technology have huge ramifications,” said John Etchemendy, a philosopher and former Stanford provost, the second director of the AI institute. “We need to be thoughtful about what those might be, and to do that we can’t rely simply on technologists.”


The Stanford Institute for Human-Centered Artificial Intelligence will be housed in a new building at the center of the campus. (Peter DaSilva for The Washington Post)

The idea for the institute began with a 2016 conversation between Li and Etchemendy in Li’s driveway, about a five-minute drive from campus.

Etchemendy had recently purchased the house next door. But the casual neighborly chat quickly morphed into a weightier dialogue about the future of society and what had gone wrong in the exploding field of AI. Billions of dollars were being invested in start-ups dedicated to commercializing what had previously been niche academic technologies. Companies like Facebook, Apple and Google were hiring the world’s top artificial intelligence researchers — along with many of their recently minted graduates — to work in new divisions dedicated to robotics, self-driving cars and voice recognition for home devices.

“The correct answer to pretty much everything in AI is more of it,” said Schmidt, the former Google chairman. “This generation is much more socially conscious than we were, and more broadly concerned about the impact of everything they do, so you’ll see a combination of both optimism and realism.”

In the years following that conversation in the driveway, the dangers and ills of AI have become more apparent. Seemingly every day, new statistics emerge about the tide of job loss wrought by the technology, from long-haul truckers to farmworkers to dermatologists. Elon Musk called AI “humanity’s existential threat” and compared it to “summoning the demon.”

Researchers and journalists have shown how AI technologies, largely designed by white and Asian men, tend to reproduce and amplify social biases in dangerous ways. Computer vision technologies built into cameras have trouble recognizing the faces of people of color. Voice recognition struggles to pick up English accents that aren’t mainstream. Algorithms built to predict the likelihood of parole violations are rife with racial bias.

And there are political ramifications: Recommendation software designed to target ads to interested consumers was abused by bad actors, including Russian operatives, to amplify disinformation and false narratives in public debate.

“The question comes down to whether this revolution of AI — and of today’s machine learning techniques — will contribute to the progression of humanity,” said Hoffman, who chairs the institute’s advisory council. He called Stanford’s institute a potential “key lever” that would act as a “catalyst,” trusted adviser, and source of intelligence for industry, the government and the public. (Hoffman ran into trouble last year after reports showed that he had funded a disinformation campaign on Facebook during the 2017 Alabama special Senate election. He said he did not know his money was used in that way.)

While universities in recent years have drawn criticism for raising large amounts of money — Stanford is among the biggest fundraisers of all — the cash is particularly necessary if universities are to remain competitive in the field of AI, said James Manyika, an advisory council member and director of the McKinsey Global Institute. The money will be used not only to retain talent but also to fund the costly data-processing machines that can run artificial intelligence applications at scale.

“The goal is to have resources that will enable Stanford to be competitive,” Manyika said. “If you gave researchers at Stanford access to compute, that will slow down the brain drain quite a bit toward these corporate labs.”


Li is an AI pioneer and academic who also worked at Google. (Peter DaSilva for The Washington Post)

Schmidt said he had observed a “tipping point” in the past year or so: computer science programs across the country are adding courses in AI ethics, and big companies such as Google are announcing AI principles and creating internal programs that attempt to take the bias out of the software they are building. Schmidt said Stanford’s program would elevate and centralize these ad hoc efforts while also contributing to the development of the field overall.

One of the bigger questions HAI has yet to answer is the extent to which it will take policy positions on some of the toughest current issues, several of which have directly entangled Li and others affiliated with the institute. Last year, when Li was running artificial intelligence for Google Cloud, Google became embroiled in controversy over a Pentagon contract to improve artificial intelligence that can scan video footage coming in from drones. Many Google employees protested the contract, and some even quit.

Li cautioned her colleagues against using the term AI when discussing the contract because of the sensitivity of the topic, according to a New York Times report that Li confirmed. Etchemendy said HAI would not take sides in such disputes or dictate decisions to other organizations.

Etchemendy said that 200 faculty members, from departments including law and anthropology, have already applied for funding from the think tank. Fifty-five have received seed grants to research AI’s implications for topics including medical decision-making, gender bias and refugee resettlement. One of the institute’s biggest strengths, he said, would be its commitment to diversity within the profession and its recruitment of experts from fields not traditionally associated with AI.