The Washington Post

Google fired its star AI researcher one year ago. Now she’s launching her own institute

Timnit Gebru is launching the Distributed Artificial Intelligence Research Institute (DAIR) to document AI’s harms to marginalized groups

Timnit Gebru, who was fired from Google a year ago, is launching a new artificial intelligence institute (Kimberly White/Getty Images for TechCrunch)

Timnit Gebru, a prominent artificial intelligence computer scientist, is launching an independent research institute focused on the technology’s harms to marginalized groups, who often face disproportionate consequences from AI systems but have little influence over their development.

Her new organization, the Distributed Artificial Intelligence Research Institute (DAIR), aims both to document harms and to develop a vision for AI applications that can have a positive impact on the same groups. Gebru helped pioneer research into facial recognition software’s bias against people of color, which prompted companies like Amazon to change their practices. A year ago, she was fired from Google for a research paper critiquing the company’s lucrative AI work on large language models, which can help answer conversational search queries.

DAIR received $3.7 million in funding from the MacArthur Foundation, the Ford Foundation, the Kapor Center, the Open Society Foundations and the Rockefeller Foundation.

“I’ve been frustrated for a long time about the incentive structures that we have in place and how none of them seem to be appropriate for the kind of work I want to do,” Gebru said.


Gebru said DAIR will join an existing ecosystem of smaller, independent institutes, such as Data & Society, Algorithmic Justice League, and Data for Black Lives. She hopes DAIR will be able to influence AI policies and practices inside Big Tech companies like Google from the outside — a tactic Gebru said she employed during her time at Google.

Even as the high-profile co-lead of Google’s Ethical AI group, Gebru said she was more successful at changing Google’s policies by publishing papers that were embraced externally by academics, regulators and journalists, rather than raising her concerns internally about bias, fairness and responsibility.

Gebru said she hopes to use the funding to break free of the broken incentives of Big Tech, where she said outspoken researchers can be sidelined, potential harms are evaluated only after an AI system is in use, and profitable AI projects — such as large language models, the subject of Gebru’s contested paper at Google — are treated as inevitable once they have been deployed in the real world. There was little room for AI applications that did not rely on big data sets, or that pursued less profit-oriented aims, such as language revitalization, she said.

Gebru also hopes the funding will insulate her staff from the perils of academia, where researchers have to publish on a grueling schedule, and where the victims of unethical AI are not rewarded for drawing attention to its harms.

To better assist communities harmed by irresponsible AI, DAIR plans to experiment with a way to reward, pay or acknowledge research subjects.

Nonetheless, Gebru is mindful that even independent institutes are beholden to their funders, and she hopes to find ways for DAIR to sustain itself by consulting with organizations that need help on ethical AI. “Let’s say I antagonize a funder — not these, but others,” she said. “There’s all of these Big Tech billionaires who also are in big philanthropy now.”

Gebru also wants to make sure research is shared with and understood by affected communities, rather than simply moving on to the next research topic. For example, DAIR’s first research fellow, Raesetja Sefala, has been using satellite imagery to study South Africa’s history of segregating marginalized groups to remote areas. She and Gebru are working on ways to let people contribute to, or correct, the data set, as well as on visualizations that can benefit impacted groups.

Safiya Noble, a recent recipient of the MacArthur Genius grant and author of Algorithms of Oppression, is a member of DAIR’s advisory committee, along with Ciira wa Maina, co-founder of Data Science Africa, who has researched food security, climate change and conservation.


Gebru said she has to continue focusing on Big Tech because the harms can be so severe.

“We’re always putting out fires. Before it was large language models, and now I’m looking at social media and what’s happening where I grew up,” she added, in reference to reports about Facebook’s role in stoking ethnic violence in Ethiopia. (Gebru, whose family’s ethnic origins are in Eritrea, was born and raised in Ethiopia and came to the United States at 16 as a refugee who received political asylum.)

Gebru recalled how in 2019 she tried to draw Google’s attention to the need for non-English expertise in technologies like toxicity detection in online comments. A company vice president responded to her comment in an internal document, saying there was already sufficient expertise on international audiences.

“I said, ‘What do you mean “international”? I’m from the African continent and my mother tongue is not even on Google Translate,’” she said.

Google declined the Washington Post’s request for comment.