The Washington Post
Democracy Dies in Darkness

This AI model tries to re-create the mind of Ruth Bader Ginsburg

Its creators say the AI app helps ordinary people understand how artificial intelligence is progressing. Critics contend there’s much work to be done.

Supreme Court Justice Ruth Bader Ginsburg in 2013. (Nikki Kahn/The Washington Post)

When reports surfaced in May that the Supreme Court was poised to overturn abortion rights, many wondered how the late Justice Ruth Bader Ginsburg might have responded. Now, they don’t have to wait.

“I think they’re wrong on the law, but on the facts, no,” said a simulation of Ginsburg, who died in 2020, when asked about the Supreme Court’s upcoming decision on Roe v. Wade.

The answer came not from Ginsburg’s numerous court opinions, but an artificial intelligence model of the late justice released Tuesday. “Whether it’s good or bad, it’s settled, and, therefore, it’s not my business to think about it,” the RBG bot concluded.

The model, called Ask Ruth Bader Ginsburg, is based on 27 years of Ginsburg’s legal writings on the Supreme Court, along with a host of news interviews and public speeches. A team from AI21 Labs, an Israeli artificial intelligence company, fed this record into a complex language processing program, giving the AI an ability, they say, to predict how Ginsburg would respond to questions.

“We wanted to pay homage to a great thinker and leader with a fun digital experience,” the company says on the AI app’s website. “It is important to remember that AI in general, and language models specifically, still have limitations.”

The tool arrives amid fierce debate over the ethics of creating technology that replicates human life, particularly when the humans involved aren’t around to offer input. But its creators argue their invention is a useful and easy-to-use tool to help ordinary people, who might not know much about technology, understand how the field of artificial intelligence is progressing.


“There are not many places where the general public can go and play with real AI,” said Yoav Shoham, co-founder of AI21 Labs. “But now you can.”

In recent years, research labs and companies across the world have been racing to build technology that replicates or surpasses human intelligence, offering ways for people to examine and interact with their work along the way.

OpenAI, an Elon Musk-backed artificial intelligence company, unveiled a text generator called GPT-3, which can write movie scripts and undergirds an image generator, DALL-E 2, which translates text commands into inventive, sometimes psychedelic visuals. In 2020, Shoham’s company created Wordtune, a tool that suggests different ways to write sentences. It followed the release a year later with Wordtune Read, which summarizes the main points of long, dense passages.

But as AI technology has gotten better, Shoham said, opinions surrounding the field have grown divided. “People project all kinds of [thoughts] on … automation that has nothing to do with reality,” he said. “I don’t want people to be disappointed by the underperformance of current AI and I don’t want them to monger fear.”

Members of the general public, he said, need to make up their own minds, and his team’s RBG model is an accessible, hands-on way of engaging with the technology.

To build it, the researchers used Jurassic-1, a neural network they created that analyzes large troves of text and generates responses to questions or prompts. Neural networks are computer architectures that loosely mimic the way the human brain processes information.

They fed the model roughly 600,000 of Ginsburg’s words and created a tool that lets anyone ask it questions, to which it gives answers based on the massive trove of writing. “It gives you access to the kind of wisdom possessed by a person we hold in high regard,” Shoham said.
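The article does not describe AI21’s internal design, but tools like this typically wrap a user’s question in a persona-conditioning prompt before sending it to the underlying language model. The sketch below illustrates that general pattern only; the function name and prompt wording are hypothetical, not AI21’s actual code.

```python
def build_persona_prompt(persona: str, question: str) -> str:
    """Wrap a user question in a prompt that asks a language model to
    answer in the voice of a specific persona. Illustrative only: real
    systems also condition the model on the persona's writings, e.g. by
    fine-tuning, as AI21 reportedly did with ~600,000 of Ginsburg's words.
    """
    return (
        f"The following is an interview with {persona}.\n"
        f"Interviewer: {question}\n"
        f"{persona}:"
    )

prompt = build_persona_prompt(
    "Ruth Bader Ginsburg",
    "Should cameras be allowed in the courtroom?",
)
```

The model then continues the text after the final colon, producing an answer “in character.” Nothing in this framing requires the model to reason the way the persona did, which is the limitation the critics quoted below emphasize.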


Paul Schiff Berman, a law professor at George Washington University who clerked for Ginsburg from 1997 to 1998, said that when he saw the bot, he was amused.

Right away, he tried asking it a question he would have been interested in getting Ginsburg’s opinion on: “Should federal courts defer to the factual findings of state courts?”

The response left a lot to be desired, according to Berman. The model didn’t directly answer the question and its reply implied Ginsburg didn’t believe in the judicial concept of deference, which is not true, he said. Berman also noted that the model did a poor job in replicating her unique speaking and writing style.

“I would have thought that’s something the AI could have imitated better,” he said. “If this is the best the [technology] can do, we’ve still got a ways to go.”

Meanwhile, several AI technology experts raised concerns with the experiment.

Emily Bender, a linguistics professor at the University of Washington, said she recognizes the experiment’s creators come from a place of respect for Ginsburg, but insinuating the technology can think and reason like the late justice is not accurate. “It can spit out words and the style of those words are going to be informed by the style of text they fed into it, but it’s not doing any reasoning,” she said.

Bender added that linguistics research shows that when people encounter “coherent-seeming texts” on a topic they care about, there’s a risk they will take it seriously when they shouldn’t. “People might use this to make arguments out in the world and say, ‘Well, RBG would have said,’ this AI [model] told me so.”


Meredith Broussard, an associate professor and artificial intelligence researcher at New York University, said the bot is engaging but should not be confused with actual legal advice. “It’s really fun to play with, but we should not take it seriously as well as we shouldn’t pretend that that’s a lawyer,” she said. (AI21 states that the model is “just an experiment” and that it can give inaccurate responses that should be taken “with a grain of salt.”)

Broussard added that the technology does not seem to be much more advanced than ELIZA, a chatbot created by MIT researchers in the 1960s, in which a computer program mimicked a therapist well enough to make some people think it was human. She added there could be a limit to how advanced this type of artificial intelligence technology can ever get.

“There is a ceiling on the technology because it’s not a brain, it’s a machine,” she said. “And it’s just doing math.”
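To see how little machinery Broussard’s ELIZA comparison refers to, here is a minimal ELIZA-style exchange: canned response templates matched by pattern, with first-person words flipped to second person. The rules below are illustrative, not Joseph Weizenbaum’s originals.

```python
import re

# Flip first-person words to second person when echoing the user back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Ordered (pattern, response template) rules; the catch-all comes last.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Please tell me more."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the captured text reads back naturally."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    """Return the first matching canned response, pronouns reflected."""
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please tell me more."
```

For example, `respond("I feel sad about my job")` yields `"Why do you feel sad about your job?"`. No understanding is involved, just string matching, which is the substance of the “it’s just doing math” critique, even if modern language models do that math at vastly larger scale.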
