The Washington Post
Democracy Dies in Darkness

Maybe 2022 should be the year we turn over decision-making to the AI

An increasingly popular idea is to outsource choices to algorithms — even choices like New Year’s resolutions. It doesn’t sound half bad.

AI increasingly could be used to help make professional and personal decisions. It can't do worse than humans, can it? (iStock)

This time of year always brings thoughts of how badly we messed up the past 12 months and how much better we’ll definitely make the next dozen.

Those of us who have not spent 2021 at the gym, calling our mothers while planning our weekly soup-kitchen volunteer schedule, know the insectoid life span such New Year’s resolutions can have.

So the Smithsonian has another idea for 2022: What if, instead of relying on our own resolutions, we asked an AI what it thinks we should do? Starting this weekend, the “Futures” exhibit, both online and at its Arts and Industries Building, offers a “Resolutions Generator,” an AI that makes suggestions on what commitments we should undertake for 2022. (Enforcement is … loose.)

It sounds like a slightly weird idea, and I’d be lying if I said it didn’t turn up some weird results.

“Change my name to one of my favorite shapes,” it suggests, or “Every Friday for a year I will wear a different hat.” And, “Every time I hear bells for a month, I will paint a potato.”

Designed by AI researcher-writer Janelle Shane, the generator’s odd results are deliberate; she purposely trained the AI (the powerful GPT-3) with some of the wackier resolutions humans have put online, then set its parameters wide.
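Shane hasn’t published her exact settings, but in text generation “setting the parameters wide” usually means raising the sampling temperature, which flattens the model’s probabilities so unlikely (read: whimsical) words get a real chance of being picked. A minimal sketch of the idea, using a made-up three-word vocabulary and invented scores rather than anything from GPT-3 itself:

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw model scores into probabilities.
    Higher temperature flattens the distribution toward uniform."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate next words
logits = [2.0, 1.0, 0.1]

conservative = softmax(logits, temperature=0.7)  # sharp: the top word dominates
adventurous = softmax(logits, temperature=2.0)   # flat: odd words become plausible

print(conservative)
print(adventurous)
```

At low temperature the model almost always picks its single most likely word; at high temperature the long tail of strange options — the potato-painting end of the vocabulary — starts showing up in the output.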

“We wanted the AI to come up with the kind of interesting resolutions we’re not thinking of,” Shane said. “We wanted whimsy,” added Rachel Goslins, the director of the Arts and Industries Building, “with a little bit of real.”

Okay, so probably not many people will really “Go into a library, climb up onto a shelf, yell down ‘I am a giant giraffe!’” But it’s a lot easier than trying to lose those 15 pounds. And this way you end up in a library.

Plus they have a point. The truth is that by drawing on the collective corpus of human resolutions, AI might conceive of ideas that our pale human pea brains cannot.

Anyway, it’s not like we’ve been doing such a good job handling the world’s problems as it is. Climate change. Social division. Inflation. Omicron. The continued dominance of Tom Brady. So we won’t decide to marry someone because an AI recommends it. But maybe we’ll let it choose our next trip? Thanks to a host of AI-driven apps, AI has probably already influenced what car we bought for that trip and the route we’d take to get there.

And there are growing piles of evidence that deploying AI that can think faster and even differently will pay dividends in the real world. A Stanford study last month concluded that AI sped up discoveries on coronavirus antiviral drugs by as much as a month, potentially saving lives. Canadian researchers in September found that AI made consistently better choices than doctors in treating behavioral problems. Even a button-down institution like Deloitte has a staffer who has persuasively argued that we should use AI, not humans, to update government regulations.

It makes sense why so many of us feel uncomfortable, though. There’s a difference between a tool and a goal. Deciding to visit Grandma in Milwaukee is a substantive choice. Getting there is just a utilitarian need. AI is okay for the how, not so much for the what.

But from an algorithmic standpoint there may be … not such a huge difference? Better results, after all, are better results. And with venues like online dating apps and their algorithms already deciding who pops up in them — and thus fueling our marriage choices — substantive life decisions are kind of already shaped by AI.

Plus there’s the psychological benefit. Imagine the fear of making the wrong decision simply disappearing. Our blood pressure would plummet. Then we wouldn’t need an AI to tell us what blood-pressure medication to take.

(Sure, there’s reason to worry that AI would unknowingly replicate our deep-seated biases. Not to mention all kinds of potential for global catastrophe as humans give up important areas of control to a black box. But, you know, trade-offs.)

Asked how she would feel about having an AI choose something important like a heart surgeon, Goslins said: “I’d be okay if it narrowed it down to five doctors. But then I’d want to call friends and consult them for the final decision.” That hybrid approach may be where many of us land: use an AI for the big and difficult sorting, then our instinct can take us that last tricky mile.

As she devised the resolutions generator, Shane said she had to factor in herself in an interesting way too. “If I didn’t do a lot of pruning of the list, I’d be constantly fighting the AI,” she said. “It would go really obvious or really irrelevant.”

The reality is that while AI can help us make decisions, we still get to choose what it bases that decision on. The future of algorithmic decision-making might not so much be that a computer tells us what to do as we find a way to tell it what we want it to tell us to do. Think of it as Alfred from Batman. Yes, it can know us better than we know ourselves. But only from quietly hanging around us for so long.

In talking to Shane, I asked if she could ask GPT-3 to customize some resolutions for me. I shared some personal tidbits (loves hockey, has dogs, film and tech aficionado). It returned:

♦ “Treat every dog I meet like a celebrity.” (Can do.)

♦ “Write a film script based on the rise and fall of the Whac-A-Mole empire.” (Oh that’s actually a good idea.)

♦ “Whenever a goalie lets in a goal I will shout ‘Is that your final answer?’ ” (I already do that one.)

And finally: “Every time I see a mirror I will remember that it is the gateway to another dimension.”

Done. Just as long as it doesn’t lead to the gym.