Ariel Procaccia is Gordon McKay professor of computer science at Harvard University.

One of the most triumphant events in the history of artificial intelligence is also remembered as one of the most cringe-worthy. In 1997, the IBM supercomputer Deep Blue defeated Garry Kasparov, the reigning chess world champion, in an iconic six-game match. Immediately following his loss, the distraught Kasparov cried foul. He claimed to have detected intelligence of the non-artificial variety in Deep Blue’s moves, which could be attributed only to a chess grandmaster covertly pulling the computer’s strings.

By contrast, Kasparov’s recent commentary is sympathetic to AI — a change of heart that’s only natural in light of the field’s progress and, in particular, its game-playing prowess. In the past few years, computers have wiped the floor with top human players in board and card games such as go, poker and Hanabi, as well as in challenging video games such as Dota 2, StarCraft II and Quake III Arena. At times it seems like bingo may well be humanity’s last stand.

Any new reverence for an unbeatable AI, however, should be tempered by the fact that recreational games are very unusual. They take place in closed systems, where outcomes depend only on clearly defined actions and in some cases chance; they have obvious objectives; and they allow AI programs to learn from experience by running millions of matches.

Instead of the current focus on recreational games, the question more researchers, companies and government agencies should be asking is whether AI can help us win the games we play in real life. There are endless examples, ranging from voting in elections to negotiating with our kids (I’m convinced the latter game can’t be won by adults), but high-stakes strategic interactions are top of mind: business and trade deals, political campaigning, peace treaties and, inevitably, armed conflicts.

It’s instructive to think of the escalation of hostilities between the United States and Iran earlier this year as a game of poker. Starting with the U.S. withdrawal from the Iran nuclear deal in 2018, both sides had been raising the stakes for years. When the conflict came to a head with the death of an American contractor in Iraq and the storming of the U.S. Embassy there, Washington went all in by killing Qasem Soleimani. Tehran’s short-term response amounted to folding, but it could still have a card up its sleeve. Now imagine what it would mean to have an AI program that can play this type of game optimally; the first country to develop such a program would certainly hold all the aces.

The challenge is that participating in the U.S.-Iran poker game is like preparing for a Texas hold ’em match and then finding yourself playing High Chicago — while balancing on a tightrope without a safety net. Real-world strategic interactions are so unstructured and unpredictable that current AI systems have no hope of even figuring out how to play the game, not to mention how to win it. To bridge the gap between games and reality, we need additional scientific machinery; fortunately, the field of game theory provides some of the tools for building it.

Game theory revolves around objects, literally called “games,” which distill the key aspects of any strategic interaction between two or more “players”: what actions are available to them, what they know and don’t know, and what value they place on different outcomes. Games are expressed in the language of mathematics, which is, conveniently, the mother tongue of all computers. That’s why we can easily tell an AI program everything we know about a specific game, which doesn’t have to be much — just enough to help it understand the game’s structure. To capture reality in all of its gory detail, this game skeleton must be fleshed out using lots of data.
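To make the formalism concrete, here is a minimal sketch of a two-player game in Python: each player’s available actions and the value each places on every joint outcome, plus a brute-force check for equilibria — joint actions from which neither player gains by deviating. The payoff numbers are invented for illustration, loosely modeling an escalate-or-hold-back standoff; they are assumptions, not data about any real interaction.

```python
from itertools import product

# Each player's actions, and the value each places on every joint outcome.
# Payoffs are made-up: (row action, column action) -> (row payoff, column payoff).
actions = ["hold back", "escalate"]
payoffs = {
    ("hold back", "hold back"): (3, 3),
    ("hold back", "escalate"):  (0, 4),
    ("escalate", "hold back"):  (4, 0),
    ("escalate", "escalate"):   (1, 1),
}

def pure_nash_equilibria(actions, payoffs):
    """Return joint actions from which neither player gains by deviating."""
    equilibria = []
    for a, b in product(actions, repeat=2):
        row_best = all(payoffs[(a, b)][0] >= payoffs[(a2, b)][0] for a2 in actions)
        col_best = all(payoffs[(a, b)][1] >= payoffs[(a, b2)][1] for b2 in actions)
        if row_best and col_best:
            equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria(actions, payoffs))
```

With these particular numbers the game is a prisoner’s dilemma: mutual escalation is the only equilibrium, even though both players would prefer mutual restraint — exactly the kind of structure that makes real standoffs so hard to defuse.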

This approach is demonstrated in recent work on wildlife conservation by my Harvard colleague Milind Tambe and his research group. They abstract the contest between rangers and wildlife poachers as a game with a predetermined structure, configuring its features automatically using fine-grained information about human and animal activity. Now that there’s a concrete game, an AI algorithm can be unleashed: Its strategy tells the rangers which patrol routes are most likely to stop poachers in the real world.
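A toy sketch conveys the flavor of this kind of security game: a single ranger commits to a patrol strategy first, and a poacher, having observed it, attacks whichever site looks best. The ranger’s optimal strategy hedges across sites in proportion to their value. The site names and numbers below are made-up assumptions for illustration, not values or methods from the actual research.

```python
# Hypothetical values of two sites to the poacher (illustrative only).
SITES = {"river": 10, "grassland": 6}

def poacher_payoff(site, coverage):
    # The poacher gets a site's value only if it is left unpatrolled.
    return SITES[site] * (1 - coverage[site])

def best_patrol(steps=1000):
    """Sweep patrol splits; the ranger commits first, the poacher best-responds."""
    best = None
    for i in range(steps + 1):
        coverage = {"river": i / steps, "grassland": 1 - i / steps}
        # The poacher observes the patrol strategy and picks the best site.
        attacked = max(SITES, key=lambda s: poacher_payoff(s, coverage))
        loss = poacher_payoff(attacked, coverage)
        if best is None or loss < best[1]:
            best = (coverage, loss)
    return best

coverage, loss = best_patrol()
```

Running the sweep, the ranger patrols the more valuable river site 62.5 percent of the time — just enough that the poacher is indifferent between the two sites, which is what caps the damage the poacher can do.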

Admittedly, catching poachers is to managing international conflicts as tic-tac-toe is to chess. But this innovation is an important step in the right direction. With an appropriate investment of attention and resources, we’re likely to see computers play a bigger and bigger role in strategic interactions. When it comes to how we compete and cooperate, AI will be the name of the game.
