Professional poker player Daniel McAulay, right, prepares to compete with the Libratus computer system as one of its creators, Noam Brown, looks on. (Andrew Rush/Pittsburgh Post-Gazette via Associated Press)

Twelve days into the strangest poker tournament of their lives, Jason Les and his companions returned to their hotel, browbeaten and exhausted. Huddled over a pile of tacos, they strategized, as they had done every night. With about 60,000 hands played — and 60,000 to go — they were losing badly to an unusual opponent: a computer program called Libratus, which was up nearly $800,000 in chips.

That wasn’t supposed to happen. In 2015, Les and a crew of poker pros had beaten a similar computer program, winning about $700,000. This time, the pros had initially kept things more or less even by finding flaws in how the computer played; fans following this “Brains Vs. AI” competition at Pittsburgh’s Rivers Casino put the odds of the AI winning at only about 1 in 4.

But by the second week, the flaws had disappeared; the odds of the computer triumphing rose. “On Day 1, it had played well, but it wasn’t impressive,” Les said. “What’s impressive is how this thing has learned and evolved, how much better it has gotten every day.”

Machines have learned a lot about how to play games. Twenty years ago, they toppled the Russian grandmaster of chess, Garry Kasparov, and 10 years ago they solved checkers outright. Even go, the ancient Chinese board game, has fallen to machines. But poker remained firmly in the hands of humans.

That’s because unlike checkers and chess, where all the pieces are visible, poker is a game of limited knowledge and uncertainty, of hidden cards and bluffs. It is perhaps truer to life, which may explain why it has been difficult for silicon chips to grasp.

“AIs have had a lot of trouble with poker,” said Noam Brown, a graduate student at Carnegie Mellon University who developed Libratus with CMU computer scientist Tuomas Sandholm. “It’s the holy grail of imperfect information games.”

A victory for Libratus, Brown said, would not be much of a threat to human poker players. Its brain is a supercomputer that costs millions of dollars per year to run, so using it to play poker would not be a great way to make money. But Libratus could be a step toward helping artificial intelligence deal more broadly with uncertainty.

That’s because poker is not simply a game of chance. Neither does it require being able to read an opponent’s facial expressions, although Hollywood might like us to believe otherwise. What guides Libratus’s decisions is powerful mathematics, math that could be applied to auctions, negotiations, finance, security and other real-world arenas in which information is hidden.

Serious mathematicians have long been fascinated by poker. John von Neumann, a pioneer in game theory, the branch of mathematics that deals with competition, explored the ins and outs of the card game early in the past century. So did John Nash, whose struggle with schizophrenia was depicted in the movie “A Beautiful Mind.” In 1950, Nash published a paper showing that many games, including one-on-one poker, have an optimal strategy that holds up no matter what the opponent does. That strategy, now called a Nash equilibrium, may not win every session, but over the long run no opposing strategy can beat it.

Finding the Nash equilibrium for simple games such as tic-tac-toe or rock, paper, scissors is easy. Finding it for a game as complicated as poker is hard. An artificial intelligence developed at the University of Alberta has been able to master a basic version of poker called heads-up limit Texas Hold ’em, in which two players compete against each other with a restricted ability to bet. But a hand of no-limit heads-up poker, in which the players can wager as much as they want to, involves a huge number of possibilities: 10 to the power of 160, which is a one followed by 160 zeros. That’s more than the estimated number of atoms in the universe. For poker games involving more than two people, the possibilities become seemingly incalculable.
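For a game as small as rock, paper, scissors, the equilibrium can even be computed in a few lines. Below is a minimal sketch (an illustration, not anything used in the competition) of regret matching, a simple self-play procedure from the same family of algorithms game-theory researchers apply to poker: each player shifts probability toward the actions it regrets not having played, and the players' long-run average strategies drift toward the Nash equilibrium, an even one-third mix.

```python
ACTIONS = 3  # rock, paper, scissors
# PAYOFF[a][b] is the payoff to the player choosing a against a player choosing b
PAYOFF = [[0, -1, 1],   # rock loses to paper, beats scissors
          [1, 0, -1],   # paper beats rock, loses to scissors
          [-1, 1, 0]]   # scissors loses to rock, beats paper

def current_strategy(regrets):
    # regret matching: mix actions in proportion to positive accumulated regret
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iterations):
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    regrets[1][0] = 1.0  # nudge one player off uniform so the dynamics start moving
    avg = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        mix = [current_strategy(regrets[0]), current_strategy(regrets[1])]
        for p in range(2):
            opp = mix[1 - p]
            # expected payoff of each pure action against the opponent's current mix
            utils = [sum(PAYOFF[a][b] * opp[b] for b in range(ACTIONS))
                     for a in range(ACTIONS)]
            ev = sum(mix[p][a] * utils[a] for a in range(ACTIONS))
            for a in range(ACTIONS):
                regrets[p][a] += utils[a] - ev  # regret for not having played a
                avg[p][a] += mix[p][a]
    # the time-averaged strategies, which approach the equilibrium
    return [[x / iterations for x in avg[p]] for p in range(2)]

p0_avg, p1_avg = train(100_000)
```

The moment-to-moment strategies can cycle endlessly (rock chases scissors chases paper), but the averages settle near (1/3, 1/3, 1/3); that gap between current play and average play is one reason equilibrium-finding gets so hard as games grow.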

People ranging from university academics to enthusiastic retirees have tried to create artificial intelligences to simplify the problem. Every February, they pit their creations against each other at a machines-only competition. A winner is declared, but a person who simply folded each hand would do better than many of these AIs. “Every year, the computers play billions of hands against each other,” said Jonathan Schaeffer, a University of Alberta computer scientist who helped to start the contest. “Every year, we see incremental improvement.”

Aside from the team behind Carnegie Mellon’s Libratus, only the Alberta group has claimed to be able to beat humans. The Canadian program, called DeepStack, uses a neural network, a piece of software that works a bit like the human brain, making fast estimates that its creators compare to intuition and reconsidering its options as new cards are laid on the table. A research paper posted on Jan. 10 claims that DeepStack played 40,000 hands against dozens of poker players and won, becoming the “first computer program to beat professional poker players in heads-up no-limit Texas Hold ’em.”

But the poker pros facing off against Libratus brushed off that victory, pointing out that the people recruited for that study were not specialists in one-on-one, heads-up poker. “Those guys don’t play our game type,” said Dong Kim, one of the high-stakes poker players in the tournament. “They might play other kinds of poker, but even small-stakes heads-up players on the Internet would crush them.”

The Alberta researchers declined to comment, pending the acceptance of their paper in a scientific journal.

Libratus prepared for its epic match by first playing trillions of hands against itself to build a database about which choices tend to work better than others. While playing, it pauses once in the middle of each hand to rethink its strategy, assessing not only what moves it can make but also other moves it could have made if the situation were different.
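The article doesn't name the method, but the published research from Sandholm's group builds on counterfactual regret minimization (CFR), a self-play algorithm that does exactly this kind of bookkeeping: at every decision point it records how much better each alternative action would have done, then nudges future play toward the actions it regrets not taking. Here is a standard textbook-style sketch of CFR on Kuhn poker, a solved three-card miniature of the game (an illustration of the technique, not Libratus's actual code):

```python
import random

PASS, BET = 0, 1  # the two actions available at every decision point

class Node:
    """Accumulated regrets and strategy for one decision point (card + betting history)."""
    def __init__(self):
        self.regret_sum = [0.0, 0.0]
        self.strategy_sum = [0.0, 0.0]

    def strategy(self, reach_weight):
        # regret matching: mix actions in proportion to positive accumulated regret
        pos = [max(r, 0.0) for r in self.regret_sum]
        total = sum(pos)
        strat = [p / total for p in pos] if total > 0 else [0.5, 0.5]
        for a in (PASS, BET):
            self.strategy_sum[a] += reach_weight * strat[a]
        return strat

    def average_strategy(self):
        total = sum(self.strategy_sum)
        return [s / total for s in self.strategy_sum] if total > 0 else [0.5, 0.5]

nodes = {}  # one Node per (card, history) decision point

def cfr(cards, history, p0, p1):
    """Expected payoff for the player to act; p0/p1 are each player's reach probabilities."""
    plays = len(history)
    player = plays % 2
    if plays > 1:  # terminal states
        if history[-1] == 'p':
            if history == 'pp':          # both checked: showdown for the antes
                return 1 if cards[player] > cards[1 - player] else -1
            return 1                     # opponent folded to a bet
        if history[-2:] == 'bb':         # bet and call: showdown for a bigger pot
            return 2 if cards[player] > cards[1 - player] else -2
    key = str(cards[player]) + history
    node = nodes.setdefault(key, Node())
    strat = node.strategy(p0 if player == 0 else p1)
    utils, node_util = [0.0, 0.0], 0.0
    for a in (PASS, BET):
        nxt = history + ('p' if a == PASS else 'b')
        if player == 0:
            utils[a] = -cfr(cards, nxt, p0 * strat[a], p1)
        else:
            utils[a] = -cfr(cards, nxt, p0, p1 * strat[a])
        node_util += strat[a] * utils[a]
    opp_reach = p1 if player == 0 else p0
    for a in (PASS, BET):
        # counterfactual regret: how much better action a would have done here
        node.regret_sum[a] += opp_reach * (utils[a] - node_util)
    return node_util

random.seed(0)
deck = [1, 2, 3]
for _ in range(20_000):
    random.shuffle(deck)
    cfr(deck, '', 1.0, 1.0)
```

After a few thousand simulated deals, the averaged strategy matches Kuhn poker's known equilibrium: always call a bet while holding the 3, always fold the 1 to a bet, and mix bets and bluffs elsewhere at the rates the math prescribes. Libratus works over an astronomically larger tree, which is why it needs a supercomputer and the mid-hand re-solving described above.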

This method has led to some seemingly unusual decisions by Libratus that fly in the face of traditional poker wisdom, Sandholm says. When an opponent raises the stakes on the last bet, for instance, the computer may match that raise, even with weak cards that are unlikely to win.

“If my 10-year-old daughter made that move, I would teach her not to,” Sandholm said. “But it turns out that this is actually a good move. It helps to catch bluffs.”

Those strategies paid off. By the end of the 20-day competition, Libratus was declared the winner, up more than $1.7 million in chips. “This is a major milestone for AI,” said Andrew Ng, a computer scientist at Stanford University who followed the tournament.

Les and his companions each walked away with a share of a $200,000 purse (real money, not chips) — and perhaps some lessons in how to play cards.

“We are definitely learning from how this computer thinks,” Les said. “I think I will come out of this a better poker player.”
