OpenAI made headlines last year when it proved a bot could beat a professional gamer head to head at one of the world’s most complex video games. But it had one more gaming goal to conquer — to beat a professional team of five.

Now, after proving the bot can beat teams that rank in the top 1 percent of amateur players at its game of choice, OpenAI will get its chance to shine at the International, one of the world’s most established video game tournaments. There, researchers hope to show how far the Elon Musk-backed lab has come: whether its bot can control a five-character team as well as any team of five humans can.

While machines have beaten humans at games — from IBM computer Deep Blue’s chess victory in 1997 to a Google bot’s win over Go champion Lee Sedol in 2016 — each game has offered a new challenge for artificial intelligence to solve. In Dota 2, the game OpenAI will play at this year’s tournament, two teams of five players battle for control of a map. The problems are complex: the bot can’t see the whole map at once, and it must form a strategy on the fly as it encounters enemies. In other words, the bot has to show great intuition. Researchers hope that its ability to solve these problems will prove its aptitude for taking on more difficult, nongaming challenges.

“This technology can apply to a huge range of problems,” OpenAI chief technology officer Greg Brockman said. “My hope is, this is the last game milestone.”

The intuition OpenAI’s bot has developed is what’s exciting, Brockman said. On Tuesday, he told members of the House Science Committee that developing that kind of instinct could help AI excel at starting companies, making business deals or writing books.

The hope is that the game bot will draw more attention to the real-world applications of what OpenAI’s work can accomplish, such as managing resources in health care or keeping an eye on problems that crop up in a city’s transportation system.

OpenAI’s bot was trained through rapid simulations that racked up the equivalent of 180 years of human playing time every day, mostly by pitting it against copies of itself, plus the occasional human. Work to scale up from the solo-playing program started in August, and at first OpenAI’s human team was generally kicking the bot’s butt within 10 or 15 minutes.
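To make that setup concrete, here is a deliberately tiny, made-up sketch of the self-play idea: a bot plays a frozen copy of itself in many fast simulated games and gets slightly stronger with each win. Every name in it (Bot, play_match, train_one_day) is hypothetical, and the numbers are invented; this illustrates the concept, not OpenAI's actual system.

```python
import random

class Bot:
    """Stand-in for the learning bot: a single 'skill' number rises as it trains."""
    def __init__(self, skill=0.0):
        self.skill = skill

def play_match(challenger, opponent):
    """Simulate one fast game; the stronger side wins a bit more often."""
    edge = challenger.skill - opponent.skill
    win_prob = min(max(0.5 + 0.1 * edge, 0.05), 0.95)
    return random.random() < win_prob

def train_one_day(bot, simulated_games=10_000):
    """Play many accelerated games against a frozen copy of the current bot.

    In OpenAI's description, the real simulations added up to roughly 180 years
    of human play per day; here we just count toy games and nudge a number.
    """
    opponent = Bot(skill=bot.skill)          # yesterday's self
    for _ in range(simulated_games):
        if play_match(bot, opponent):
            bot.skill += 0.0001              # crude stand-in for a learning update
    return bot

if __name__ == "__main__":
    bot = Bot()
    for day in range(1, 4):
        bot = train_one_day(bot)
        print(f"day {day}: skill {bot.skill:.3f}")
```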

Then the matches started taking 20 minutes. Then 45 minutes. Within a few weeks, OpenAI’s team had to call for reinforcements because the bot had outstripped the playing talent of the people on the team.

“At some point, we couldn’t tell if it was making good decisions,” said Brooke Chan, a software engineer on the project. The difficulty of opponents ramped up quickly from there: the bot played a professional game caster, then amateur and semipro teams, holding its own for longer and longer before eventually beating the humans.

There are some limits to the bot. As with the one-on-one play, programmers have taken out some of the more complicated aspects of the game. The bot uses just five of the 115 available characters, and the human players have to use the same characters.

Still, it quickly became a sophisticated player, with very little strategic direction from its programmers. Developers told the bot only when it had done something good. They didn’t make gameplay suggestions or set priorities.
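That feedback style is the core of reinforcement learning: the bot takes actions, and the only signal it ever receives is whether the outcome was good. The sketch below is a small, invented example of that loop, in which an agent learns which of four made-up actions wins most often purely from win/loss feedback, with no instructions about strategy. None of the action names or numbers come from OpenAI's system.

```python
import random

ACTIONS = ["defend base", "farm gold", "push lane", "team fight"]
# Hidden win probabilities the learner never sees directly; it only sees win or loss.
TRUE_WIN_RATE = {"defend base": 0.40, "farm gold": 0.55, "push lane": 0.60, "team fight": 0.45}

def play(action):
    """The environment reports only 'good' (1) or 'not good' (0); it never suggests a strategy."""
    return 1 if random.random() < TRUE_WIN_RATE[action] else 0

def train(rounds=20_000, exploration=0.1):
    """Epsilon-greedy learning: mostly pick the best-looking action, sometimes explore."""
    wins = {a: 0 for a in ACTIONS}
    tries = {a: 0 for a in ACTIONS}
    for _ in range(rounds):
        if random.random() < exploration:
            action = random.choice(ACTIONS)       # occasionally try something at random
        else:
            action = max(ACTIONS, key=lambda a: wins[a] / tries[a] if tries[a] else 0.0)
        reward = play(action)                     # the only feedback: did it work?
        tries[action] += 1
        wins[action] += reward
    return {a: wins[a] / tries[a] for a in ACTIONS if tries[a]}

if __name__ == "__main__":
    estimates = train()
    for action, rate in sorted(estimates.items(), key=lambda kv: -kv[1]):
        print(f"{action}: learned win rate {rate:.2f}")
```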

Yet as the bot learned, it started using techniques that professional players had adopted. That included some — such as making big sacrifices at the start of the game — that seemed counterintuitive. Chan said the team at OpenAI hadn’t heard of some of these strategies; the bot, it turned out, had independently hit on a tactic that was growing in popularity on the professional circuit.

Professional gamers have also learned from the bot. After it beat a solo player last year, Brockman said, some players adopted its more aggressive style. The notion that the bot can change the way people look at problems encourages OpenAI when it thinks about how humans and these bots interact.