Earlier this year, an artificial intelligence built by researchers beat a human champion at the dazzlingly complex board game Go. Not just once, but four times. It was a milestone in machine learning.
Now, the same Google-backed researchers who designed AlphaGo have their sights set on dominating a new game: StarCraft, the classic computer strategy game that has attracted millions of fans, some of whom duel online in professional tournaments hosted by real-life sports leagues.
Researchers from U.K.-based DeepMind want to train a bot that can play StarCraft II in real time — deciding which military units to send on scouting missions, how to allocate resources and, ultimately, how to conquer other players.
Beginning next year, the game will serve as a research platform for any AI researcher who wants to use it, potentially allowing myriad player-algorithms to train off of the same game. And joining the effort is the game's publisher, Blizzard, which is working with DeepMind to set up the platform.
StarCraft presents an entirely different challenge from Go. Whereas players of the ancient board game take turns putting down stones to control physical territory, StarCraft players have to manage a constantly shifting digital economy to achieve victory. They have to mine minerals and gas, build defensive structures and offensive troops, survey the terrain and, finally, close with and engage the enemy.
The best players have to know not only what's going on at their home base but also what may be happening in distant corners of the battlefield. Efficiency of motion is key; commentators talk of “actions per minute” as a way of measuring a human player's productive capacity.
“StarCraft is an interesting testing environment for current AI research because it provides a useful bridge to the messiness of the real-world,” DeepMind wrote in a blog post Friday. “The skills required for an agent to progress through the environment and play StarCraft well could ultimately transfer to real-world tasks.”
At this point, you may be wondering what kind of “real-world tasks” a computerized military genius might put its mind to — hopefully, that doesn't include sending siege tanks or space marines after us.
The reality is that we're nowhere near building the kind of “general” artificial intelligence that science fiction has trained us to fear. Our most sophisticated machines tend to be strong at pattern recognition but relatively weak at logic and deductive reasoning.
DeepMind is not the first to think of using StarCraft as a training tool. In fact, AI researchers have spent years thinking about StarCraft precisely because of the unanswered problems for AI created by the game's open-ended style of play. And all joking aside, the implications are enormous.
“Optimizing assembly line operations in factories is akin to performing build-order optimizations” in strategy games, according to one paper by an international group of researchers in 2013. “Troop positioning in military conflicts involves the same spatial and tactical reasoning used in [real-time strategy] games. Robot navigation in unknown environments requires real-time path-finding and decision making to avoid hitting obstacles.”
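To make the path-finding analogy concrete, here is a minimal sketch of the kind of routing problem the researchers describe: finding the shortest route across a grid map while steering around obstacles. This is a generic breadth-first-search illustration, not DeepMind's or any StarCraft bot's actual code; real game engines typically use faster heuristic methods such as A*, and the grid and function name here are invented for the example.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a 2D grid: 0 = open cell, 1 = obstacle.
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the route by walking parent links back to start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal cannot be reached

# A unit routes around a wall of obstacles in the middle row:
grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
path = shortest_path(grid, (0, 0), (2, 0))
```

Because breadth-first search explores cells in order of distance, the first route it finds is guaranteed to be among the shortest; the trade-off, and the reason games reach for heuristics, is that it may visit far more of the map than necessary.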
Since at least 2011, an annual competition has pitted dozens of bots against one another in games of StarCraft. And bot-versus-bot play is spreading to other games: Last month, developers of the turn-based strategy game “Civilization VI” unleashed eight computer players on one another to see what would happen.
Not long ago, “computer gaming” meant sitting down in front of a keyboard and mouse yourself. Now, it increasingly means teaching the computer to play, too.