Chess, go… and now, a computer has beaten humans at POKER.

Scientific American prepares us for our cybernetic overlords with a machine that knows when to hold ’em and when to fold ’em… how to call a bluff and how to read a strong hand:

DeepStack, developed by researchers at the University of Alberta, relies on the use of artificial neural networks that researchers trained ahead of time to develop poker intuition. During play, DeepStack uses its poker smarts to break down a complicated game into smaller, more manageable pieces that it can then work through on the fly. Using this strategy allowed it to defeat its human opponents.

Twenty years ago game-playing AI had a breakthrough when IBM’s chess-playing supercomputer Deep Blue defeated World Chess Champion Garry Kasparov. Last year Google DeepMind’s AlphaGo program shocked the world when it beat top human pros in the game of go. Yet there is a fundamental difference between games such as chess and go and those like poker in the amount of information available to players. “Games of chess and go are ‘perfect information’ games, [where] you get to see everything you need right in front of you to make your decision,” says Murray Campbell, a computer scientist at IBM who was on the Deep Blue team but not involved in the new study. “In poker and other imperfect-information games, there’s hidden information—private information that only one player knows, and that makes the games much, much harder.”

Heads-up, no-limit Texas hold’em presents a particularly daunting AI challenge: As with all imperfect-information games, it requires a system to make decisions without having key information. Yet it is also a two-person version of poker with no limit on bet size, resulting in a massive number of possible game scenarios (roughly 10¹⁶⁰, on par with the 10¹⁷⁰ possible moves in go). Until now poker-playing AIs have attempted to compute how to play in every possible situation before the game begins.

Before ever playing a real game DeepStack went through an intensive training period involving deep learning (a type of machine learning that uses algorithms to model higher-level concepts) in which it played millions of randomly generated poker scenarios against itself and calculated how beneficial each situation was. The answers allowed DeepStack’s neural networks (complex networks of computations that can “learn” over time) to develop general poker intuition that it could apply even in situations it had never encountered before. Then DeepStack, which runs on a gaming laptop, played actual online poker games against 11 human players.
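That training loop can be sketched in greatly simplified form: generate random situations, compute a target value for each, and fit a small neural network so it can estimate values for situations it has never seen. Everything here is a hypothetical stand-in (the three features, the toy target, the tiny network); DeepStack's real inputs and architecture are far richer:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_situation():
    # Invented placeholder features: pot size plus hand-strength
    # estimates for each player, all scaled to [0, 1].
    return rng.uniform(0, 1, size=3)

def target_value(x):
    # Toy stand-in for "how beneficial is this situation": the
    # difference in hand strength, weighted by the pot size.
    pot, ours, theirs = x
    return pot * (ours - theirs)

# Tiny one-hidden-layer network trained by gradient descent.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def predict(X):
    h = np.maximum(X @ W1 + b1, 0.0)   # ReLU hidden layer
    return (h @ W2 + b2).ravel()

lr = 0.05
for step in range(3000):
    X = np.array([random_situation() for _ in range(64)])
    y = np.array([target_value(x) for x in X])
    h = np.maximum(X @ W1 + b1, 0.0)
    err = (h @ W2 + b2).ravel() - y            # squared-error gradient
    gW2 = h.T @ err[:, None] / 64
    dh = err[:, None] @ W2.T * (h > 0)
    W2 -= lr * gW2; b2 -= lr * err.mean(keepdims=True)
    W1 -= lr * (X.T @ dh / 64); b1 -= lr * dh.mean(axis=0)

# The point of the exercise: the net generalizes to situations it
# never trained on, beating a predict-zero baseline on held-out data.
X_test = np.array([random_situation() for _ in range(256)])
y_test = np.array([target_value(x) for x in X_test])
mae = np.mean(np.abs(predict(X_test) - y_test))
baseline = np.mean(np.abs(y_test))
print(mae < baseline)  # → True
```

The essential idea carried over from the article is that no situation is memorized: the network is only ever asked to produce a value estimate, and random self-generated scenarios are enough to train it.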

DeepStack is not the only AI system that has enjoyed recent poker success. In January a system called Libratus, developed by a team at Carnegie Mellon University, beat four professional poker players (the results have not been published in a scientific journal).