How Are Artificial Intelligence and Poker Interlinked?

Libratus, an artificial intelligence developed by Carnegie Mellon University computer science professor Tuomas Sandholm and doctoral student Noam Brown, recently beat some of the world’s best human players at no-limit hold’em, a form of poker in which a player can bet any amount at any time. No machine had ever beaten top humans at such a complex card game before. While AI has beaten the best players at checkers, chess, Othello and Go, no-limit hold’em presents one more hurdle: poker is an “incomplete information” game. Many of the cards are hidden, so winning takes luck as well as skill.

Libratus is not the only AI taking on games of incomplete information, a class of problem with many real-world applications beyond poker. According to a paper published by a team of researchers at the University of Alberta, their AI, DeepStack, has beaten strong human poker players using a significantly different strategy. (As of February 2017, that paper had not been peer-reviewed.)

DeepStack uses deep neural networks to mimic human intuition and is designed along similar lines to Google’s Go AI, AlphaGo. Go is complex, but, like chess, it is a game of complete information.

Texas Hold’em, on the other hand, is a card game of incomplete information. Each player is dealt two “hole” cards that only they can see. Three community cards are then dealt face up on the table, followed by a fourth and a fifth. Players can bet at each stage of the deal, and in No-Limit Texas Hold’em they can bet as much as they want at any stage.
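
To make the structure of the game concrete, here is a minimal Python sketch (my own illustration, not from the original research) of how a hold’em hand is dealt: each player’s two hole cards stay private, while the flop, turn and river are shared. All function and variable names are hypothetical.

```python
import random

RANKS = "23456789TJQKA"
SUITS = "cdhs"

def deal_holdem(num_players=2, seed=None):
    """Deal a toy No-Limit Hold'em hand: private hole cards per player,
    then the shared community cards (flop, turn, river)."""
    rng = random.Random(seed)
    deck = [rank + suit for rank in RANKS for suit in SUITS]
    rng.shuffle(deck)

    # Each player sees only their own two hole cards -- the "incomplete information".
    hole_cards = {f"player_{i}": [deck.pop(), deck.pop()] for i in range(num_players)}

    # Community cards are public: three on the flop, one on the turn, one on the river.
    board = {"flop": [deck.pop() for _ in range(3)],
             "turn": [deck.pop()],
             "river": [deck.pop()]}
    return hole_cards, board

if __name__ == "__main__":
    private, public = deal_holdem(num_players=4, seed=42)
    print("Public board:", public)
    print("Player 0's private view:", private["player_0"])
```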

The object of poker is to win as much money as possible, not necessarily to win every hand. As the game progresses, guessing what cards your opponents hold becomes a contest of inference over every bet made so far in the hand, not just the most recent ones. Bluffing is part of the game as well.
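
As a rough illustration of “winning money rather than hands” (a worked example of my own, not from the article): whether a call is worthwhile depends on its expected value against the pot, so a hand that usually loses can still be a profitable call against a small bet, while the same hand should be folded to a large one.

```python
def call_ev(win_probability, pot, bet_to_call):
    """Expected value (in chips) of calling a bet: win the pot plus the bet
    when ahead, lose the call amount otherwise."""
    return win_probability * (pot + bet_to_call) - (1 - win_probability) * bet_to_call

# Folding always has EV 0, so a call is only profitable when its EV is positive.
# Example: a hand that wins 30% of the time in a 100-chip pot.
print(call_ev(0.30, pot=100, bet_to_call=50))   # 10.0  -> profitable call despite usually losing
print(call_ev(0.30, pot=100, bet_to_call=200))  # -50.0 -> better to fold this hand
```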

That’s why poker is so hard for artificial intelligence. Libratus does its calculations on a supercomputer at the Pittsburgh Supercomputing Center, which gives it a big advantage over humans: it can “play out” tens of thousands of game scenarios in a matter of seconds and then decide on the best move to make.
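
Libratus itself is built on game-theoretic techniques such as counterfactual regret minimization rather than naive simulation, but the general idea of scoring candidate actions by averaging many simulated continuations can be sketched as follows. This is only an illustration; `rollout_value` is a hypothetical stand-in for a real hand evaluator.

```python
import random

def rollout_value(state, action, rng):
    """Hypothetical stand-in: play one randomized continuation of the hand
    after taking `action` and return the chips won or lost."""
    # A real evaluator would deal the remaining cards and score the showdown;
    # a random result keeps this sketch self-contained.
    return rng.uniform(-100, 100)

def choose_action(state, actions, num_rollouts=10_000, seed=0):
    """Average many simulated continuations per action and pick the best one."""
    rng = random.Random(seed)
    averages = {
        action: sum(rollout_value(state, action, rng) for _ in range(num_rollouts)) / num_rollouts
        for action in actions
    }
    return max(averages, key=averages.get), averages

if __name__ == "__main__":
    best, scores = choose_action(state={}, actions=["fold", "call", "raise"])
    print(best, scores)
```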

DeepStack, however, takes a different approach. It doesn’t need to play possible scenarios through to the end. Instead, it uses a neural network to estimate the outcome of each continuation. The team at the University of Alberta “trained” DeepStack’s neural network on thousands of poker situations, looking at the bets and the cards, so that the network “learns” which bets are likely to be more successful. It doesn’t calculate every possible outcome for each hand; it makes quick, approximate estimates instead. And while DeepStack has beaten many good players, it is still a long way from beating the very best. Meanwhile, Libratus has caught the attention of the poker world.
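
DeepStack’s published value networks work on inputs such as pot sizes and the players’ card ranges; the toy sketch below, a small hand-rolled network trained on made-up features, only illustrates the general idea of learning to estimate a situation’s value instead of searching to the end. The features and targets here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features of a poker situation: normalized pot size, bet size,
# and a crude hand-strength estimate. The target is the eventual chip outcome.
X_train = rng.uniform(size=(1000, 3))
y_train = 2.0 * X_train[:, 2] - X_train[:, 1] + rng.normal(scale=0.1, size=1000)

# A single hidden layer trained with plain gradient descent stands in for
# DeepStack's much larger deep network.
W1 = rng.normal(scale=0.5, size=(3, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(2000):
    h = np.maximum(0, X_train @ W1 + b1)          # ReLU hidden layer
    pred = (h @ W2 + b2).ravel()
    err = pred - y_train
    # Backpropagate the mean squared-error loss.
    grad_pred = 2 * err / len(err)
    gW2 = h.T @ grad_pred[:, None]; gb2 = grad_pred.sum(keepdims=True)
    grad_h = grad_pred[:, None] @ W2.T * (h > 0)
    gW1 = X_train.T @ grad_h; gb1 = grad_h.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# Estimate the value of a new situation instantly instead of searching to the end.
situation = np.array([[0.4, 0.2, 0.8]])
value = (np.maximum(0, situation @ W1 + b1) @ W2 + b2).item()
print(f"estimated value: {value:.3f}")
```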

In a 20-day poker tournament that ended on January 31, 2017, Libratus defeated four of the world’s top professional poker players, finishing more than $1.7 million ahead in chips.

“The ability of the best AI to make strategic inferences with incomplete information now exceeds the ability of the best humans,” said Professor Sandholm. The main improvement of Libratus over Claudico, an earlier AI from the same CMU team, is its ability to bluff.

One of the four top poker players in the tournament was Dong Kim, who had also played against Claudico in a similar event in 2015. Commenting on the challenge from Libratus, Kim said, “About halfway through the challenge, I didn’t think we would come back. There were very few mistakes in the algorithm. We were able to outplay Claudico and bluff it all over the place, but this time it was the other way around.”

Against Claudico, the human players won around $700,000 over 80,000 hands and finished ahead on almost every day of the tournament. Against Libratus, they came out ahead on only five of the 20 days.
