The Tao of Gaming

Boardgames and lesser pursuits

Archive for the ‘Artificial Opponents’ Category

Baba Is You

Saw the (Switch) trailer, bought it immediately after (on Steam). A few simple levels, and now brain-burny.


Written by taogaming

March 22, 2019 at 6:50 pm

Posted in Artificial Opponents


Don’t Pray for Mojo. Automate the Prayer Process for Mojo!

Factorio 0.17 released (and 0.17.1 fifteen minutes ago)! Downloading now….

Honestly, I’d burned out after the new year (I’d often start a campaign, do an hour or two of mindless zen, then quit), so I’m hoping this will rekindle my joy with the new science format. If not, well, I had a good run.

Written by taogaming

February 26, 2019 at 5:04 pm

Posted in Artificial Opponents


Another game fallen to our robotic masters — Jenga

Written by taogaming

February 7, 2019 at 6:17 pm

More DeepMind — This time Starcraft

I’ve never played SC (etc.), so watching these replays isn’t very informative, but it’s still interesting. If you want to skip to the first replay, it’s at about minute 44 (after 30 minutes of countdown and 14 of discussion). The second match starts at around 1:32. If you only watch one game, watch G4 of the second match, at around the 2-hour mark.

Here’s the main article

A few thoughts (mentioned in the video):

  • The AI (‘AlphaStar’) actually “clicked” less than the pros (200–300 “actions per minute” vs 500–600 for the humans). That said, the commentators kept praising AlphaStar’s micro-management (“micro”), its ability to maneuver in battles (for example, retreating a single nearly-dead unit while the rest press the attack), which may have been one of those precision advantages that humans just can’t match.
  • In the 5-game match, they actually fielded five different agents (in randomized order), bred with a semi-guided genetic strategy that tried to minimize ‘exploitability’. The human wasn’t aware of this until after the fact, which may have affected his strategy: after a game he “adjusted,” but was then facing a totally new strategy.
  • This wasn’t Lee Sedol, where they were playing the undisputed world champion, but StarCraft is a much more complex (and real-time) game, so it’s still impressive.



Written by taogaming

January 25, 2019 at 5:17 pm


Just finished the Factorio mod SpaceX, which greatly extends the endgame. (Basically, if you consider one rocket to be 10 million “effort,” SpaceX takes 2.7 billion effort.) I’ve gotten good enough that it only took about 51 hours (by comparison, my first game took ~40 hours). That’s the power of geometric growth. I could probably cut that by 10–15 hours now that I know a bit more about what it entails. Maybe 20. Anyway, it’s a nice mod; it makes you build bigger without adding ridiculous complexity (like Seablock or Angel/Bob).
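The scale of that claim is easy to check with back-of-the-envelope arithmetic (the effort figures are the ones quoted above; the comparison itself is mine):

```python
# Rough arithmetic on the SpaceX mod's scale, using the effort figures
# quoted above: one vanilla rocket is 10 million "effort"; the SpaceX
# goal is 2.7 billion.
vanilla_effort = 10_000_000
spacex_effort = 2_700_000_000

effort_ratio = spacex_effort / vanilla_effort  # 270x the work
time_ratio = 51 / 40                           # but only ~1.3x the hours

print(f"{effort_ratio:.0f}x the effort in only {time_ratio:.2f}x the time")
```

270 times the output for barely a quarter more playtime: that’s what a factory that builds its own production capacity gets you.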

Written by taogaming

November 17, 2018 at 7:00 pm

Posted in Artificial Opponents


Pay no attention…

Factorio 0.16 (the final pre-release, with improved graphics, optimizations, and a few new features) dropped this week. Expect nothing and ye shall receive.

But I got my copy of Wild Blue Yonder in the mail, so I may actually play that at some point as well. Expect something and ye may receive.


Written by taogaming

December 14, 2017 at 9:34 pm

Re: Mastering Chess and Shogi by Self-Play (etc etc)

More stunning news from the DeepMind crew:

In this paper, we generalise this approach into a single AlphaZero algorithm …. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case. –Abstract

And how did they do?

In chess, AlphaZero outperformed Stockfish after just 4 hours; in shogi, AlphaZero outperformed Elmo after less than 2 hours; and in Go, AlphaZero outperformed AlphaGo Lee after 8 hours.

(Stockfish and Elmo are the strongest chess and shogi engines in the world, respectively. AlphaGo Lee beat Lee Sedol, but was later surpassed by AlphaGo Zero.)

And after 3 days, AlphaZero played a 100 game match against Stockfish and won 28, drew 72 and lost zero.
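A 28-72-0 result is a 64% score, and the standard Elo logistic model converts a score into an implied rating gap. A quick sketch (my arithmetic, not a figure from the paper):

```python
import math

# Implied Elo gap from the 100-game result, using the standard logistic
# model: expected_score = 1 / (1 + 10 ** (-gap / 400)).
wins, draws, losses = 28, 72, 0
score = (wins + 0.5 * draws) / (wins + draws + losses)  # 0.64

elo_gap = 400 * math.log10(score / (1 - score))
print(f"A 64% score implies roughly {elo_gap:.0f} Elo above Stockfish")
```

Roughly a hundred Elo over the strongest conventional engine, after three days of training.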

It did this in spite of a slower search speed:

AlphaZero searches just 80 thousand positions per second in chess and 40 thousand in shogi, compared to 70 million for Stockfish and 35 million for Elmo. AlphaZero compensates for the lower number of evaluations by using its deep neural network to focus much more selectively on the most promising variations – arguably a more “human-like” approach to search

The full paper is worth a read. I don’t understand all the details, although AlphaZero’s non-linear evaluation function apparently may have biases (like a hand-crafted evaluation function); the authors argue that their Monte-Carlo search (with its inherent randomness) will “average over [AlphaZero’s] approximation errors,” whereas a traditional alpha-beta search propagates them.
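The averaging-versus-propagating argument can be seen in a toy example. Below, three child positions all have true value 0, but the evaluator is noisy; a max-style (alpha-beta) backup inherits the largest error, while an averaging (Monte-Carlo style) backup tends to cancel errors out. This sketches the statistical point only; the noise model and names here are mine, not the paper’s.

```python
import random

random.seed(0)

def noisy_eval(true_value=0.0, noise=0.1):
    """A stand-in evaluation function: true value plus Gaussian noise."""
    return true_value + random.gauss(0, noise)

TRIALS = 10_000

# Alpha-beta style backup: take the max over the children's evaluations.
max_backup = sum(max(noisy_eval() for _ in range(3))
                 for _ in range(TRIALS)) / TRIALS

# Monte-Carlo style backup: average the children's evaluations.
avg_backup = sum(sum(noisy_eval() for _ in range(3)) / 3
                 for _ in range(TRIALS)) / TRIALS

print(f"max backup:     {max_backup:+.3f}  (biased above the true 0)")
print(f"average backup: {avg_backup:+.3f}  (errors roughly cancel)")
```

(In a real search the max alternates with min at alternating depths, but the bias works the same way at each maximizing node.)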

Looking at some of the commentary by more experienced chess players, this jumped out at me:

The paper also came accompanied by ten games to share the results. It needs to be said that these are very different from the usual fare of engine games. If Karpov had been a chess engine, he might have been called AlphaZero. There is a relentless positional boa constrictor approach that is simply unheard of. Modern chess engines are focused on activity, and have special safeguards to avoid blocked positions as they have no understanding of them and often find themselves in a dead end before they realize it. AlphaZero has no such prejudices or issues, and seems to thrive on snuffing out the opponent’s play. It is singularly impressive, and what is astonishing is how it is able to also find tactics that the engines seem blind to….

In this position from Game 5 of the ten published, this position arose after move 20…Kh8. The completely disjointed array of Black’s pieces is striking, and AlphaZero came up with the fantastic 21.Bg5!! After analyzing it and the consequences, there is no question this is the killer move here, and while my laptop cannot produce 70 million positions per second, I gave it to Houdini 6.02 with 9 million positions per second. It analyzed it for one full hour and was unable to find 21.Bg5!!

(Emphasis mine; also note that AlphaZero and Stockfish had 1 minute per move.)

The DeepMind team just keeps delivering stunning advances. Someone is probably jumping with a reasonable approximation of joy.

Written by taogaming

December 7, 2017 at 11:08 pm

Posted in Artificial Opponents
