The Tao of Gaming

Boardgames and lesser pursuits

Archive for the ‘Artificial Opponents’ Category

Play Dominion

Not in general (well, not just in general). I’m referring to the website www.playdominion.com.

Man I miss the old isotrope server, but this isn’t bad.

I’m not paying any money, but even the free ‘base set’ campaign setups often have an expansion card or two in them. The AI isn’t great by any stretch, but there are some sets where it can win fairly easily, and I’ve had several losses in a row.

Written by taogaming

August 28, 2016 at 11:19 am

A bit more Space Empires, and a game day

I played a few more games of Space Empires solitaire, and I enjoy it. I’ve tweaked the system to feel slightly more intelligent and to smooth out the bumps: the APs get fewer resources, but they also tend to waste less (on hopeless battles), and their growth is ‘on board’, which means it can be trimmed down. While TaoLing was at camp I left the board set up and managed to get in a half-dozen games … most of which were full evening affairs.

Now that TaoLing is back I took a day off to run back-to-school errands in the morning and we had a game day in the afternoon. A fair number of Puzzle Strike games (getting close to 50!), a few games of Dominion, some Mottainai, some Splendor, and even a bit of Pandante. (I think I’m going to take Pandante to my poker night this weekend and play a few hands — not for money — to see if there’s any interest).

In other news, I’ve realized that I’m not playing most of my collection, even though I’ve trimmed it down fairly aggressively this decade. So, more games are marked ‘for trade’. I may hold a geeklist auction later this year, but I’m open to trades/sales whenever. I did just break down and buy the Britannia expansion of Sansa at Chanukah, so I knocked that off my “want in trade” list.

I suspect game days with the TaoLing are going to slowly drift into memory — they may never end entirely, but the teenage years wait for no parent. I’ve been watching the Solitaire Games on Your Table threads for ideas as to which games I may try next … at least, until the Mage Knight urge rises.

Written by taogaming

August 18, 2016 at 9:14 pm

Links for Early March

The New Yorker has a nice article on the bridge cheating scandals. And there’s a YouTube video explanation. But not to worry, there’s a plagiarism scandal in crossword puzzles as well.

An article about the Polgar sisters, the 10,000-hour rule, the role of nature vs. nurture in genius, and chess. You know, important stuff.

The Military Maxims of Napoleon.

Rick and Morty will be back this year! Time for a new catchphrase! And more songs!

Some links that are already too old: Can AlphaGo defeat Lee Sedol? Commentary on one of AlphaGo’s previous games (against Fan Hui). Why are they too old? Because the first game of the match was played today … and AlphaGo won.

“I was very surprised because I did not think that I would lose the game. A mistake I made at the very beginning lasted until the very last,” said Lee, who has won 18 world championships since becoming a professional Go player at the age of 12.

Lee said AlphaGo’s early strategy was “excellent” and that he was stunned by one unconventional move it made that a human never would have played.

Some commentary on the game:

Everything was fairly standard until Lee played Black 7, which was quite unusual. It appeared that Lee had prepared a strange move to test AlphaGo’s strength in the opening.

Perhaps he also wanted to play an opening that AlphaGo was unlikely to have seen before, while crunching millions of positions as part of its training process.

Regardless, it wasn’t a good idea, because it didn’t lead to a good result for Black.

Simple Answers to Complex Questions discusses how complex systems can be navigated by simple rules, and recommends the book “The Index Card” for retirement planning. I haven’t read the book, but I’ve basically been using the advice for most of my life. I see a lot of people (in CS and engineering) spending a lot of time trying to beat the market, which is fine if you find that fun, but (IMO) a huge waste if you don’t. Mrs. Tao wonders how I can play games and be so uninterested in the minutiae of finance, but one of the first games I played was Stocks and Bonds (and I wrote a moderator in BASIC on my Apple ][e, represent!) and I took away the following facts:

  1. There seemed (to 14-year-old me) to be a best strategy,
  2. In the real world, I could not model stocks nearly so perfectly, and it seemed pointless to try.

I haven’t been able to bring myself to live up to the simple rule for dieting — “Eat food, not too much, mostly plants.” But I do have the first part down.

I personally figured that the Lakers beating Golden State was the seventh seal, but it turns out to only be the 23rd biggest upset in NBA history.

Update — AlphaGo won game two.

Written by taogaming

March 9, 2016 at 7:22 pm

Posted in Artificial Opponents, Linky Love


Computer Bridge

There is currently a Bridge Winners discussion on “When will computers beat human bridge experts?”. This is (unsurprisingly) triggered by the recent advances in Go-playing computers based on deep learning. The news from Google — taking time out of their military robotics schemes to focus on less Skynet-y ventures — was an interesting demonstration. My only expertise here (apart from the fact that I’m not exactly a stranger to military robotics programs, and medical robotics too!) is that I’ve followed computer opponents in classic games somewhat.

There are three salient points to the system — the training method, the use of Monte Carlo methods in evaluation, and the hybrid engine. For now, let’s just consider a simplified bridge AI. It plays Standard American, and expects its opponents to do the same. Teaching a program to handle multiple bidding systems is a problem of scale and scope, and not that different (in practice).

Training — The Go program was trained with 30 million expert positions, then played against itself to bootstrap. This method could be used with bridge, assuming a large enough corpus of expert deals exists. However, there are some issues.

Every Go (and chess) program starts from the same board position, a fact that isn’t true of bridge. To counterbalance that, the search space for an individual deal is much, much smaller. Still, it’s not clear that 30 million deals is enough. Presumably you could use some non-expert deals for bidding (take random BBO hands, and if enough people bid them the same way, that’s probably good enough). Top-level deals can be entered, especially those with auctions duplicated at two tables.

Card play could use a similar method — for a hand and auction, if the opening lead is standard, you could assume (absent further training) that it is right. A clever AI programmer could have a program running on BBO playing hands, and then comparing its results (already scored, no less!) against others. Your scoring system may want to account for weird results (getting to good slams that fail on hideous breaks, etc.), but that’s pretty simple.

So there may be a problem getting enough expert deals, but there should be enough to build a large corpus of good deals (particularly if the engine weights ordinary deals lower and uses better players as a benchmark).
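
To make that concrete, here is a toy sketch of what “learn the bidding from a corpus of expert deals” might look like. Everything in it is a placeholder (the hand encoding, the bucket-and-count “model”, the made-up example deal); a real engine would use a neural network over a much richer representation, but the supervised shape of the problem is the same.

```python
from collections import Counter, defaultdict

# Hypothetical sketch: bucket each hand coarsely (high-card points, longest
# suit) and record which call experts actually made with that bucket and
# auction, then predict the most frequent call.  Not any real engine's model.

HCP = {'A': 4, 'K': 3, 'Q': 2, 'J': 1}

def bucket(hand):
    """Reduce a 13-card hand (cards like 'SA', 'H7') to a coarse key."""
    points = sum(HCP.get(card[1], 0) for card in hand)
    suit, length = Counter(card[0] for card in hand).most_common(1)[0]
    return (points, suit, length)

class BidPredictor:
    def __init__(self):
        self.table = defaultdict(Counter)

    def train(self, deals):
        """deals: iterable of (hand, auction_so_far, expert_call) examples."""
        for hand, auction, call in deals:
            self.table[(bucket(hand), tuple(auction))][call] += 1

    def predict(self, hand, auction):
        counts = self.table.get((bucket(hand), tuple(auction)))
        return counts.most_common(1)[0][0] if counts else 'PASS'

# Toy usage with one made-up "expert" example.
corpus = [(['SA', 'SK', 'SQ', 'S7', 'S4', 'HA', 'H8', 'H2',
            'DQ', 'D6', 'C9', 'C5', 'C3'], [], '1S')]
predictor = BidPredictor()
predictor.train(corpus)
print(predictor.predict(corpus[0][0], []))   # -> '1S'
```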

Randomness — Some people on the BW thread are saying that randomness will stop an AI.

No.

The news out of Google is ahead of schedule, but it didn’t surprise me as much as Crazy Stone (the precursor to AlphaGo). Crazy Stone’s innovation was that if it couldn’t decide between two moves (because they were strategic, not tactical, or because the search depth got too great), it would simply play a few hundred random games from each position and pick the move that scored better. Adding randomness to the evaluation function (of a non-random game!) greatly improved the strength, so much so that I believe I commented on it at the time. (Sadly, that was before the move, so I don’t have a tagged post. See my posts tagged go for some tangential comments.)
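
As I understand it, that trick boils down to something like the sketch below. The game here is a throwaway toy (“players alternately add 1 to 3; whoever reaches 21 wins”), because a Go board would bury the point; the point is only the shape of the idea: when you can’t separate candidate moves, play a few hundred random games from each and keep the best scorer.

```python
import random

# Toy illustration of the Crazy Stone idea, using a throwaway game
# ("players alternately add 1-3; whoever reaches 21 first wins"):
# play random games from each candidate move and keep the best scorer.

def random_playout(total, to_move, me):
    """Finish the game with uniformly random moves; 1 if `me` wins."""
    winner = to_move                  # overwritten on the first iteration
    while total < 21:
        total += random.randint(1, 3)
        winner, to_move = to_move, 1 - to_move
    return 1 if winner == me else 0

def monte_carlo_move(total, me=0, playouts=500):
    best_move, best_rate = None, -1.0
    for move in (1, 2, 3):
        if total + move >= 21:
            rate = 1.0                # this move wins outright
        else:
            rate = sum(random_playout(total + move, 1 - me, me)
                       for _ in range(playouts)) / playouts
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move

print(monte_carlo_move(14))   # almost always 3: leaving 17 is the winning move
```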

Randomizing bridge hands would present different challenges, but the idea of just saying, “I don’t know, let’s just try each lead a few hundred times against random hands (that match what we expect)” is obvious, as is using randomness to decide whether to continue or shift suits. Because bridge doesn’t have Go’s massive search depth, you could also drop each position into a single-dummy solver, or have it play randomly only until the breaks are known (so it plays randomly while the position is uncertain, but not once it’s known).

The thing about random play is that it’s fast. So you’ve won the opening lead, what to play? Whip up 100 random deals (not hard since you can see two hands plus a few other cards, plus all your bidding inferences) and try them out.
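
Here’s a sketch of that “whip up 100 random deals” step, under the simplest possible assumption: the layouts are constrained only by the cards you can already see, with bidding inferences layered on later as filters or weights. The `double_dummy_tricks` argument is a stand-in for a real solver, not an actual library call.

```python
import random

# Sketch: deal the cards we *can't* see randomly into the two hidden hands,
# then score each candidate play against the same sampled layouts.
# `double_dummy_tricks` is a placeholder for a real double-dummy solver.

FULL_DECK = [s + r for s in 'SHDC' for r in 'AKQJT98765432']

def sample_hidden_layouts(seen_cards, hidden_sizes, n=100):
    """seen_cards: every card visible to declarer (dummy, own hand, cards played).
    hidden_sizes: (lho_size, rho_size).  Yields n random splits of the rest."""
    unseen = [c for c in FULL_DECK if c not in set(seen_cards)]
    assert len(unseen) == sum(hidden_sizes)
    for _ in range(n):
        random.shuffle(unseen)
        yield unseen[:hidden_sizes[0]], unseen[hidden_sizes[0]:]

def best_play(candidates, seen_cards, hidden_sizes, double_dummy_tricks, n=100):
    """Try each candidate card against the same n layouts; keep the best average."""
    layouts = list(sample_hidden_layouts(seen_cards, hidden_sizes, n))
    def average_tricks(card):
        return sum(double_dummy_tricks(card, lho, rho) for lho, rho in layouts) / n
    return max(candidates, key=average_tricks)
```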

Hybridization — The trick is that you only resort to randomness if your trained algorithm isn’t confident of its training. This happens quite a bit in Go. Go is amazingly frustrating in that expert or even master-level players will be unable to communicate why a play is correct. I remember a lecture at the Pittsburgh Go Association where the lecturer, an amateur 3 dan or so, was reviewing a game between two pros. And someone asked, “Why did so-and-so play that move on that spot? Isn’t one space to the right better?”

Neither move had a tactical flaw, and the lecturer stumbled, then called out to a late arrival (a graduate student from Japan and — I believe — soon to turn pro after getting his degree). The arrival went up to the big magnetic board, stared, said “Ah! It’s because of …” and then laid out ten moves for each side. Then he reset, shifted the stone, laid out ten different moves for each side, and walked the few people who could understand the differences through them.

The point of my story? Go is hard. Go is hard enough that professional players routinely make moves that amateur experts cannot reasonably understand. Go experts can look much farther ahead than bridge players (and computers) — yet random simulation coupled with deep learning can handle it.

The Go-playing program might very well have learned to play the move on the correct spot, and not one-to-the-right, in our example. How did it learn this? Because the experts did it. It gained a feel for what to do in those situations. But even if it hadn’t learned, and was sitting in the back of the room (like a 20-year-old me) unable to see a difference between the two, it might still grope its way to the correct move using a Monte Carlo simulation on both moves. (This assumes that its near-term tactical engine couldn’t find both sequences and judge one obviously better.)
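
The hybrid step itself is almost trivially small once the two pieces above exist. A sketch, where `policy` and `monte_carlo` are the hypothetical components from earlier: trust the trained policy when it clearly prefers one move, and only fall back on the playouts when its top choices are too close to call.

```python
def choose_move(position, policy, monte_carlo, margin=0.10):
    """policy(position) -> [(move, confidence), ...] sorted best-first.
    monte_carlo(position, moves) -> the move whose playouts score best.
    Both are hypothetical stand-ins for the pieces sketched above."""
    ranked = policy(position)
    if len(ranked) == 1 or ranked[0][1] - ranked[1][1] >= margin:
        return ranked[0][0]                       # training is confident: just play it
    close_calls = [move for move, _ in ranked[:3]]
    return monte_carlo(position, close_calls)     # too close to call: let playouts decide
```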

Right now bridge computers have many advantages, and can play perfectly once enough is known about the hand. You’d never use a random engine at that point. This hybridized strategy would be for your Master Solvers’ Club-type problems, where experts disagree.

And, if you are deciding between those two things, you are (by definition) an expert.

So, I am of the opinion that bridge hasn’t been solved because nobody has thought to attack it. Or perhaps there is not a large enough body of expert deals that can be conveniently fed into a computer. A clever programmer (which I am not) could probably have a system learn just by having it log onto BBO, assuming that it could learn which players to trust and which not to (and which ones to use as bidding examples). 30 million deals, each played four times by experts, may not be enough, but it’s probably in the ballpark.

Why hasn’t this been done? Probably nobody cares. Go is (by far) the sexiest game right now because its search space is unfathomably deep. Go players routinely scoff at the simplicity (by comparison) of chess. In terms of search space (for a single hand) bridge doesn’t compare. If Google put its money behind it, I think a bridge computer would do well in a match against a top team. Also, there were prizes offered for Go programs that could play at a high enough level, which spurred development over the last 20 years.

Written by taogaming

February 7, 2016 at 12:05 am

Posted in Artificial Opponents, Bridge


Lessons from Nethack

So, I downloaded the new Nethack (3.6.0) a week ago, and that’s what I’ve been doing. I’m old enough that I first played Nethack (or possibly one of its variants) on the university Unix system, before the invention of the WWW. Needless to say, I was terrible, and usually starved to death in a few minutes, or died some other way. I rarely got past level 3.

Of course now there are entire wikis devoted to Nethack. You can watch videos on YouTube. Now I’m doing better: my scores have gone from maybe 1,000 to 10,000, and in one lucky case, 35,000. (According to the Nethack Wiki, the odds of a random piece of armor having a +5 or better enchantment are 0.0001761%, so getting a +5 of anything is about 2 in a million. Getting +5 mithril armor? Priceless.)
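
(For what it’s worth, the conversion checks out: the wiki figure is a percentage, so it works out to roughly 1.76 per million.)

```python
p = 0.0001761 / 100      # the wiki figure is a percentage
print(p * 1_000_000)     # ~1.76 per million, i.e. "about 2 in a million"
```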

In some ways I’m cheating by reading the wiki, but I come by my cheating honestly — with hundreds or thousands of games played. But even knowing the odds and whatnot, Nethack games (fast as they are) have a joy.

And that is the Joy of Ignorance.

I may know what all the scroll options are, but now, in this game, what does the READ ME scroll do? Identify stuff? Cause a (point-blank) fireball? Punish the caster? Remove a curse? And is that particular scroll blessed? Cursed? Does this ring aggravate monsters, or allow me to go without food, or let me control polymorphs? Even if you know all of the code and tricks — and I don’t, still; stupid wights — you enter each game woefully ignorant.

That’s something that older RPGs don’t capture. At least, not if the players have read the player’s guide. But suppose that you just took the monster descriptions and randomized them from what the book said? (Maybe not the basic shape, but the details.) That looks like a jaguar … oh, it’s a rust monster. Oops. I believe that some of Monte Cook’s new games have the idea that where you come from (jungle, tundra, islands, etc.) plays a real factor, in that a native can basically walk through the terrain fearing only monsters, but a non-native will struggle to survive. The idea of implicit knowledge, and the lack thereof.
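
Mechanically, that kind of per-game ignorance is just a fresh shuffle of descriptions against effects every time you start. A toy sketch (the labels and effects here are made up for illustration, not pulled from any actual rulebook):

```python
import random

# Toy sketch of Nethack-style unidentified items: shuffle surface labels
# against actual effects once per game, so rules knowledge still leaves you
# ignorant of *this* game's mapping.  Labels and effects are illustrative.

EFFECTS = ['identify', 'fireball', 'punish', 'remove curse', 'aggravate monsters']
LABELS = ['READ ME', 'XYZZY', 'VELOX NEB', 'GARVEN DEH', 'ZELGO MER']

def new_game_mapping(rng=random):
    effects = EFFECTS[:]
    rng.shuffle(effects)
    return dict(zip(LABELS, effects))

game_one, game_two = new_game_mapping(), new_game_mapping()
print(game_one['READ ME'], '/', game_two['READ ME'])   # usually different answers
```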

Board games struggle with this. And that’s fine. Being able to determine the odds, knowing a deck composition, and the like are all valuable skills. But there’s something to be said for having no idea. As much as I mocked the X-Com game, I wanted to like it. I want to have to learn things, in-game. I want each game to be different, within a certain framework. And now that I’m playing Nethack again, I’m reminded why no other dungeon crawl game seems good.

Written by taogaming

December 15, 2015 at 8:12 pm

Posted in Artificial Opponents, Reviews


Purchasing and Porting

I’m almost certainly ordering Cornucopia (despite my general reaction) since I play Dominion with TaoLing. So the real question is — what else am I ordering?

  • Mimsy Exotica Expansion — Almost Definitely
  • Rallyman — Intrigued
  • Yggdrasil — I’d like to try it, but probably not a blind purchase
  • Fighting Formations — I’m just now getting back into Combat Commander. I don’t need another WWII squad level game (even with tanks). But I’d like to play it and I’ll admit I’m tempted.
  • Troyes — I fear the JACE in this one. Try first.

In other news: I haven’t really been following the FFG vs. Puffin Software dispute (summary — the game uses the Commands & Colors mechanics but has apparently been careful not to use copyrighted text/images & trademarks). I know that some videogame podcast actually discussed the legal issues with a lawyer. As for how I feel, it depends on the answer to: did they ask Richard Borg for his blessing? (They did approach FFG.)

I just saw a Time’s Up app for iPhone this week. (Built-in timer, name lists, scoring.) If I owned a device I’d probably spring for it, just because it handles the bag cleanly. No idea if it’s authorized, although I imagine it is (since it uses the name; actually, I don’t know that).

Anyway — Open thread in two parts. 1) What should I buy? 2) Porting games to iDevices — how do you feel about knockoffs?

Written by taogaming

May 7, 2011 at 10:57 am

So, who wants to PBEM Black Vienna?

’Cause I could go for some PBEM Black Vienna. As a matter of fact, some locals played it 2-3 weeks ago, and I spent a fair amount of time mocking them, because so many of my games have ended with bitter tears from an incorrect answer cascading into an avalanche of suffering. But online? I’m in.

Kudos, Greg.

And yes, I put this in the correct category. Online personas shouldn’t think of themselves as people.

Written by taogaming

March 30, 2011 at 4:45 pm

Posted in Artificial Opponents
