The Tao of Gaming

Boardgames and lesser pursuits

Archive for the ‘Thoughts about Thinking’ Category

Bah — Finance Majors

Or — as we call them around here — idjits.

I mean, how else can you explain the fact that, when presented with a game rigged in their favor and offered the chance to bet real money on it repeatedly … in any amount they chose … as many times as they could play in 30 minutes …

A full third of them lost money! More than a quarter of them went broke! Some of the test subjects were making a living working in investment firms!

Idiots, I say.

Certainly not gamers.

Definitely not gamers with a background in engineering or math.

Of the 61 subjects, 18 subjects bet their entire bankroll on one flip, which increased the probability of ruin from close to 0% using [the optimal strategy] to 40% if their all-in flip was on heads, or 60% if they bet it all on tails, which amazingly some of them did.
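For reference, the coin in the experiment (as I recall the paper) lands heads 60% of the time and pays even money, and the optimal strategy the authors reference is essentially the Kelly criterion: bet a constant 2p − 1 = 20% of your current bankroll on heads each flip. Here is a minimal simulation sketch of why betting it all is so ruinous; the flip count, starting stake, and absence of the paper's payout cap are my simplifications, not the paper's exact setup.

```python
# A rough sketch, not the paper's exact protocol: compare Kelly-sized bets
# with going all-in on every flip of a 60%-heads, even-money coin.
import random
import statistics

def play(fraction, flips=120, bankroll=25.0):
    """Bet `fraction` of the current bankroll on heads each flip; return the final bankroll."""
    for _ in range(flips):
        bet = bankroll * fraction
        bankroll += bet if random.random() < 0.6 else -bet
        if bankroll < 0.01:          # effectively broke
            return 0.0
    return bankroll

def summarize(fraction, trials=10_000):
    results = [play(fraction) for _ in range(trials)]
    broke = sum(r == 0.0 for r in results) / trials
    return broke, statistics.median(results)

for fraction in (0.2, 1.0):          # Kelly (2p - 1 = 20%) vs. all-in every flip
    broke, median = summarize(fraction)
    print(f"bet {fraction:.0%} each flip: broke {broke:.0%} of the time, median finish ${median:,.0f}")
```

The Kelly bettor essentially never goes broke and typically multiplies the stake several times over; the all-in bettor is ruined in virtually every trial.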

Read the full paper (Rational Decision-Making under Uncertainty: Observed Betting Patterns on a Biased Coin) then consider where your retirement savings are.

(Mine are in index funds).


Written by taogaming

November 4, 2016 at 8:34 pm

Posted in Misc, Strategy, Thoughts about Thinking


Screwtape discusses voting

My Dearest Wormwood,

After receiving your most recent letter, on your advice I watched the video on quick and easy voting for normal people. I am surprised that this comes as a revelation to you, since We who are down below routinely allow our charges to vote for a wide variety of things using what our patients semi-jokingly refer to as the Chicago Method (“Vote early and often”) and what your video refers to as Approval Voting.

And, as befitting our station, we scrupulously respect their votes whenever it suits our mood. Which is more often than not, because all voting methods have flaws. Surely Our Father has taught you all the details of Arrow’s Impossibility Theorem, which has dozens of applications to suffering and gaming. I myself learned it at an early age.

(A more pedantic member of our kind – although I doubt you will ever encounter one – may state that Arrow’s formal proof does not strictly apply here. Math is a realm of The Enemy – and as such I have done no more than dabble, lest I be accused of heresy again – but I believe the idea generalizes. I will check with several experts I am dining on tonight.)

Whenever a vote is proposed, you should of course make sure the outcome is as you desire. The stakes are high!

The video’s numbers make a poor example for the more interesting applications, so let us juggle them a bit. Surely even a youngster such as yourself is familiar with creative accounting?

  • The five vegetarians prefer: Veggies, Burgers (w/Veggie option), Steak (in that order)
  • The three carnivores prefer: Steak, Burger, Veggie
  • The lone Burger guy prefers: Burger, Steak, Veggie

In all cases the first two are “acceptable,” so Burgers gets nine votes and is an acceptable compromise.

First of all, note the obvious flaw with the system. It punishes excellence. This means that, despite all of its problems, you should suggest Approval Voting whenever possible. Your goal should be to promote mediocrity and lazy thinking in all aspects. Do this consistently and your patients will always dine out on the most milquetoast and bland meals possible, never taking chances, never risking sublime beauty!

Do not mistake my critique of this system – which is done as a general exercise to instruct my favorite nephew – for a serious criticism!

Now, let us make a small change.

Suppose that, as each restaurant is named, people decline to vote for an option they find acceptable when they prefer the one currently winning. Now, so long as Burgers are listed last, Veggies will win: the vegetarians, already delighted with the current leader (which was named first or second), decline to raise their hands for Burgers, which now loses 5-4 despite having been a unanimous winner before!

Then simply force those shuffling carnivores towards their tofu. Demand that they be happy even as they respect the group’s decision. Be sure to smile broadly as you choke down your okra. Sing the praises of democracy, which levels all of our patients in the same way that water always strives for the lowest resting place.

(As to my prior criticism, I simply note that while Vegetarian restaurants can be excellent in theory, much like excellent non-alcoholic beer this does not occur in practice.)

As always, he who sets the vote order (and he who votes slowest, deciding after seeing who else has raised their hands) has an immense amount of control, particularly if he judges the preferences of others well.

These tricks (along with a few more which I dare not reveal, lest this letter be intercepted) will let you control the outcome with ease, which is why we are serving slightly maggoty meatloaf for the thousandth night in a row instead of the exquisite venison or the lovely poached trout, both clearly visible in the cafeteria.

Your affectionate uncle,

Screwtape


[H/T to Chris Farrell’s twitter feed]

My first (semi-joking) comment was that the Tao of Gaming method is to have everyone list all their options, then reject them all and walk away. This prevents mediocre games, although I admit it has problems of its own. I thought about tweeting a joke to that effect but, much like Screwtape, I prefer the old methods and send my messages encoded in the pitches and volumes of screams, although I do keep up with the times and try to limit my conversations to at most 140 screams.

An amusing coincidence — I was already thinking about the Impossibility Theorem earlier today, since my side project incorporates a quote by Kenneth Arrow in the next chapter.
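(For the concretely minded, here is a minimal sketch, in Python, of the vote-order trick Screwtape describes. The voter behavior is my assumption, not anything from the video: each diner approves their top two choices, but won't raise a hand for an option when the current leader already suits them better.)

```python
# A toy model of sequential approval voting with Screwtape's "don't raise your
# hand if you prefer the current leader" behavior. Names and rules are illustrative.
from collections import Counter

# Preference orders, most preferred first: 5 vegetarians, 3 carnivores, 1 burger fan.
voters = (
    [["Veggie", "Burger", "Steak"]] * 5
    + [["Steak", "Burger", "Veggie"]] * 3
    + [["Burger", "Steak", "Veggie"]] * 1
)

def sequential_approval(order, voters):
    """Call out the options in `order`; return the hand counts."""
    votes = Counter()
    for option in order:
        leader = max(votes, key=votes.get) if votes else None
        for prefs in voters:
            if option not in prefs[:2]:          # only the top two are "acceptable"
                continue
            # Decline to raise a hand if the current leader is preferred to this option.
            if leader is not None and prefs.index(leader) < prefs.index(option):
                continue
            votes[option] += 1
    return votes

print(sequential_approval(["Burger", "Steak", "Veggie"], voters))
# Burgers named first: it collects all nine hands, as in the original example.
print(sequential_approval(["Veggie", "Steak", "Burger"], voters))
# Burgers named last: the vegetarians sit on their hands and Veggies wins 5-4.
```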

Written by taogaming

June 28, 2015 at 2:59 am

The AI Researcher Who Crowdsourced Harry Potter Fans

(Author’s Note — I wrote this yesterday, shopped it around a bit, and decided to post it here instead. The dates are the real dates of when I originally wrote this. Contains some not-too-surprising spoilers for a Harry Potter fanfiction.)

Writers of fan fiction come from all walks of life, united by their love of the underlying book, movie, game (or whatever). Harry Potter has an immense following at www.fanfiction.net, with who knows how many stories and hundreds of thousands of chapters posted. Eliezer Yudkowsky writes one of the most popular, Harry Potter and the Methods of Rationality (or HPMOR). The story is explicitly a pedagogical device – a Rationalist tract to teach readers how to think better. (One of Yudkowsky’s other sites is “Less Wrong.”) The sugar to make this medicine go down is Harry Potter. Specifically: what if Harry Potter had been raised by a loving couple, one of them a scientist, and blessed with a Richard Feynman-like intelligence at a young age?

Eleven-year-old Harry James Potter-Evans-Verres lectures his friends (and Dumbledore!) about findings from cognitive science and regular science, including proper brainstorming technique, overconfidence, and Bayesian thinking. Important psychological works like Cialdini’s classic book Influence and Asch’s conformity experiments are explained; numerous others are name-checked.

It wouldn’t be popular without a great story. Harry fights bullies, leads an army in mock battles at school (replacing Quidditch), makes friends and enemies, and conducts experiments on magic’s secrets. Harry pokes and prods at spells, sometimes with fantastic discoveries, sometimes to no avail. As the story progresses, he edges towards becoming a Dark Wizard himself. Harry jokes, “World domination is such an ugly phrase. I prefer to call it world optimisation.” He’s a chaos magnet, polite but dangerous, a mile-a-minute mind in a world where almost anything is possible. He’s not infallible, and he’s not the Harry Potter you know; this is an 11-year-old genius the Muggles couldn’t handle. The Wizarding world has never seen his like.

Lectures mingle with the plot, all while finding time for allusions, references and jokes about Rowling’s work and other classics. Harry is an 11-year-old science geek; he knows all about Ender’s Game, Batman, Army of Darkness, Star Wars and other comics, films, manga and books. He argues with Dumbledore via Tolkien references.

This peculiar Harry Potter fiction had been on hiatus at nearly 600,000 words when Yudkowsky announced (last year) that the final arc would be published between Valentine’s Day and Pi Day (3/14). Fans rejoiced and online discussion blossomed again. For the last two weeks, chapters had been arriving every day or two.

February 28th, afternoon.

Then came Chapter 113, titled “Final Exam,” posted on February 28th. This chapter is the hero’s low point, where things look bleakest. Harry is trapped by Voldemort and all the remaining Death Eaters, who have the drop on him. This Voldemort (unlike the “canonical” one from the books) won’t stupidly cast a spell he knows may backfire. He agrees with Scott Evil (Doctor Evil’s son, played by Seth Green): no elaborate death traps, no leaving the hero alone. Just shoot him. Voldemort has a gun (as well as a number of other lethal devices) because he’s worried about magical resonance.

So Chapter 113 ends … and the Author’s Challenge begins: the fans must devise Harry’s escape.

This is your final exam.

You have 60 hours.

Your solution must at least allow Harry to evade immediate death, despite being naked, holding only his wand, facing 36 Death Eaters plus the fully resurrected Lord Voldemort.

Any acceptable solution must follow a ridiculously long list of meticulous constraints: any movement, any spell leads to certain death. Nobody knows where Harry is (or even that he is missing). Harry could use any power he’d demonstrated (within those constraints) but couldn’t gain any new ones. There’s no cavalry, no Deus ex Magica. And …

If a viable solution is posted before 12:01AM Pacific Time the story will continue to Ch. 121…..Otherwise you will get a shorter and sadder ending.

(Emphasis mine). A small section of the Internet exploded in disbelief.

Yudkowsky had done this before with a science fiction story called Three Worlds Collide, but that was on his old site, with many fewer readers. (I’d read the story well after he’d challenged his fans.) Now he was working on a bigger scale. “Final Exam” was posted five years (to the day!) after Chapter 1 first appeared online. HPMOR has well over half a million page views. Readers faced having a story they’d invested weeks in reading (and sometimes years in discussing) simply end with the hero’s death. There seemed to be no solution. Voldemort, terrified and highly intelligent, had planned this trap out in detail; Harry had blundered into it. (Being smart doesn’t magically give you all the critical information you may need, and Voldemort has decades of training and a few insights Harry lacked.)

Harry James Potter-Evans-Verres had, in the preceding chapters, solved complex puzzles, and all of them played fair (within the constraints of the world) and provided enough clues to satisfy the strictest mystery writer. But this seemed impossible. Fans despaired. I concocted a solution requiring a Patronus, the Cloak of Invisibility, a time turner and the Sorting Hat, and it still required negligence on Voldemort’s part that would make SPECTRE rip up your Bond-villain card. Other proposed solutions were arguably no better.

Complex problems are Yudkowsky’s day job; he is a Research Fellow at the Machine Intelligence Research Institute. He spends his time (when not writing about Hogwarts) dealing with thorny problems related to Artificial Intelligence – its benefits and risks. The big risk, the basis for countless fictions from Frankenstein to Terminator, is “Can we control our creation?” Yudkowsky’s research aims to create guidelines for a Friendly Artificial Intelligence, a machine we can trust to guide humanity into a new Golden Age, and to avoid an “Unfriendly A.I.”

Other researchers (see the update at the end) suggest we isolate an A.I. from the internet (and from machinery) to keep us safe. We’d keep the A.I. “in a box.” Yudkowsky contends that an Artificial Intelligence worthy of the name will be so advanced it will simply talk its way out of the box (assuming it couldn’t hack its way out). To further this argument, Yudkowsky developed the “AI Box experiment,” where one player takes the role of the AI and tries to convince his opponent (the “Gatekeeper”) that it is safe to release him. He’s done this several times, and has published protocols for the thought experiment.

Yudkowsky has taken the role of the AI in those prior games. After all, he’s the expert, and he’s the one trying to prove the point. If he can convince you to let an unknown quantity run free, what problem would an AI have? You’d probably think it was your idea all along. Yudkowsky does this to draw attention to the dangers of unfriendly AI development. Once the AI gets out, nobody will be able to put it back. And if the AI is unfriendly, that’s extinction. Game over.

(For a much more detailed introduction to this line of thought, I recommend the Wait but Why articles The Road to SuperIntelligence, and Our Immortality or Extinction.)

March 1st, AM.

Some readers (most on the discussion group I follow) knew this; but this was fan fiction, not a serious research effort. Harry Potter, not HAL and Dave. Less than 24 hours after the challenge had been issued, some discussion groups proposed a thesis: the entire story had built up to reenact the AI-in-a-Box thought experiment, with Eliezer playing Gatekeeper against his entire fanbase.

The argument seems compelling.

  • Harry James Potter-Evans-Verres is a super-intelligent, rational being, capable of discovering the inner workings of magic (well beyond what Harry did in Rowling’s series, even though the entirety of HPMOR takes place in his first year at Hogwarts).
  • He was acquiring power at an alarming rate.
  • He was now trapped with Voldemort himself ready to pull the plug.

Worse still, Voldemort knows that Harry Potter is not friendly. You would think this goes without saying, but Voldemort is not simply afraid for himself but for all wizardkind. (There’s a prophecy, and it’s a long, complicated story.) Acting out of fear of an extinction-level event, Voldemort has done everything in his considerable power to catch and neutralize Harry Potter. And done it well. Harry can’t cast spells without permission. He can’t speak to anyone but Voldemort, who is about to pull the trigger. Voldemort has even forced Harry to speak only the truth (via magic) and to answer questions like “Have you thought of a plan to defeat me yet?” so he’ll know how long he can safely delay.

The only thing Harry can do is talk to Voldemort.

Your strength as a rationalist is your ability to be more confused by fiction than by reality — HJPEV

All the constraints were, proponents argued, a clue. In an earlier chapter HJPEV explains that a rationalist avoids needless complexity, and all the solutions proposed were fairly insane. Harry’s internal dialogue mentally “assigns penalties” to complex explanations. You can chart orbits with the Earth at the center of the solar system, but it’s much easier if you put the Sun at the center. The proponents of the box theory argued that fans couldn’t find a solution because they had put the Earth at the center of the solar system. The fanbase was trying to write a Hollywood ending where Harry wins, the argument went. But in the real world people talk out their differences all the time. And people who are in a bad situation have to accept it. (That was an explicit lesson Harry learned in Defense class early in the story.)

So, in this reading (which I consider more likely), Harry Potter and the Methods of Rationality is no less than a five-year buildup to Eliezer Yudkowsky taking the other side of the Box Challenge – the side played by the less intelligent party. Yudkowsky appears to have engineered a situation where a small but dedicated portion of humanity simulates his AI for him in the Potter-verse. He’s spent years explaining how to calmly tackle a seemingly impossible problem, list assets, evaluate what you know and discern truth from fiction. He’s unquestionably provided ample motivation. With the deadline approximately 36 hours away, chat rooms are alive with proposals, debates, stratagems, tactics, and detailed analysis of any and all relevant documents available on the internet. Arguments are weighed, flaws discovered, dead ends discarded, and useful nuggets saved and added to a master list.

You know, like an AI might do.

Can the combined super-intelligence talk its creator out of killing its story, with the odds stacked against it? As day turns to evening on March 1st, some discussion groups aren’t interested in what Harry has; they are listing what he knows about Voldemort’s beliefs, and what information he can volunteer that would stay Voldemort’s hand. Others are discussing Eliezer Yudkowsky’s beliefs and knowledge, adding another level of meta to the analysis. In the story, Voldemort himself knows (via magic) that Harry Potter cannot lie. What appeared to be a horribly binding constraint is suddenly a fantastic advantage. Could we trust whatever an advanced being with unknown (or malevolent) motives told us?

Watching the discussion forums with a bit over a day to go, I believe this is the broad-strokes solution (with lots of in-universe details to be worked out), although I’m irrationally attached to my earlier, needlessly complex answer. I believe this is the author’s intent. It’s elegant. In the story’s universe, Harry Potter will (I suspect) exchange some information about prophecies and then deduce an alternate (correct) interpretation under which it is to everyone’s advantage to keep him alive. To let him out of the box.

In the real world, Yudkowsky gets another argument in his favor: “A few hundred or a few thousand people could do this to me. An AI could do this to you, easily.” I suspect the answer has already been posted, but I haven’t checked. The submissions page for the final exam already holds three hundred thousand words. In less than 36 hours. The author has asked for help summarizing the solutions.

How does magic work in Harry Potter’s world? His experiments are still ongoing. Out here in the real world, Teller (of Penn and Teller) wrote that “You will be fooled by a trick if it involves more time, money and practice than you (or any other sane onlooker) would be willing to invest.” In our world, Eliezer Yudkowsky spent five years appearing to merely write a story, and only recently has the wool fallen from my eyes.

Footnote #1 — A reader pointed out that I did not cite this. I realized that I do not know who first proposed it, and some quick googling doesn’t reveal it either. It may be discussed in this Armstrong, Sandberg and Bostrom paper, but I have not bought it. Bostrom’s name is all over the material I’ve read, so he probably knows. I’ll try again tomorrow.

Update — March 2nd, 5pm

The deadline is 8 hours away, and Yudkowsky is overwhelmed by the response and requesting help. I have decided to post this now, because I am reasonably confident of the broad solution, so I am making an advance prediction. I am less confident of the exact solution, but I do believe it will involve Aumann’s agreement theorem. My answer certainly will.

I suspect the internet will get a viable solution. However, will the solution make a good story? I’m not sure.

Update 9:30pm (< 5 hours left). I posted my solution to FF.net hours ago. I have no idea how to link to it (since I can’t find it) and I left out a key step in any case (oops). But I have posted my actual solution (heavily abbreviated) on reddit in case someone else wants to post it, and as a prediction of the correct answer. I may revise this post as errors are noted and corrected (and add more links), but will put new information in a new post.

Followup post March 3rd — I was wrong.

Written by taogaming

March 2, 2015 at 3:30 pm

Jeremy Silman plays Nations

I played my 6th game of Nations last night, and in the ensuing discussion I wound up thinking about Jeremy Silman. Back when I played Chess (semi-exclusively), his book “How to Reassess your Chess” did very well, mainly because it rhymed, but also because he presented things clearly to amateur players. The most interesting idea was exploiting imbalances.

Nations is a game of exploiting imbalances.

You can have lots of coal, or coins, or wheat. You can have little. You can have great production, or not. Military: Big or Small? Earn VP during the game or via buildings/wonders? Etc. You can’t beat everyone everywhere; you must choose your imbalances.

If you have great coins, that means you can afford the high-ticket items, so you can afford to take a few turns to get architects (for example) and pay a premium for better stuff. Or you can buy the cheap stuff, then snag a few expensive things later on. If you are coin-poor, you need to grab the most important thing. If you have lots of coal, you can move people around to optimal places. You can also presumably afford to move to a high military for a turn, planning on abandoning it if necessary. (A coal-poor player would be forced to keep it, since he couldn’t afford to move the workers around.) A player with a small military may have to recognize that and boost stability (or preemptively buy a war) to avoid losing too much.

There are lots of specifics (and I’m vaguely tempted to write a few thousand words about them, but perhaps later). But the basic ideas are simple, and apply to many games:

  1. Be Flexible. If you put yourself in a position where you need to grab some specific card, you can be screwed.
  2. If you are going to be losing one type of fight (and you are), make sure it isn’t a critical fight for you. If you are going to lose a war, by god, lose it. There’s no point fighting for 6 grain on a crappy building if you need 7. Take the hit and boost your books and VP to compensate.
  3. If everyone is fighting for resource X, then there is some resource Y they are ignoring. If you corner the market on it, they’ll all feel the pinch.
  4. Having a ton of resources and few VP gained by the middle game is often just fine.

 

Written by taogaming

March 25, 2014 at 7:35 pm

Posted in Strategy, Thoughts about Thinking


Sometimes not even Money …

Continuing on a tangent from actions versus words: sometimes you can’t even trust actions … if the resource in question varies dramatically from time to time. The obvious example is high-stakes no-limit poker. If you (and your opponent) each have a million dollars in chips, then a $100 call pre-flop basically means your opponent has two cards (because of implied odds). But if your opponent calls $10 when you each have $20, that’s a good hand (or a bad opponent).
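(To put rough numbers on that, here is a toy sketch; the figures are illustrative, not from any real hand. The information in a call scales with the fraction of the effective stack it risks, while the temptation to call with anything scales with how much you might win back per chip risked.)

```python
# Illustrative only: how much a pre-flop call "says" depends on stack depth.
def call_signal(call, stack):
    fraction = call / stack       # share of the effective stack at risk
    potential = stack / call      # rough implied-odds multiple if everything goes perfectly
    return fraction, potential

for call, stack in [(100, 1_000_000), (10, 20)]:
    fraction, potential = call_signal(call, stack)
    print(f"${call} call with ${stack:,} stacks: "
          f"{fraction:.2%} of the stack at risk, up to {potential:,.0f}x back")
# $100 against million-dollar stacks risks 0.01% for a shot at 10,000x: any two cards will do.
# $10 out of a $20 stack risks half your chips for at most 2x: that call means a real hand.
```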

Cylons won’t do anything out of the ordinary for small stakes. But again, that’s because this turn may be a mild influence on a skill check, and waiting gives them the chance to make a crushing decision later. Uneven resources. But in Bang! (to continue the discussion), one turn is (usually) one shot. Resources don’t vary, so you can trust actions.

Written by taogaming

August 16, 2010 at 9:09 pm

Money Talks, BS walks

A few weeks ago Ta-Nehisi Coates had a guest blogger (Ayelet Waldman) writing about her experiences at a summer conference of the Army War College’s National Security Seminar. It was an interesting read (and Ta-Nehisi’s blog is wide-ranging, well written, and erudite). I remember hearing about (not playing) the National Security Games at Origins, and I started wondering: if the National Security Seminar asked me to offer them a game to teach national security, what game would I offer?

Not a war game. I’d never be so brash as to assume that I could teach them about tactics, logistics, operations, strategy, fog of war. And my target audience probably doesn’t need this, but I think a broad swath of America (some of whom would be at the Army War College) could use a (good game of) Bang!

In Bang! (he wrote, on the off chance that non-gamers read this), you have a hand of cards, the most basic being “Bang!” and “Missed.” You can shoot whoever is in range (sitting next to you at the table if you have a pistol, longer range if you have a rifle). Some cards give you stuff, and beer lets you heal (this is a spaghetti western, after all).

The real trick of the game is that you have a role. There’s the Sheriff; he wins if the outlaws and the renegade are dead. The deputies win if the sheriff does. The outlaws win if the sheriff dies, and the renegade wins if he’s the last man standing (which means he has to kill the sheriff last; the renegade is in a tough spot).

Everybody knows who the Sheriff is, but the other roles are hidden (revealed on death).

And here’s the thing — accusations fly. “Oh, he’s so-and-so.” “He could be the renegade.” “I’m just shooting him to keep him honest.” And people pay attention to this crap. Most of the time it’s easy to tell someone’s motivations (with the exception of the renegade), but the information you gain from someone’s words (as compared to their actions) is — nothing.

Words don’t win (or lose) the game. Actions do. A good player makes actions count. A clever player will also try to confuse things (to his benefit) with whatever words he deems most helpful, but actions have a cost (you only get one “Bang!” card a turn, and you only draw so many cards). Sure, sometimes a player misplays, but there you go. (Bang! isn’t, of course, the best game for teaching words versus actions. BSG is excellent, because actions are less easily interpreted. Poker is good: you can win with table presence, but great players can play without it. But poker’s too well known to make a good teaching game.)

But even though there are better games for teaching this, Bang! is an excellent example, because it roughly simulates the US position at the beginning of every recent war. We’re the sheriff. Everyone knows who we are. We have initiative (first turn) and resources. And we’re sitting at a table with 4-6 others claiming to be our friends, most of whom want us dead. Some of them are in a position to do something about it, some aren’t. Your turn.

In my last game of Bang! I knew everyone’s role after the first round, and on my first turn I explained things to the Sheriff (I was a deputy). But I had one huge advantage that a real-world commander doesn’t: I knew the exact breakdown of roles (so many deputies, so many outlaws, one renegade). The “US Army War College” version of Bang! would have one sheriff (picked randomly) and then a deck of cards that includes deputies, outlaws and renegades (plural). In the real world, you don’t know exactly how many enemies you have.

(That’s the main flaw of BSG as well: always two Cylons in a 5-player game. If it could be 2 or sometimes 3, imagine the tension when two are revealed. Of course, it would be a bitch to balance.)

Anyway, once you grok Bang!, the basics of BSG, Shadow Hunters and other games become easy — follow the money, ignore the chatter. Sure, the chatter may be helpful, but it may be intended to deceive, and until you’ve seen someone expend resources, you’ve no idea which team they are on.

Actually, I suspect most soldiers know this lesson pretty well. I really just want to get it across to voters, who seem to be taken in regularly (about once per election).

Written by taogaming

August 15, 2010 at 7:05 pm

Misc Thoughts and Links

OK, I tweeted the problem, but the long form:

Alice secretly picks two different real numbers by an unknown process and puts them in two (abstract) envelopes.  Bob chooses one of the two envelopes randomly (with a fair coin toss), and shows you the number in that envelope.  You must now guess whether the number in the other, closed envelope is larger or smaller than the one you’ve seen.

Is there a strategy which gives you a better than 50% chance of guessing correctly, no matter what procedure Alice used to pick her numbers?

I saw this a few weeks ago, and my answer was “Hell no.” I was wrong.
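(One well-known strategy, which I believe is due to Thomas Cover and may not be the puzzle author's intended answer: pick a random threshold from any distribution that puts probability everywhere on the real line, then guess “larger” if the revealed number falls below your threshold and “smaller” otherwise. Whenever the threshold happens to land between Alice's two numbers you are guaranteed to be right, and that happens with positive probability no matter what she picked. A quick simulation sketch:)

```python
# Sketch of the randomized-threshold strategy for the two-envelope guessing puzzle.
import random

def trial(a, b):
    """One round: Alice's numbers are a and b; return True if our guess is correct."""
    shown, hidden = (a, b) if random.random() < 0.5 else (b, a)
    threshold = random.gauss(0, 1)       # any distribution with full support on the reals works
    guess_hidden_is_larger = shown < threshold
    return guess_hidden_is_larger == (hidden > shown)

trials = 200_000
wins = sum(trial(0.5, 0.6) for _ in range(trials))
print(wins / trials)     # consistently a bit above 0.5 (roughly 0.517 for these two numbers)
```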

I don’t entirely trust New Scientist (they have a bit too much woo), but their article reporting that some algae have been found to use quantum processes isn’t entirely new. (I remember reading Penrose’s book about that a decade ago … and Anathem last year.)

In my general quest to read up on pop (and real) psychology of decision making, I’m going through Ariely’s Predictably Irrational. I recommend it (in general, I recommend all books that make people consider their own prejudices and blind spots), but one story jumped out at me. (Paraphrasing:) A psychologist had kids playing football outside his window, so he went out and told them how much he enjoyed watching them play, and paid them each a dollar. The next day, he did the same thing, but only paid them 50 cents each. Each day he lowered the amount he paid, until they finally said “We’re never playing here again!”

Satisfied, the psychologist went back to his office and enjoyed the quiet. He’d turned their game into a job (changing why they played from “for enjoyment” to “for money”) and then made sure it was a crappy job.

The (probably apocryphal) story reminded me why I never wanted to blog “professionally” (and why I resisted all urges to own a game store, etc. etc.). (I may use that excuse if I never finish my Homesteaders article.)

Written by taogaming

February 11, 2010 at 8:14 pm