From Tim Harford’s blog:
“Game theorists know all about the centipede game:
One instance of the centipede game is as follows. A pile of $4 and a pile of $1 are lying on a table. Player I has two options, either to “stop” or to “continue.” If he stops, the game ends and he gets $4 while Player II gets the remaining dollar. If he continues, the two piles are doubled, to $8 and $2, and Player II is faced with a similar decision: either to take the larger pile ($8), thus ending the game and leaving the smaller pile ($2) for Player I, or to let the piles double again and let Player I decide. The game continues for at most six periods. If by then neither of the players has stopped, Player I gets $256 and Player II gets $64. Figure 1 depicts this situation. Although this game offers both players a very profitable opportunity, all standard game theoretic solution concepts predict that Player I will stop at the first opportunity, getting just $4.
Except, nobody really thinks this is the way players would behave in reality. The optimal strategy seems sociopathic; isn’t it worth playing cooperatively in the hope that the other player will do the same thing? (Unlike much real human interaction, standard game theory does not accommodate the “hope” that someone else will play suboptimally: optimal play is to be expected at all times.)”
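The “stop at the first opportunity” prediction comes from backward induction: the last mover prefers to stop, so the second-to-last mover prefers to stop, and so on back to the first move. A minimal sketch of that reasoning for the six-period game described above (function and variable names are my own, not from the source):

```python
def backward_induction(periods=6, big=4, small=1):
    """Solve the six-period centipede game by backward induction.

    Returns each period's best action for the mover, plus the
    predicted (Player I, Player II) payoffs.
    """
    # If nobody ever stops, the piles have doubled once per period
    # and Player I ends up with the big pile: ($256, $64) here.
    scale = 2 ** periods
    continuation = (big * scale, small * scale)  # payoffs as (I, II)

    plan = {}
    for t in range(periods, 0, -1):
        mover = 1 if t % 2 == 1 else 2  # Player I moves in odd periods
        b, s = big * 2 ** (t - 1), small * 2 ** (t - 1)
        stop_payoff = (b, s) if mover == 1 else (s, b)
        # The mover stops whenever stopping pays at least as much as
        # letting play continue (given optimal play afterwards).
        if stop_payoff[mover - 1] >= continuation[mover - 1]:
            continuation = stop_payoff
            plan[t] = "stop"
        else:
            plan[t] = "continue"
    return plan, continuation

plan, outcome = backward_induction()
# Every period's best reply is "stop", so the subgame-perfect
# prediction is Player I stopping immediately with payoffs (4, 1).
```

Running it confirms the claim in the quote: each mover's best reply is to stop, so the predicted outcome is ($4, $1) despite $256 sitting at the end of the path.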
Game theory is very clever and very useful, but often seems very naive. When it’s used in economics, it’s arguably the part of economics most hamstrung by the scattershot application of the “money=utility” fallacy. If you want your game theoretic result to be predictive or descriptively powerful, you must (must must) try really hard to make the payoffs reasonably accurate; in Harford’s quoted example the assumption is that the players care only about cash and that, as Harford says, they aren’t willing to take a shot on the other player prolonging the game. At the risk of being tautologically critical: can you read the setup of that game and not entertain the idea of waiting? I remember being taught the centipede game in David Myatt’s excellent game theory course as an undergrad; he showed us the ‘crazy centipede’ variant, which asked exactly that: what chance of you choosing to continue the game is enough to make me also want to continue?
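That break-even question has a simple arithmetic answer in a one-step version of the game. Suppose (my simplifying assumption, not the course’s actual model) that if Player I continues at period 1, Player II stops with probability 1 − q (leaving Player I $2) and continues with probability q, after which Player I stops at period 3 and takes $16. Continuing beats the sure $4 whenever the expected value does:

```python
def min_continuation_prob(stop_now=4.0, if_opponent_stops=2.0,
                          if_opponent_continues=16.0):
    """Smallest probability q of the opponent continuing that makes
    continuing worthwhile, solving:
        (1 - q) * if_opponent_stops + q * if_opponent_continues >= stop_now
    """
    return (stop_now - if_opponent_stops) / (if_opponent_continues - if_opponent_stops)

q = min_continuation_prob()
# q = 2/14, i.e. roughly a 14% chance of a "crazy" opponent is
# already enough to justify continuing in this one-step version.
```

So the sociopathic-looking strategy is only optimal if you are very confident the other player is playing the textbook equilibrium, which is exactly the pressure point the ‘crazy centipede’ variant exploits.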
The kicker to me is that ‘game theoretic predictions’ are very often less successful for the players than alternative strategies, even when we measure ‘success’ in the same cash-payoff terms as the theory. This is just what Harford goes on to describe:
But Ignacio Palacios-Huerta (best known to Undercover Economist readers as discovering that strikers and goalkeepers play optimal strategies in penalty-taking) and Oscar Volij gave the centipede game to skilled chess players. They found that the chess players were far more likely to play optimally; grandmasters always played optimally and took the $4. Hyper-rationality can be a disadvantage. (Or did the experiment discover something else: that chess grandmasters are sociopaths?) Palacios-Huerta and Volij don’t speculate. My guess is that they have discovered something about the rationality rather than morality or empathy of chess players, but I may be wrong.
It really does just beg for the ‘behavioral economics’ explosion: if the theory’s predictions aren’t accurate, and in any case pay out less than what people actually do, we’re up the creek without a paddle or a boat.