I don’t know how many of the millions of people who’ve bought Daniel Kahneman’s Thinking, Fast and Slow have read it, but I have and I thought it was very interesting. One of the things he talks about in chapters 25-26 is risk aversion. Lots of people won’t take a bet to either gain $200 or lose $100 on a coin-toss, and that seems to mean they’re risk averse. They stand to gain more than they stand to lose, and the chances are equal, but they won’t take that chance. Regular readers may remember risk aversion coming up once before when I was talking about Deal or No Deal.
Kahneman says that for a long time economists thought (or at least idealized) that people were risk averse when it came to money, but not when it came to utility. Your first million makes a bigger difference to you than your second, and maybe it even makes a bigger difference than your second and third put together. In view of that, maybe your last $100 makes more of a difference than your next $200. If that’s right, you’re not rejecting the bet out of risk aversion; you’ve just got a proper appreciation of the diminishing marginal utility of money.
The problem with this line of thought is that while it can rationalize bets which seem sensible instances of monetary risk aversion, it can only do so by attributing to people utility functions which also rationalize insane-seeming pieces of (monetary) risk aversion. Matthew Rabin showed this in a technical paper, and he and Richard Thaler wrote an entertaining paper about it which references Monty Python’s dead parrot sketch. The idea is that if diminishing marginal utility of money is all that is going on, then someone can’t rationally reject one fairly unattractive bet without rejecting another very attractive bet. Their first example is that if someone will always turn down a 50-50 shot at gaining $11 or losing $10, then there’s no amount of money they could stand to win which would induce them to take a 50% risk of losing $100. They have several other examples, including ones which remove the ‘always’ caveat, only demanding that they would still turn down the first bet even if they were quite a bit richer than they are now. The basic idea is that the utility of money has to tail off surprisingly quickly to rationalize rejecting the small bet, and if it tails off that quickly you’ll have to make odd decisions when the stakes are high. They’ve thought of objections, and the reasoning is hard (for me) to argue with.
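You can see the calibration at work with a small sketch. This is my own illustration, not Rabin’s proof (which covers every concave utility function): take the CARA family u(w) = −e^(−aw), whose accept/reject decisions conveniently don’t depend on starting wealth, find the least risk-averse member that still turns down the $11/$10 coin toss, and then ask what that member says about risking $100.

```python
import math

def cara(gain, a):
    """CARA utility u = -exp(-a * gain), applied to the change in wealth.
    It's concave, so it builds in diminishing marginal utility of money,
    and its accept/reject decisions don't depend on how rich you already are."""
    return -math.exp(-a * gain)

def rejects_small_bet(a):
    """Does this agent turn down a 50-50 shot at gaining $11 or losing $10?"""
    return 0.5 * cara(11, a) + 0.5 * cara(-10, a) <= cara(0, a)

# Bisect for the *least* risk-averse CARA agent who still rejects the small bet.
lo, hi = 1e-9, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if rejects_small_bet(mid):
        hi = mid
    else:
        lo = mid
a_star = hi  # roughly 0.009

# As the potential prize G grows without bound, cara(G, a) -> 0 from below,
# so the best any 50-50 lose-$100/gain-$G bet can possibly offer this agent is:
best_possible = 0.5 * 0 + 0.5 * cara(-100, a_star)

print(best_possible < cara(0, a_star))  # True: rejected whatever the prize
```

Even the gentlest utility curve in this family that rejects the small bet everywhere ends up turning down a 50% risk of losing $100 no matter how big the prize on the other side of the coin, which is exactly the sort of insane-seeming consequence Rabin and Thaler have in mind.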
Now, what Thaler and Rabin reckon is going on is loss aversion. The reason you won’t take the $100-$200 bet is that you recoil in horror at the thought of losing $100. There’s plenty of behavioural economics research (I’m told) showing that people can’t stand losing even if they’re pretty chilled about not gaining, and that’s why Thaler, Rabin and Kahneman think that’s what’s going on. Thaler and Rabin say it’s not just loss aversion either, it’s myopic loss aversion. The reason it’s myopic is that you’d take a bunch of $100-$200 bets if you were offered them at the same time, because overall you’d probably win big and almost certainly wouldn’t lose. But if that’s your strategy then you should take the bets when they arise, and in the long run you’ll probably end up on top.
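To put a number on that “almost certainly wouldn’t lose”: with n independent coin tosses, each a 50-50 shot at winning $200 or losing $100, you come out behind overall only if you win fewer than a third of them, and that probability can be computed exactly. A quick sketch:

```python
from math import comb

def prob_overall_loss(n):
    """Chance of a net loss after n independent 50-50 bets (win $200 / lose $100).
    With k wins the net result is 200*k - 100*(n - k), which is negative
    exactly when k < n/3."""
    losing_ks = range(0, (n + 2) // 3)  # every k with k < n/3
    return sum(comb(n, k) for k in losing_ks) / 2 ** n

print(prob_overall_loss(1))   # 0.5 -- a single bet loses half the time
print(prob_overall_loss(10))  # about 0.17
print(prob_overall_loss(30))  # about 0.02
```

Ten such bets cut the chance of an overall loss to roughly one in six, and thirty cut it to about one in fifty, while the expected winnings keep growing at $50 a bet.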
I agree that people are myopic, and they don’t always see individual decisions as part of a long-term strategy where losses today get offset by the same strategy’s gains tomorrow. I think Thaler and Rabin have missed something when they invoke loss aversion, though. This is because you can set up the “if you reject this bet then you’ve got to reject this attractive bet” argument without doing anything with losses. Suppose I offer people a choice of either $10 or a 50-50 shot at $21. Sure, some people will gamble, but aren’t lots of people going to take the $10? If they haven’t already, some behavioural economists should do that experiment, because if people reject the bet then Rabin’s theorem will kick in just the same as before and lead to crazy consequences. (In expected-utility terms the choice is identical to pocketing the $10 and then facing a 50-50 coin toss to lose $10 or gain $11, which is exactly the bet Rabin starts from.) The difference is that this time you can’t explain the rejection as recoiling in horror at the prospect of losing $10, because the gamble doesn’t involve losing any money. It just involves not winning some money, and people are relatively OK with that. (Notice that choosing not to gamble also involves not winning some money.) If you object that the non-gamblers want to make sure they get something, then change the set-up (if your budget stretches that far) to either $20 guaranteed or a 50-50 gamble for $10 or $31. It still works, and I bet plenty of people will still take the $20.
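The re-framing claim is easy to check numerically. Here’s a toy verification; the log utility function and the $1,000 starting wealth are just illustrative assumptions, since only the final wealth in each branch matters:

```python
import math

def u(wealth):
    """An illustrative concave utility function (log of total wealth)."""
    return math.log(wealth)

wealth = 1000  # assumed starting wealth; any positive value works

# Framing 1: a gift of $10 for sure, versus a 50-50 shot at a $21 gift.
sure_thing = u(wealth + 10)
gamble = 0.5 * u(wealth) + 0.5 * u(wealth + 21)

# Framing 2: pocket the $10, then face a 50-50 toss to lose $10 or gain $11.
after_gift = u(wealth + 10)
coin_toss = 0.5 * u(wealth + 10 - 10) + 0.5 * u(wealth + 10 + 11)

# Same final wealth in every branch, so the expected utilities coincide:
print(gamble == coin_toss and sure_thing == after_gift)  # True
```

So anyone who rejects the all-gains gamble for diminishing-marginal-utility reasons is thereby rejecting Rabin’s small mixed bet, and the theorem’s machinery applies without any loss ever being on the table.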
Now, what I think is going on is myopic risk aversion. I don’t see that there’s much wrong with risk aversion in itself. If you could choose either a life containing a million hedons or a 50-50 shot at either a thousand or two million, I’d understand if you took the million. Only a real daredevil would gamble. And when John Rawls is putting whole-life choices before people in the Original Position, he won’t assume they’re anything less than maximally risk averse. Maybe Rawls has gone too far the other way, but I’d definitely want to see a pretty good argument before believing that the cavalier attitude of the expected-something maximizer is rationally obligatory.
Now, mostly when we make decisions they’re small enough and numerous enough that a fairly cavalier strategy has a very low risk of working out badly overall. Applying original-position thinking to the minor bets offered by the behavioural economists in the pub is confused. It feels like you’ve got a 50% chance of getting the bad outcome, but seen in the context of a more general gambling habit the chances of the bad outcome are actually very small even with the cavalier strategy, and since its potential payoffs are much higher, you’d have to be very risk averse overall to turn down the gamble. You’re very unlikely to be that risk averse all things considered, although perhaps Rawls was right that it’s cheeky to make assumptions.
So that’s what I think’s going on. Loss aversion is real, but it can’t do the work Thaler and Rabin want, either in straightforward form or myopic form. I think the real culprit is myopic risk aversion. Overall risk aversion is rationally permissible, but myopia isn’t and can result in individual decisions looking more risky than they really are. Unless the stakes are really high, like on Deal or No Deal.