Showing posts with label Not so stupid math tricks. Show all posts

Thursday, January 06, 2011

Irrational Expectations

Once more into the breach: another statistics problem from The Burg:

Suppose you’ve somehow found yourself in a game of Russian Roulette. Russian roulette is not, perhaps, the most rational of games to be playing in the first place, so let’s suppose you’ve been forced to play.

Question 1: At the moment, there are two bullets in the six-shooter pointed at your head. How much would you pay to remove both bullets and play with an empty chamber?

Question 2: At the moment, there are four bullets in the six-shooter. How much would you pay to remove one of them and play with a half-full chamber?

The hardest part of this kind of problem is figuring out exactly how to frame it. Suppose, for example, that the objective here is to maximize your lifetime, and that your expected lifetime, should you survive the game, is a function f(W) of your remaining wealth W.

For question 1, then, the probability of dying is 2/6 = 1/3, so without the payoff your expected future lifetime becomes:

L = (1/3)*0 + (2/3)*f(W) = (2/3)*f(W), and

L = f(W-P) with the payoff, so P is a good bet so long as f(W-P) > (2/3)*f(W).

For question 2, the numbers become

L = (2/3)*0 + (1/3)*f(W) = f(W)/3 with no payoff, and

L = (1/2)*0 + (1/2)*f(W-P) with payoff, so once again the payoff is a good bet so long as f(W-P) > (2/3)*f(W).
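Both criteria reduce to f(W-P) > (2/3)*f(W), which is easy to verify numerically. The sketch below uses a made-up lifetime function f(W) = sqrt(W) and W = 100, purely for illustration, and scans for the largest worthwhile payoff in each case:

```python
import math

def life_no_pay(bullets, f, W):
    """Expected future lifetime if you just play: survive with probability (6 - bullets)/6."""
    return (6 - bullets) / 6 * f(W)

def life_with_pay(bullets_left, f, W, P):
    """Expected future lifetime after paying P, leaving bullets_left in the gun."""
    return (6 - bullets_left) / 6 * f(W - P)

f = math.sqrt   # illustrative lifetime function -- an assumption, not part of the problem
W = 100.0

def max_worthwhile_payoff(bullets_before, bullets_after):
    """Scan in cents for the point where paying P stops improving expected lifetime."""
    P = 0.0
    while P < W and life_with_pay(bullets_after, f, W, P) > life_no_pay(bullets_before, f, W):
        P += 0.01
    return P

# Question 1: remove both bullets from a 2-bullet gun; question 2: remove one from a 4-bullet gun.
print(max_worthwhile_payoff(2, 0), max_worthwhile_payoff(4, 3))
```

Both cases return the same threshold (about 55.56 here), as they must: the equality holds for any increasing f, since each inequality is f(W-P) > (2/3)*f(W).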

So are the situations completely equivalent? That conclusion (which is Landsburg’s, though he got there in a different fashion) is hasty. It’s entirely possible that L is more complicated than our assumption indicates. Below I give two versions of the problem which include some semi-realistic context and lead to different conclusions for the two cases.

Let’s take one more look. Usually Russian Roulette is a betting game, so there should be some sort of payoff if you win, i.e., survive.

Assume that you bet all the wealth you don’t invest in the payoff, where the bet is B and the value of a win is N*Prd*B, where N is some natural number (1, 2, etc.) and Prd is your probability of dying in the game. Now case 1 looks like this.

L = (1/3)*0 + (2/3)*f(W + N*Prd*B) = (2/3)*f(W + N*B/3) for the no-payoff case

L = f(W-P) with payoff, and the payoff is worthwhile for f(W-P) > (2/3)*f(W + N*B/3). For f(W) = W, N = 1, and B = W, this becomes W - P > 2W/3 + 2W/9, i.e. 9W - 9P > 6W + 2W, or P < W/9.

[UPDATE: Oops. I screwed this next one up. Below is a fixed up version with a different conclusion]

For case 2, again with f(W) = W, N = 1, and B = W:

L(no payoff) = (2/3)*0 + (1/3)*(W + (2/3)*W) = 5W/9

L(payoff) = (1/2)*0 + (1/2)(W-P+(W-P)/2) = 3(W-P)/4

For case 2, the payoff criterion becomes 3(W-P)/4 > 5W/9, or 27W - 27P > 20W, so the standard is P < 7W/27. Note that this means you should pay substantially more to remove the one bullet from the four-bullet gun compared to case 1.
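As a quick arithmetic check, here is a sketch with the same assumptions (f(W) = W, N = 1, B = W, wealth normalized to W = 1):

```python
# Betting version of the game; assumes f(W) = W, N = 1, B = W (illustrative choices).
W = 1.0

# Case 1 (two bullets): pay P as long as W - P > (2/3) * (W + W/3), so the threshold is
p1 = W - (2 / 3) * (W + W / 3)            # = W/9

# Case 2 (four bullets): pay P as long as (3/4) * (W - P) > (5/9) * W, so the threshold is
p2 = W - (5 / 9) * W / (3 / 4)            # = 7W/27

print(p1, p2)
```

With W normalized to 1 the thresholds come out near 1/9 ≈ 0.111 and 7/27 ≈ 0.259, so removing one bullet from the four-bullet gun is worth more than twice as much as emptying the two-bullet gun.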

Even if you don’t have any investment in the game, there are still situations in which the optimal payoff is different for the two cases.

Suppose, for example, that in addition to the probability that a bullet will kill you, there is an additional probability that you will be scared to death (heart attack, stroke, etc.) by the experience of pulling the trigger of a loaded gun pointed at your head, and that that probability is, say, p*Prd. Recalculating, we get for case #1:

L = (1/3)*0 + (p/3)*0 + (1 - p/3)*(2/3)*f(W) = (2*(3-p)/9)*f(W) with no payoff, and

L = f(W-P) with payoff, so the breakeven payoff satisfies f(W-P) = ((6-2p)/9)*f(W).

For case #2:

L = (2/3)*0 + (2p/3)*0 + (1/3)*((3-2p)/3)*f(W) with no payoff, and

L = (1/2)*((2-p)/2)*f(W-P) with payoff.

If, for example, we assume f(W) = W and p = 0.1, then the breakeven point for case 1 comes at P ≈ .355W and for case 2 at P ≈ .345W, so it really is worth a slightly higher payoff in case 1.
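The same kind of check works for the scared-to-death variant, again taking f(W) = W and p = 0.1:

```python
# Scared-to-death variant; assumes f(W) = W and fright parameter p = 0.1.
p = 0.1
W = 1.0

# Case 1 breakeven: W - P1 = ((6 - 2p)/9) * W
P1 = W * (1 - (6 - 2 * p) / 9)

# Case 2 breakeven: ((2 - p)/4) * (W - P2) = ((3 - 2p)/9) * W
P2 = W * (1 - 4 * (3 - 2 * p) / (9 * (2 - p)))

print(P1, P2)
```

P1 comes out near .355W and P2 near .345W, matching the figures above.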

The moral of this story: beware of economists peddling rational expectations. They just might not be capturing all the relevant complexities.

Saturday, March 06, 2010

Hotel Management

Let me depart for the moment from my traditional practice of abusing Steve Landsburg to note that I quite liked his post on countable and uncountable numbers. He briefly discussed the countability of the integers and rationals and presented Cantor's proof of the uncountability of the reals. One of his commenters asked the following:

Imagine that you run the front desk at a hotel with a countably infinite number of rooms. Imagine further that all of the rooms are occupied. A man shows up in the lobby and asks for a room. Can you give him a room?

The answer is yes, of course, but it should keep your bellboys busy. Interestingly enough, it's hardly any more trouble to accommodate a countably infinite number of additional guests. Still, you have to say that the manager who let this deplorable situation arise made a big mistake. How should he have run his hotel so that he didn't have to move an infinite number of people each time more guests showed up?
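This is, of course, Hilbert's hotel, and the bookkeeping amounts to two injective reassignment maps, sketched below on a finite slice of rooms (the function names are mine):

```python
def room_after_one_new_guest(n):
    """The guest in room n moves to room n + 1, freeing room 1 for the newcomer."""
    return n + 1

def room_after_countably_many_new_guests(n):
    """The guest in room n moves to room 2n, freeing every odd-numbered room."""
    return 2 * n

# Check a finite slice: each map is injective, so no two guests collide.
rooms = list(range(1, 11))
assert len({room_after_one_new_guest(n) for n in rooms}) == len(rooms)
assert len({room_after_countably_many_new_guests(n) for n in rooms}) == len(rooms)
print([room_after_countably_many_new_guests(n) for n in rooms])
```

The doubling map is the interesting one: it frees infinitely many rooms in a single move, which is what makes the countably-many-new-guests case hardly any more work than the single-guest case.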

What if he had an uncountably infinite number of rooms (for example, let each room have a real room number between 0 and 1)? Would that have made his job any harder?