### The St. Petersburg Game

The other day, someone I used to know at uni and I were discussing the "St. Petersburg Game". Basically, you repeatedly toss a coin until a head appears, and the payout is 2^n, where n is the number of tails thrown before the head appeared. How much would you pay to play this game? In theory, the expectation of this game is infinite, so if you're fully risk-neutral and your counterparty can always pay up, you'd pay any amount.
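The mechanics and the divergent expectation are easy to check numerically; here's a minimal sketch (the function name is mine):

```python
import random

def st_petersburg_payout(rng):
    """One play: toss until heads, paying 2**n for the n tails seen first."""
    n = 0
    while rng.random() < 0.5:  # tails
        n += 1
    return 2 ** n

# Each outcome contributes P(n tails then heads) * payout
#   = (1/2)**(n+1) * 2**n = 1/2
# to the expectation, and there are infinitely many outcomes, so it diverges.
```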

So, there are two ways to generate a more realistic price to pay for the game. My argument was that I'm risk-averse, and so I have some utility function for my money. I might buy a one-pound lottery ticket with million-to-one odds of a million-pound payout; I wouldn't buy a thousand-pound ticket with thousand-to-one odds of the same payout, even though the expectation is identical. My argument was that for one go at the game, I wouldn't want to pay more than 3 or 4, because I'm not really interested in remote chances of earning vast amounts; I'm interested in a reasonable chance of not making a big loss.

His argument was that credit needs to be taken into account. Beyond a certain point, the party paying out would go bust, so you might as well truncate the series that generates the expectation at that point. In other words, if the payout is capped at about a million (2^20), the fair value is 11. This only makes sense if I'm playing repeatedly, but apparently that's the case he was thinking about. Certainly, if I'm playing repeatedly, I'd pay more than 3 or 4, since I'd have a much better chance of seeing some of the big payouts which make it worth it (this may sound woolly, but I'll try to make it a bit more concrete later on).
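The capped fair value is easy to verify: each run short enough to be paid in full contributes 1/2 to the expectation, and all the longer runs together contribute the cap's probability-weighted value. A quick check (function name mine):

```python
def capped_fair_value(cap_exponent):
    """Expected payout when the counterparty can pay at most 2**cap_exponent."""
    # Runs of n <= cap_exponent tails each contribute (1/2)**(n+1) * 2**n = 1/2.
    below_cap = sum(0.5 ** (n + 1) * 2 ** n for n in range(cap_exponent + 1))
    # Every longer run just pays the cap of 2**cap_exponent.
    tail = 2 ** cap_exponent * 0.5 ** (cap_exponent + 1)
    return below_cap + tail

# A credit limit of about a million (2**20) gives 21 * 0.5 + 0.5 = 11.
```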

So, if we're playing repeatedly, fair value isn't what we care about, really. What we care about is the end-state. We assume that the two players will keep going until one of them gets wiped out. Unfortunately, my maths isn't good enough to work out closed-form solutions for the probability of wipe-out given different starting account values, so I wrote a simulation instead. I hacked it up fairly quickly, so it's probably annoyingly buggy, but I hope I haven't made an error fundamental enough to invalidate the whole thing.
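My hacked-up code isn't reproduced here, but a reconstruction of the idea might look like the following - play at a fixed price until one side busts, and bisect on the price, reusing the same seed so every candidate price sees the same coin sequence. The bisection bounds, the play cap, and settling price and payout together each go are all my own simplifying assumptions:

```python
import random

def payout(rng):
    """One go: 2**n, where n is the number of tails before the first heads."""
    n = 0
    while rng.random() < 0.5:
        n += 1
    return 2 ** n

def winner(seed, price, me, you, max_plays=1_000_000):
    """+1 if I wipe you out at this price, -1 if you wipe me out, 0 on timeout."""
    rng = random.Random(seed)  # same seed => same coin stream at every price
    for _ in range(max_plays):
        p = payout(rng)
        me += p - price   # price and payout settled together, each go
        you += price - p
        if me <= 0:
            return -1
        if you <= 0:
            return +1
    return 0

def breakeven(seed, me, you, lo=1.0, hi=40.0, iters=30, max_plays=1_000_000):
    """Bisect for the price separating 'I win' from 'you win' on one stream."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if winner(seed, mid, me, you, max_plays) > 0:
            lo = mid  # still wiping you out - I could afford to pay more
        else:
            hi = mid
    return (lo + hi) / 2
```

Sorting breakeven(seed, 1_000_000, 1_000_000) over a thousand seeds then gives the sort of percentile tables that follow.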

What I'm doing is a set of simulations, where for each simulation I find the break-even level: if I paid more, I'd go bust first, and if I paid less, you'd go bust first. I collected the results of 1000 simulations, assuming that we each start with one million. I then took percentiles of the break-even price, and also the percentiles at which a few round prices fall:

| Percentile | Price       |
|------------|-------------|
| 10%        | 9.948271666 |
| 20%        | 10.18424126 |
| 30%        | 10.43704421 |
| 40%        | 10.66959518 |
| 50%        | 10.98563359 |
| 60%        | 11.36414737 |
| 70%        | 11.84457506 |
| 80%        | 12.97528121 |
| 90%        | 16.04786940 |

| Price | Percentile |
|-------|------------|
| 10.5  | 32.90%     |
| 11    | 50.20%     |
| 11.5  | 62.7%      |

So, 10% of the time, if we both start with a million, I'd lose even if I paid 9.95, and 10% of the time, you'd lose even if I paid 16.05. It looks like the price which gives us even chances of wiping out our opponent is actually near 10.99, which looks suspiciously like our fair value!

This is, of course, assuming that I'm willing to risk losing a million in order to gain a million. What happens if my pockets are much deeper, or shallower, than my opponent's (or I just want to place my stop-loss closer)? Why, I'll just run my simulation again with different parameters... first up, I have 100K with which to steal your million:

| Percentile | Price       |
|------------|-------------|
| 10%        | 8.288661462 |
| 20%        | 8.560000916 |
| 30%        | 8.744448393 |
| 40%        | 8.959470526 |
| 50%        | 9.151535916 |
| 60%        | 9.403354633 |
| 70%        | 9.664265445 |
| 80%        | 10.00022710 |
| 90%        | 10.78195451 |

So, even if I paid 10, rather than the fair value of 11, I'd still get wiped out 80% of the time. And if I have 1M with which to try to remove your 100K?

| Percentile | Price       |
|------------|-------------|
| 10%        | 10.13697085 |
| 20%        | 10.64982176 |
| 30%        | 11.07549714 |
| 40%        | 11.53903907 |
| 50%        | 12.14243862 |
| 60%        | 13.05499767 |
| 70%        | 14.48813500 |
| 80%        | 16.87636127 |
| 90%        | 25.58518196 |

So, if I were willing to lose 1M half the time in order to win 100K the other half, I'd be happy to pay about 10% over the fair price associated with you having 1M. It's not surprising that I'm willing to pay more if I'm happy to take the same risk for less reward. However, another way to look at it is that if we both had 100K, I'd be willing to pay around 9.3 to play for a 50% chance of winning - i.e. an expectation of 0. At the 1M level, this price is nearer 11. If I want the expectation to be zero when winning 100K or losing 1M, I need to win 90.9% of the time - which corresponds to roughly the 10% percentile price of 10.1. This happily sits between the two bounding situations. No real shock there.
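The 90.9% figure is just the zero-expectation condition for an asymmetric bet (variable names mine):

```python
# Win W with probability p, lose L with probability 1 - p; the expectation
# is zero when p * W == (1 - p) * L, i.e. p = L / (W + L).
W, L = 100_000, 1_000_000
p = L / (W + L)  # 10/11, i.e. I need to win about 90.9% of the time
```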

Switching tacks, one of the things I noticed when running these simulations is that it can take several million plays to bankrupt someone if you start with a million each. If you really were throwing coins, you'd give up rather sooner. Another way of approaching the problem is to see what the percentiles of the break-even prices are if we make each game consist of a fixed number of plays. I've tabulated the percentiles of prices for 1K, 10K, 100K and 1M plays per game:

| Percentile | Breakeven on 1K | Breakeven on 10K | Breakeven on 100K | Breakeven on 1000K |
|------------|-----------------|------------------|-------------------|--------------------|
| 10%        | 4.552           | 6.1909           | 7.83347           | 9.517169           |
| 20%        | 4.946           | 6.6637           | 8.35954           | 9.914645           |
| 30%        | 5.275           | 7.0939           | 8.69940           | 10.315357          |
| 40%        | 5.739           | 7.5291           | 9.09153           | 10.684029          |
| 50%        | 6.158           | 7.9714           | 9.54439           | 11.188397          |
| 60%        | 6.802           | 8.5406           | 10.34503          | 11.798191          |
| 70%        | 7.626           | 9.5700           | 11.02936          | 12.726459          |
| 80%        | 9.012           | 11.0576          | 12.58205          | 14.213793          |
| 90%        | 13.258          | 14.5263          | 15.83425          | 18.443645          |
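For a fixed number of plays, the break-even price for one game is just the average payout per go, so a table like the one above can be regenerated with something like this (game count and seeding are my own choices):

```python
import random

def payout(rng):
    """One go: 2**n for n tails before the first heads."""
    n = 0
    while rng.random() < 0.5:
        n += 1
    return 2 ** n

def breakeven_prices(plays_per_game, n_games=1000, seed=0):
    """Sorted break-even prices: each game's total payout over its plays."""
    rng = random.Random(seed)
    return sorted(
        sum(payout(rng) for _ in range(plays_per_game)) / plays_per_game
        for _ in range(n_games)
    )

def percentile(sorted_prices, q):
    return sorted_prices[int(q * len(sorted_prices))]
```

With enough games, percentile(breakeven_prices(1000), 0.5) should land in the same ballpark as the 50% entry of the 1K column.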

As you can see, the price it's worth paying per go to break even at each percentile level increases with the number of goes you play before stopping. Why is this? As I said woollily before, the longer you play, the more likely you are to actually see one of the big payouts. In a way, the behaviour of this thing is rather like a log-normal distribution, where the mean and median are in different places, but more so. In each case, the mean payout is infinite, but the median slides upwards with the number of goes.

Another thing to notice is that the increase in break-even price with the number of goes matches the appearance of less-likely cases. So, if you're truncating at a certain level of unlikeliness, playing 1000 times as many goes means truncating 10 steps later (2^10 being approximately 1000), and thus the price being 5 greater. Look at the price difference between the 1K and 1M columns, and this is what you see. Extrapolating back, if I'm willing to pay 3 or 4 to play the game once, I'm effectively at the 75th percentile price. Which sounds about right - I would only be willing to lose three-quarters of the time in order to get a hopefully pretty impressive payout that last quarter of the time.
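The "half per step" is exact if you simply drop the unlikely tail rather than capping it: every step of the series contributes 1/2, so ten extra steps add five to the price. A quick check (function name mine):

```python
def truncated_fair_value(max_tails):
    """Fair value counting only runs of at most max_tails tails, tail dropped."""
    return sum(0.5 ** (n + 1) * 2 ** n for n in range(max_tails + 1))

# 1000x as many goes ~ 10 extra doubling steps (2**10 ~ 1000), hence +5:
# truncated_fair_value(30) - truncated_fair_value(20) == 5.0
```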

I think this approach provides a slightly different way of looking at the game. Remember, the above prices are without bounding the pay-offs, so we're not taking the credit approach - the expected payout is still infinite. However, if we're only interested in what happens in sensibly common cases, this is more useful. This is basically a VaR approach. The credit approach can also be looked at as finding the expectation contingent on being in a particularly large percentile. Mathematical modelling in finance has always been rather poor at getting the behaviour in the tails right...

Posted 2007-12-09.