Daniel Bernoulli’s Utility

Consider the following game. Keep flipping a coin until it shows heads. You receive \(2^n\) dollars where \(n\) is the number of tails you saw. For example, if you flip tails 4 times and then heads, then you get $16.

How much would you pay to play?


This is the famous St. Petersburg paradox. It’s a paradox because the expected payout is:

\[ \frac{1}{2} \times 1 + \frac{1}{4} \times 2 + \frac{1}{8} \times 4 + \cdots = \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \cdots \]

which increases without bound, suggesting you should be willing to sacrifice everything just to play this game!
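
We can watch the divergence in action with a quick Python simulation (the function name, seed, and checkpoints below are my own choices, purely for illustration): the running average payout refuses to settle near any particular value, and tends to creep upward as more games are played.

    import random

    def play_once():
        # Flip until heads; the payout is 2**(number of tails seen).
        tails = 0
        while random.random() < 0.5:  # call this outcome tails
            tails += 1
        return 2 ** tails

    random.seed(1)  # arbitrary seed, for reproducibility
    total = 0
    for i in range(1, 1_000_001):
        total += play_once()
        if i in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
            print(f"average payout after {i} games: {total / i:.2f}")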

It’s like Zeno’s dichotomy paradox, but cleverly counteracts repeated halving with repeated doubling in order to shove infinity in our face.

The futility of utility

Daniel Bernoulli analyzed this problem in Specimen Theoriae Novae de Mensura Sortis in 1738. An English translation, Exposition of a New Theory on the Measurement of Risk, appears in Econometrica, Volume 22, Issue 1 (January 1954).

This work is famous for the wrong reasons. A common narrative is that Bernoulli resolved the St. Petersburg paradox by defining utility and then arguing one should maximize expected utility rather than expected wealth. This is only partly true.

Bernoulli did indeed introduce a good definition of utility along with good reasons for maximizing utility instead of wealth. However, Bernoulli failed to resolve the paradox. As Karl Menger observed in the 1930s, utility merely moves the embarrassing infinite expectation elsewhere, which is immediately apparent if we change the rewards from dollars to utiles, a unit that measures utility. [Did you think a layer of indirection would fix the problem? This isn’t software engineering!]

One might try to dodge Menger’s trick by mandating a finite supply of utiles for one reason or another, but then why not insist upon a finite supply of dollars in the first place? Moreover, the latter is true, while the former feels like a made-up restriction on a made-up thing.

The largest lottery jackpot ever won in the US was about $2 billion, that is, under \(2^{31}\) dollars. Suppose we cap the St. Petersburg lottery so that the 32nd flip always counts as heads. Then the expected payoff is $16.50, a long way from infinity.
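
Here is a quick Python check of that figure; the cap and the $16.50 come from the paragraph above, and the code merely redoes the arithmetic.

    # At most 31 tails are possible when the 32nd flip always counts as heads.
    expected = sum(0.5 ** (n + 1) * 2 ** n for n in range(31))  # n = 0..30 tails, then heads
    expected += 0.5 ** 31 * 2 ** 31  # 31 tails, then the forced heads on the 32nd flip
    print(expected)  # 16.5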

Although it turns out we should not actually pay $16.50 to play (though the obscenely rich should pay close to this amount), the point is that the paradox vanishes when we inject the tiniest dose of reality.

Centuries after Bernoulli, von Neumann and Morgenstern would strengthen the case for maximizing the expected value of some kind of utility function, though not necessarily Bernoulli’s utility. Their utility, and that of economists in general, is a magic number that tells us how to live our lives. This may be theoretically sound, but I suspect it has less relevance in practice. For example, it is well known that plenty of exercise and eating right leads to longer, healthier lives. This surely increases economic utility, yet how many of us follow this advice?

We deliberately limit our scope. We focus on maximizing the part of utility that depends on wealth, namely Bernoulli’s utility. How do factors besides wealth, such as health, affect well-being? How should wealth be spent? Not our problem!

Log Money is Power

What should Bernoulli’s work be remembered for?

In section 10, he states that, for a small gain in wealth:

  1. The gain in utility is proportional to the gain in wealth.

  2. The gain in utility is inversely proportional to existing wealth.

That is, if \(y\) is utility and \(x\) is wealth:

\[ dy = k \frac{dx}{x} \]

for some constant \(k\). Integrating yields:

\[ y = k \log x + c \]

for some constant \(c\). Changing \(k\) changes the base of the logarithm, while changing \(c\) amounts to scaling wealth by some factor before taking the logarithm. Neither matters if the goal is to maximize \(y\) or to calculate means. We can dodge any psychological effects from the choice of constants by converting utility back to ducats or dollars.

For now, choose \(k = 1\) and \(c = 0\) so utility is just the natural logarithm of wealth:

\[ y = \log x \]

This is what Bernoulli’s paper should be remembered for.

In the St. Petersburg paradox, utility is a red herring. In real life, Bernoulli’s utility is a good measure of the power of one’s money. When I deposit money in a bank, the interest I earn is a percentage of the amount. On the other end of the spectrum, when a market genius manages money in a skyrocketing fund, they still advertise the return as a percentage of the amount invested. Rewards are geometric, not arithmetic ("the rich get richer"). As Napier noted, in such cases, calculation is easier with logarithms.

Why do we often think the power of money is arithmetic? Perhaps it’s because everyday decisions are on tiny scales where linear approximations suffice. Six cans of green beans cost twice as much as three cans. But would the cost increase linearly if we wanted a million cans or so?

Log wealth figures might never catch on, but perhaps we can still do better when quoting dollar amounts. For instance, published figures showing the mean wealth of various groups typically use the arithmetic mean. A common complaint is that such statistics look misleadingly high due to a few outliers.

In fact, the arithmetic mean is already questionable with just two data points. Suppose Smith has $1 and Jones has $10000. Then the arithmetic mean is roughly $5000. Yet it’s much easier to grow $5000 into $10000 than it is to grow $1 into $5000 ("the first million is the hardest").

It may be better to examine the arithmetic mean of their log wealth, which we can convert back to dollars via exponentiation. The result is also known as the geometric mean:

\[ \exp \frac{\log x + \log y}{2} = \sqrt{xy} \]

In this case the geometric mean is $100. Saving $1 and $100 in accounts that ultimately grow one-hundred-fold will result in exactly $100 and $10000. In other words, the geometric mean is a reasonable halfway marker between Smith and Jones, because it is equally challenging for Smith to reach $100 as it is to start from $100 and end up keeping up with Jones.
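
In Python, the comparison between the two means looks like this (a tiny sketch using the Smith and Jones figures above; the variable names are mine):

    from math import exp, log, sqrt

    smith, jones = 1, 10_000
    arithmetic = (smith + jones) / 2                   # about 5000
    geometric = exp((log(smith) + log(jones)) / 2)     # mean of log wealth, back in dollars
    print(arithmetic, geometric, sqrt(smith * jones))  # the last two agree (up to rounding): 100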

Bernoulli’s paper walks through a few examples to demonstrate how his definition of utility can guide decisions.

Pro tip: familiarity with Jensen’s inequality can improve one’s intuition with utility, as the log function is concave.

Double Trouble

If our life savings amounts to 100 ducats, should we place an even money bet of 50 ducats on a coin toss? The expected log wealth is:

\[ \frac{\log 50 + \log 150}{2} \]

This is less than \(\log 100\), thus even though the bet is fair, our expected utility decreases. Bernoulli advises the only winning move is not to play, calling it "Nature’s admonition to avoid the dice altogether".

We exponentiate to express this number in ducats:

\[ \exp \left( \frac{\log 50 + \log 150}{2} \right) = 86.60\ldots \]

namely, the geometric mean of 50 and 150. We can think of the bet as being as unpleasant as losing about 13 of our 100 ducats.
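
The same arithmetic in a few lines of Python (a sketch of the numbers above; exponentiating the expected log wealth converts it back to ducats):

    from math import exp, log

    wealth, stake = 100, 50
    expected_log = (log(wealth - stake) + log(wealth + stake)) / 2
    print(expected_log < log(wealth))  # True: the fair bet lowers expected utility
    print(exp(expected_log))           # 86.60..., as unpleasant as losing about 13 ducats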

On a fair coin toss, if the penalty is to lose half our wealth, then according to Bernoulli, the reward must at least double our wealth:

\[ \frac{\log \frac{x}{2} + \log 2x}{2} = \log x \]

Reportedly, some believe "risk-neutral" means that one should always accept a double-or-nothing bet, and even risk destroying the world so long as there is at least a 50% chance of making it twice as good. In contrast, Bernoulli suggests no finite improvement is worth wagering the whole world, and if there is a 50% chance of making the world twice as good, the largest penalty a risk-neutral party can tolerate is making the world twice as bad. Double-or-half; not double-or-nothing.

What if we can bet less than the whole world? That is, how much should we wager on a fair coin toss where, on losing, half our stake is generously returned to us? If we start with 100 ducats, we should maximize:

\[ \frac{\log (100 - \frac{x}{2}) + \log (100 + x)}{2} \]

which happens when \(x = 50\). In general, when offered this bet, rather than greedily attempt to double our wealth in one go, it’s best to risk only 25% of our wealth, and hope for a 50% increase.
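
We can confirm the best stake numerically. Here is a brute-force Python scan over whole-ducat stakes (an illustration under the setup above; the helper name is mine):

    from math import log

    def expected_log_wealth(stake, wealth=100):
        # Fair coin: lose half the stake, or win the whole stake.
        return (log(wealth - stake / 2) + log(wealth + stake)) / 2

    print(max(range(0, 101), key=expected_log_wealth))  # 50: risk 25 ducats, hope to gain 50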

Such lopsided outcomes might seem too good to be true. However, economists have modeled stock prices with geometric random walks, which would imply the stock market offers deals of this nature. Maximizing log wealth then corresponds to maintaining a fixed ratio of cash to stocks, a strategy that some call Shannon’s Demon.
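
Here is a toy simulation in that spirit, a minimal sketch rather than a trading strategy: assume a hypothetical asset that doubles or halves each period with equal probability, and compare holding it outright against rebalancing to keep half our wealth in cash.

    import random
    from math import log

    random.seed(2)  # arbitrary seed
    periods = 10_000
    hold = rebalanced = 1.0
    for _ in range(periods):
        move = 2.0 if random.random() < 0.5 else 0.5  # asset doubles or halves
        hold *= move                                  # everything in the asset
        rebalanced *= 0.5 + 0.5 * move                # rebalance to half cash, half asset
    print(log(hold) / periods)        # hovers around 0
    print(log(rebalanced) / periods)  # about log(1.125) / 2, roughly 0.06 per period

Rebalancing converts the asset’s volatility into steady growth in log wealth, which is the demon at work.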

A Sinking Feeling

Caius wants to ship goods from overseas worth 10000 rubles. There is a 5% chance that disaster will befall the ship. Should he pay 800 rubles to insure the goods?

Let \(x\) be Caius' current wealth, excluding the faraway goods. Without insurance, his expected utility is:

\[ 0.95 \log (x + 10000) + 0.05 \log x \]

With insurance, his utility will be \( \log (x + 9200) \).

These are equal when \(x \approx 5043 \), thus Caius should only buy insurance if he has less than this amount.

Bernoulli proceeds with a similar calculation to show that, on the other side, the insurer must have at least 14243 rubles or so for the deal to make sense.
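
Both thresholds can be recovered numerically. Here is a Python sketch using bisection on the break-even equations (the 5043 and 14243 figures come from the text; the helper function and search bounds are my own):

    from math import log

    def bisect(f, lo, hi, steps=100):
        # Assumes f changes sign between lo and hi.
        for _ in range(steps):
            mid = (lo + hi) / 2
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    # Caius: going uninsured beats paying 800 rubles once his other wealth x is large enough.
    buyer = lambda x: 0.95 * log(x + 10000) + 0.05 * log(x) - log(x + 9200)
    print(bisect(buyer, 1, 100_000))  # about 5043

    # The insurer, with wealth y, collects 800 rubles and pays out 10000 with probability 5%.
    insurer = lambda y: 0.95 * log(y + 800) + 0.05 * log(y - 9200) - log(y)
    print(bisect(insurer, 9_201, 1_000_000))  # about 14243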

(He then repeats this exercise with insurance that costs 600 rubles, and observes it would be insane to charge under 500 rubles!)

How do you like your eggs?

Sempronius has 4000 eggs in a safe location and wishes to bring home another 8000 eggs, which must be transported in baskets. Each basket has a one-in-ten chance of catastrophic failure that breaks all the eggs within.

The expected number of eggs he will eventually have is:

\[ 4000 + 0.9 \times 8000 = 11200 \]

no matter how many baskets are used.

However, Bernoulli’s utility tells a different story. (And Bernoulli himself also told a different story: for fun, I changed ducats to eggs.) If Sempronius puts all his eggs in one basket, his expected utility is:

\[ 0.9 \log 12000 + 0.1 \log 4000 \]

Exponentiating yields 10751.5…​ eggs.

If divided evenly into two baskets:

\[0.9^2 \log 12000 + 2(0.9)(0.1) \log 8000 + 0.1^2 \log 4000\]

In eggs, this is 11033.5…​

In other words, from a log wealth viewpoint, extra baskets are worthwhile, even though the expected number of surviving eggs is identical.

As the number of baskets increases, the AM-GM inequality implies the expected utility converted back to eggs approaches but never exceeds 11200.
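
Here is a short Python sketch that computes the geometric-mean egg count for any number of equally loaded baskets, using the binomial distribution of surviving baskets (the function name and defaults are mine; the one- and two-basket values match the figures above):

    from math import comb, exp, log

    def geometric_mean_eggs(baskets, safe=4000, shipped=8000, p_fail=0.1):
        per_basket = shipped / baskets
        expected_log = 0.0
        for survivors in range(baskets + 1):
            prob = comb(baskets, survivors) * (1 - p_fail) ** survivors * p_fail ** (baskets - survivors)
            expected_log += prob * log(safe + survivors * per_basket)
        return exp(expected_log)

    for k in (1, 2, 10, 100):
        print(k, geometric_mean_eggs(k))  # 10751.5..., 11033.5..., then creeping toward 11200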

Does Spare Change Change?

It seems incontrovertible that an increase in wealth is proportional to the increase in utility. Indeed, if you meet someone who believes the opposite, then you’re in luck: you can volunteer to take what they consider to be excess wealth off their hands!

Perhaps because I was a student for so long, it was less obvious to me that the increase in utility is inversely proportional to current wealth. But after working in industry for a while, I accrued some savings, and the meaning of "spare change" gradually shifted under my feet. Amounts that once felt substantial became pittances. A coupon for $5 off some purchase or other grew less attractive over time, though I still got a kick from using one.

I don’t know why I naturally think this way, and I’m a little surprised that I do, as I suspect my younger self might have claimed the meaning of "spare change" would never change for him! Luckily, Bernoulli provides justification for my attitude, along with a precise mathematical statement.

Suppose Alice has one million Australian dollars, and Bob has one million British pounds. If Alice receives one Australian dollar and Bob receives one British pound, then they ought to gain the same utility. Now, let’s say one British pound is worth exactly two Australian dollars. Since the gain in utility is proportional to the gain in wealth, if Alice were to receive one British pound instead, she would gain twice the utility she would from one Australian dollar, that is, twice what Bob gains from his pound. The exchange rate also implies Alice has half of Bob’s wealth, hence the gain in utility is inversely proportional to current wealth.

Money makes more converts than reason (and converts make more money)

Two centuries passed before log wealth was rescued from obscurity. See William Poundstone, Fortune’s Formula.

  • 1951 - Henry Latané realizes Bernoulli’s utility could be applied to stock portfolios.

  • 1954 - Leonard Savage calls the log curve a "prototype for Everyman’s utility function", having been convinced by Latané.

  • 1956 - John Larry Kelly Jr. publishes A New Interpretation of Information Rate, arguing for maximizing log wealth.

  • 1959 - Harry Markowitz publishes a well-known finance book with a chapter dedicated to maximizing log wealth, citing Latané.

This promising start was sabotaged by economists led by Paul Samuelson, who eloquently (by economists' standards) criticized Bernoulli’s thinking in his journal article Why We Should Not Make Mean Log of Wealth Big Though Years to Act Are Long.

Not everyone was fooled by the economists. Some of these sensible few amassed spectacular fortunes, vindicating Bernoulli’s ideas.

Equally spectacular was LTCM, a bond fund allegedly run using state-of-the-art economic theory, whose founders included state-of-the-art economists. It lived so fast and died so young that an emergency bailout ensued.

So the tide may be turning. One LTCM insider estimated he had bet over 80% of his family’s liquid wealth on the venture. He later co-wrote a book he wished he had read before his ill-fated gamble: Victor Haghani and James White, The Missing Billionaires.

Paul would roll in his grave if he knew this book says: use log wealth to size bets right. Do not put all eggs in one place.

However, Haghani and White miss at least one mark. While they acknowledge the importance of Bernoulli’s utility, they perpetuate the myth that it resolves the St. Petersburg paradox. Funnily enough, they dismiss the utile version of the paradox because it is:

a combination of using a generic utility function that doesn’t quite match a person’s true utility at extremes of wealth, with setups that tend to exploit those extremes, such as having an infinitesimal probability of making more than all the money in the world or losing everything.

Agreed. But why not say this sort of thing about the original paradox?


Ben Lynn blynn@cs.stanford.edu 💡