Hacker News

Expected value doesn't mean jack shit if the game can only be played once.

> Expected value (also known as EV, expectation, average, or mean value) is a long-run average value of random variables.

If you can only press a button once - you should take the guaranteed money in almost all circumstances (assuming you have finances that look like most Americans - if you're already a millionaire... do what you want, this game doesn't matter much to you).

Basically, this is a dire misunderstanding of how statistics works in general. The population at large might be better off pressing the 50%-at-50-million button (because then the game is being run many times and the expected value will likely be achieved) - but as an individual, who can only roll the dice once, you are much better off just taking the immediate and guaranteed win.

And that's not even accounting for the drop off in marginal value of each dollar as you accumulate them - that first million is far more impactful than the next 49.



This is a key observation in more practical concerns like retirement planning. Often, maximizing expected value isn't actually what you want. For somebody with a comfortable retirement portfolio you care a lot more about not running out of money than ending up with a huge amount when you die. So you'll choose strategies that might have worse expected values but limit the frequency of worst case scenarios.


Rory Sutherland (behavioural science chap) made a similar point on travel. He says that when he must get to the airport on time, he takes the back roads that get him there in a guaranteed 30 minutes rather than take the freeway that will take 15 minutes 95% of the time but could be heavily congested (and inescapable) otherwise. Sometimes urgency and efficiency are at odds!


But we live in a world of real-time traffic reports and satnav, so... just do what's best on the day.


I remember looking at real-time traffic reports, but they were not future-time reports, which is what I needed. I had an appointment to make, but 500 feet in front of me: a horrible crash. No way out. Stuck for 45 minutes. A future-time report instead of a real-time one would have been the real solution.

Had I taken the back roads, if there was an accident, there would be tons of options to get to where I needed to go.

If you need to be somewhere at a certain time, do the tried and true 100% guaranteed way.

It's how it is.

So there's that.


Depends a lot on what the back roads are like, though. The freeway can be more reliable a lot of the time.


You can turn around in the road if you encounter an accident on the back roads. Encounter one on the highway and you may be stuck until it's cleared. I always go back roads if I need to make a flight.


This can be captured pretty well by taking the logarithm of each outcome's dollar figure before computing the expected value, if you ever find yourself wanting to calculate how to balance a portfolio.
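A minimal sketch of that log trick. The $10k starting bankroll is an assumption for illustration; with log utility the baseline wealth matters a lot:

```python
import math

def log_ev(outcomes):
    """Expected log10-wealth over (probability, final_dollars) pairs."""
    return sum(p * math.log10(dollars) for p, dollars in outcomes)

# Hypothetical starting bankroll of $10k; each outcome is bankroll + prize.
bankroll = 10_000
sure_million = [(1.0, bankroll + 1_000_000)]
fifty_fifty = [(0.5, bankroll + 50_000_000), (0.5, bankroll)]

print(log_ev(sure_million))  # ~6.00
print(log_ev(fifty_fifty))   # ~5.85: log utility prefers the sure million
```

Note the answer flips as wealth grows: at a $100k bankroll the 50/50 button already has the higher expected log-wealth.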


Great idea. So a 10x payout counts as +1 point of EV.


Isn’t it expected utility that matters, not expected value? From that standpoint taking $1 million guaranteed is rational unless you already have high net worth.


That's an additional concern, but even if you have a flat "utility" scale you can use for comparisons, _expected_ value still isn't necessarily the thing you want to optimize.

Maximizing the min or max outcome is a common alternative preference. Like, suppose I have a 1% chance of being tortured for a year and a 99% chance at being the next God-Emperor. I can't actually average those futures; I'll be in one or the other, and I might want to have a 0% chance of torture - or, if the odds are flipped, maybe I'm okay with being tortured for the tiny chance at being God-Emperor.

Such preferences are hard to cover under the umbrella of maximizing expected utility because doing so requires introducing (negative) infinite utility for certain outcomes. Instead, recognizing that you might be optimizing something else is a cleaner way to handle the problem.


> that first million is far more impactful than the next 49.

This is in fact the reason you should take the million.

How many times you get to play the game is irrelevant. Your whole life is filled with potential but uncertain payoffs, and you should maximise expected utility every time (where utility is not the same as dollars).


No it’s not, if you play the game 20 times you’re almost certain to win 50 million and probably a lot more. Unless your utility function is flat after 20 million it does matter.


If you play the game 20 times you'll still be better off pressing the 1 million button 5-10 times - at the start if you don't know in advance how many presses you get, or at the end if you do and haven't won big yet.


You've only got a one in a million chance of losing 20 times in a row. If you lose 19 times in a row, sure you can take the sure million on the last try. But it doesn't make any sense to take the sure million at the beginning.
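A quick sanity check of that one-in-a-million figure, as a sketch (the $25M-per-press number is just 0.5 × $50M):

```python
n = 20
p_lose_all = 0.5 ** n                # probability of 20 losses in a row
ev_per_press = 0.5 * 50_000_000      # $25M expected per gamble
ev_total = n * ev_per_press          # $500M expected over 20 presses

print(p_lose_all)   # ~9.54e-07, i.e. about 1 in 1,048,576
print(ev_total)     # 500000000.0
```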


After you've pushed "give me $1 million" on your first go, your utility function looks very different to how it did before (assuming you're not already a millionaire).


if you play the game 20 times you’re almost certain to win 50 million

You can get any result you want if you just rewrite the problem conditions ◔_◔


I replied to someone claiming that you should always go for certainty no matter how many games you play. If you get to play the game 49 times it doesn’t make much sense to go for certainty 49 times because 2^-49 (or 2^-20) is really small.


It's an interesting question (assuming you know the number of times you get to play up front).

In reality, with these numbers, the best strategy for most people who aren't already very wealthy would probably be to get a sure-fire nest egg and then play the odds.

If you have to play the same way every time, I'm not sure. Again, with these numbers, the utility function is looking pretty flat after $20 million for the vast majority of people. And "almost certain" != certain.


Not so. It can be thoroughly reasonable to make a choice that has lower expected value but has a distribution that fits your needs more closely.


If one option doesn't fit your needs then it has lower utility.


Then this is worthless as a decision making system since your decisions will always already be the one that maximizes expected utility via this unknowable transformation.


No, that doesn't follow. You might be acting in a short-sighted or impulsive way that doesn't maximize your utility. You can't just post-hoc declare that your true utility function is whatever would have led you to the decision you made.


Agreed.

And well, if everyone played the game, then the population at large would still be better off taking the million. I can well imagine there being fewer social problems if we all get a million fun bucks versus half of us getting fifty million. But then that's a different effect kicking in.

Personally, a million would affect my life positively (I'd buy a house), 50 million negatively (I'd stop working).


Would you stop working? Or would you take a break until you found something you truly wanted to work on?


My job is what I want to work on. But I'm lazy, and I know myself well enough to know that if I didn't have to go through the things I dislike about my job (hello doing performance reviews), I'd stop doing it all. And it would ultimately be to my detriment. I wish it were otherwise, but there we go.


I'm probably much older than you. My advice is to stop thinking you are lazy. If $50 million is enough for you to be comfortable for the rest of your life then take that - nothing is more valuable than time. You'll find something rewarding to do with your time. Even if you just enjoy your life, that's fine!


Not the person you replied to, but I would 100% stop working. Even when I find things I want to do (not work), those rarely last more than a month, usually a week.

Anything that expects me to wake up at the same time every day, or working for a set amount of hours, or prevents me from stopping or taking a break (weeks, not hours) when I get bored of working on it is out of the picture. That leaves zero work options as far as I'm aware.


The problem with the analysis in the article and with your analysis is that the expected utility of the player is not the same as the expected amount of money. Different people have different "utilities of money" reflecting their different risk tolerances, incomes, satiation rates (diminishing marginal utility), etc. The expected value analysis is the correct one if you use the right "value".

If you are only playing the game once, then any rational agent should attempt to maximize expected utility. Here "rational" just means that preferences are consistent in a particular way. For the purposes of this game played just once, almost all humans are rational. When humans play multiple times, they quickly lose the ability to calculate and make rational decisions.


Exactly.

Econ 101 covers expected utility, and it's one of the few pieces of useful econ theory. It's like people write these articles without an elementary understanding of the theory which might be able to sensibly explain the situation.


But the article does go over the utility and explicitly states that for many people the utility of a guaranteed 1 million dollars is greater than a 50/50 chance at 50 million, so I'm not sure what "people" you're talking about or if you even bothered to read the article.


The people who write articles like this, call it "why people make dumb decisions", and don't appear to understand this is a well understood area.

It's written as though they stumbled across this esoteric idea written by some dude 100 years ago. There is a whole set of papers, Bernoulli is one of the guys who did research on it and there are a bunch more. There is a whole field, the ideas have been pulled together. It's introduced in any half decent microeconomics class.

What's next? An article suggesting maybe we can predict how long it will take an apple to hit the ground? That some guy named Newton penned a few useful ideas on it back in the day? And pretending "physics" isn't a field?


“Go big or go home” comes to mind. Most people I know would take the 50/50 chance. In the worst case, nothing in their life changes. If they take the million dollars, something is going to change :)

I’m also reminded that “people are happier when a choice is made for them” or some other thing I’ve heard thrown around.


And expected utility (and decreasing marginal utility of money) does a good job of explaining why most people would change behaviors as you scale the numbers involved even if you keep the ratio of expected values the same.


> Expected value doesn't mean jack shit if the game can only be played once.

Thinking like this was a mistake I made.

While you can play a given game only once, your life will have plenty of such games. So there definitely is a relevance to "expected value". And this is easy to simulate with a program: the expected wealth of those who take the chance whenever the "local expected value" beats the certain outcome does tend to be higher.
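One way to run that simulation, as a sketch (the 50 games per lifetime and the trial count are arbitrary assumptions):

```python
import random

def avg_lifetime_wealth(strategy, n_games=50, trials=20_000, seed=0):
    """Average final wealth when every game offers: sure $1M vs 50% of $50M."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        wealth = 0
        for _ in range(n_games):
            if strategy == "sure":
                wealth += 1_000_000
            elif rng.random() < 0.5:   # "gamble": coin flip for $50M
                wealth += 50_000_000
        total += wealth
    return total / trials

print(avg_lifetime_wealth("sure"))    # 50000000.0, deterministic
print(avg_lifetime_wealth("gamble"))  # ~1.25e9 on average
```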


Well, life doesn’t always give many chances to play a game. You can only work at so many failed startups, or have so many failed long-term romantic relationships before you’ve used your best years! Someone else already made the point about the risk of walking away empty handed, but I’m just pointing out that some domains allow for many retries and some don’t.


This represents the trap of over-rationalisation which is so prevalent in the Western world. You cannot devise universal rational guidelines suitable for every situation and every subjective experience. There is a multitude of various different factors involved in every particular situation. The lean and precise rational model breaks badly simply because it doesn’t (and can’t) account for all the factors.


> Well, life doesn’t always give many chances to play a game.

Disagree. Sure - you don't get many games involving millions of dollars, but you do get many for smaller amounts.

I could put all my extra money into paying off a low interest mortgage (guaranteed return), or I could put it in an index fund (higher average return, with no guarantees, and a potential for a loss).

And working at startups: Not sure the expected value is high there. May be higher than working at a FAANG. I doubt it.


I assume you agree that some kinds of opportunities are limited. Not trying new foods because you're afraid of wasting your money would be silly, or not saying hi to your neighbor because they might ignore you would be silly, but some things are very complicated. I'm thinking of: surgeries, mate selection, college degrees, white-collar crime, etc. I'm just saying that utility and loss aversion come into play, and "life is long" can't always save the day.

On startups, I think there are people who have been in situations where they have an expected value greater than something like a FAANG $300k/year over 3 years scenario (e.g. they own a large stake in a close-to-IPO company). And they should maybe still walk away, if the 50% chance of a tiny IPO payout would destroy their self esteem and make them feel even further behind their high-salary peers. (Also keep in mind that not everyone lands jobs at FAANG companies, so it shouldn't be super hard to find people who lucked into a startup where their EV is higher than their market salary over a few years). In other words: even if a startup somehow has higher EV, you may want to ignore the EV.


This is a much more profound statement than it seems at first and I wholeheartedly agree with it. Not only that but the gains compound over time.

It's not about the expected value of any one opportunity, it's about the expected value among every opportunity you will encounter in your life. This also implies that one should do what they can to expose themselves to said opportunities especially while they're young.


> your life will have plenty of such games

What are you talking about? Which life will have plenty of such games? In what way is that true?


A very common one: You have a debt to pay off (typically mortgage). Should you put all your extra money to pay it off early or should you pay the minimum and invest the rest?

As another commenter pointed out: Most investments involve this. In the RE circles you often have the same dilemma: Buy a house for rental in a LCOL area where you get (mostly) guaranteed net income, or buy in a place like California where the rent income won't cover all the expenses, but you feel you can pay the difference and rely on profiting off the hoped appreciation.

Insurance is also a good example someone else pointed out.

Even: Get a guaranteed low paying job as a relatively unskilled worker, or get into deep debt to go into medical school, do a residency, and earn a lot. The latter can have significant risk: Some people don't do well enough to get a residency. Others get the residency but don't have what it takes to complete it. In both cases you're left with a huge amount of debt.


In the way that these are analogies for actual situations, not just pure whiteroom thought experiments.


I suspected that much (that it was an analogy for some kind of actual situation).

But what situation? How is it that a person's life has many of these chances in large enough volumes to make expected values worth it?


Every time you buy additional insurance that covers small amounts of money. Like airplane ticket insurance (that only covers the fee of the ticket if you cancel). Or additional rental car insurance. Assuming that insurance companies are not stupid and only offer insurance that is +EV for them, that means it's -EV for you. If you are in a financial situation where $1-5k won't ruin you, it's rational to NOT take these kinds of insurance.

Every spot in life you encounter that can be seen purely from an EV perspective should be played as such. The only exceptions are long-tail ruinous outcomes, like house fire insurance or health insurance. That's why in many Western nations these types of insurance are mandatory.


Investing has a degree of this as well. And, in practice, most rational investors will diversify based on a number of factors into fairly safe but low return assets and into potentially higher return but riskier ones.


The investor's situation, I believe, is very much different from the common person's. The investor has put themselves in the position of doing tons and tons of financial transactions, investments, and so on - a situation in which EV reasoning makes sense. That doesn't seem to be the situation for the common person.

But I agree... If you are an investor, or maybe a professional poker player, then you'd have put yourself in a position that favors reasoning guided by EV.

There are other ones as well, non-money related. For example, in sports. I believe basketball players probably try to do this. There are so many shots. They're probably using EV to guide their strategy and practice.


re: sports

FiveThirtyEight writes about this from time to time. Three-point shots in basketball. Going for it on fourth down. Going for a two-point conversion. You can work out the stats for all this sort of thing - and there are apparently biases, for various reasons, why coaches/players don't always follow the EV strategy.


I think I get it, but I'm not so sure I'm convinced. Those examples, however, don't resonate with me (I don't have a car, nor a license to drive one; I don't own a house; I've been inside an airplane only once).

However, I believe I've done similar things with used electronics. I tend to favor buying a really cheap used one for [sometimes] 1/5 of the price instead of a new one. It could break or be of low quality, but the chances of that are small, and thus (over time - making an EV-ish calculation) I spend less money on electronics.

I also believe I do this in buying new products. In many situations, I can pay extra for an extra year or two of 'guarantee' (not sure if the right term is 'guarantee' or 'insurance'). However, very often, the first 6 months or 1 year of guarantee is included and has its cost embedded in the price of the product. The question becomes: how likely is the product to fail given that it hasn't failed in the first year? I believe the chances are small, so I don't buy it. I guess it's also an EV kind of calculation (just like the one you gave as an example).

However, those don't seem that common, really. Maybe it's just the kind of life that I live.

Is the situation 100%1M vs. 50%50M supposed to exemplify these ones? These not-so-frequent ones for small amount of money?

Another thing is that expected value has to do with a limit in this situation:

(1/n) · Σ_{j=1}^{n} outcome(j) → E as n → ∞

(there is an ergodicity assumption going on here -- which doesn't always hold in practice). That limit can be E while the first idk how many hundreds of values of outcome(j) be very distinct from E.

How many times will things like that happen in your lifetime? Some dozen? What if you separate away the large-scale ones (like the 100%1M vs 50%50M)? The small-scale ones will be more frequent and you just blindly follow the EV approach to them. The large scale ones will be extremely rare, and maybe another approach is better. No?
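That slow convergence of the running average is easy to watch numerically. A sketch, using a fair 50/50 draw of $50M or $0 (EV $25M):

```python
import random

rng = random.Random(1)
total = 0.0
for n in range(1, 100_001):
    total += 50_000_000 if rng.random() < 0.5 else 0.0
    if n in (10, 1_000, 100_000):
        # the running mean wanders for small n and only slowly nears 25M
        print(n, total / n)
```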


>In many situations, I can pay extra for an extra year or two of 'guarantee' (not sure if the right term is 'guarantee' or 'insurance'). However, very often, the first 6 months or 1 year of guarantee is given and has its cost embedded in the price of the product.

Extended warranty which is basically insurance. Leaving aside the fact that some credit cards provide it for you anyway and things like that. Yes, for most purchases, this is a bad deal because the expected value is almost certainly negative and--probably--if something does break you can replace it.

Here we're talking about losses rather than gains. The certainty of small losses (extended warranty purchases) vs. the chance of a relatively large loss. But it's the same idea with a negative sign.


One of the things that isn't obvious to me, but seems to be for many people, is the decision to maximize expected value instead of the best worst-case scenario. In this situation, given how exceptional the 100%/$1M vs. 50%/$50M situation is and how the $1M will definitely kill your financial problems, it really does seem like you'd want to pick the strategy that maximizes your worst-case scenario (if choice=red, worst-case=$1M; if choice=green, worst-case=$0). I understand the reasoning behind expected values, I guess; it's just not clear to me that it is of any use here.

To me, the choice looks like "solve your financial issues with the red button; 100% chance" vs. "solve your financial issues and get extra money you won't really need, but with 50% chance through the green button".

I'd have a hard time choosing the green button.

It's curious because I'm a mathematician. I feel like I should know this better, but I've never really studied probability, much less statistics or economics.

(edit)

Another issue is what that "50%" statement would mean in practice. I guess it means that if you played the game long enough, 50M would come out roughly half the time (by counting). This could mean a system in which the first 10 plays always fail, the next 10 always succeed, and the ones after that have their results based on a fair die (1,2,3 -> 50M; 4,5,6 -> 0). This would certainly fit the frequency "definition". In practice, these probabilities don't mean a clean, neat thing very often. Another issue is that the definition of that 50% says that if you played the game long enough, you'd observe the half-half split - but you'll play it only once. Again, there is a statement about a limit (a statement about a_n, for n large), but you're only looking at a_1 (it often seems to me that people believe that information about EV transfers to information about a_1 - it really does not). Even though I can mostly think of artificial examples (stuff like the one above), I'm not sure it would be clear, in an actual situation, what that '50%' means.


If the $1m "solves your financial problems" or is otherwise life-changing, you should almost certainly take the sure thing. As other discussions suggest, once you get into maybe the $3m-$5m net worth range, you presumably already don't have financial problems and another $1m is nice but not really transformative whereas $50m would be even though not a sure thing.

Even for a one time event, at some point it makes more sense to place the bet depending on a number of factors.

If it's hard to conceive of in this scenario, pick numbers about which it's easier to have intuition. What if you could take $10 for certain vs. a 50% chance of getting $500? Or pick some other values with the same ratio. 50% in this case just means a coin flip. You're right that no one gets the expected value. They get zero or they get $50m. But that may be a good bet depending on circumstances.


EV is such a nonsense measure anyway once you step outside the realm of pure theory.

For example, the EV in this example (50% chance of $50m, or $0) is $25m. The EV of a 2.5% chance of $1 billion is also $25m, but your probability of winning is 20 times lower. Is it more rational to choose this over the certainty of $1m? I don't think so. Is it rational to choose a 0.0025% chance of $1 trillion over $1m? At that point I think even the most avowedly rational economist would choose the cash.
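For comparison, the three gambles side by side: same EV, very different chances of walking away with nothing. A sketch:

```python
lotteries = [
    ("50% of $50M",    0.5,      50_000_000),
    ("2.5% of $1B",    0.025,    1_000_000_000),
    ("0.0025% of $1T", 0.000025, 1_000_000_000_000),
]
for name, p, prize in lotteries:
    # every row has EV $25M; only the chance of nothing changes
    print(f"{name}: EV ${p * prize:,.0f}, P(nothing) {1 - p:.4%}")
```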


That just goes to show that expected value is not the rational metric - expected utility is.


Increase the payout with a correspondingly lower probability and you're basically lowering the expected utility--down to some point where it crosses the sure-fire payout.


I'm not sure about your argument. I believe it's rational to make a decision that you would repeat every time you are presented with it, regardless of whether you knew beforehand how many times that decision would be presented to you.

Let's account for the marginal value of each dollar and replace the number 50 with some number that equates to triple the utility of the first million.

Why would it make less sense to choose the 50%? Assuming you would definitely take a 99.9999% chance of 50 million over 100% of 1 million, at what percentage do you switch over to the higher percentage?


> Why

"Variance" and "risk tolerance"

There is only one of me, not a Large Number


Why not form a "company" with 10 friends, and pool the winnings?


You can derive what sort of variance a person will tolerate if they have a "utility function" quantifying the marginal value of each dollar. If you are risk averse (which people usually are when they have a small net worth) then your utility curve will be concave. Think of something like the sqrt(x) function.

OTOH, insurance companies can afford to have a roughly linear utility function (because, as pointed out, they play the game much more often than others), which is why they are in business.
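The certainty equivalent makes this concrete. A sketch; sqrt is just one illustrative concave utility, and the inverse is hardcoded for it:

```python
import math

def certainty_equivalent_sqrt(outcomes):
    """Sure amount a sqrt-utility agent values equally to the gamble.
    outcomes: (probability, dollars) pairs; the inverse of sqrt is squaring."""
    expected_utility = sum(p * math.sqrt(x) for p, x in outcomes)
    return expected_utility ** 2

gamble = [(0.5, 50_000_000), (0.5, 0)]
print(certainty_equivalent_sqrt(gamble))  # ~12_500_000: indifferent at $12.5M sure
```

Interestingly, sqrt isn't concave enough here to prefer the sure $1M (the certainty equivalent is $12.5M); a more sharply concave utility can flip the choice.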


Unless you already have a million, then 50% chance at 50 million can make more personal sense than another million.


The most profound comment was not in this HN thread, nor by a user in the Twitter thread: the most profound comment was the one quoted in the article itself, by Bernoulli!:

> Bernoulli once wrote, “The utility [of probabilistic decisions] is dependent on the particular circumstances of the person making the estimate. There is no reason to assume that the risks anticipated by each [individual] must be deemed equal in value.”

Risks over non-fungibles (body parts, sentimentally valued heirloom pieces, ...) are obviously subjectively valued. But even for platonically ideal fungibles like fiat money, the risks depend on the person, because modeling reality as if everyone is treated equally in commerce, or has equal access to and treatment in the courts, etc., is a very strong assumption to make.

Let us first assume contracts are never reneged on - that is, agreements are rigorously respected.

Let us further assume the subject has the usual goal of maximizing its capital, here denoted in dollars.

Since currencies are a social construct and only hold value in the context of a society, we assume the subject is in prolonged contact with a society that values this currency. (If not the subject doesn't care which answer to give.)

Contrary to all the comments here on HN (nonlinear utility etc.): if the goal of the subject is to maximize capital, then the correct answer (assuming the absence of things like conscientious objection) is unconditionally the green button with the highest expected value - nonlinear utility functions, one-time-ness of the offer, or subject poverty be damned!

To understand why: even if the offer is one-time, and even if the subject can not afford the regret of missing out, the subject is still in contact with society. This society has companies regularly dealing with large sums, and optimizing expectation value.

THE SUBJECT CAN SIMPLY GO TO A BANK AND TRADE THE HIGH ROI FOR STABILITY WITH THE BANK:

For example the subject and bank can agree to the following:

* Bank pays subject $20 million

* subject presses green button

* If subject receives $50 million, it forwards this to the bank, otherwise nothing

In this scenario the subject wins $20 million unconditionally, and the bank spent $20 M with an expected return of $25 M, so the bank sees an expected ROI of a handsome +25%.
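The arithmetic of that side deal, spelled out as a sketch (the $20M price is the example figure above; any price between $0 and $25M just splits the surplus differently):

```python
p_win, prize = 0.5, 50_000_000
bank_pays = 20_000_000

subject_gets = bank_pays              # unconditional $20M to the subject
bank_expected_in = p_win * prize      # $25M expected back to the bank
bank_roi = (bank_expected_in - bank_pays) / bank_pays

print(subject_gets)  # 20000000
print(bank_roi)      # 0.25, i.e. +25% expected ROI
```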

The real catch is not personal utility function, one-time-ness, etc ... but reliability of contracts and trustworthiness of the system enforcing them, which one can read between the lines of Bernoulli's comments.

All the comments and observations about how poor people "should" take the certainty with the lower amount are just echoing the indoctrinated "learn and embrace your lowly position in society", whether that's low in rewards or low in reliability of fair enforcement of the law.

I find it hard to read intellectually capable people concoct artificial examples to make people distrust mathematical rigor when it can be entirely relied upon. The real element of unreliability is in the systems under which we are subjugated.



