

Probability is about caring.

but...
Probabilities puzzle me.
They always have.

But I had an insight about this recently.

In response to a question like "what's the chance of drawing a red ball from a jar containing 95 red balls and 5 green balls?" my inner clever seven-year-old argues that there are only two options, red or green, and therefore there's a 50% chance of drawing a red ball. He feels very clever and cheeky.

So I respond, because he is clearly enjoying playing with his understanding of what "the chance of X" actually means, that no, there aren't two options. There's only one option, which is whatever will actually happen. But we don't know what that is, so knowing that doesn't do us much good.

Instead, we make up these things called "probabilities," which are basically ways to attach numbers to our ignorance about what will happen, so we can do math with that ignorance. And, sure, if we're really ignorant about it, if all we know is that the ball might be red or might be green, then we can say "only two options, therefore 50% chance," like he had.

But if we're not that ignorant, we can attach different numbers... if we know the jar contains 95 red balls and 5 green balls, for example, we can say "95 out of a hundred balls are red, therefore 95% chance."

And if we knew different things, we'd end up with different numbers.

None of these numbers are actually about the ball we're going to draw out of the jar, I continue... that is red, if it's red, or green, if it's green, and not 95% or 50% anything. The numbers are about how ignorant we are... or, equivalently, about how much we know... right this moment. If the ball we draw is green, it's still true that there was a 95% chance of the ball being red; they don't contradict each other, because that chance was never about the ball, it was about our state of knowledge prior to drawing the ball.

Similarly, he could have a 50% chance of drawing a red ball, while I have a 95% chance of drawing a red ball from the same jar, because I'm using all of the information we have about the jar, and he's ignoring information.

That doesn't make him wrong. There is a 50% chance of drawing a red ball, given the information he's using.

But the interesting thing, the remarkable thing, is that if we actually draw balls from the jar, record the color, put the ball back and mix up the jar, and we do this a hundred times, we'll almost certainly get a number of red balls closer to 95 than to 50. That is, using all the information we have lets us predict the future more accurately than using just some of that information.
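
(If you'd rather check that than take my word for it, here's a minimal sketch in Python; the 95/5 jar and the hundred draws-with-replacement are just the ones described above, and the exact count will wobble a bit from run to run.)

```python
import random

jar = ["red"] * 95 + ["green"] * 5

# Draw with replacement a hundred times, recording the colors.
draws = [random.choice(jar) for _ in range(100)]
reds = draws.count("red")

print(f"{reds} red balls out of 100 draws")
# Typically prints something in the mid-90s -- far closer to 95 than to 50.
```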

So we need to decide what we care about. If we want to be clever, we can say "50% chance of red, because I choose to ignore information" and that's true, and we're clever.

And if we want to predict the future, we can pay attention to all the information, and say "95% chance of red," and that's also true, and we've predicted the future... which is also kind of clever, wouldn't you say?

It's just a question of what we care about.

Comments

chhotii
Jan. 29th, 2014 11:43 pm (UTC)
Do you know Bayes' Theorem? I don't think it exactly addresses what you're saying, but I think you would appreciate it.
dpolicar
Jan. 30th, 2014 01:26 am (UTC)
Yes, I'm acquainted with it, and yes, it's related, though indirectly.

I like the idea that any (in principle every) observation can (in principle should) be used to adjust my confidence in any (in principle every) proposition in a well-defined way.
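
As a toy sketch of what one such adjustment might look like (the two candidate jars and the 50/50 prior here are made-up numbers, purely for illustration):

```python
# Two hypotheses about which jar I'm holding, with a made-up 50/50 prior.
prior = {"mostly_red_jar": 0.5, "even_jar": 0.5}    # P(hypothesis)
p_red = {"mostly_red_jar": 0.95, "even_jar": 0.5}   # P(red draw | hypothesis)

# Observe one red ball, then adjust confidence in each hypothesis in the
# well-defined way Bayes' theorem prescribes: posterior is proportional
# to prior times likelihood.
unnormalized = {h: prior[h] * p_red[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: unnormalized[h] / total for h in unnormalized}

print(posterior)
# {'mostly_red_jar': 0.655..., 'even_jar': 0.344...}
```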
intuition_ist
Jan. 30th, 2014 01:16 am (UTC)
why do i get the feeling this is just one long extended metaphor for something else entirely?
dpolicar
Jan. 30th, 2014 01:28 am (UTC)
I get that a lot.
hitchhiker
Jan. 30th, 2014 03:38 am (UTC)
have you seen the sleeping beauty problem? it was hugely debated over on rec.puzzles, probably more so than anything else, ever, but i think the final conclusion most people came to was that the problem was somehow ill-posed, though it's hard to put your finger on why exactly.
dpolicar
Jan. 30th, 2014 02:30 pm (UTC)
I'm vaguely acquainted with it but will probably think more about it in writing this comment than in my entire life put together. TBH, I am most partial to the phenomenalist position listed here, as it is closest to the "go away and leave me alone" position.

But, OK. Since we're here.

I usually pull the brake-cord as soon as anyone starts a problem involving an ideally rational epistemic agent that is framed as a human. But in this case nothing seems to depend on Sleeping Beauty's humanity; we can frame it just as well as a computer program being executed.

So, OK.

My initial instinct is to say "fuck all this nonsense about sleeping and waking, there's a 50% chance of heads." But I accept that this is a question about the probability of the coinflip given SB's knowledge, so this argument doesn't hold... as above, it's not a question about the coin, it's a question about SB's mind.

So, OK. I wake up and am asked for the probability (belief, subjective probability, whatever) of heads.

I know that it's either Monday or Tuesday. If it's Tuesday, the coin is P(1) tails, and if it's Monday, the coin is P(.5) tails. But I don't know what day it is.

So P(Monday | I'm awake) seems important. Which I think means I agree with Bostrom over Lewis... "is it Monday?" is essentially a piece of self-locating information, and is relevant, so there is new information. (This does not make me happy. Anthropic arguments make my teeth itch.)

If you ran this experiment 100 times, I reason, I would wake up on both M and T half the time (the tails runs), and just on M half the time (the heads runs). So I'd expect 150 wakings, of which 100 would follow tails. Which, I now realize, is the same "long-run average outcomes" argument wikipedia cites.
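
(A minimal sketch of that long-run argument in Python; the coin, the waking schedule, and the hundred runs are as described above, and the exact counts will vary from run to run.)

```python
import random

heads_wakings = 0
total_wakings = 0

for _ in range(100):                        # run the experiment a hundred times
    coin = random.choice(["heads", "tails"])
    wakings = 1 if coin == "heads" else 2   # heads: Monday only; tails: Monday and Tuesday
    total_wakings += wakings
    if coin == "heads":
        heads_wakings += wakings

print(heads_wakings / total_wakings)
# Hovers around 1/3: roughly 150 wakings, only ~50 of them after heads.
```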

Yeah, I guess I'm a thirder.

I don't see what's ill-posed about the problem.
hitchhiker
Jan. 31st, 2014 03:34 am (UTC)
see the "arguments for ambiguity" here, though i'll agree that if i had to pick one side it would the the thirder side (the halfer said seems closer to your "ignoring information" argument above)
dpolicar
Jan. 31st, 2014 04:47 am (UTC)
Ah, I see.

So, right, it boils down to what question we're asking: if we're asking a question about the coin, then we can ignore all the crap about sleeping and waking and Monday and Tuesday and be halfers. If we're asking a question about SB's mind, then we can't ignore it, and we're thirders.

It seems relatively clear to me that the problem as framed in the wikipedia article, at least, is a question about SB's mind. But then, I suppose I would say that... I mostly feel the same way about pulling red balls from a jar.

I recognize there are many people who would disagree.
Jeff Jo
Mar. 4th, 2014 05:12 pm (UTC)
What makes Sleeping Beauty unintuitive is confusing "didn't/can't observe" with "doesn't happen." Tuesday-after-heads still happens, and by being awake SB knows it isn't what is currently happening. Urn problems like the one you've used can demonstrate what that means.

Put a white ball into an urn, then flip a coin. If it lands Heads, put a red ball into the urn. If it lands Tails, put a second white ball in. Then give it to SB, and have her pick a ball at random. If it is white, what is the probability that the coin landed Heads?

The first issue intuition causes is justifying why the answer shouldn't still be 1/2. But if the ball had been red, you'd certainly change it. And it can't go up, given red, without going down, given white. In fact, it changes to 1/3 for a white ball, which is easy to see given the four possible results.
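
(A sketch of that enumeration in Python; the outcome labels are mine, but the setup is the one described above.)

```python
from fractions import Fraction

# The four equally likely (coin, drawn ball) results of the setup above.
outcomes = [
    ("heads", "white"),   # heads: drew the original white ball
    ("heads", "red"),     # heads: drew the added red ball
    ("tails", "white"),   # tails: drew the first white ball
    ("tails", "white"),   # tails: drew the second white ball
]

white = [o for o in outcomes if o[1] == "white"]
heads_given_white = Fraction(sum(1 for o in white if o[0] == "heads"), len(white))

print(heads_given_white)  # 1/3
```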

The second issue people have is that the analogy is really that SB gets to pick both balls, but on different days and with amnesia in between. And the amnesia is the key to this issue. Without being able to know about the other ball, the ball she sees is effectively a randomly-selected one.

And the last issue is the fact that she doesn't ever see the red ball. So, instead of a red ball, coat a white ball with the sleep-inducing drug. The answer can't change, but now the urn problem is identical to the Sleeping Beauty Problem. The answer is 1/3.

drwex
Jan. 30th, 2014 03:57 pm (UTC)
Your language makes it sound like you're conflating two things
One of which is knowledge, and the other is probability.

Specifically, where you state "That doesn't make him wrong. There is a 50% chance of drawing a red ball, given the information he's using," you're being at best imprecise.

I would instead say "He believes there is a 50% chance, given the information he's using. But that's incorrect; there actually is a 95% chance."

While it's true that probabilities do not map to specific individual events or effects, they are predictive tools about reality. It's an error to say that a predictive model with incomplete information is "correct". Somehow I intuit you didn't mean to say that, though that's how I read what you wrote.

This is highly applicable to things like models of weather or climate warming, where we know that the models are incomplete and may contain inaccuracies. We don't say that any particular model is correct and instead we develop multiple models that we hope have different deficiencies and look at their consensus predictions.
dpolicar
Mar. 5th, 2014 03:55 pm (UTC)
Re: Your language makes it sound like you're conflating two things
It's an error to say that a predictive model with incomplete information is "correct". Somehow I intuit you didn't mean to say that, though that's how I read what you wrote.


It's a fair reading of what I wrote. Whether I meant to say it, well... I dunno. I'm stumbling around something in the dark here.

I agree, of course, that a predictive model becomes more and more correct as its predicted probabilities approach actual measured frequencies. But I'm hesitant to embrace that statement as a definition, because it doesn't apply clearly to all cases.

Specifically, it makes all the sense in the world when applied to repeatable events, like drawing a ball from an urn, but it's less clear to me how to apply it to singular events.

For example, if I draw exactly one ball from the urn and then destroy the urn, do I have a 95% chance of drawing a red ball?

Well, I want to say yes, because I want to say that whatever is going on when I draw the first of 300 balls from the urn is also going on when I draw just one ball and then stop (1). And we've already determined that in the former case I have a 95% chance of drawing a red ball, so it seems to follow that the same is true in the latter case.

So, OK, let's go with that. I draw a single ball and then destroy the urn. The ball is green. "Ah," I say, "but there was a 95% chance of it being red."

That's an entirely untestable claim about the expected results of a series of events that won't ever occur. Taking such claims seriously gives me epistemic anxiety and upsets my digestion.

Above, chhotii made a reference to Bayes' Theorem, which as I understand it provides a way of thinking about probability not as a prediction of hypothetical and potentially-untestable event frequencies, but as the result of certain mathematical operations on available evidence.

Admittedly, I would ordinarily call that "confidence", not "probability".

So perhaps what I should have said in the first place is "The correct confidence to have in drawing a red ball, given the information he's using, is 50%."

And perhaps what I'm stumbling towards here is the conclusion that correct confidence regarding an outcome needn't equal the probability of that outcome... that confidence is properly speaking a fact about one's state of knowledge, whereas probability is properly speaking a fact about states of the world, and mistaking one for the other is an error.

=====

(1) Which seems clear, because after all, how can this event possibly depend on whether or not I draw another 299 balls afterwards? Causality doesn't work that way. But of course it might be that both events depend on something that already happened in the past and are therefore entangled, so it's not quite as clear as it seems. But never mind that for now.

drwex
Mar. 5th, 2014 06:07 pm (UTC)
Re: Your language makes it sound like you're conflating two things
it's less clear to me how to apply it to singular events.

One thing you can do is keep in mind that there are two things we're observing in parallel. One of them is the number of outcomes; the other is the probability of a given outcome. I don't have it at my fingertips now, but Nate Silver wrote a good piece after the '12 election in which he addressed the question of "What if Romney had won?"

If you recall, his model gave Obama something like an 80% likelihood of winning the electoral college. In fact, Obama did. But now imagine that Romney had won and ask "Would Silver's model have been wrong?" The answer is "no", and the probability of Obama winning would still have been 80%. But sometimes you do on a single trial get a low-probability result. This is what keeps slots players feeding coins into the machine, or why we see freak unpredicted storms when the day was predicted to be sunny, among other things.

if I draw exactly one ball from the urn and then destroy the urn, do I have a 95% chance of drawing a red ball?

I believe what you're trying to ask is "do I have a 95% chance of holding a red ball right now," to which the answer is still yes. The probability doesn't change if you only draw one ball.

an entirely untestable claim about the expected results of a series of events that won't ever occur. Taking such claims seriously gives me epistemic anxiety and upsets my digestion.

Sorry for your indigestion. This gets at the heart of the predictive power of any given test, and it's a necessity for limited beings living in the real world. For example, if I have a model that says "there will be 100 nails in each box my factory produces," I can test some number of boxes. Whether I test one box or many, I can't test every box - practical considerations intervene. So I turn to statistical tests and take enough samples that I can say (for example) "I'm 95% confident that my machines are placing exactly the right number of nails in each box."

Diving into statistics will tell you things about sample size and the counter-intuitive result that, past a certain point, taking more samples adds very little to your confidence in the prediction. This is one reason why good pollsters are able to give reasonably accurate predictions based on polls that sample a surprisingly small percentage of the electorate.
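
(A back-of-the-envelope sketch of that diminishing-returns point; the worst-case p = 0.5 and the usual normal-approximation margin of error are my own assumptions here, not anything specific to the nails or the polls above.)

```python
import math

# Rough 95% margin of error for an estimated proportion p from n samples,
# using the standard normal approximation.
def margin_of_error(n, p=0.5):
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000, 2000, 10000):
    print(n, round(margin_of_error(n), 3))
# 100   0.098
# 400   0.049
# 1000  0.031
# 2000  0.022
# 10000 0.01
# The returns diminish quickly, and none of it depends on how big the
# electorate (or the factory's output) is -- which is why modest samples work.
```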

Bayes' Theorem, which as I understand it provides a way of thinking about probability not as a prediction of hypothetical and potentially-untestable event frequencies, but as the result of certain mathematical operations on available evidence

Not exactly. What Bayes' Theorem lets you do is quantify the impact of prior knowledge. Bayes talks about "the probability of B, given A" and is often associated with such things as "inference" and "how beliefs ought to change in the face of evidence." That's a slightly different kind of problem than your jar of balls problem. Bayes' equation is a statement about the relationships of probabilities, not about confidence.

correct confidence regarding an outcome needn't equal the probability of that outcome... that confidence is properly speaking a fact about one's state of knowledge, whereas probability is properly speaking a fact about states of world, and mistaking one for the other is an error.

I would agree with this. It's one of the fundamental practices that's taught in project management: you determine a set of outcomes or risks, and then survey the knowledgeable people for how likely an outcome or risk is. Separately, then, you try to gauge how confident they are in their predictions.

In the finance world this leads to discussions of "tail risk" and "black swans" but I'm out of time here.