Hey there,
I have a question that probably has a simple answer. It's about coin flipping.
If you flip a (normal) coin ten times, and it lands on heads every time, the probability it will land on tails next, I'm assuming, is very high. That is, you can predict with a significant amount of confidence that the next flip will be tails. But it is still just a 50/50 probability that it will land on either side. This seems like a paradox - or maybe I'm just interpreting something wrong...
Can anyone clear this up for me?
Thanks in advance,
- Dan
But after 10 tosses of heads, you can expect a likely occurrence of tails. So how would you say this statistically? By the law of large numbers, you would expect the proportions of heads and tails to even out as the number of flips approaches infinity, so after many heads in a row, you expect to flip a tails. Is this not the same as saying: after many consecutive flips of heads, there is a higher probability of tails than heads?
Hey djarvis,
Seems like what you're thinking is that if I flip a fair coin 1,000,000 times, then I should expect 500,000 tails.
Therefore if I have flipped the coin 100,000 times and gotten 100,000 heads, then in the next 900,000 flips I should get 500,000 tails, i.e. I should expect the probability of tails to be 500,000/900,000 ≈ 0.56 from now on.
But this is not true, for as you say:
Don't think of infinity as a number, that you can subtract your previous experience from, to get some new probability of tails on your coin.
If you knew that your coin would be "fair" after 1 million flips, then you could do what you are thinking. But no one knows this; that phrase "as you approach infinity" is tricky.
This makes sense, and I'm not confused about the fact that the probability within 1 million flips will not be 0.56 for tails after 100,000 heads. But this still doesn't get to my quandary. You always expect what is going to happen to happen in a certain way (i.e. expect 50/50 in the long run), but you can't consider what has already happened, because that is independent of what is going to happen. This seems to present a paradox. The odds of flipping 100,000 heads in a row are extremely thin - isn't this the same as saying you are more likely to flip a tails after many heads in a row than a heads; hence the probability of flipping tails after flipping many heads is higher than flipping another heads?
It's 0.5. Flipping a coin is an independent event, so your chances of getting T or H aren't affected by what came before.
Thanks canadiana, but you're not addressing the question I just asked.
Dragan and Mean Joe: your answers have been helpful thus far.
But can someone give me a cogent answer to the problem I stated above? I would greatly appreciate it.
The good thing about this paradox is you can actually do the calculations, to clear it up. It's been a while since I've done this, so make sure you check my work!
P[100,000 heads in a row] = .5 ^ 100,000
P[99,999 heads in a row, then a tail] = (.5 ^ 99,999) * .5 = .5 ^ 100,000
So, it's an extremely thin chance to get 100,000 heads in a row. But once you've gotten 99,999 heads in a row, it doesn't make it any more likely that you'll get a tail next.
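Mean Joe's two probabilities can be checked exactly for any streak length. A quick sketch (0.5 ^ 100,000 underflows ordinary floats, so `Fraction` keeps the arithmetic exact; the helper name is just for illustration):

```python
from fractions import Fraction

def p_streak(n, p=Fraction(1, 2)):
    """Probability of n specified outcomes in a row for a fair coin."""
    return p ** n

n = 99_999
p_all_heads = p_streak(n + 1)                      # 100,000 heads in a row
p_heads_then_tail = p_streak(n) * Fraction(1, 2)   # 99,999 heads, then a tail

# Both sequences have exactly the same probability: 0.5 ^ 100,000
assert p_all_heads == p_heads_then_tail
```

So the streak of 99,999 heads doesn't make the trailing tail any more (or less) likely; both full sequences are equally improbable.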
This is a classic problem, solved by (I believe) Blaise Pascal (1623-1662). In short, I can say that your intuition is spot on.
If we do not approach the problem as a binomial distribution (or independent Bernoulli trials) but rather as sequences of Bernoulli trials, then look at the negative binomial family (and, if I remember correctly, the geometric distribution - it's late and I'm too lazy to check).
Attached is a graph of the PMF of the negative binomial [in your situation: r=1, p=0.5], which for r=1 is (I believe, again not sure) the geometric distribution. This represents the number of failures which occur in a sequence of Bernoulli trials before a target number of successes is reached (here, the number of heads before 1 tail). You can see that the chance of getting n consecutive heads before 1 tail declines as n grows (ergo the chance of getting at least one tail in your sequence increases with the size of your sequence). The second graph is the probability of at least one tail after a sequence of heads (the cumulative distribution function, CDF; the x-axis in the graph is k in the formula). It increases even though the independent per-toss chance of tails remains 0.5.
Here's a few links, I'm sure this will get you on your way (if you want to learn the math behind it):
http://www.aiaccess.net/English/Glos..._neg_binom.htm
http://en.wikipedia.org/wiki/Negativ...l_distribution
http://en.wikipedia.org/wiki/Geometric_distribution
Does this answer your question?
The true ideals of great philosophies always seem to get lost somewhere along the road..
Nuppers, the chance is still 50/50.
What you're maybe wondering is, how does the proportion ever get to being 50/50 then? Well, in your scenario (the first 10 tosses all heads), after 20 coin tosses your expected number of tails will be 5; after 40 tosses, 15; after 100, 45; and as the number of tosses approaches infinity the proportion of heads and tails will approach 50/50 (but with no guarantee it'll be exactly 50/50 after any finite number of tosses).
I can see how it's easy to think that after 10 heads you've just got to expect a tails, and this is a very common assumption to make, but it isn't on the money. canadiana is quite right; each toss is an independent event, and the probability remains 50/50 regardless of what's come before.
Not quite... tossing 100,000 heads in a row is unlikely because the chance of each toss being heads is just 50%, rather than something correcting too many heads by popping up some tails. As the number of tosses increases, the chance of the next toss being heads or tails remains 50/50; but with each toss, the chance that all completed tosses are heads gets smaller and smaller. Does that make any sense?
Kudos to TheEcologist for the historical knowledge, btw.
TheEcologist; it seems that you are saying what I was wondering
"Here you see that the chance of getting n consecutive heads before 1 tail declines (ergo the chance of getting at least one tail in your sequence increases with the size of your sequence)"
- This says you can predict a tails after many heads; doesn't it?
And CowboyBear: if the chance that all completed tosses are heads decreases over time, isn't that the same as saying the chance that the next toss is tails increases after so many completed heads - and therefore the occurrence of a tails is predictable?
It shows the probability that your sequence of n tosses will be all consecutive heads and it states that the chance of at least one tail increases with the size of the sequence.
If you stop after x heads, though, the probability of a tail on the next toss at that moment is still 0.5 (the coin has not changed; it's still fair). The per-toss probability of tails is still 0.5.
The negative binomial theory regards sequences. The first graph shows the probability of obtaining sequences of 1 head, 2 heads, 3 heads, etc.
Predicting the next toss in your sequence just brings you back to n = 1 and thus p = 0.5.
Do you see the difference?
Yes, I see the difference, but aren't you always operating within part of a sequence? Hence "the chance of at least one tail increases with the size of the sequence" means that after so many heads, the probability of a tails increases, and therefore becomes predictable...
Well, it seems my problem is unresolved. You guys have given me nice explanations that make it clear why it is always a 50/50 chance when you flip the coin, but I've yet to hear a simple explanation of the phenomenon that even though it's always 50/50, you really can expect a tails after many heads.