I'm not a statistician by degree or trade (actually an engineer), so I'm a little confused by something and was hoping someone might be able to help.

The confusion surrounds two ideas: the Gambler's Fallacy and Regression to the Mean.

As far as I understand it, the Gambler's Fallacy is the idea that, say for a coin toss, getting 5 tails in a row makes it more likely you'll get heads next (so you gamble more; betting red or black on a roulette table is another example). I've been told this is wrong because it assumes the data points are dependent on each other, whereas they're actually independent (so whether you got heads or tails on the previous toss, the next toss is still a 50/50 chance).
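To sanity-check that independence claim, here's a quick simulation sketch (my own made-up setup, not anything authoritative): flip a fair coin a million times, find every spot that follows a run of 5 tails, and see how often the next flip is heads.

```python
import random

random.seed(0)

# True = heads, False = tails, each flip a fair 50/50
flips = [random.random() < 0.5 for _ in range(1_000_000)]

# Collect the flip that immediately follows every run of 5 tails
after_streak = []
for i in range(5, len(flips)):
    if not any(flips[i - 5:i]):  # the previous 5 flips were all tails
        after_streak.append(flips[i])

print(sum(after_streak) / len(after_streak))  # hovers around 0.5
```

The proportion comes out around 0.5, not higher: the coin has no memory of the streak, which is exactly what the Gambler's Fallacy gets wrong.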

Regression to the Mean, on the other hand (quoting from Wikipedia here), 'is the phenomenon that if a variable is extreme on its first measurement, it will tend to be closer to the average on a second measurement'. A common example I've heard is a batter in baseball: if they're hitting above their average, it's not so much that they're having a particularly amazing game as that they're having a statistical fluke, and soon they'll regress to the mean.

My confusion (and very possibly yours after reading that garbled mess) is this: why, when you're gambling, do you treat the data points as independent and not bet on the coin toss averaging out over time (i.e. regressing to the mean), but when you're looking at regression to the mean you seem to treat the data points as dependent and assume the 'hot' batting streak will average out?

If you understand any of that mess, kudos (like I said, I'm not a statistician, so the terminology is a bit off). If you have any ideas, even better.

Thanks for any help.