# Thread: What is the difference between classical and bayesian understanding of probability?

1. ## Re: What is the difference between classical and bayesian understanding of probability?

I did some googling around and, for better or worse, CBear and Dason are right. Within the frequentist interpretation of probability, you need some serious mental gymnastics to account for simple probabilistic statements like the chance that team X or Y (I refuse to use the P-word) will win tournament A or B.

I used to be a big advocate of the Bayesian paradigm when I was still a master's student, but I am somewhat ambivalent about it now. It's not because of Bayesian statistics itself, but mostly because, in my field (education/social sciences), we're just so bad at data analysis that I feel that if we don't clean up our act first, it's better to remain within the frequentist paradigm a little longer.

Every time I talk about this issue with my professor colleagues, that Herman Rubin quote comes to mind:

A good Bayesian does better than a non-Bayesian, but a bad Bayesian gets clobbered.

Which immediately makes me think: will these people be good Bayesians or bad Bayesians? Do I really want to find out?

2. ## The Following User Says Thank You to spunky For This Useful Post:

Buckeye (02-08-2017)

3. ## Re: What is the difference between classical and bayesian understanding of probability?

Originally Posted by hlsmith
Right on, Rogojel, it is all about M-verses. String theory is so 90's as buzz terms go. I am also interested in the idea of M-verses; it opens the door to many awesome movies as well. And I keep wondering: if our universe is ever expanding, can't we stave off our own freezing demise with some type of human-generated nuclear power? I digress, but I was thinking the same thing about M-verses.

I don't get all gummed up by the idea that, given the event in a super-population is true, each realization has a certain probability. I think the issue arises when thinking about it as a Super Bowl win instead of, say, an average change in weight or some more approachable example.
Not to be prim about a technical issue, but the Everett interpretation is way older than the M-verses. It just says that each time the wave function collapses - basically at every interaction of a particle with its environment - the universe splits into as many parallel universes as there are outcomes of the event. Our consciousness just stays in one of these universes; in all the others there will be a similar consciousness that registered a different outcome. Oldie but goldie from the fifties.

4. ## Re: What is the difference between classical and bayesian understanding of probability?

I was referring to:

Scenarios where the Big Bang, and with it our universe, was created via the collision of two membranes.

5. ## Re: What is the difference between classical and bayesian understanding of probability?

This interesting article seemed applicable enough to anyone who is alive or robotic. Occam's razor applied to Bayes.

http://www2.stat.duke.edu/~berger/papers/ockham.html

6. ## Re: What is the difference between classical and bayesian understanding of probability?

I'm not sure if it's been mentioned yet, but part of the way that Bayesians view probability comes from the difference in how they view population parameters (if I'm not mistaken). Their 95% credible interval is the prime example of this. Its interpretation fits the natural question that many have: "What's the probability that I'm right?" The Bayesian interpretation says there is a 95% chance that the true parameter value lies within the 95% credible interval. This statement is allowed because Bayesians have a notion that the data are fixed but the parameter is a random variable (as opposed to Frequentists viewing the data as random and the parameters as fixed). This also helps illustrate why the Frequentist says the true parameter value either is or is not in the 95% confidence interval; there is no in between, because the parameter is not a random variable. This thinking can be extended to other kinds of tests, too.
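To make the contrast concrete, here is a minimal sketch (hypothetical counts, and a flat Beta(1, 1) prior assumed) comparing a frequentist confidence interval with a Bayesian credible interval for a binomial proportion:

```python
import math

k, n = 12, 40  # hypothetical data: 12 successes in 40 trials

# Frequentist 95% Wald confidence interval: the interval is the random
# object; the fixed true parameter either is or is not inside it.
p_hat = k / n
se = math.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian 95% credible interval: a flat Beta(1, 1) prior gives a
# Beta(1 + k, 1 + n - k) posterior; read off its 2.5% and 97.5%
# quantiles numerically on a grid (no external libraries needed).
a, b = 1 + k, 1 + n - k
grid = [i / 10000 for i in range(1, 10000)]
dens = [p ** (a - 1) * (1 - p) ** (b - 1) for p in grid]
total = sum(dens)
cum, cdf = 0.0, []
for d in dens:
    cum += d / total
    cdf.append(cum)
lo = grid[next(i for i, c in enumerate(cdf) if c >= 0.025)]
hi = grid[next(i for i, c in enumerate(cdf) if c >= 0.975)]

print(f"95% confidence interval: ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"95% credible interval:   ({lo:.3f}, {hi:.3f})")
```

The numbers come out similar here, but only the credible interval licenses the statement "there is a 95% probability the parameter is in this interval."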

7. ## The Following User Says Thank You to ondansetron For This Useful Post:

hlsmith (02-08-2017)

8. ## Re: What is the difference between classical and bayesian understanding of probability?

Some bayesians might think that way but it isn't required.

This statement is allowed because Bayesians have a notion that the data are fixed but the parameter is a random variable (as opposed to Frequentists viewing the data as random and the parameters as fixed)
You don't need to view the parameter as random to put a credible interval around something. Keep in mind that the interpretation of what probability *is* ends up being completely different for a bayesian. So it's not that the parameter is random; it's that you're modeling your belief about the parameter. Even if you believe that there ultimately is some unknown but unchanging value for the parameter, you might not know what it is, so your belief about it isn't fixed yet.

Now some bayesians do have somewhat different beliefs about these things (or at least are a bit looser with how they describe things for the sake of convenience), but you don't have to believe that parameters are always random to be a bayesian.
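This point can be sketched in code (a hypothetical coin with a fixed, unchanging bias is assumed): the parameter itself never changes, while the posterior, i.e. our belief about it, keeps tightening as data arrive:

```python
import random

random.seed(1)
TRUE_P = 0.3  # fixed and unchanging: the parameter itself is not random

a, b = 1, 1   # flat Beta(1, 1) prior belief about that fixed value
sds = []
for batch in (10, 100, 1000):
    heads = sum(random.random() < TRUE_P for _ in range(batch))
    a, b = a + heads, b + batch - heads  # conjugate Bayes update
    # Posterior standard deviation: the spread of our *belief*, not
    # any variability in the parameter itself.
    sd = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5
    sds.append(sd)
    print(f"Beta({a}, {b}): belief mean={a / (a + b):.3f}, sd={sd:.4f}")
```

The posterior standard deviation shrinks with every batch even though nothing about the coin changed; only the state of knowledge did.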

9. ## Re: What is the difference between classical and bayesian understanding of probability?

Dason, are you talking about the King James bayesians or the ones with the gold tablets?

10. ## Re: What is the difference between classical and bayesian understanding of probability?

Originally Posted by Dason
Some bayesians might think that way but it isn't required.

You don't need to view the parameter as random to put a credible interval around something. Keep in mind that the interpretation of what probability *is* ends up being completely different for a bayesian. So it's not that the parameter is random; it's that you're modeling your belief about the parameter. Even if you believe that there ultimately is some unknown but unchanging value for the parameter, you might not know what it is, so your belief about it isn't fixed yet.

Now some bayesians do have somewhat different beliefs about these things (or at least are a bit looser with how they describe things for the sake of convenience), but you don't have to believe that parameters are always random to be a bayesian.
I'm definitely no expert on Bayesian stats. That (my post) was just something I heard, and it helped me conceptualize part of the difference: probability as a strength of belief vs. the Frequentist idea of long-run rates of something, for example (although it isn't a very direct route to understanding, I don't think). And, granted, I didn't touch much on the idea of probability being more about a degree of belief or certainty, since it was already covered. It definitely makes sense, though, now that you mention they don't have to view the parameter as random to make a credible interval. So with regard to prior and posterior distributions, are those applicable to both a fixed parameter as well as a variable parameter? My thought would be yes, since you're just updating your idea of what the distribution looks like (irrespective of what the random variable is), more or less. Is that fair to say?
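That updating intuition can be sketched with a conjugate Beta-Binomial example (the counts here are hypothetical): the posterior from one batch of data serves as the prior for the next, and you land on the same distribution as if you had updated on all the data at once, whether or not you regard the parameter itself as random.

```python
def update(a, b, successes, failures):
    """Bayes' rule for a Beta(a, b) prior with binomial data (conjugate)."""
    return a + successes, b + failures

prior = (1, 1)                       # flat Beta(1, 1) prior
step1 = update(*prior, 7, 3)         # first batch: 7 successes, 3 failures
step2 = update(*step1, 2, 8)         # second batch: 2 successes, 8 failures
all_at_once = update(*prior, 9, 11)  # same data in a single update

print(step2, all_at_once)  # both Beta(10, 12)
```

Yesterday's posterior is today's prior, which is exactly the "just updating your idea of the distribution" picture in the question.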