Understanding Multilevel Models

noetsi

Fortran must die
#21
I agree with that point, jake. I was saying it was heresy to many to interpret parameters in a model that were not statistically significant, so statistical significance tests were needed. There is a school (Cook and Campbell were in it) that argues that tests of significance and the null hypothesis associated with them should be abandoned or at least have less influence than they do now, but much of the literature I have seen in the social sciences does not reflect this.
 

spunky

Doesn't actually exist
#24
keep dreaming spunky....

three words for you: SMALL.SAMPLE.SIZES

we just need to market that idea and people will be converting to Bayesianism in no time.

and I shall become the leader of this new movement of religious Bayesianism, predicated as a quick-fix, one-size-fits-all solution to everything! much like we did with null hypothesis testing... but shinier and with more sequins!
 

noetsi

Fortran must die
#25
99.999999999999999 percent of the people who run statistics will probably have few if any graduate classes in stats, aka practitioners in most organizations. So they won't even have heard of Bayes. And most graduate stats classes I have seen are taught by frequentists (or individuals who teach it because that has been the norm in the past). In addition, most practitioners work with databases having thousands if not tens of thousands of cases (it is the nature of what real organizations do; they have vast amounts of observational data). So small sample sizes don't matter to most.

So the couple of hundred real statisticians in the US and Canada might someday run Bayes. Very few others will.
 

spunky

Doesn't actually exist
#26
99.999999999999999 percent of the people who run statistics will probably have few if any graduate classes in stats, aka practitioners in most organizations. So they won't even have heard of Bayes. And most graduate stats classes I have seen are taught by frequentists (or individuals who teach it because that has been the norm in the past). In addition, most practitioners work with databases having thousands if not tens of thousands of cases (it is the nature of what real organizations do; they have vast amounts of observational data). So small sample sizes don't matter to most.

So the couple of hundred real statisticians in the US and Canada might someday run Bayes. Very few others will.
yeah, but those people will retire and die off, so that the new people (aka us) take their place. sure, if you only get taught how to draw a red square, all you can do is draw a red square. but what happens when more and more people start learning how to draw a blue circle? eventually everyone will either learn how to draw a blue circle or will be able to both draw blue circles and red squares!

i know that, once again, you're being silly (well, i'm being silly myself on this. frequentism will never go away) but regarding the comment about sample sizes i always need to remind you that just because you, in your limited micro-universe, do something does not imply the vast majority of people do it. and yes, i am very aware that you're prone to just quick, sweeping (mostly unfounded) generalizations (like i'm prone to histrionics and drama) but that's the way you are and i'm OK with it.
 

noetsi

Fortran must die
#27
Actually I was being serious but making it sound silly. My master's in measurement and statistics (in an education school) was recent (I graduated last year). We virtually never talked about Bayes, and at least one professor made it clear that what we learned conflicted with a Bayesian approach. It stressed HLM and SEM (with GLM and ANOVA thrown in), so maybe those methods are less involved in Bayesian approaches.

Whether people teaching undergraduate stats classes, which is as far as most practitioners will go, will shift to a Bayesian approach I don't know. But I am skeptical this will occur.

Companies and governments deal with observational data, and that tends to be very large. My own experiences are beside the point; the data used by firms and most government agencies are large simply because of the nature of what they do. They don't go out and gather data; they are interested in what they do, and it is extremely unlikely that many firms are only going to have a few hundred cases to work from. That occurs in academics and experimental design because of the cost and availability of data. Firms gather vast quantities of data and deal with large numbers of units, so by the nature of what they do and analyze they will not encounter this.

Do you really think most firms are going to have a hundred or fewer cases of observational data to analyze? Or that they commonly analyze data that is not observational in nature?
 

TheEcologist

Global Moderator
#28
shift to a Bayesian approach I don't know. But I am skeptical this will occur.
Classical statistics are failing across the board, and Bayesian statistics are filling the hole. Examples include image processing, signal analysis, phylogenetic trees, network analysis, underwater navigation, court cases and gene expression data.

The application of Bayesian stats is already everywhere in modern society, and from smart traffic lights to spam filtering it has proven its use. It's the computational revolution that made it all possible. It may be rare in some fields and prevalent in others, but regardless of how you - or I - personally feel about Bayesian methods, let's not deny that its use and application are already extensive.
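To make the spam-filtering case concrete: at its core such a filter is just Bayes' theorem applied to word counts. A minimal naive Bayes sketch (all counts invented for illustration; a real filter would also score words that are absent from the message):

```python
import math

# Hypothetical training data: how many of 100 spam and 100 ham
# messages contain each word (all numbers invented for illustration).
spam_counts = {"free": 60, "meeting": 5, "viagra": 30}
ham_counts = {"free": 10, "meeting": 50, "viagra": 1}
n_spam, n_ham = 100, 100

def log_score(words, counts, n_class, prior):
    # log P(class) plus add-one-smoothed log P(word present | class)
    # for each observed word (a common simplification).
    logp = math.log(prior)
    for w in words:
        logp += math.log((counts.get(w, 0) + 1) / (n_class + 2))
    return logp

message = ["free", "viagra"]
log_spam = log_score(message, spam_counts, n_spam, prior=0.5)
log_ham = log_score(message, ham_counts, n_ham, prior=0.5)

# Bayes' theorem, normalized over the two classes
p_spam = 1 / (1 + math.exp(log_ham - log_spam))
print(f"P(spam | {message}) = {p_spam:.3f}")  # ~0.99 here
```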
 
#29
Examples include image processing, signal analysis, phylogenetic trees, network analysis, underwater navigation, court cases and gene expression data.
Most of these examples involve large volumes of data that I'm sure Fisher never envisioned when developing experimental design in agricultural fields.

From my limited experience (my 2 cents) I've seen both frequentist and Bayesian methods produce volumes of output from analyses on the same large data -- thing is, with Bayesian methods there's more room for "tweaking" so that fewer "significant" things are output.
 

TheEcologist

Global Moderator
#30
Most of these examples involve large volumes of data that I'm sure Fisher never envisioned when developing experimental design in agricultural fields.

From my limited experience (my 2 cents) I've seen both frequentist and Bayesian methods produce volumes of output from analyses on the same large data -- thing is, with Bayesian methods there's more room for "tweaking" so that fewer "significant" things are output.
Good to hear someone say it; Fisher's tests were indeed designed exactly for those types of simple experiments. However, I didn't mean that; data from that kind of study is pretty objective - and so is the study design that dictates your test. It's the subjective nature of face recognition, spam filtering or statistical translators that makes a Bayesian approach more feasible. If you get into it you'll see that in these cases the more intuitive approach is to start with Bayes. That is what most of those examples have in common - not the fact that they involve large volumes of data. No, certainly not; many have small-sample-size problems as well.

However, the argument that Bayesian methods are feasible for large observational datasets is also beside the point; true born-again Bayesians have compelling arguments to use Bayes on small data as well. These arguments are so compelling that Bayes is becoming the standard in many drug-testing trials. A major argument is that you should risk as few patients as possible, and correct application of Bayes apparently helps: you can make more complex inferences with less data - and thus less risk to human life.

A recent paper by Johnson makes the case that significance determined with classical tests only amounts to marginal evidence (especially with small datasets), and that the wide use of classical tests - where alpha levels of 0.05 - 0.01 are seen as significant - is a major contributor to the appallingly large proportion of studies where people fail to reproduce results in some fields. He argues that the application of Bayesian uniformly most powerful tests would reduce this proportion greatly.
I suspect that a major contributor is also that many classical tests are being applied to study designs for which they are not appropriate, so I am highly skeptical that 'standard' Bayesian tests built for ANOVA-type study designs will solve this - but this is again completely beside my point.
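To put a rough number on that "marginal evidence" claim - this is not Johnson's uniformly most powerful Bayesian test, just the related Sellke-Bayarri-Berger bound - the Bayes factor in favor of the alternative implied by a p-value is at most 1 / (-e * p * ln p) for p < 1/e. A quick sketch:

```python
import math

def max_bayes_factor(p):
    # Sellke-Bayarri-Berger upper bound on the Bayes factor for the
    # alternative over the null implied by a p-value (valid for p < 1/e).
    assert 0 < p < 1 / math.e
    return 1.0 / (-math.e * p * math.log(p))

for p in (0.05, 0.01, 0.005):
    print(f"p = {p}: evidence for H1 is at most {max_bayes_factor(p):.1f} : 1")
```

A "significant" p of 0.05 corresponds to at most roughly 2.5 : 1 evidence against the null, which is why this line of work argues for much stricter thresholds.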

I simply do not agree with the notion that Bayes is not widely applied. You may not use it in your field, you may not have seen it during your undergrad courses, but you're seeing its application in your daily life and likely also using it daily - from your traffic avoidance app to the collision avoidance software in your new truck or the face recognition on your camera. Bayesian methods are already everywhere. That is my only point; the rest is semantics.
 

noetsi

Fortran must die
#31
Classical statistics are failing across the board, and Bayesian statistics are filling the hole.
Perhaps at the cutting edge at which you work, TE. Not in the trenches where most non-academic individuals who do stats work, IMHO.

It would be interesting to know how often these methods were taught in undergraduate classes last year. As I noted, I have taken graduate classes in stats in a variety of programs and universities and never seen Bayesian statistics.
 

TheEcologist

Global Moderator
#32
It would be interesting to know how often these methods were taught in undergraduate classes last year. As I noted, I have taken graduate classes in stats in a variety of programs and universities and never seen Bayesian statistics.
That's got to be a field-specific thing; however, I know for a fact that many of my colleagues in Biology would say the same things as you. If you're not studying something in which you really need Bayesian stats, and you're not likely to encounter it in the work of your peers, you won't be applying for those Bayesian courses.

I'm not sure of course, but I bet that most universities must be giving graduate and undergrad courses in Bayesian statistics. Mine does. Yours didn't, so our best estimate (MLE) is: 50% of the universities give Bayes classes (N = 2).

:p
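The joke has a straight Bayesian punchline, too: with a uniform Beta(1, 1) prior and 1 Bayes-teaching university out of 2 sampled, the posterior is Beta(2, 2), and the credible interval makes the N = 2 problem explicit. A sketch (assuming scipy is available):

```python
from scipy import stats

k, n = 1, 2  # 1 of 2 universities sampled teaches Bayes
posterior = stats.beta(1 + k, 1 + n - k)  # uniform prior -> Beta(2, 2)

print(f"MLE: {k / n:.2f}")                        # 0.50
print(f"Posterior mean: {posterior.mean():.2f}")  # 0.50
lo, hi = posterior.interval(0.95)
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")  # roughly (0.09, 0.91)
```

Same point estimate, but the interval is honest about how little two observations tell you.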
 

noetsi

Fortran must die
#33
It may well be field specific. I was commenting on social sciences including administration and education. But since most people work in business or the social sciences, if it is field specific then it is likely that Bayesian methods are not taught/used that much. Also, I am sure that people who do very advanced statistics use it quite often - the question is how many people that is. My guess is not many.

It is discouraging to me how rarely I have encountered anyone who did statistics even in 1,000-plus-member organizations (which is fairly large by US standards). At any level - forget more advanced approaches like Bayesian methods. I am the statistical "expert" at my present large organization - which is a sad commentary on the expertise.

I think people who are experts at stats, and work with others equally familiar, don't have a strong sense of the reality in most non-research operations.
 

spunky

Doesn't actually exist
#34
It may well be field specific. I was commenting on social sciences including administration and education.
mine does (and it's a social science program). yours doesn't. as TE says, it depends on what you're into.

but think about this, noetsi. when multivariate methods started being developed (you know, the good ol' MANOVA that we all know and love, Principal Components Analysis, etc.) it took not one, not two, but almost 30 years between when these methods were published and when they started getting taught in graduate programs in Psychology, Sociology and social sciences in general. and they found TREMENDOUS opposition... mostly people arguing that we didn't need any of that and that as long as we had multiple regression we were OK.

fast-forward to now and people are regularly taught this thing. it's so common that even SPSS does it.

Bayesian statistics is probably going through a similar process right now. think about it... if this board had existed 30 yrs ago you'd probably be saying "MANOVA? oh pff... who ever needs that? just do regression!"
 

noetsi

Fortran must die
#35
We will see, spunky. I am profoundly skeptical it will be broadly used outside academics. I think if you were to look in the elite journals in education and possibly even psychology (in methods-based articles, obviously) you would rarely find it. But obviously I could be wrong. I reviewed the vocational rehabilitation literature recently (the last decade or so) utilizing well-known academic search tools for articles, and I had no success even finding time series, let alone Bayesian approaches.

It would be interesting to do a formal review of education and psychology journals that use statistics and find out how often Bayesian approaches are utilized. I would be willing to make a ten-dollar bet that it shows up less than 10 percent of the time. :p
 

Lazar

Phineas Packard
#37
I think it is becoming more common in education and psychology, particularly now that it is integrated into Mplus. Like TR says, with IRT, multiple imputation, etc., Bayes has been around in ed and psych for a while now.
 

spunky

Doesn't actually exist
#38
I think it is becoming more common in education and psychology, particularly now that it is integrated into Mplus. Like TR says, with IRT, multiple imputation, etc., Bayes has been around in ed and psych for a while now.
Mplus (and many other software programs) tends to be secretly dirty and conveniently sets a 'uniform' prior everywhere so that you end up with MLE estimates anyway.

OWN your Bayesianism and choose an informative prior!
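In one conjugate example (a sketch of the general point, not what Mplus actually does internally): with a flat Beta(1, 1) prior the posterior mode is exactly the MLE, while an informative prior pulls the estimate toward what you claim to believe.

```python
k, n = 7, 10  # hypothetical data: 7 successes in 10 trials

def posterior_mode(a, b):
    # Mode of the Beta(a + k, b + n - k) posterior,
    # defined when a + k > 1 and b + n - k > 1.
    return (a + k - 1) / (a + b + n - 2)

print(f"MLE:                     {k / n:.3f}")                  # 0.700
print(f"Flat Beta(1,1) prior:    {posterior_mode(1, 1):.3f}")   # 0.700, same as MLE
print(f"Informative Beta(20,20): {posterior_mode(20, 20):.3f}") # 0.542, shrunk toward 0.5
```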
 

Lazar

Phineas Packard
#39
I have not actually used Mplus for Bayes (I use JAGS or OpenBUGS), but I assume it is not hard to set an informative prior. Mplus has all sorts of stupid defaults (because it assumes you are stupid), and hence it is rarely a good idea to simply accept them.