Multi-Level Model

hlsmith

Omega Contributor
#1
Why, in multi-level models, can you get away with fewer observations per predictor?

I know I have heard and seen this before, but I am not completely clear on why, and I am unsure what a general rule might be for the number of predictors you can have.

http://support.sas.com/documentatio...TML/default/viewer.htm#statug_glm_sect054.htm

Here is a link to a linear mixed model that had 44 observations and 2 categorical predictors (one with 3 levels, the other with 6 levels) as well as their interaction term. If we think about this as a standard linear model, dummy coding would give you 7 main-effect terms plus the interaction terms, which works out to around 5 observations per term?
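For concreteness, here is a rough sketch of that term-counting arithmetic using patsy; the column names "machine" and "person" echo the SAS example, but the 44-row layout is just an assumption I made up for illustration:

```python
# Rough sketch of the term-counting arithmetic; "machine" and "person" echo the
# SAS example, but the 44-row layout below is invented for illustration.
import itertools
import pandas as pd
from patsy import dmatrix

cells = list(itertools.product(["A", "B", "C"], ["1", "2", "3", "4", "5", "6"]))  # 3 x 6 = 18 cells
df = pd.DataFrame((cells * 3)[:44], columns=["machine", "person"])                # 44 observations

X_main = dmatrix("C(machine) + C(person)", df)   # intercept + 2 + 5 dummy columns = 8
X_full = dmatrix("C(machine) * C(person)", df)   # adds the 2 * 5 = 10 interaction columns = 18

print(X_main.shape[1], 44 / X_main.shape[1])     # 8 terms  -> about 5.5 observations per term
print(X_full.shape[1], 44 / X_full.shape[1])     # 18 terms -> about 2.4 with the interaction
```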

Thanks in advance for some input!
 

spunky

Smelly poop man with doo doo pants.
#2
but.... you still only have two predictors: machine and person, each with multiple levels. i have trouble thinking that the level of a predictor and the predictor itself would be the same thing.

perhaps you're getting confused and mean that multilevel models (linear mixed models) can handle unbalanced designs more easily? and that you don't have to worry about which sums-of-squares decomposition is most appropriate given the design?
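for what it's worth, here's a small sketch with made-up unbalanced data (the column names just echo the SAS example) showing how the sums-of-squares choice changes the classical ANOVA table:

```python
# Made-up unbalanced data; "machine" and "person" just echo the SAS example.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 44
df = pd.DataFrame({
    "machine": rng.choice(["A", "B", "C"], size=n, p=[0.5, 0.3, 0.2]),  # deliberately unbalanced
    "person": rng.choice(list("123456"), size=n),
})
df["y"] = rng.normal(size=n) + (df["machine"] == "A") * 0.5

fit = smf.ols("y ~ C(machine) + C(person)", data=df).fit()

# Sequential (Type I) and partial (Type II) sums of squares generally disagree
# when the design is unbalanced; in a balanced design they coincide.
print(sm.stats.anova_lm(fit, typ=1))
print(sm.stats.anova_lm(fit, typ=2))
```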
 

Jake

Cookie Scientist
#3
What is the problem exactly? This model would also work fine as a classical linear model. Is there some rule you think is being broken?
 

hlsmith

Omega Contributor
#4
Thanks for the replies,

My scenario is a mixed linear model:

Doesn't the use of categorical variables with more than two groups (k > 2) suck up more degrees of freedom, leaving you with fewer people in each unique subgroup? People throw out ~10 people per variable as a rule of thumb; would the use of two categorical variables (k = 3 and k = 6) count as 7 variables (dummy-coded terms), so that some people would like to see at least 70 observations in the model, whereas the person above has 44 observations? And this is ignoring the interaction term, which would also suck up degrees of freedom.

Any corrections on the above?

Also, I was thinking that the splitting of random and fixed effects allowed for fewer observations per variable in the mixed model?
 

Jake

Cookie Scientist
#5
"10 people for each variable" is a rule of thumb of unknown origin that should not be taken seriously.
 

hlsmith

Omega Contributor
#6
I have seen a logistic regression paper reporting issues with too few observations. How, then, would I monitor for overparameterization in my linear mixed model?
 

Jake

Cookie Scientist
#7
For a variety of reasons it is better to have many observations rather than few, and this is especially true when the model contains many predictors. But nothing special happens at 10 observations per predictor.

Overparameterization problems in mixed models will usually manifest either as convergence errors or as weird parameter estimates for the random effects (e.g., perfect correlations between random effect terms, random effects with 0 variance, etc.)
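Here is a minimal sketch with simulated data, using statsmodels' MixedLM as one possible tool (the variable names and numbers are made up), that fits a random-intercept-and-slope model and then checks for those two symptoms:

```python
# Simulated data: 6 groups, 8 observations each; names and numbers are made up.
import warnings
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_groups, n_per = 6, 8
df = pd.DataFrame({
    "g": np.repeat(np.arange(n_groups), n_per),
    "x": rng.normal(size=n_groups * n_per),
})
df["y"] = (1.0 + 0.5 * df["x"]
           + np.repeat(rng.normal(scale=0.8, size=n_groups), n_per)  # group-level intercept shifts
           + rng.normal(size=len(df)))

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # random intercept and random slope for x within each group
    res = smf.mixedlm("y ~ x", data=df, groups=df["g"], re_formula="~x").fit()

# 1) convergence problems surface as warnings during fitting
print([str(w.message) for w in caught])

# 2) degenerate random-effect estimates: variances near 0 or correlations near +/-1
cov_re = res.cov_re                      # estimated random-effects covariance matrix
sd = np.sqrt(np.diag(cov_re))
print(cov_re)
print("random-effect SDs:", sd)
if sd.min() > 0:
    print("intercept-slope correlation:", cov_re.iloc[0, 1] / (sd[0] * sd[1]))
```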
 

hlsmith

Omega Contributor
#8
But aren't the denominator degrees of freedom divided up differently across the tests, perhaps allowing for fewer observations in multi-level models?
 

hlsmith

Omega Contributor
#10
Side note: I heard someone say a rule of thumb that the number of groups (I believe 2nd-level groups, a.k.a. clusters) should be at least 20 to adequately handle an interaction term in a multi-level model (no citation, but I may have seen this a couple of times). I know you would like more groups for testing hypotheses at the group level, since the groups are essentially treated as the observations in those tests.

I am not getting hung up on cut-off numbers, but just sharing!