What variables exist, or could exist, that cannot be conceptually nested in some other variable?

- Thread starter trinker
- Start date
- Tags hlm multilevel model


noetsi said:

One of the problems I have working with HLM is its proponents' argument that when data is nested, OLS will inherently have unequal error variance (heteroscedasticity) and observations that are not independent. Since all data is inherently nested in something, all OLS would be badly harmed by these conditions. And clearly that is not the case.

HLM, like many recent discoveries, has a habit of overselling itself and degrading other methods that in fact work fine.


- Is there likely to be nonindependence of errors in this dataset even though schools were randomly selected from the database?
- If there is autocorrelation, are the estimators biased? I've seen people debate this both ways.
- Is autocorrelation one of the signs of structure?
- Is Durbin-Watson an appropriate technique to find autocorrelation?
- Is it shoddy statistics to print these results, knowing the data is structured, even if the test indicates otherwise and the possible problem is mentioned in the limitations?
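On the Durbin-Watson question above: the statistic itself is simple enough to compute directly from its definition, which makes it easy to see what it does and does not detect. A minimal sketch in plain Python (no real fitted model — the residual vectors here are made up to illustrate the extremes):

```python
# Durbin-Watson statistic, computed from its definition:
#   d = sum_{t=2..n} (e_t - e_{t-1})^2 / sum_{t=1..n} e_t^2
# d near 2 suggests no first-order autocorrelation; d well below 2
# suggests positive autocorrelation (roughly d ~ 2 * (1 - rho)).

def durbin_watson(residuals):
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

# Identical residuals: extreme positive first-order autocorrelation.
print(durbin_watson([1, 1, 1, 1]))        # 0.0

# Alternating residuals: strong negative autocorrelation, d well above 2.
print(durbin_watson([1, -1, 1, -1, 1, -1]))
```

Note it only looks at adjacent residuals, so it tests lag-1 serial correlation, not clustering of observations within groups — which is part of why the relationship to the intraclass correlation discussed later in the thread is not obvious.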

I think the answers I get may not be what this raptor wanted to hear.

I recently ordered and just received Zuur et al. 2009, "Mixed effects models and extensions in ecology with R," but I haven't started digging into it yet, so I can't really say much else about it except to point out its existence.

To return to a more practical element: I am working through a homework problem that asks what the confidence interval around the grand mean in a level-2 equation tells you about the existence of group differences (that is, do they exist or not). The grand mean tells you about the intercept in the level-1 equation, but barring centering, does it really tell you whether the groups vary? I don't see how. It is the random term U in the level-2 equation, not the grand mean, that tells you how groups vary.

Even more confusing to me is the difference (apart from how they are calculated) between a CI and a plausible value range.
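The CI-versus-plausible-value-range distinction can be made concrete with arithmetic. Assuming the Raudenbush & Bryk usage of the terms, the CI for the grand mean is built from the standard error of the grand-mean estimate, while the plausible value range for the group intercepts is built from the between-group standard deviation (the square root of tau00). The numbers below are hypothetical, not from any real fit:

```python
# Illustrative (hypothetical) estimates from a random-intercept model:
gamma00 = 50.0      # estimated grand mean
se_gamma00 = 0.5    # standard error of the grand-mean estimate
tau00 = 9.0         # estimated between-group variance of the intercepts

# 95% CI for the grand mean: uncertainty about the *average* intercept.
ci = (gamma00 - 1.96 * se_gamma00, gamma00 + 1.96 * se_gamma00)

# 95% plausible value range: where the *group* intercepts themselves lie.
pvr = (gamma00 - 1.96 * tau00 ** 0.5, gamma00 + 1.96 * tau00 ** 0.5)

print(tuple(round(x, 2) for x in ci))   # (49.02, 50.98) -- narrow
print(tuple(round(x, 2) for x in pvr))  # (44.12, 55.88) -- wide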

I think that may be all true (I can't imagine what a handsome raptor looks like), but it does not really address my point. If HLM is correct in its assumptions, when can you **ever** validly run OLS? My answer would be: in practice, never. Because all data is ultimately nested, all variables are ultimately nested.

If you have observations from a whole bunch of schools but only have one response from each school then you don't need to care that the observations are nested within school because you only have one. So it seems like some of your argument is just a strawman.

I think HLM should be used when it's appropriate, but I don't agree that it's always needed. Maybe it's appropriate for

Also note that even if we do have nesting we only really care if we have multiple responses from each group in the nest.
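The practical check behind "do we really care about the nesting" is the intraclass correlation: the share of total variance that sits between groups rather than within them. A minimal sketch, assuming the usual variance-components definition (tau00 for the between-group intercept variance, sigma-squared for the residual variance):

```python
def icc(tau00, sigma2):
    """Intraclass correlation: fraction of total variance between groups.
    tau00  = between-group (intercept) variance
    sigma2 = within-group (residual) variance
    """
    return tau00 / (tau00 + sigma2)

print(icc(0.0, 4.0))  # 0.0  -> no group effect; plain OLS is fine
print(icc(1.0, 3.0))  # 0.25 -> 25% of variance is between groups
```

With one response per group there is nothing to separate tau00 from sigma2, which is the point above: nesting only matters when groups contribute multiple observations.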

If you have observations from a whole bunch of schools but only have one response from each school then you don't need to care that the observations are nested within school because you only have one.


If HLM is correct in its assumptions, when can you **ever **validly run OLS.

... which brings me to the point of our handsome raptor. i haven't yet read/seen whether there is any relationship between the Durbin-Watson test and the statistical significance of the intra-class correlation... there probably is but, if not, i think i'll take this up as a fun project to present at the next departmental colloquium we are having. i think our handsome raptor is being very professional in acknowledging that regression has assumptions and that such assumptions should be tested. but assuming further that this particular raptor is proficient in R, i think it would probably take him around 27 seconds to run this as a quick mixed-effects regression with a random effect for the intercept and see whether there is in fact no "school effect". my intuition leans towards saying there shouldn't be one, given the evidence from the Durbin-Watson statistic, but then again that only tests for one lag of autocorrelation, and i'm not sure how autocorrelation lags translate into variance components in terms of testing for statistical significance.

ps- trinker, i'm on my school laptop right now so i probably won't be able to send out stuff until i get back home, where i keep all my files on the big computer.

The grand mean tells you about the intercept in the level-1 equation, but barring centering, does it really tell you whether the groups vary? I don't see how. It is the random term U in the level-2 equation, not the grand mean, that tells you how groups vary.

dependent variable = fixed effects + random effects + error
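That one-line decomposition can be made concrete by simulating data from a random-intercept model. A minimal sketch with hypothetical parameter values: every student in a school shares that school's random draw, which is exactly the within-group dependence the thread is arguing about.

```python
import random

random.seed(1)

# y_ij = gamma00 (fixed) + u_j (random school effect) + e_ij (error)
gamma00 = 50.0   # fixed effect: grand mean (hypothetical)
tau = 3.0        # SD of the random school intercepts (hypothetical)
sigma = 1.0      # SD of the student-level errors (hypothetical)

n_schools, n_students = 5, 4
data = []
for j in range(n_schools):
    u_j = random.gauss(0, tau)          # one random effect per school
    for i in range(n_students):
        e_ij = random.gauss(0, sigma)   # one error per observation
        data.append((j, gamma00 + u_j + e_ij))

# Students in the same school share u_j, so their deviations from the
# grand mean are correlated -- the nonindependence HLM worries about.
for j, y in data[:4]:
    print(j, round(y, 2))
```

With tau set to 0 the school effect vanishes and the model collapses to ordinary OLS with independent errors, which is the boundary case the whole thread keeps circling.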

But they are trying to sell a method of course.

HLM is the structural equation modeling of the 21st century. when LISREL came out it was all about fitting SEM models to everything even if it didn't make sense... then the HLM software became available and **bam**, now everyone wants to jump on that bandwagon because it's pretty hot... i've always said that the next new hot thing in the social sciences is going to be Bayesian Networks, so i'm starting to look into those so i can be

Yeah, I usually don't know what noetsi is talking about when they talk about HLM stuff, because I definitely don't use that notation/terminology.

for instance, on that question noetsi asked you about "the grand mean" and "level 2 predictors" with a whole bunch of gammas, etc., all he was asking was for a reason why the intercept of the fixed effects changes if you add predictors with random effects... which is something i believe is more understandable to you... am i right?

I don't see how the CI of the grand mean (gamma00 in R&B) tells you anything about whether groups vary. What tells you if they vary is whether U0j is significant. Is there any way that the CI of the grand mean tells you if group differences are significant?

To use the notation brought up above...
