# Multilevel analysis - SAS

#### noetsi

##### Fortran must die
Is there a SAS PROC called MMIX? I am not sure if this is a spelling mistake or what.

> By using the PLOTS option in the SAS PROC MMIXED procedure, several diagnostic plots will be produced. It is important to look at these diagnostic plots to ensure that the model is a good fit for the data.
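For reference, the PLOTS= option belongs to PROC MIXED and requires ODS Graphics to be enabled. A minimal sketch, with dataset and variable names assumed for illustration:

```
/* hypothetical sketch: requesting residual diagnostic plots from PROC MIXED
   (work.mydata, dv, x, and group are placeholder names, not from this thread) */
ods graphics on;
proc mixed data=work.mydata plots=(residualpanel);
   class group;
   model dv = x / solution;
   random intercept / subject=group;
run;
ods graphics off;
```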

#### hlsmith

##### Not a robit
They probably just meant "PROC MIXED". They have a continuous DV, right?

#### noetsi

##### Fortran must die
An assumption that is usually made in ML regression is that the variance of the residual errors is the same in all groups. This can be assessed by computing a one-way analysis of variance of the groups on the absolute values of the residuals, which is the equivalent of the Levene test.

Hlsmith, do you have any idea how that is done in SAS? None of the documentation I have seen even mentions it.
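One way this might be sketched in SAS (dataset and variable names are assumptions for illustration): write the residuals out with OUTP= on the MODEL statement, take absolute values in a DATA step, then run a one-way ANOVA of the groups on them.

```
/* hypothetical sketch: Levene-type test of equal residual variance across groups */
proc mixed data=work.mydata;
   class group;
   model dv = x / solution outp=resids;   /* outp= writes conditional residuals (Resid) */
   random intercept / subject=group;
run;

data absresids;
   set resids;
   absresid = abs(resid);                 /* absolute value of each residual */
run;

proc glm data=absresids;                  /* one-way ANOVA of groups on |residuals| */
   class group;
   model absresid = group;
run;
```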

#### noetsi

##### Fortran must die
Is there any way in PROC MIXED to get the regression equation for each group (that is, show the regression for group 1, then group 2, etc.)?
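One common approach, sketched here with assumed names: request SOLUTION on the RANDOM statement to get the per-group EBLUPs, then add each group's deviations to the fixed estimates to form that group's equation.

```
/* hypothetical sketch: group-specific intercepts and slopes */
proc mixed data=work.mydata;
   class group;
   model dv = x / solution;                     /* fixed intercept and slope */
   random intercept x / subject=group solution; /* per-group deviations (EBLUPs) */
   ods output SolutionF=fixed SolutionR=random; /* group i's equation =
                                                   fixed estimate + group i's EBLUP */
run;
```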

#### noetsi

##### Fortran must die
I am running my first real model using multilevel analysis and I had a question. As a first step I ran the following code to generate the ICC (unitid_pri is our grouping variable, DV is our interval-level dependent variable).

Is this the correct way to generate the empty model to calculate that?

```
proc mixed data=work.test2 covtest noclprint;
   class unitid_pri;
   model DV = / solution;
   random intercept / subject=unitid_pri;
run;
```

I calculated the ICC as the intercept variance estimate / (intercept variance estimate + residual variance estimate), as I found online. Is this correct?

My ICC is about 3.8 percent. Is that enough to see multilevel models as useful, or does it suggest that group really has a limited effect?
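For what it's worth, that computation can be pulled straight from the CovParms table that COVTEST produces; a sketch (the ODS table and column names are as PROC MIXED emits them, everything else as in the code above):

```
/* hypothetical sketch: ICC = tau00 / (tau00 + sigma^2) from the CovParms table */
ods output CovParms=cp;
proc mixed data=work.test2 covtest noclprint;
   class unitid_pri;
   model DV = / solution;
   random intercept / subject=unitid_pri;
run;

data icc;
   set cp end=last;
   retain tau00 sigma2;
   if CovParm = 'Intercept' then tau00  = Estimate;  /* between-group variance */
   if CovParm = 'Residual'  then sigma2 = Estimate;  /* within-group variance  */
   if last then do;
      icc = tau00 / (tau00 + sigma2);
      output;
   end;
run;
```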

#### hlsmith

##### Not a robit
noetsi,

I believe further down in the SAS output there should be:

"Null Model Likelihood Ratio Test"

The related chi-square test is for H0: groups do not predict. So if that test is significant, it provides support for using MLM.

#### noetsi

##### Fortran must die
I understand that, but the literature makes the point that substantively, as opposed to the statistical test, there is a point at which the ICC is so low it's not worth considering the impact of groups. That is really what I was asking about. How low can it go, that is, how high does the ICC have to be, to consider groups important and thus need multilevel models?

#### hlsmith

##### Not a robit
As a simplistic hypothetical, think about ICC as a partial R^2 contribution. It is contextual; some may love 3%, others may be used to double digits. Does accounting for 4% seem worthwhile given your context? What gets biased by ignoring the group level besides precision?

#### noetsi

##### Fortran must die
Which is pretty much what I thought. I am curious, since I do not read the academic literature on this enough: what ICCs are commonly reported in multilevel studies?

To me a 4 percent contribution seems minor.

#### hlsmith

##### Not a robit
In medicine I can't say they typically report the value. It depends on what you are clustering on, intra-cluster variability, and probably the number of values per cluster. Jake's link in the chatbox, the box plot, visualizes this.

PS, I typically see pretty small values, under ten I would say.

#### noetsi

##### Fortran must die
The Wald test for random effects, the p-value that tells you if a random effect is significant, is commonly seen as wrong. It is recommended that instead you add one random effect, look at the change in the model deviance, and do a chi-square test on the difference (deviance of model 1 minus deviance of model 2) with a df of one.

Does anyone know how to do this in SAS?

This is one discussion of the approach

> Consider another example in which a model with a random effect for the slope (i.e., the slope is allowed to vary) is compared to a model without the random effect for the slope (i.e., the variance of the slope is constrained). This example would appear to be testing a single parameter, but, in fact, the two models differ by two parameters. The first model will include an estimate of the slope variance, τ₁², but also an estimate of the covariance between the slope and the intercept, τ₁₀, by default. The covariance cannot be estimated when the slope is constrained to be non-varying, however. One would ordinarily expect that the difference between the two models would be compared to the chi-square distribution with df = 2, because two parameters differed between the models being compared. But because variance tests should use a one-tailed test and covariance tests are two-tailed tests, a more complicated significance criterion is needed. Snijders and Bosker (2012, p. 99) recommend using a "mixture distribution" (or "chi-bar distribution") by comparing the chi-square difference obtained from subtracting D0 − D1 to a combination of two critical values. For α = .05, the critical values are: one slope χ²mix = 5.14, two slopes χ²mix = 7.05, and three slopes χ²mix = 8.76.

http://web.pdx.edu/~newsomj/mlrclass/ho_significance.pdf

The way I interpret this: if you were testing one random effect, you would first run the model with the fixed but not random effect for the slope (that is, with the slope fixed). Then run it with both the random and fixed effect for that predictor and see how the deviance differed. Then, because the two models differ by two parameters, you would compare that deviance difference to the mixture critical value rather than the usual chi-square value: if the result was greater than 5.14, you would conclude the random effect was significant at the .05 level.

Or are you comparing the empty model to the model with a random and fixed effect for that predictor specified, and using those two models to get the difference in deviance (everything else being the same, I assume)?
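A sketch of how the two-model deviance comparison might be set up in SAS (dataset and variable names are assumptions; since only the random part differs here, the default REML fit is usable for the comparison):

```
/* hypothetical sketch: LR deviance test for adding a random slope */
ods output FitStatistics=fit0;
proc mixed data=work.mydata;          /* model 0: random intercept, fixed slope */
   class group;
   model dv = x / solution;
   random intercept / subject=group;
run;

ods output FitStatistics=fit1;
proc mixed data=work.mydata;          /* model 1: random slope added */
   class group;
   model dv = x / solution;
   random intercept x / subject=group type=un;
run;

data lrt;                             /* deviance difference, to be compared   */
   merge fit0(rename=(Value=dev0))    /* with the mixture critical value 5.14  */
         fit1(rename=(Value=dev1));
   where Descr = '-2 Res Log Likelihood';
   diff = dev0 - dev1;
run;
```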


#### noetsi

##### Fortran must die
You use the empty model to estimate the intraclass correlation. Is this the correct SAS model to do that? I am not sure whether you specify the intercept as random or not.

```
proc mixed data=work.test4 covtest noclprint;
   class unitid_pri;
   model weeklyearnings_clo = / solution;
   random intercept / subject=unitid_pri;
run;
```

While I am at it, the link below has a macro that performs the LR deviance test with mixture p-values (both strongly recommended, especially with random effects). One thing that is unclear to me is whether you always have to run

```
%include '\\cdc\private\mixture method pvalue macro1.sas';
```

or whether this is unique to the data the author is using. He never mentions this macro at all.


#### noetsi

##### Fortran must die
I am running the following code in SAS:

```
proc mixed data=work.test4 method=ml covtest empirical noclprint;
   class unitid_pri female;
   model dv = female / ddfm=contain s;
   random intercept / subject=unitid_pri type=ar(1) s;
   parms / ols;
   ods output FitStatistics=fm1 SolutionF=SFfm1;
run;
```

Regardless of which fixed effect I test (there is only one variable in the model), I always get that the Hessian matrix is not positive definite (the model converges, but this issue remains).

I have about 77 groups and 6,000 cases. The parms / ols; statement sets starting values, which is one recommended way to deal with a non-positive-definite Hessian matrix (but it made no difference).

#### hlsmith

##### Not a robit
I have gotten that error before. Can you post a screenshot of the log so I can put it into context?

So if I have it right, this is female status predicting the DV, with either repeated measures or people clustered in groups, and the groups have random intercepts but fixed slopes for gender.

#### hlsmith

##### Not a robit
Does every group have both males and females? Is there any sparsity (low counts)?

#### noetsi

##### Fortran must die

NOTE: Convergence criteria met but final Hessian is not positive definite.

It's female status predicting the DV (it's looking at groups, not repeated measures). The intercept is random; the only fixed effect is female.

According to some authors the results are invalid if you get this comment.

#### noetsi

##### Fortran must die
> Does every group have males or female? Is there any sparsity (low counts)?

Every cell had at least 1 female. There were a very few cells that had low counts (one had 3, but the next lowest was 25 of which 7 were women).

#### hlsmith

##### Not a robit
I would try a different covariance structure to see if you can resolve the warning. Also, keep an eye on the AIC in each of these models. TYPE=UN may help, or TYPE=CS.
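A sketch of what swapping the structure might look like, carrying over the names from the earlier post (note that with only a random intercept the G-side structure has little room to differ, so the choice matters most once a random slope is added):

```
/* hypothetical sketch: trying an alternative covariance structure */
proc mixed data=work.test4 method=ml covtest noclprint;
   class unitid_pri female;
   model dv = female / ddfm=contain s;
   random intercept / subject=unitid_pri type=un s;  /* try un; rerun with type=cs */
run;
```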

#### hlsmith

##### Not a robit
If that doesn't work, read the following, if you haven't already. I remember coming across this back in the day, in particular the example about variability across classrooms and using the repeated instead of the random option. Though I would first mess with the variance/covariance structure. Even though Jake said he would probably not mess with the convergence criteria, I would perhaps tweak them a little to let the model run a little longer.

Lastly, per the classroom example, you may need to accept that there isn't much variation explained by the group variable that isn't picked up in a simple model. Also, what is your group variable? If it is a geographic location, locations may be getting separated but are tangential and actually more similar than believed.

http://www.theanalysisfactor.com/wacky-hessian-matrix/


#### noetsi

##### Fortran must die
> I would try a different covariance structure to see if you can resolve the warning. Also, keep an eye on the AIC in each of these models. TYPE=UN may help, or TYPE=CS.

My concern is that what I used is part of a macro needed to run the deviance test. I do not know how changing this would affect that macro.