AMOS measurement model - degrees of freedom issue

#1
Hi there,

I am working on an SEM representing the relationships among three latent variables. The first, "Stres" [EN: Stress], has four indicators; the second, "CAT" [a conceptual cognitive test], has three indicators; but the third latent factor, "Fluencia" [EN: Verbal Fluency], has only two indicators. Because the "Fluency" factor has only two manifest indicators in the measurement model, it does not have enough degrees of freedom on its own.
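For reference, this is roughly what the structure looks like written out in lavaan-style syntax (I build the model graphically in AMOS, so the indicator names below are only placeholders for my actual variables); a minimal Python sketch using the semopy package:

```python
import pandas as pd
import semopy

# Placeholder indicator names standing in for my actual AMOS variables.
desc = """
# measurement model
Stres    =~ s1 + s2 + s3 + s4
CAT      =~ c1 + c2 + c3
Fluencia =~ z1 + t1

# structural part (illustrative only - the actual paths are those in the diagram)
CAT ~ Fluencia
"""

data = pd.read_csv("fluency_study.csv")  # hypothetical file with one column per indicator
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())                   # loadings, regressions, (error) variances
```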

How can I correctly impose constraints to deal with this issue?

PS: I set both loadings to "1" because past research found roughly equal loadings for the two variables - did I save one degree of freedom by doing that? One more thing: if I replace the latent "Fluency" with a summative score (the mere sum of #Z and #T), so that there is no latent variable but a manifest composite, the regression weight on "CAT" is almost exactly the same.

EDIT: the model fit is good: χ² = 0.2; CFI = 0.95; RMSEA = 0.069 (PCLOSE > 0.3)

Thank you very much,
best regards,

M.

Image attached - please see the part in the red rectangle.
 

Lazar

Phineas Packard
#3
You will have to give a bit more info here. The model is identified globally regardless of what you do with the two-item factor (I count 45 - 18 or 45 - 17 df for the total model). Obviously, as a two-item congeneric model on its own it is not identified (< 0 df), but in this context it is fine. See http://davidakenny.net/cm/identify.htm
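In case the counting is not obvious: with p observed indicators the sample covariance matrix supplies p(p+1)/2 unique pieces of information, so with your 4 + 3 + 2 = 9 indicators (counting covariances only, not means)

\[
\frac{p(p+1)}{2} = \frac{9 \times 10}{2} = 45,
\qquad
\mathit{df} = 45 - q,
\]

where q is the number of freely estimated parameters (17 or 18 by my count, hence 28 or 27 df).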

It could be empirically underidentified, but just run the model and see whether the results are sensible or you get a funky error message; in that case, just bring the error message back here. In any case, I gave a recent talk on SEM that might be useful (at least the link at the end: https://dl.dropboxusercontent.com/u/17190019/SEMtalk.pdf).
 
#4
Thanks for your answer,

yes, there are about 26 df globally, no error message, and not a single regression was non-significant.
The only small thing is the marginal (p = 0.051) (in)significance of one error variance (e9).

So the question is whether I can include the latent "Fluency" factor even though it has only two indicators (and is thus not locally identified) or not.
And secondly, should I constrain anything (I constrained both loadings to 1, assuming equality), or is that not necessary (/proper)?
(It has almost no impact on the fit statistics or estimates, but I want to be as correct as possible.)

As for the model, I checked
1) Overall fit -> good
2) Estimates -> good
3) Multivariate normality -> below 1.5 for total
4) Outliers -> ok
5) I compared "latent fluency" with "raw-sum fluency" (as manifest) -> comparable results (see the sketch after this list)
6) I compared "latent CAT" with "raw CAT" -> very comparable results (the regression weights changed by about 0.01 to 0.07)
7) I compared "latent %STRESS" with "EFA composite Stress" -> again, it was very similar
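The comparison in 5) was basically this (a sketch in Python/semopy with placeholder column names, not my actual AMOS setup):

```python
import pandas as pd
import semopy

data = pd.read_csv("fluency_study.csv")         # hypothetical file, placeholder names
data["fluency_sum"] = data["z1"] + data["t1"]   # raw-sum Fluency composite

desc_composite = """
# same measurement model for the other two factors
Stres =~ s1 + s2 + s3 + s4
CAT   =~ c1 + c2 + c3

# Fluency entered as a manifest sum score instead of a latent factor
CAT ~ fluency_sum
"""

m = semopy.Model(desc_composite)
m.fit(data)
print(m.inspect())   # compare this regression weight with the latent-Fluency version
```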

...perhaps there is something else I could do?

Anyway, thank you for the advice and the texts; I am just about to read them,
M.


EDIT: perhaps a very important thing - I have only 65 observed cases to build the model on (it was very expensive in time and money to obtain a single observation).
 

Lazar

Phineas Packard
#5
A two-item latent factor is only unidentified if it is modelled in isolation (3 elements in the covariance matrix with 4 estimated parameters gives -1 df). In the context of a larger model, however, it is fine (see the first link I sent you). You can constrain the two items to be tau-equivalent if you want, but it is not absolutely needed. Just run the model you are interested in, check convergence, etc.
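Spelled out for the isolated two-item factor (with one loading fixed to 1 for scaling):

\[
\underbrace{\frac{2(2+1)}{2}}_{\text{unique (co)variances}} = 3,
\qquad
\underbrace{1 \text{ free loading} + 2 \text{ error variances} + 1 \text{ factor variance}}_{\text{free parameters}} = 4,
\qquad
\mathit{df} = 3 - 4 = -1 .
\]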
 

spunky

Can't make spagetti
#6
M. said:
And secondly, should I constrain anything (I constrained both loadings to 1, assuming equality), or is that not necessary (/proper)?
constraining both loadings to be one makes the Fluency factor become a weighted average of its two indicators. so you're basically just treating it as an observed variable.

but i'm with Lazar on this one. the W7 path that connects it to the rest of the structural model should help with its identification. or you can constrain them to have equal loadings. or constrain them to their reliabilities. there are many things you can do with those loadings, and each one answers a different question. so, overall, it really boils down to how you conceptualize the Fluency factor as being measured by your indicators. remember, design should always guide analysis and not vice versa. whatever your design is should tell you what to do with those loadings.
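to make the options concrete, here is roughly how the different choices for the two Fluency loadings would look in lavaan-style syntax (shown via Python/semopy purely for illustration; z1/t1 and the fixed numbers are placeholders, not your values):

```python
import semopy

# alternative ways of writing the Fluency measurement line; drop one into the
# full model description and keep everything else the same.
fluency_free    = "Fluencia =~ z1 + t1"             # default scaling: first loading fixed to 1, second free
fluency_tau     = "Fluencia =~ 1*z1 + 1*t1"         # both fixed to 1 (what you did) - acts like a composite
fluency_apriori = "Fluencia =~ 0.80*z1 + 0.75*t1"   # fixed to values taken from earlier research

# equal-but-freely-estimated loadings would need a label constraint
# (lavaan writes this as "Fluencia =~ a*z1 + a*t1"); check whether your
# software supports that kind of constraint.

desc = f"""
Stres =~ s1 + s2 + s3 + s4
CAT   =~ c1 + c2 + c3
{fluency_free}
CAT ~ Fluencia
"""
model = semopy.Model(desc)
# model.fit(data); print(model.inspect())  # refit with each variant and compare
```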
 
#7
Lazar said:
A two-item latent factor is only unidentified if it is modelled in isolation (3 elements in the covariance matrix with 4 estimated parameters gives -1 df). In the context of a larger model, however, it is fine (see the first link I sent you). You can constrain the two items to be tau-equivalent if you want, but it is not absolutely needed. Just run the model you are interested in, check convergence, etc.

spunky said:
Constraining both loadings to be one makes the Fluency factor become a weighted average of its two indicators. so you're basically just treating it as an observed variable. But i'm with Lazar on this one. the W7 path that connects it to the rest of the structural model should help with its identification. or you can constrain them to have equal loadings. or constrain them to their reliabilities. there are many things you can do with those loadings, and each one answers a different question. so, overall, it really boils down to how you conceptualize the Fluency factor as being measured by your indicators. remember, design should always guide analysis and not vice versa. whatever your design is should tell you what to do with those loadings.
Thank you both for the help.
As for design guidance, I had not initially assumed the same loadings for the two indicators; this "move" was made only to deal with the identification issue (and to check the difference, in order to examine the underlying assumption).
But I was not sure, so thank you; I will let the one loading be estimated freely (thanks to the W7 connection).

And for the record, I know (a priori, from previous research) that the reliability should be about 0.75 to 0.8 (in Cronbach's alpha terms; unidimensionality tested positive), and I have the indicators' loadings on a general factor from previous research. In this case, can I constrain the loadings with reference to this past research (i.e., fix them to the a priori estimated loadings)? (I ask just to make sure I understand the principle of constraining loadings/error variances/factor variances.)

Again, thank you for your constructive and educational posts,
M.
 

Lazar

Phineas Packard
#8
I would not bother. I do not see why you would fix the loadings to reliabilities from previous research when you can just estimate them from the data.