Bad-fitting models - what to do next?

MrJP

New Member
#1
In my study I was studying attitudes and their influence on each other using existing 7-point Likert scales. The goal was to model these attitudes. I had almost 300 participants, 9 latent variables and about 30 items. I first ran a CFA, which couldn't achieve acceptable fit. I used the chi-square test plus 4 other fit indices.
I made a couple of modifications, such as dropping poorly loading items, combining subscales (because of a non-positive definite matrix), and adding correlations between the error variances of items that belonged to the same subscale. That led to two fit indices indicating good fit (RMSEA and SRMR) and three indicating bad fit (chi-square, NNFI and CFI). As I couldn't find an explanation for the bad fit indices, I decided the model was truly a bad fit. Since all the scales had been validated before, I tried to be conservative in rejecting them.
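
For concreteness, this is roughly how the CFA was set up (a minimal sketch in Python with the semopy package; the factor and item names are illustrative, not my actual scales, and the CSV path is hypothetical):

```python
import pandas as pd
import semopy

# Illustrative CFA specification (semopy uses lavaan-style syntax):
# =~ defines a factor from its items, ~~ frees a (co)variance.
desc = """
Attitude1 =~ a1 + a2 + a3
Attitude2 =~ b1 + b2 + b3 + b4
a1 ~~ a2   # correlated errors for items from the same subscale
"""

data = pd.read_csv("responses.csv")  # one column per item
model = semopy.Model(desc)
model.fit(data)

# Global fit statistics (chi-square, CFI, TLI, RMSEA, ...).
print(semopy.calc_stats(model).T)
```

The real model has all 9 factors and ~30 items; only two factors are shown to keep the sketch short.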

Based on data exploration I suspected that participants in one condition were showing a strong positive rating bias. I tested whether the items might be influenced by the experimental condition: the condition was added as a dummy variable (0/1) predicting a latent variable, Acquiescence, which loaded on all items. This again showed a bad fit.
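
In the same sketch notation, the bias model looked roughly like this (again with illustrative names; fixing the method factor to be orthogonal via `0*` follows lavaan's convention, which I'm assuming semopy also accepts):

```python
# The 0/1 condition dummy predicts a method factor (Acquiescence)
# that loads on every item alongside the content factors.
desc_bias = """
Attitude1    =~ a1 + a2 + a3
Attitude2    =~ b1 + b2 + b3 + b4
Acquiescence =~ a1 + a2 + a3 + b1 + b2 + b3 + b4
Acquiescence ~ condition
Acquiescence ~~ 0*Attitude1   # keep the method factor orthogonal
Acquiescence ~~ 0*Attitude2
"""

model_bias = semopy.Model(desc_bias)
model_bias.fit(data)  # 'data' now also needs the 0/1 'condition' column
print(semopy.calc_stats(model_bias).T)
```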

I am not too familiar with SEM and I can't find guidance on what to do with bad-fitting models. All the books and examples I find focus on reporting well-fitting models. What are you expected to report with a bad fit? I understand the model, the modifications and the fit indices should be reported, but what about the parameter estimates? Are they still of any value if the model fits badly? Is there any use in further modelling if the CFA doesn't fit? From my understanding, a structural model with the same latent variables will always fit worse than the confirmatory model: the confirmatory factor model freely estimates the covariances between all latent variables, while a structural model constrains some of those parameters.
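
To be clear, I can get the estimates out; my question is whether they mean anything under bad fit. For reference, in the semopy sketch above it would be something like this (the std_est flag for standardized estimates is my assumption about the package):

```python
# Unstandardized and (assumed flag) standardized parameter estimates,
# with standard errors and p-values, reported regardless of global fit.
print(model.inspect(std_est=True))
```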

Also, what can I conclude if the model fits badly? Can I say the model is bad (for the situation I tested it in), even if this disagrees with other studies?
 

noetsi

Fortran must die
#2
Normally, if the fit is bad it means you misspecified your model (not that you measured it with an invalid method, but that you are making theoretical assumptions that are not correct). The solution is to think through what could be wrong and specify that. One option in Mplus is to ask the software to suggest paths you have not specified (modification indices). You can then test these new paths; ideally you should be able to explain what the new paths mean and why they are theoretically justified. This won't, however, flag paths you have that you should not.

Who are you reporting the model to? Generally a journal will be leery of publishing something in which you don't discover anything (and discovering that nothing works, even if true, may get rejected for that reason). What you should report is best answered by looking at who you are submitting the information to: if it's a professor or a research group, you can just ask them; if it's a journal, read their past publications. What you report depends on how you are doing SEM and on the substantive issues in your research area.

A 7-point Likert item may show non-normality, which inflates the chi-square test of model fit and distorts standard errors, at least if you are using maximum likelihood, the default in most SEM software. There are alternatives, such as robust weighted least squares and the like, to address this. SEM assumes that the observed variables are interval-scaled, which Likert items are not (although with 7 levels you might be OK).
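
Switching estimators is usually a one-line change. In the semopy sketches above it would be something like this (assuming semopy's fit() accepts a DWLS objective; in lavaan the equivalent is estimator = "WLSMV"):

```python
# Refit with diagonally weighted least squares, which is less sensitive
# to non-normal, ordinal items than the default ML objective.
model = semopy.Model(desc)
model.fit(data, obj="DWLS")
print(semopy.calc_stats(model).T)
```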

Did you have at least 3 observed variables for every one of your latent variables?

If your model fit is bad, then what you can say is that the covariance matrix implied by your theory does not match the matrix in the actual data. What is important is why it does not.
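
One way to see where the mismatch lives is to rank the residual covariances. A rough helper sketch (hypothetical; you would need the model-implied covariance matrix, which most SEM software can output):

```python
import numpy as np
import pandas as pd

def largest_residuals(data: pd.DataFrame, implied: pd.DataFrame, k: int = 10):
    """Rank item pairs by |sample covariance - model-implied covariance|."""
    resid = (data.cov() - implied).abs()
    # Keep each unordered pair once: upper triangle, diagonal excluded.
    upper = np.triu(np.ones(resid.shape, dtype=bool), k=1)
    return resid.where(upper).stack().sort_values(ascending=False).head(k)
```

The item pairs at the top of that list are where your theory's matrix and the data's matrix disagree most.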
 

MrJP

New Member
#3
Thank you for the response. It's going to a professor first, although I think he has limited experience with SEM research, so I am looking for other opinions. I've used modification indices, but when applying only the ones that make theoretical sense I can't achieve good fit. Each latent variable has between 3 and 6 observed variables, and the items generally load well on their factors. I also inspected whether any two latent variables correlated so highly that they should be merged; this was not the case for any pair.
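
(The correlation check amounted to looking at the standardized factor covariances, roughly like this in the semopy sketch from before, with the std_est flag and the column names again being my assumptions about the package:)

```python
# Standardized factor covariances; values near |1| would suggest merging.
factors = ["Attitude1", "Attitude2"]  # the latent variable names
est = model.inspect(std_est=True)
print(est[(est["op"] == "~~")
          & est["lval"].isin(factors)
          & est["rval"].isin(factors)])
```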

From what I've read, ML performs okay for 7-point Likert scales. While WLS is preferred for Likert scales, I read that the recommended sample size is n > 1000, which I don't have. If people have other information on this, please let me know.

I do have some theories that cognitive biases are distorting the data. Most of the paths in my model have been used in other studies, where good fit was found. Based on what you said, I think my focus should be on explaining how these biases interfered here and why they did not in the other studies.
 

noetsi

Fortran must die
#4
I have never read that WLS needs 1000 or more cases. In my SEM classes we ran it with far fewer (around 500, I believe). With a thousand or so cases, inflation of the chi-square of course becomes a major problem.

I am no expert in SEM for sure (I took several classes, but don't use it regularly), so take these comments with a grain of salt. If you have run the MIs that make conceptual sense, then you are left with:

1) The type of measurement issues you noted.

2) Something wrong with the core theory (in honesty, if you can find this, that is what journals would be impressed by). :) This would involve significant changes in the paths, of course.

3) I assume you are using a standard CFA with no correlation between factors, no variables loading on more than one factor, only one layer (level) of factors, and no feedback loops between observed variables. It could be that your model does not actually fit that mold and you need a more complex structure (see the sketch after this list). But going from there to what actually needs to be changed would likely be complex, and of course you need some theoretical basis to do so.

4) Your data itself could be a problem (maybe there are biases in it, for example tied to sampling problems), or it is simply different from past samples in a way that leads to different conclusions. You might want to look at who past studies sampled and how.
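
As for point 3, relaxing those assumptions is mostly a matter of the specification syntax (illustrative names, lavaan/semopy-style):

```python
# Two common relaxations of a standard CFA: a cross-loading item and
# a second-order (higher-layer) factor above the first-order factors.
desc_complex = """
Attitude1 =~ a1 + a2 + a3 + b1   # b1 now loads on two factors
Attitude2 =~ b1 + b2 + b3 + b4
General   =~ Attitude1 + Attitude2   # second layer of factors
"""
```

Whether any of this is defensible depends on the theory, not the syntax.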

A lot easier to make suggestions than to actually do any of this.