In my study I examined attitudes and their influence on each other, using existing 7-point Likert scales. The goal was to model these attitudes. I had almost 300 participants, 9 latent variables, and about 30 items. I first ran a CFA, which could not achieve acceptable fit. I used the chi-square test plus four other fit indices.
I made a couple of modifications: dropping poorly loading items, combining subscales (because of a non-positive definite matrix), and adding correlations between the error variances of items that belonged to the same subscale. That led to two fit indices indicating good fit (RMSEA and SRMR) and three indicating bad fit (chi-square, NNFI and CFI). As I couldn't find an explanation for the bad fit indices, I decided the model was truly a bad fit. Since all scales had been validated before, I tried to be conservative in rejecting them.
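For reference, the indices that disagreed can all be computed from the model and baseline chi-square statistics, which is part of why they can point in different directions. A minimal Python sketch using the standard textbook formulas (the chi-square values below are placeholders, not my actual results):

```python
# Fit indices computed from model and baseline (null) chi-square statistics.
# Formulas are the standard definitions; numbers below are placeholders.
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation."""
    return math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative Fit Index (model vs. baseline model)."""
    d_m = max(chi2_m - df_m, 0)
    d_b = max(chi2_b - df_b, d_m)
    return 1 - d_m / d_b if d_b > 0 else 1.0

def nnfi(chi2_m, df_m, chi2_b, df_b):
    """Non-Normed Fit Index (Tucker-Lewis Index)."""
    return ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1)

# Placeholder values where RMSEA looks acceptable but CFI/NNFI do not:
print(round(rmsea(800, 390, 300), 3))        # 0.059 (<= .06, "good")
print(round(cfi(800, 390, 2400, 435), 3))    # 0.791 (< .90, "bad")
print(round(nnfi(800, 390, 2400, 435), 3))   # 0.767 (< .90, "bad")
```

RMSEA penalizes model misfit relative to degrees of freedom, while CFI and NNFI compare the model against a null baseline; when the baseline chi-square is not very large (e.g. modest inter-item correlations), CFI/NNFI can look poor even when RMSEA looks fine.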
Based on data exploration, I suspected that participants in one condition were showing a strong positive rating bias. I tested whether the items might be influenced by a condition in the experiment. The condition was added as a dummy variable (0/1) influencing a latent variable, Acquiescence, which loaded on all items. This model again showed a bad fit.
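To make the setup concrete, here is roughly how I specified that model, written in lavaan/semopy-style syntax (factor and item names here are hypothetical stand-ins; my real model has 9 factors and about 30 items):

```python
# Illustrative specification of the acquiescence (method-factor) model.
# Names are hypothetical; the structure mirrors the description above.
desc = """
# substantive factors with their own items
Attitude1 =~ item1 + item2 + item3
Attitude2 =~ item4 + item5 + item6
# method factor loading on ALL items
Acquiescence =~ item1 + item2 + item3 + item4 + item5 + item6
# the 0/1 condition dummy predicts the method factor
Acquiescence ~ condition
# method factor kept orthogonal to the substantive factors
Acquiescence ~~ 0*Attitude1
Acquiescence ~~ 0*Attitude2
"""
print(desc)
```

Keeping the method factor orthogonal to the substantive factors is what lets the condition effect be interpreted as a rating bias rather than a substantive difference.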
I am not very familiar with SEM and can't find guidance on what to do with badly fitting models. All the books and examples I find focus on reporting well-fitting models. What are you expected to report with a bad fit? I understand the model, the modifications, and the fit indices should be reported, but what about parameter estimates? Are they still of any value if the model fits badly? Is there any point in further modelling if the CFA doesn't fit? From my understanding, a structural model with the same latent variables will always fit worse than the confirmatory model: the confirmatory factor model has free parameters (covariances) between all latent variables, while a structural model restricts some of those parameters.
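The nesting argument can be made concrete by counting parameters: the CFA frees all 9·8/2 = 36 factor covariances, while a structural model replaces some of them with directed paths and fixes the rest, so its degrees of freedom can only be equal or larger and its chi-square equal or worse. A rough bookkeeping sketch, assuming simple structure (each item loads on one factor), marker-variable identification, and no error covariances:

```python
# Degrees-of-freedom bookkeeping: CFA vs. a restricted structural model.
# Assumes simple structure, marker-variable identification, no error
# covariances; the counts mirror my study's size.
p = 30  # observed items
k = 9   # latent variables

moments = p * (p + 1) // 2  # unique (co)variances in the data = 465

# CFA: loadings (one per factor fixed to 1 for identification),
# error variances, factor variances, and ALL factor covariances free.
cfa_params = (p - k) + p + k + k * (k - 1) // 2
cfa_df = moments - cfa_params

# Structural model: same measurement part, but suppose only 10 of the
# 36 factor covariances become free paths and the rest are fixed to 0.
sem_params = (p - k) + p + k + 10
sem_df = moments - sem_params

print(cfa_df, sem_df)  # 369 395 -> structural model has more df, fit no better
```

So the structural model is nested in the CFA: its extra restrictions can be tested with a chi-square difference test, but it can never fit better than the measurement model it is built on.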
Also, what can I conclude if the model has a bad fit? Can I say the model is bad (for the situation I tested it in), even if this disagrees with other studies?