RMSEA impacted by high reliability scores

#1
Hello again

Sorry - in relation to running confirmatory factor analysis (CFA):

I'm wondering whether anyone is aware of RMSEA being impacted by high reliability scores?

As background, I understand that RMSEA can be affected by high factor loadings.
Currently, I have a construct with 6 items with factor loadings for all items being 0.86 and above.
The construct shows high composite reliability (0.89).
However, the RMSEA is 0.099, CFI/TLI = 0.99, and the chi-square is significant (35.23).
R-squares for all items are above 0.75, with residual correlations below 0.04.

I would be grateful for any insights.

Many thanks in advance
 

spunky

#2
I'm wondering whether anyone is aware of RMSEA being impacted by high reliability scores?
As background, I understand that RMSEA can be affected by high factor loadings.
well... reliability under a latent variable framework is a function of the factor loadings, and if high factor loadings influence RMSEA...then... i'm sure you can see how it's connected :cool:
 

Lazar

Phineas Packard
#3
I cannot see how the size of any parameter directly affects the RMSEA, as it is based on the chi-square, which reflects the lack of fit that results from unestimated (not estimated) parameters - but maybe I am missing something.

I am guessing here that either (a) your sample size is small and/or (b) you have a simple model, maybe a one-factor congeneric? RMSEA is upwardly biased for simple models and small sample sizes. http://davidakenny.net/cm/fit.htm
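As an aside, the chi-square-to-RMSEA link is easy to check by hand. A minimal Python sketch, using the numbers reported in post #1 and assuming df = 9 (which is what a one-factor model with six indicators would have: 21 covariance moments minus 12 free parameters) and the N - 1 convention (some software uses N instead):

```python
import math

def rmsea_from_chi2(chi2, df, n):
    """Point estimate of RMSEA from the model chi-square.
    Uses the (N - 1) convention; some packages use N instead,
    which changes the result only slightly."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Post #1's numbers: chi-square = 35.23, N = 297, df = 9 (assumed).
print(round(rmsea_from_chi2(35.23, 9, 297), 3))  # -> 0.099
```

With those assumptions the reported chi-square reproduces the reported RMSEA of 0.099 almost exactly, so the two statistics are at least internally consistent.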
 

spunky

#4
I cannot see how the size of any parameter directly affects the RMSEA, as it is based on the chi-square, which reflects the lack of fit that results from unestimated (not estimated) parameters - but maybe I am missing something.
i think there was a hidden assumption here (which i also didn't make explicit, so apologies for that): RMSEA is affected by high factor loadings in the case of misspecified models. meaning, if your model is misspecified, then other model parameters (aside from the misspecification) play a role in how big or small your fit indices can be.

i thought this girl would've written a manuscript and published it (apparently she didn't), but her MA thesis is open for anyone to have a look, and she does a great job of explaining what happens and why. the gist of the argument (for RMSEA) is that IF the model is misspecified, a model with higher loadings will show a worse RMSEA than a model with lower loadings. the implication, of course, is that a horribly-misspecified model with somewhat lower loadings (in the population) could look better (by RMSEA) than a model with only very minor misspecifications but higher loadings... nudging you to maybe choose a badly-specified model for no other reason than the fact that it has lower loadings.
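this "reliability paradox" can be sketched numerically. the example below is my own illustration, not taken from the thesis: it builds a population covariance matrix for six standardized items with equal loadings plus one omitted residual covariance (the misspecification), fits the wrong one-factor model by minimizing the ML discrepancy, and converts the minimum into a population RMSEA. the loading values and the 0.10 residual covariance are arbitrary choices for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def population_rmsea(lam, resid_cov=0.10, p=6, df=9):
    """Population RMSEA of a one-factor model fitted to a covariance
    matrix containing one unmodeled residual covariance."""
    # "True" covariance: one factor, equal standardized loadings `lam`,
    # plus an omitted covariance `resid_cov` between items 0 and 1.
    L = np.full(p, lam)
    theta = 1.0 - lam ** 2
    S = np.outer(L, L) + np.diag(np.full(p, theta))
    S[0, 1] += resid_cov
    S[1, 0] += resid_cov

    _, logdet_S = np.linalg.slogdet(S)

    def f_ml(x):
        # Free parameters: p loadings and p log unique variances.
        lo, log_th = x[:p], x[p:]
        Sigma = np.outer(lo, lo) + np.diag(np.exp(log_th))
        _, logdet = np.linalg.slogdet(Sigma)
        # ML discrepancy function F(S, Sigma).
        return logdet + np.trace(S @ np.linalg.inv(Sigma)) - logdet_S - p

    x0 = np.concatenate([L, np.log(np.full(p, theta))])
    res = minimize(f_ml, x0, method="BFGS")
    return np.sqrt(max(res.fun, 0.0) / df)

# Same misspecification, different loadings: the high-loading model
# shows the worse population RMSEA.
print(population_rmsea(0.5), population_rmsea(0.9))
```

the intuition is that the ML discrepancy weights residual misfit by the inverse of the model-implied covariance, so when unique variances shrink (high loadings, high reliability), the same raw misspecification is amplified.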

THESIS IS HERE
 

Lazar

Phineas Packard
#5
Interesting. Still, I would put my money on it being a simple model and that being the reason :) I would not recommend using RMSEA for one-factor congeneric models.
 
#6
Belated thanks, Spunky and Lazar, for your responses =)
(we must be on different time zones)

Re sample size - it is 297, so not small, but not large either.

The model I'm referring to is congeneric; however, it is one of 10 models.

The problem is, I've run the 10 one-factor models and made respecifications so that the models were a good fit (e.g. acceptable chi-square, RMSEA, CFI, TLI, WRMR) based on MIs, EPCs, and item content.

Then I ran pairwise models for all pairs of the one-factor models to identify factor cross-loadings. Some items were deleted at this stage if they showed cross-loadings.

Now I've run the one-factor models again (revised following the pairwise analysis), and two of the one-factor models no longer fit (!)

However, the full 10-factor model does show good fit, which is positive, but doesn't help the one-factor model situation.

My thought for the two one-factor models is that, given the high factor loadings (above 0.7), high composite reliability scores (above 0.8), and high R-squares (above 0.74), there is an argument for leaving the models as they stand, despite the RMSEA values of 0.084 and 0.099 respectively...

Wondering your thoughts..

Cheers again for this

** Thanks, Spunky, for the link to the thesis. I think Heene et al. (http://www.ncbi.nlm.nih.gov/pubmed/21843002) and Browne et al. (http://www.ncbi.nlm.nih.gov/pubmed/12530701) discuss something similar...?
 
#12
Many thanks again for your responses =)

I don't mean to sound like a neophyte - I've just had a look at the ESEM papers and available syntax - can you run ESEM or semi-confirmatory factor analysis for one-factor models?

The syntax that I've found on statmodel.com and in the Asparouhov & Muthén paper covers two or more factors..

I think I'm looking to argue that the two one-factor models, individually, are a good fit (rather than suggesting that they are two separate constructs, if that makes sense).

Cheers again, and best wishes

** for a minute, I thought spider-man was the username for Professor Herb Marsh on this forum, lol.