Belated thanks, Spunky and Lazar, for your responses =)

(we must be on different time zones)

Re sample size: it is 297, so not small, but not large either.

The model I'm referring to is congeneric; however, it is one of 10 models.

The problem is this: I ran the 10 one-factor models and made respecifications, based on modification indices (MI), expected parameter changes (EPC), and item content, until each model showed good fit (e.g. acceptable chi-square, RMSEA, CFI, TLI, WRMR).
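In case it's useful for checking how sensitive the RMSEA figures are to the chi-square and degrees of freedom, here is a minimal sketch of the standard RMSEA point estimate. The chi-square and df values below are hypothetical, not from my models; the N = 297 is the sample size mentioned above.

```python
import math

def rmsea(chisq, df, n):
    """Point estimate of RMSEA from the model chi-square:
    sqrt(max(chisq - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chisq - df, 0.0) / (df * (n - 1)))

# Hypothetical fit for one congeneric model, N = 297 as in this thread.
print(round(rmsea(50.0, 20, 297), 3))  # chi-square = 50 on 20 df -> 0.071
```

Nothing deep, but it makes clear that with N fixed at 297, RMSEA is driven entirely by how far chi-square exceeds its degrees of freedom.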

Then I ran pairwise models for all pairs of one-factor models to identify factor cross-loadings. Some items were deleted at this stage if they showed cross-loadings.

Now I've run the one-factor models again (revised following the pairwise analysis), and two of them no longer fit (!)

However, the full 10-factor model does show good fit, which is positive, but it doesn't help the one-factor model situation.

My thinking on the two misfitting one-factor models is that, given the high factor loadings (above 0.7), high composite reliability scores (above 0.8), and high R-squares (above 0.74), there is an argument for leaving the models as they stand, despite RMSEA values of 0.084 and 0.099 respectively...
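For what it's worth, the composite reliability figures I'm quoting come from the usual congeneric formula, CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) on standardized loadings. A small sketch with hypothetical loadings (all above 0.7, as in my models, but not my actual estimates):

```python
def composite_reliability(loadings):
    """Composite reliability for a congeneric one-factor model,
    from standardized loadings:
    CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    s = sum(loadings)
    error_var = sum(1.0 - lam ** 2 for lam in loadings)
    return s ** 2 / (s ** 2 + error_var)

# Hypothetical standardized loadings, all above 0.7.
print(round(composite_reliability([0.75, 0.8, 0.85, 0.9]), 3))  # -> 0.896
```

So even four indicators with loadings in the 0.75-0.90 range clear the 0.8 benchmark comfortably; the point being that CR can look strong while global fit (RMSEA) still flags local misfit.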

Wondering what your thoughts are...

Cheers again for this

** Thanks Spunky for the link to the thesis. I think Heene et al. (http://www.ncbi.nlm.nih.gov/pubmed/21843002) and Browne et al. (http://www.ncbi.nlm.nih.gov/pubmed/12530701) discuss something similar?