My research team and I ran two separate confirmatory factor analyses on two separate measures (the same participants filled out both self-report measures). One measure was found to have 2 correlated factors and the other 3 correlated factors. What we want to look at now is whether these two measures are measuring different things or whether there is some overlap.

Would we want to run one confirmatory factor analysis with everything and see where things fall? I am assuming that if we do that and confirm a five-factor model, then we can say our two measures are measuring distinct things. The two measures look at different aspects of the same overall construct, so I am thinking the factors may end up correlated but distinct?

Any thoughts? Thanks.

f^2 = (R^2(A,B) - R^2(A)) / (1 - R^2(A,B,C))

where R^2(A,B) is the R-squared when both the control set variables A and the test set variables B are in the model, and R^2(A) is the R-squared when only the control set A is in the model. "But the noise is further reduced by the other variables, comprising a set C, so PV_E = 1 - R^2(A,B,C)" (Model II error; Cohen & Cohen, 1983, pp. 158-160).
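The arithmetic itself is straightforward once the three R-squared values are in hand; here is a minimal sketch of the computation. The R^2 values below are made up purely for illustration, not taken from any real analysis:

```python
def cohens_f2_model2(r2_ab, r2_a, r2_abc):
    """Cohen's f^2 for test set B using Model II error.

    r2_ab  : R^2 with control set A and test set B in the model
    r2_a   : R^2 with only control set A in the model
    r2_abc : R^2 with A, B, and the additional noise-reducing set C
    """
    # Numerator: incremental variance explained by B over A.
    # Denominator: Model II error variance, 1 - R^2(A,B,C).
    return (r2_ab - r2_a) / (1.0 - r2_abc)

# Hypothetical values: B adds 0.10 over A, and C shrinks the
# error variance to 0.45.
f2 = cohens_f2_model2(r2_ab=0.40, r2_a=0.30, r2_abc=0.55)
print(round(f2, 4))
```

With these made-up numbers, f^2 = 0.10 / 0.45, i.e. about 0.22 — a "medium" effect by Cohen's conventional benchmarks.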

So my question is: does anyone actually use this test? If so, could you please share a reference?

Thank you very much!

Joy

I am new to this forum and statistics and looking for a little insight from folks more advanced.

I am currently reading about the NHST debate and am trying to understand why anyone would use it if there is so much condemnation of it. Are there times when it is appropriate to use? From my limited understanding, I am thinking that the time to use it is when you are researching a topic about which not a lot is known. Also, if you decided not to use it, what are some other options?

Honestly, the more I read about the debate, the more I am confused about when you would and wouldn't use NHST. Anyone willing to lead me towards a better understanding of this, or at least start the conversation? Thanks. :o