Hi and thank you for reading this post,
I ran a factor analysis on a scale in SPSS, and the output included a "Chi-Square goodness-of-fit" section. It appears right after the correlation and factor matrices and reads as follows:
Chi-Square: 423.255
df: 170
Sig: .000
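For anyone curious where the Sig. value comes from: a minimal sketch (not SPSS's own routine) that recovers the upper-tail p-value from the statistic and degrees of freedom, using the Wilson-Hilferty cube-root normal approximation. SPSS prints .000 whenever p < .0005.

```python
import math

def chi2_sf(x, df):
    """Approximate chi-square survival function (upper-tail p-value)
    via the Wilson-Hilferty cube-root normal approximation."""
    z = ((x / df) ** (1 / 3) - (1 - 2 / (9 * df))) / math.sqrt(2 / (9 * df))
    return 0.5 * math.erfc(z / math.sqrt(2))

p = chi2_sf(423.255, 170)
print(f"p = {p:.3g}")  # far below .001, which SPSS rounds to Sig. = .000
```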
How should I interpret/report this? It seems a bit random that it showed up right after the scale, and I'm honestly clueless.
Thanks again,
Statn00b.
It depends on whether you are running confirmatory or exploratory factor analysis. In the former case, the chi-square goodness-of-fit test tells you whether the model fits the data.
Strictly speaking, goodness-of-fit tests don't tell you that the model fits the data. They only tell you whether there is evidence that the model and the data disagree.
statn00b (11-18-2011)
But how would you report the p-value in this case? It's an exploratory factor analysis.
This might be one of those "actual usage versus theory" disagreements we get into, Dason, but the norm when discussing chi-square in SEM is to talk about whether the model fits the data. Specifically, whether the reproduced covariance matrix is close to the covariance matrix of the data (which means the residual covariance matrix will be small). That is the specific terminology used: does the model fit the data. I have never seen the term "agree" used.
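To make the "reproduced versus observed matrix" idea concrete, here is a minimal sketch with entirely hypothetical one-factor loadings and a hypothetical observed correlation matrix (not the OP's data). For a single factor, the model-implied correlation between items i and j is the product of their loadings, and fit is good when the residual matrix is uniformly small:

```python
# Hypothetical one-factor model: reproduced correlation between items
# i and j is loadings[i] * loadings[j]; the diagonal adds uniqueness.
loadings = [0.8, 0.7, 0.6]                    # hypothetical factor loadings
uniqueness = [1 - l ** 2 for l in loadings]   # variance not due to the factor

def reproduced(i, j):
    r = loadings[i] * loadings[j]
    return r + uniqueness[i] if i == j else r

# Hypothetical observed correlation matrix S
S = [[1.00, 0.58, 0.46],
     [0.58, 1.00, 0.44],
     [0.46, 0.44, 1.00]]

# Residual matrix: observed minus reproduced; small entries = good fit
residual = [[S[i][j] - reproduced(i, j) for j in range(3)] for i in range(3)]
for row in residual:
    print([round(v, 2) for v in row])
```

With these made-up numbers every residual is at most .02 in absolute value, which is what "the model fits the data" means at the level of the matrices.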
So, would you say it has no value in terms of interpreting the factor analysis?
I never say anything has "no value"; I am not enough of an expert in statistics to make that comment. What I meant is that many statistics are generated in reports, and only a small portion of them get reported. That does not mean the unreported ones have no value (if they had none, they would not be generated); it is just that the field is not primarily interested in them.
This was not what we did in logistic regression. Rather, we transformed the conditional expected value, and made that a linear function of X. This seems odd, because it is odd..
I see, but if I were to interpret the results with that p-value, what would the interpretation be like? I just want to know for future reference.
I don't know what this chi-square is testing. To interpret a significance test you have to know what the specific chi-square is testing (and what rejecting the null means for it). For example, in logistic regression rejecting the null (tested by a Wald test using the chi-square distribution) means the variable tested matters. In SEM, by contrast, you hope to fail to reject the null of the chi-square goodness-of-fit test, because a non-significant result indicates that your model fits the data.
SPSS has documentation that tells you what each test is actually measuring; without that you cannot interpret it. Formally, the p-value tells you whether to reject the null hypothesis at a given alpha level: if the p-value is below your alpha level, you reject the null.
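The decision rule just described, as a trivial sketch (alpha = .05 is only the conventional choice, not anything this thread prescribes):

```python
def decide(p, alpha=0.05):
    """Generic null-hypothesis decision rule: reject when p < alpha."""
    return "reject H0" if p < alpha else "fail to reject H0"

# SPSS's 'Sig. = .000' means p < .0005, so at alpha = .05 the null is rejected.
print(decide(0.0004))  # reject H0
print(decide(0.23))    # fail to reject H0
```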
I'm just making the point that in a goodness-of-fit test the null hypothesis is that the data came from the proposed model. So if you don't reject the null, that doesn't mean you have evidence that the null is true, just that you lack evidence that it is false. It's the same situation as in all null-hypothesis testing.
OK, I misunderstood you. The null is never known to be true or false (or rather, you can never know it is on the basis of a statistical test). I was talking about what rejecting the chi-square test's null means substantively in SEM.
Well, see, I myself don't know what the null here would be. I did the factor analysis on a 20-item questionnaire, selecting the following options:
UNIVARIATE - Set of tables.
INITIAL - Communalities and eigenvalues.
SIG - Significance.
KMO - Kaiser-Meyer-Olkin measure of sampling adequacy and the Bartlett test of sphericity.
EXTRACTION - Extracted communalities and eigenvalues.
I also requested the scree plot of eigenvalues, with 1 factor and 25 iterations (the SPSS defaults), used ML (maximum likelihood) extraction, and applied no rotation.
So I knew what all of that meant... but to my surprise, a "Goodness-of-fit" test appears at the end of my output, and I have no idea how it relates to what I had anticipated seeing.
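One clue about where the test comes from: with maximum-likelihood extraction, the goodness-of-fit output is the likelihood-ratio test that m factors are sufficient, whose degrees of freedom are ((p − m)² − (p + m))/2 for p variables and m factors. For the 20 items and 1 factor described above, that gives exactly the df = 170 in the output. A quick check (the formula is the standard one from ML factor analysis; the function name is mine):

```python
def ml_efa_df(p, m):
    """Degrees of freedom of the ML factor-analysis likelihood-ratio
    test that m common factors suffice for p observed variables."""
    return ((p - m) ** 2 - (p + m)) // 2

print(ml_efa_df(20, 1))  # -> 170, matching the df in the SPSS output above
```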
All I can say is that I have never seen this brought up in discussions of EFA. If your KMO and Bartlett tests are OK, you normally focus on your factor loadings and on how many factors are extracted. Generally you will use rotation (which you did not), because the factor structure is much easier to interpret if you do.