Warning "The estimated weights for the factor scores are probably incorrect" in fa function (psych)

#1
Dear all,

I recently posted this question on another website; I'm reposting it here in case I get any answers. I'm relatively new to this, so I may not have included all the information necessary to solve it, or perhaps I should post it in another forum; please let me know if that's the case.

Our problem is the following:
We are trying to run an exploratory factor analysis (EFA) using the fa function from psych v2.0.12. We are using:

Code:
efa <- fa(r, nfactors = 3, fm = "ols", rotate = "promax", cor = "mixed", n.obs = 334, correct = 0)
where r is a matrix of mixed correlations obtained with mixedCor from a dataset of 334 subjects (n.obs = 334) with 2 continuous and 12 polytomous variables (7 with 7 levels and 5 with 5 levels), and the number of factors (nfactors = 3) is the number recommended by a parallel analysis using the fa.parallel function.
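For reference, this is roughly how we obtained r and the number of factors (a sketch; the variable names below are placeholders for our actual columns):

Code:
library(psych)
# Mixed correlations from 2 continuous + 12 polytomous items
# ("cont1", "cont2", "item1"..."item12" are placeholder names)
r <- mixedCor(data = dat,
              c = c("cont1", "cont2"),       # continuous variables
              p = paste0("item", 1:12))$rho  # polytomous variables
fa.parallel(r, n.obs = 334, fm = "ols")      # suggested 3 factors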
When carrying out the EFA, we get the following warning:
The estimated weights for the factor scores are probably incorrect. Try a different factor score estimation method
We tried using different methods such as fm="minres", but we are still getting the same warning.

However, when checking the quality of the obtained solution: the model explains a relevant amount of variance (0.86); the KMO statistic verifies sampling adequacy (KMO = .92), with all per-item KMO values > .85; Bartlett's test of sphericity [χ2(91) = 15836.85, p < 0.001] indicates that the correlations between our variables are sufficiently large for factor analysis; and our alpha is 0.97. Is obtaining good results on these indices enough for us to trust the EFA results and ignore the warning?
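For completeness, we ran these checks roughly like this (a sketch, not our exact code):

Code:
library(psych)
KMO(r)                         # overall MSA and per-item sampling adequacy
cortest.bartlett(r, n = 334)   # Bartlett's test of sphericity
alpha(dat)                     # Cronbach's alpha for the item set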

If not, what could the warning indicate? Is there any way to solve this?

Thanks in advance,

Lidón
 

spunky

Can't make spagetti
#2
Did you get any more warnings with this or was it the only one it gave you? Something about Heywood cases or loadings greater than 1?
 
#3
spunky said: "Did you get any more warnings with this or was it the only one it gave you? Something about Heywood cases or loadings greater than 1?"
Thanks for your answer! I also got "In cor.smooth(R) : Matrix was not positive definite, smoothing was done", nothing else.
 

spunky

Can't make spagetti
#4
Lidón said: "Thanks for your answer! I also got 'In cor.smooth(R) : Matrix was not positive definite, smoothing was done', nothing else."
Aha! That is the problem. Because you have a mix of metrics, your matrix is a patchwork of polyserial, polychoric, and Pearson correlations, and those are estimated one correlation at a time, as opposed to simultaneously. So you ended up with a correlation matrix that is not a "true" correlation matrix (because it is not positive definite), and it had to be "smoothed" (which is some version of doing an eigendecomposition of the matrix, adjusting the eigenvalues, and re-creating the matrix). Factor scores don't work well with "smoothed" correlation matrices.
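If it helps, the smoothing step amounts to something like this (a simplified sketch of the idea, not the actual code of psych::cor.smooth):

Code:
# Simplified illustration of what "smoothing" a correlation matrix does
smooth_cor <- function(R, eig.tol = 1e-12) {
  e <- eigen(R, symmetric = TRUE)
  vals <- e$values
  if (min(vals) >= eig.tol) return(R)     # already positive definite
  vals[vals < eig.tol] <- 100 * eig.tol   # floor the offending eigenvalues
  vals <- vals * nrow(R) / sum(vals)      # rescale so they sum to p again
  Rs <- e$vectors %*% diag(vals) %*% t(e$vectors)
  cov2cor(Rs)                             # force the diagonal back to 1
}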

Do you need to estimate factor scores for anything, though?
 
#5
spunky said: "Because you have a mix of metrics… you ended up with a correlation matrix that is not a 'true' correlation matrix (because it is not positive definite), and it had to be 'smoothed'. Factor scores don't work well with 'smoothed' correlation matrices. Do you need to estimate factor scores for anything, though?"
We are trying to replicate the procedure used in this article, that is, to obtain a composite score from the items of our questionnaire by carrying out a factor analysis and using the loadings of each item to calculate the score. Specifically, they describe it like this:
"Following derivation of the factor structure, factor scores were calculated for each participant for each of the three factors by standardizing raw scores and multiplying these by the factor weights. The weighted standardized scores were then summed to produce a factor score for each participant on each factor."

Is it possible to trust our results even if factor scores don't normally work well with "smoothed" matrices? If not, is there any way we can calculate factor scores despite the characteristics of our data (a mix of continuous and polytomous variables that results in a mixed correlation matrix)?

Thank you so much for your help!
 

spunky

Can't make spagetti
#6
Lidón said: "We are trying to replicate the procedure used in this article… Is it possible to trust our results even if factor scores don't normally work well with 'smoothed' matrices? If not, is there any way we can calculate factor scores despite the characteristics of our data?"
Wow… that's… not a great method. Not only did the authors completely ignore the problem of factor indeterminacy, as well as the fact that any of the infinitely many factor rotations would have given them different factor scores… they made up their own ad-hoc version of how to obtain "factor scores". No regression or Bartlett scores or any other statistically proper way of doing it. They just… made something up. And nobody caught it in peer review. And that's aside from the fact that they're "double-dipping" their data (the same data used to construct the model is used to validate the classification), which probably explains why they were able to reproduce their bilingual/monolingual classifications. Overall, the statistical methodology of this article is quite suspicious. But then again, I guess that's part of the reason there's a crisis of replicability in psychological research, and in the behavioural sciences in general.

Anyhoo, the only way to bypass the non-positive definiteness of the mixed (polychoric/polyserial/Pearson) correlation matrix is to get a bigger sample. I'm taking a guess here, but I'm willing to bet that your categorical variables (the ones with 7 and 5 categories) have some skewness (i.e., some categories are consistently endorsed more often than others). That's quite likely what's creating issues with low-count (or zero-count) cells in the contingency tables needed to estimate the polychoric/polyserial correlations. The more skewed those variables are, the more participants you'll need in your sample. But then again, you can just say the mixed correlation matrix needed to be "smoothed" first and that results should be interpreted with caution. I mean, the authors of the article you linked got away with a much more questionable approach, so what the heck do I know ¯\_(ツ)_/¯
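If you want to check whether that's what's going on, look at the per-item skew and a few pairwise contingency tables, something like this (item names are placeholders):

Code:
library(psych)
poly_items <- paste0("item", 1:12)   # placeholder names
sapply(dat[, poly_items], skew)      # per-item skewness
table(dat$item1, dat$item2)          # look for zero or near-zero cells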
 
#7
spunky said: "Anyhoo, the only way to bypass the non-positive definiteness of the mixed (polychoric/polyserial/Pearson) correlation matrix is to get a bigger sample… But then again, you can just say the mixed correlation matrix needed to be 'smoothed' first and that results should be interpreted with caution."
Thank you so much for your response.
If I understand this correctly, the right way to calculate a composite score from our questionnaire using factor analysis would be to calculate regression or Bartlett factor scores (or others), instead of the ad-hoc version used in this article (standardized raw scores * factor weights), right? Would that address the factor indeterminacy issue? What about the number of factor rotations? How can we address this?
I checked our variables and we indeed have some skewness. However, it is impossible for us to get a bigger sample at this point... Is there any other way to obtain factor scores from our data in spite of the skewness of some of the variables?

Again, thank you for your answers, I'm learning a lot from them!
 

spunky

Can't make spagetti
#8
So... quite a bit to unpack here.

Lidón said: "If I understand this correctly, the right way to calculate a composite score from our questionnaire using factor analysis would be to calculate regression or Bartlett factor scores (or others), instead of the ad-hoc version used in this article (standardized raw scores * factor weights), right?"

You are correct.
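For example, something along these lines (a sketch, assuming dat is the raw data frame you fed into the analysis):

Code:
library(psych)
# Ask fa() for model-based score estimates: "regression" (Thurstone)
# is the default; "Bartlett" is another option.
efa <- fa(dat, nfactors = 3, fm = "ols", rotate = "promax",
          cor = "mixed", correct = 0, scores = "regression")
head(efa$scores)   # one estimated score per participant, per factor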

Lidón said: "Would that address the factor indeterminacy issue? What about the number of factor rotations? How can we address this?"
No. There is no way to address this issue with the approach taken by the authors, because that indeterminacy is part of what makes Exploratory Factor Analysis (EFA) "exploratory". We have actually known for quite a while (see https://link.springer.com/article/10.3758/BF03329685) that, due to the indeterminacy of factor scores and their rotations, you can rotate basically any factor solution to perfectly predict anything you want. I'm pretty sure this is also at play in the original article you posted.

The only way I know to address this issue would be to abandon the authors' approach altogether and embed whatever it is you're doing within a larger Structural Equation Model (SEM), most likely something called a MIMIC (Multiple Indicators, Multiple Causes) model, which lets you combine latent and observed variables in the same model.
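Just to give you a flavour of what that looks like, here is a hypothetical MIMIC sketch in lavaan (the variable names are placeholders, not your items):

Code:
library(lavaan)
model <- '
  F =~ y1 + y2 + y3 + y4   # measurement part: latent factor F
  F  ~ x1 + x2             # structural part: observed predictors of F
'
fit <- sem(model, data = dat)
summary(fit, standardized = TRUE)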

Also, a nice overview of everything that can go wrong when you treat factor scores as "observed" variables and use them in further analyses can be found here: https://www.tandfonline.com/doi/abs/10.1207/s15328007sem1202_5. We've known about these problems since the 1970s, so it's a little disappointing to see them keep coming up over and over again. It's a reminder that factor scores are estimated, not observed. When you do what the authors did, you discard all the additional uncertainty associated with the estimation, so the standard errors of whatever you calculate afterwards are too small and your Type I error rate is inflated. That renders most of the chi-square tests the authors used in the article uninterpretable/meaningless.

Lidón said: "I checked our variables and we indeed have some skewness. However, it is impossible for us to get a bigger sample at this point... Is there any other way to obtain factor scores from our data in spite of the skewness of some of the variables?"
I mean... yes and no. Factor analysis/SEM is a large-sample technique; there is no way around that. But then again, as I mentioned, if this is all you have as far as a sample goes, then you simply say that you encountered a non-positive-definite matrix (which happens a LOT more often than most people think) and that results need to be interpreted with caution. Your only other realistic option at this point is to just not do the analyses the authors did in their article. ¯\_(ツ)_/¯