What I meant by "better results" was the following: all the items of my dependent variable load together reasonably well (each > 0.7) when I do PCA on the subset of the dataset selecting only the columns corresponding to those items.

oh, i see... well, you should NEVER do that. the loadings of your items and the apparent niceness of your results are nothing more than a statistical artifact of how you're extracting your factors. factors and latent structures are defined over the *joint* covariance of your variables (or items, in this case). if you chop up that covariance structure, you are no longer really dealing with factors.
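to see the artifact concretely, here is a small simulation sketch (all the numbers — two correlated latent factors, eight items, loadings of 0.8 — are made up for illustration): running PCA on only one factor's items produces uniformly "clean" loadings above 0.7, precisely because the subset analysis discards the cross-group part of the joint covariance.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 500

# hypothetical setup: two correlated latent factors, each driving four items
f = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)
true_loadings = np.zeros((8, 2))
true_loadings[:4, 0] = 0.8   # items 0-3 load on factor 1
true_loadings[4:, 1] = 0.8   # items 4-7 load on factor 2
X = f @ true_loadings.T + 0.5 * rng.standard_normal((n, 8))

# PCA on only the first four items: a single component soaks up their
# shared variance, so every item appears to "load" > 0.7 by construction
pca_sub = PCA(n_components=1).fit(X[:, :4])
sub_loadings = pca_sub.components_[0] * np.sqrt(pca_sub.explained_variance_[0])

# the joint covariance of all eight items tells a different story: the
# cross-group correlations (driven by the factor correlation) are part of
# the latent structure that the subset analysis silently throws away
pca_all = PCA(n_components=2).fit(X)
```

the "nice" subset loadings here say nothing about the latent structure; they are guaranteed by selecting intercorrelated items and summarising them with one component.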

although there is debate in the literature, i have never seen a clear answer as to which extraction method is best; all of them seem fine to me. i like maximum likelihood or generalised least squares because they give me a chi-square test of model fit, which is sometimes pretty useless anyway depending on sample size... but at least i get something. principal components analysis is not factor analysis and, to the best of my knowledge, one of the few instances where the two give the same results is when the residual correlation matrix is diagonal (or the communalities are very close to 1), but that doesn't happen very often (if ever).
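the chi-square test mentioned above is a likelihood-ratio comparison of the model-implied covariance against the sample covariance. as a rough sketch (sklearn's `FactorAnalysis` is an ML fit; the Bartlett correction factor and the simulated one-factor data are my own illustrative choices, not anything from the thread), it can be assembled by hand:

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import FactorAnalysis

def ml_factor_chi2(X, k):
    """Likelihood-ratio chi-square test that k factors are sufficient.

    A sketch built on sklearn's ML factor fit with Bartlett's correction;
    dedicated FA software reports the same kind of statistic.
    """
    n, p = X.shape
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # correlation scale
    S = np.corrcoef(Xs, rowvar=False)
    fa = FactorAnalysis(n_components=k).fit(Xs)
    L = fa.components_.T                                # p x k loading matrix
    Sigma = L @ L.T + np.diag(fa.noise_variance_)       # model-implied matrix
    # ML discrepancy between model-implied and sample matrices (>= 0)
    discrepancy = (np.log(np.linalg.det(Sigma)) - np.log(np.linalg.det(S))
                   + np.trace(S @ np.linalg.inv(Sigma)) - p)
    # Bartlett's correction improves the chi-square approximation
    stat = (n - 1 - (2 * p + 5) / 6 - 2 * k / 3) * discrepancy
    df = ((p - k) ** 2 - (p + k)) // 2
    return stat, df, stats.chi2.sf(stat, df)

# toy check on simulated one-factor data (eight items, loadings 0.7)
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 1))
X = f @ np.full((1, 8), 0.7) + 0.6 * rng.standard_normal((500, 8))
stat, df, pval = ml_factor_chi2(X, k=1)
```

the sample-size caveat is visible in the formula: the statistic scales roughly with n, so with a large enough sample even a trivially small discrepancy will reject the model.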