The help here is really great, thank you a lot! Also, I apologize if I ask mundane questions; I'm really not very deep into statistics :-/

Some data:

Sample size: 333

Items: 45 (all with 4Point-Likert scale)

I'm not sure if I understand the parallel analysis right; here is what I did:

Number of Variables in your Dataset to be Factor Analyzed: 45

Sample Size of Your dataset: 333

Type of analysis: 1

Number of Random Correlation Matrices to Generate: 111

Percentile of Eigenvalues: 95

Seed: 1000

It goes to Root 21 (of 45) before the mean (that's the eigenvalue, right?) goes under 1.0.

you almost got it right. what you have to do now is check the eigenvalues that SPSS reports from *your* data and compare those to the numbers in the 'Means' column. the logic behind this is that you only keep the factors whose eigenvalues from SPSS are *greater* than the ones in the 'Means' column. so, for example, if your eigenvalues from SPSS were 5, 4, 3 and then they dropped to 0.000001, 0.000000001, and so on, you would compare those to the 'Means' column and would only keep the first 3 (the hypothetical 5, 4 and 3). in your case you already have a theoretical reason for why you should use 4 factors, so that in itself should trump any statistical rule about how many factors to keep. but i guess it's always good to have extra reassurance.
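if it helps to see the mechanics, the whole procedure boils down to a few lines. here's a rough sketch in Python/NumPy (this is not the O'Connor program itself; the variable names and the random-normal-data assumption are mine, and your settings from above are plugged in):

```python
import numpy as np

rng = np.random.default_rng(1000)      # seed, as in the settings above
n_obs, n_vars, n_reps = 333, 45, 111   # sample size, items, random matrices

rand_eigs = np.empty((n_reps, n_vars))
for i in range(n_reps):
    x = rng.standard_normal((n_obs, n_vars))
    # eigenvalues of the correlation matrix of pure-noise data, largest first
    rand_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]

means = rand_eigs.mean(axis=0)                 # the 'Means' column
pct95 = np.percentile(rand_eigs, 95, axis=0)   # the 95th-percentile column

# decision rule: retain factor k only while your k-th observed eigenvalue
# from SPSS is greater than means[k] (or pct95[k] for the stricter version)
```

the point is that the 'Means' column is just what eigenvalues look like when there is nothing but noise, so anything in your data that doesn't beat noise isn't worth keeping.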

because your data are ordinal and not continuous, you could potentially be getting weird results because you're not treating your data as categorical and running your factor analysis on the polychoric correlation matrix, which is the one you should be using. but, once again, SPSS will not calculate this one for you (did i mention that you could do this in R for free? just saying...).

but since you've already admitted that statistics is not your forte, we'll just leave it at that.
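just to show why the ordinal thing matters: if you chop two correlated normal variables into a 4-point Likert-style scale and run an ordinary Pearson correlation on the categories, the correlation gets attenuated. that underestimation is exactly what the polychoric correlation is designed to undo. a toy simulation (Python/NumPy, with cut points i made up; this is a demonstration of the attenuation, not the polychoric estimator itself):

```python
import numpy as np

rng = np.random.default_rng(42)
n, rho = 200_000, 0.6

# two standard normal variables with a true correlation of 0.6
x = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

# chop each into a 4-point Likert-style scale (arbitrary cut points)
likert = np.digitize(x, [-1.0, 0.0, 1.0])

r_cont = np.corrcoef(x[:, 0], x[:, 1])[0, 1]           # close to the true 0.6
r_ord = np.corrcoef(likert[:, 0], likert[:, 1])[0, 1]  # noticeably smaller
```

the Pearson correlation of the categorized versions comes out visibly below the true value, which is the kind of distortion you could be feeding into your factor analysis.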

Yeah, I guess that's just the natural thing to find. It's always hard to trust your own data when it goes a different way than expected, because the many oh-so-great studies showed differently. Of course, that's the only reason why they have been published in the first place: nobody wants to read non-significant data (at least that's what many papers seem to imply, sadly).

well, there are *some* places now where researchers with null findings can publish them (like here: http://www.jasnh.com/), because those could also be informative. but, in general, you're right: nobody (particularly in the social sciences) really cares much about what happens when you don't get p < .05.

maybe as a reassurance i would be willing to say that, from a data-analytic perspective, you're not going about this quite right. as i mentioned, your research question (are my subscales independent of one another?) does not quite match the analysis you're doing (exploratory factor analysis). you have a well-defined hypothesis that you *could* test (i.e. the null hypothesis H0: factor correlations = 0) if you were using the correct methodology (structural equation modelling/confirmatory factor analysis). but because you don't want to (or can't) step outside SPSS, we sort of have to do some hand-waving, close our eyes, and 'pretend' that exploratory factor analysis can masquerade as a confirmatory technique. the crux of the issue is that you don't have a standard error for those factor correlations, so you cannot test whether or not they are 0 in the population. they're on the bigger side of things, though (r = 0.4), but it could happen. or you could use more advanced methods (a latent mediation model), but then again, that implies you can do structural equation modelling.
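just to make the 'no standard error' point concrete: if that r = 0.4 were an ordinary Pearson correlation between observed scores (which an EFA factor correlation is not, so this is purely illustrative), you could test H0: rho = 0 with the standard Fisher z transformation:

```python
import math

r, n = 0.4, 333                 # the reported factor correlation and sample size
z = math.atanh(r)               # Fisher z transform of r
se = 1.0 / math.sqrt(n - 3)     # standard error of z for a *Pearson* correlation
stat = z / se                   # test statistic, approximately standard normal
p = math.erfc(abs(stat) / math.sqrt(2))  # two-sided p-value
```

with n = 333, an r of 0.4 would be overwhelmingly significant. the catch is that the se formula above only holds for a correlation between observed variables; a latent factor correlation needs its own standard error, which is exactly what a CFA/SEM fit would give you.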

all in all, this is just to say that you *could* be right and what you expect about your scales *could* be true, but you'd need to use the proper analytic tools to discover this.