1. I am clear about using pre-test measures as covariates (I found this in a number of peer-reviewed journals and books too). I also noticed in a number of research studies that additional covariates are being used, besides pre-test measures as covariates. Is there any maximum limit for the number of covariates to be included?

OK, glad to hear that. I don't know of any fixed maximum. I remember an expert here (I think it was Dason) saying that a regression can theoretically accept up to n - 1 predictors, where n = sample size. Since ANCOVA is a regression analysis, the same limit applies to it too.
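As a hedged sketch of why n - 1 is the theoretical ceiling: with an intercept plus n - 1 predictors the model has as many parameters as observations, so it reproduces the data exactly and leaves zero residual degrees of freedom for any test. The data and names below are simulated for illustration, not taken from the thread.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10  # sample size
# intercept + (n - 1) predictors = n columns: the model is saturated
X = np.column_stack([np.ones(n), rng.normal(size=(n, n - 1))])
y = rng.normal(size=n)

beta, _, rank, _ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
# with n parameters and n observations the fit is exact, so there is
# no residual variance left to estimate or test against
print(np.allclose(fitted, y))
```

In practice you would want far fewer covariates than this, since every extra parameter eats residual degrees of freedom and power.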

2. I have several dependent variables, including Achievement and Attitude.

I would like to run a MANCOVA on each of these dependent variables separately, for the following reasons:

a. Achievement is a big construct, as it consists of 3 subdimensions/subscales (namely knowledge, comprehension, and application). These subscales are intercorrelated. That's why I would like to do a MANCOVA on achievement.

I think it's fine to use a MANCOVA here, given the number of your dependent variables. Note that your actual dependent variables here are those subscales (whereas achievement as a total or combined score would be a single variable). Since MANCOVA deals with situations involving more than one dependent variable, using it here is justified in my opinion.
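For illustration only, here is a minimal Python sketch of such a MANCOVA using statsmodels' MANOVA, with the three subscales as dependent variables, a group factor, and a hypothetical pretest covariate. All data and column names are invented assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 90
df = pd.DataFrame({
    "group": np.repeat(["control", "treatment", "enriched"], n // 3),
    "pretest": rng.normal(50, 10, n),
})
# three intercorrelated achievement subscales (simulated via a shared term)
base = rng.normal(0, 5, n)
for col in ["knowledge", "comprehension", "application"]:
    df[col] = 0.5 * df["pretest"] + base + rng.normal(0, 3, n)

# MANCOVA: several dependent variables, a factor, and a covariate
m = MANOVA.from_formula(
    "knowledge + comprehension + application ~ group + pretest", data=df
)
res = m.mv_test()
print(res)
```

The output reports the usual multivariate statistics (Wilks' lambda, Pillai's trace, etc.) for the group effect after adjusting for the covariate.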

b. Besides the MANCOVA, I want to compare groups on each subscale of achievement separately using ANCOVA. Since I will be running ANCOVA 3 times (as there are 3 subscales), I would like to control the Type I error rate. To control the possibly inflated Type I error rate, can the Bonferroni method be adopted to limit the familywise Type I error rate to a 0.05 alpha level? If so, only results significant at the 0.017 level would be accepted (obtained by dividing 0.05 by the number of subscales).

Aha

It is quite controversial whether the need for a multiple-comparisons correction is dictated by the number of tests in a study alone, or by the number of tests PLUS the type of tests.

I have had the same concern before, and after conversations with people here (I'm especially thankful to CowboyBear) and checking many articles with such designs, I can tell you quite confidently that your case (three different ANCOVAs) does not need any correction for the multiple-comparisons problem. So an alpha set at 0.05 is quite correct in that case.

If you are interested in the reason: actually, nobody's sure! The scientific community has come to accept that "we correct familywise error when doing pairwise comparisons within an ANOVA framework, OR when the number of tests is very large." Is correcting for multiple comparisons needed in every situation with more than one test? Nobody knows for sure! But in practice it is not done outside of pairwise comparisons within an ANOVA design, or cases with thousands or millions of parallel tests (such as the assessment of MRI images, each of which can have millions of voxels; in the latter, a Bonferroni correction is not so practical anyway, but that is another story).

If you wanted to base the correction purely on the number of tests, you could say: "Why only 3 different ANCOVAs? I have some more tests to run in this setup, which, according to the Bonferroni method, will increase the chance of obtaining a random P value smaller than 0.05. So I have to divide 0.05 not by 3, but by 6 or 7 (or whatever the total number of tests is)."... So I recommend you adhere to the rule of thumb and correct for multiple comparisons within each ANOVA; conveniently, the post hoc tests already take this into account when giving you P values.
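If one did want to apply the correction across all tests in a study, dividing alpha by the number of tests is equivalent to multiplying each p-value by that number. A small sketch with statsmodels' multipletests and made-up p-values:

```python
from statsmodels.stats.multitest import multipletests

# hypothetical raw p-values from 6 separate tests in one study
pvals = [0.030, 0.045, 0.012, 0.20, 0.08, 0.001]

# Bonferroni: multiply each p-value by the number of tests
# (equivalent to comparing the raw p-values against 0.05 / 6)
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
print(list(zip([round(p, 3) for p in p_adj], list(reject))))
```

Note how a raw p = 0.030 that looks significant against 0.05 survives none of the six-test correction (0.030 × 6 = 0.18), which is exactly the "divide by 6 or 7" worry described above.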

c. Similarly, attitude is a big construct, consisting of 5 intercorrelated subscales. That's why I would like to do a MANCOVA on attitude and an ANCOVA on each subscale of attitude (in the same way as I mentioned in b).

OK, I think you can do either of them. Either treat each of those subscales as an actual dependent variable and input them into a MANCOVA (because the dependents are > 1, not because they are correlated; if they were not correlated, you would still use a MANCOVA). Or combine them all into a single "attitude" score and evaluate it using a single ANCOVA. Again, you neither need to nor should correct for multiple comparisons here.
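The second option (combine the subscales into one attitude score, then run a single ANCOVA) can be sketched as a regression in Python with statsmodels. All names and data here are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 60
df = pd.DataFrame({
    "group": np.repeat(["control", "treatment"], n // 2),
    "pretest": rng.normal(50, 10, n),
})
# five simulated attitude subscales, averaged into one total score,
# with a small treatment effect added on top
subscales = [rng.normal(50, 8, n) for _ in range(5)]
df["attitude"] = np.mean(subscales, axis=0) + np.where(
    df["group"] == "treatment", 3, 0
)

# ANCOVA as a regression: outcome ~ group factor + pretest covariate
model = smf.ols("attitude ~ C(group) + pretest", data=df).fit()
print(model.params["C(group)[T.treatment]"])
```

The coefficient on the group dummy is the covariate-adjusted group difference, which is what the ANCOVA F test for group evaluates.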

d. I am still confused about post hoc analysis. Please clarify more.

I don't know what aspect of it still confuses you. I think you know what it is, but if not: an ANOVA (or one of its siblings) tells us whether there is any significant difference in the whole setup. Once an ANOVA returns a significant P value, we know there is some difference among the groups involved. But we want to be more specific: which comparisons are responsible for the significant omnibus P value? So we run pairwise tests between the different levels of the factors to see which of them drive the overall significance. There are specific tests designed for this purpose; they compare two groups at a time and correct for the multiple-comparisons problem inherently (so using them spares us from applying a manual Bonferroni correction). These are called "post hoc" tests.
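As an illustration of a post hoc test that both compares pairs of groups and controls the familywise error rate internally, here is Tukey's HSD from statsmodels, run on made-up data for three groups:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
# simulated scores for three groups with different means
scores = np.concatenate([rng.normal(m, 5, 20) for m in (50, 55, 60)])
groups = np.repeat(["A", "B", "C"], 20)

# Tukey HSD tests every pair of groups and adjusts the P values
# for multiple comparisons internally -- no manual Bonferroni needed
res = pairwise_tukeyhsd(scores, groups, alpha=0.05)
print(res.summary())
```

With three groups there are three pairwise comparisons (A vs B, A vs C, B vs C), and the summary table reports an adjusted P value and a reject/fail-to-reject decision for each.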

If you need to know how to run them in SPSS, first open the ANCOVA dialog box; the post hoc tests are then available via the button labeled "Post Hoc".