Queries regarding ANCOVA and MANCOVA

Although, as Dragan stated, Bonferroni might be too conservative, it is still correct (the second form that Dragan taught us, the one that can work with covariates). I suggest running both tests and checking whether the P values given by Bonferroni differ considerably from those given by the LSD. If I had a similar situation and saw that Bonferroni and LSD were showing similar results, I would publish the Bonferroni one, because I have seen it used more often.

Besides, given the more conservative nature of Bonferroni, P values that remain significant under it are more reliable.
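To see concretely how the two corrections differ, here is a minimal sketch comparing unadjusted (LSD-style) p-values with Bonferroni-adjusted ones, using `multipletests` from statsmodels. The p-values themselves are made-up illustrative numbers, not anyone's real data.

```python
# Compare raw (LSD-style) p-values with Bonferroni-adjusted ones.
# The raw p-values below are hypothetical, purely for illustration.
from statsmodels.stats.multitest import multipletests

raw_p = [0.003, 0.020, 0.045, 0.300]  # four hypothetical pairwise comparisons

# Bonferroni multiplies each p-value by the number of tests (capped at 1.0)
reject, p_bonf, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")

for p, pb, r in zip(raw_p, p_bonf, reject):
    print(f"raw p = {p:.3f}   Bonferroni p = {pb:.3f}   significant = {bool(r)}")
```

Note how a comparison that looks significant at p = 0.045 unadjusted is no longer significant after Bonferroni with four tests, which is exactly the conservatism being discussed.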


No cake for spunky
However, if "the number of tests" were the determining factor, we would have to reduce the alpha level in almost every study, because most studies involve more than one test. Besides, the multiple-comparison argument says, "we ran more than one test, so we are more likely to obtain some significant P values by chance." But where do we draw the boundaries that define the number of tests?
I suspect, based on my own experience, that most authors test multiple models before settling on their final one. In principle this should require a family-wise correction for all the tests they run, but I suspect it virtually never happens. Well, that is true for regression. In ANOVA it is common to run post hoc tests such as Tukey's HSD, which address this at the cost of reduced power. If equivalents exist for regression (which is essentially the same method as ANOVA), I have not seen them. One reason you use planned contrasts is that they have more power than these post hoc tests.
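For reference, Tukey's HSD is easy to run outside SPSS as well. Here is a minimal sketch using statsmodels' `pairwise_tukeyhsd` on simulated data for three hypothetical groups (the group means and sample sizes are invented for illustration):

```python
# Tukey's HSD pairwise comparisons after a one-way ANOVA,
# on simulated data for three hypothetical groups A, B, C.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
y = np.concatenate([
    rng.normal(10, 2, 30),  # group A
    rng.normal(12, 2, 30),  # group B
    rng.normal(15, 2, 30),  # group C
])
groups = np.repeat(["A", "B", "C"], 30)

# Family-wise error rate is controlled at alpha across all pairwise tests
result = pairwise_tukeyhsd(y, groups, alpha=0.05)
print(result.summary())
```

Because HSD controls the family-wise error rate across all pairwise comparisons, each individual comparison is tested more stringently than an unadjusted t-test would be, which is where the loss of power comes from.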

SPSS has a wide range of post hoc tests for ANOVA. Tukey's HSD is probably the most requested.