How to test for group differences in questionnaire responses

#1
The original project involved giving different forms of instruction to three different groups. We also gave participants a questionnaire about their prior knowledge of the subject, how many college courses they've taken, and how much they liked the instruction formats. Because of random assignment to groups, things like prior experience and courses taken shouldn't differ significantly between groups, but opinions about the instruction format and its effectiveness may. There are 15 questions total, many of which are interrelated (e.g., the number of psych courses taken and the number of graphs a student has made may be correlated, because courses often assign graph-making).

How should I test for differences between groups on these questions? I thought I might run a separate ANOVA for each question and then apply a Bonferroni correction to protect against familywise error, but is there a simpler, more powerful, or more common way?
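In case it helps to see what I mean, here is a minimal sketch of the per-question ANOVA plus Bonferroni idea in Python. The data, group labels, and column names are all made up for illustration:

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Made-up data: 90 participants, 3 groups of 30, questions q1..q15.
rng = np.random.default_rng(0)
questions = [f"q{i}" for i in range(1, 16)]
df = pd.DataFrame(rng.normal(size=(90, 15)), columns=questions)
df["group"] = np.repeat(["text", "video", "control"], 30)

# One one-way ANOVA per question, collecting the raw p-values.
pvals = []
for q in questions:
    samples = [grp[q].to_numpy() for _, grp in df.groupby("group")]
    pvals.append(stats.f_oneway(*samples).pvalue)

# Bonferroni correction across the 15 tests.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
for q, p, r in zip(questions, p_adj, reject):
    print(f"{q}: adjusted p = {p:.3f}, reject H0 = {r}")
```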

Thanks for the help!
 
#2
I don't understand why you would do a separate ANOVA for each question, as compared to one for the entire test. There is a vast literature on test reliability, essentially how strongly each item relates to some central dimension you want to measure. Item response theory (IRT) is one framework for this, classical test theory another. Factor analysis is also used to determine whether common factors explain the questions (or whether there is more than one underlying factor).

Usually you do this before you test whether the intervention has any impact.
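As a concrete illustration of the classical-test-theory side, here is a small sketch that computes Cronbach's alpha (a standard reliability coefficient in that tradition, not something mentioned above by name) on simulated data; the item names and data are invented:

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated example: 90 respondents answering 5 items that all tap
# one underlying dimension, plus noise.
rng = np.random.default_rng(1)
latent = rng.normal(size=90)
items = pd.DataFrame(
    {f"item{i}": latent + rng.normal(scale=0.8, size=90) for i in range(1, 6)}
)
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```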
 
#4
It depends on what you are trying to test. Commonly you are testing an entire intervention, not individual elements of it. In that case you would certainly do a single ANOVA. I am not certain why you would be running ANOVAs to test the results of individual questions.

But reading through your question again, it seems like your treatment is really the three forms of instruction, not the test, which is what I originally assumed. Your test appears to be the way you measure how people changed as a result of the three different types of instruction. If so, I am confused about your real purpose here. Is it to see whether the test reliably and validly measures whether people are influenced by the instruction? If so, that should be addressed by something like IRT; you rarely use ANOVA for that. Or do you already have a validated measure of the instruction's impact, in which case the ANOVA should test whether the instruction made a difference (one ANOVA for the overall results, unless you think the results should be broken down into multiple dimensions, which would be pretty complex).
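For that second case, a single one-way ANOVA on an overall score might look like the following sketch (statsmodels, with simulated scores; the group names and numbers are made up):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated overall scores for three instruction groups of 30 each.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "group": np.repeat(["text", "video", "control"], 30),
    "score": np.concatenate([
        rng.normal(38, 5, 30),   # text
        rng.normal(40, 5, 30),   # video
        rng.normal(32, 5, 30),   # control
    ]),
})

# One-way ANOVA: does the mean overall score differ by group?
model = smf.ols("score ~ C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```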
 
#5
We found significant performance differences between instruction methods, but we administered the questionnaire to make sure those performance differences weren't due to pre-existing group differences - we want to show that all 3 groups had equal knowledge and skills before using the tutorials. If the groups differed in experience before using the tutorials, we couldn't say the performance difference afterward was caused by the difference in tutorial. The questionnaire also asked how much participants liked the tutorial and whether they would recommend it to others - those questions are expected to differ significantly between groups, because one group received a "no instruction" version of the tutorial.

So, to organize:

between-groups design, 3 groups, random assignment to conditions
IV is tutorial type (text, video, or control)
DV is performance on making a graph (scored out of 50 using a rubric)
questionnaire measures prior knowledge and computer experience

The questionnaire has 15 questions, and many of them are interrelated (how many classes they've taken and how many graphs they've made are probably correlated).
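Since those baseline items are correlated, one standard alternative to 15 separate ANOVAs (not discussed in this thread, so treat it as just one option) is a single joint test of group differences across the items, i.e., a MANOVA. A rough sketch with simulated data; all variable names here are hypothetical:

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Made-up baseline items for 90 participants in three groups.
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "group": np.repeat(["text", "video", "control"], 30),
    "classes": rng.poisson(3, 90),       # classes taken
    "graphs": rng.poisson(5, 90),        # graphs made
    "experience": rng.normal(3, 1, 90),  # self-rated computer experience
})

# One joint test of group differences across the correlated items.
mv = MANOVA.from_formula("classes + graphs + experience ~ group", data=df)
print(mv.mv_test())
```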

Thanks for the help!