How to check if results are statistically significant?

#1
Hi all,

I am a novice when it comes to statistics and would really appreciate some advice!

I created a teaching video for students on a particular surgical procedure. In order to determine if it is an effective learning aid, I asked the students to watch the video and complete a test pre and post video. The aim was to see if watching the educational video brings about an improvement in their test score (and therefore knowledge of the topic).

In total, 29 students participated in the study and almost all of them improved their score. I have calculated the mean scores pre and post video, and the results show that on average the percentage improvement in score was roughly 30%. I want to work out whether the improvement in score after watching the video is statistically significant (i.e. unlikely to be due to chance). If I'm not mistaken, I would need to calculate a p value to do this? Which statistical test would be most appropriate in this case to calculate the p value (note: I don't have a "control" group as such)?

Also, I would like to compare the percentage improvement in score between various subgroups (e.g. male vs. female, those who had seen the operation before vs. those who had not) and see if the differences between the groups are statistically significant.

The data is available here https://www.yousendit.com/download/OHo2ak93UzgzMW52Wmc9PQ

I would really appreciate your help guys.

Thanks.
 
#2
This is a repeated measures design ... a "within" subjects test ...

BUT ... note ... as you yourself point out ....

you've got nothing without a control group. You can make absolutely no claim that the video was the prime mover here ... it could have been their FEAR of having to sit through it again ... it could have been because they were sleepy beforehand ...

sorry ;-(
 

CowboyBear

Super Moderator
#3
I agree with Philyuko on both points. The specific repeated measures test that would be appropriate is probably the paired t test (although this does come with the assumption that the pre/post score differences are roughly normally distributed). You can run a paired test online at http://www.graphpad.com/quickcalcs/ttest1.cfm or in pretty much any stats software package (SPSS, SAS, even Excel).
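
If you happen to have Python around, here's a minimal sketch of the paired t test using SciPy. I'm assuming you have SciPy installed, and the score arrays below are hypothetical placeholders, not your data ... just swap in your 29 pre/post pairs in matching order:

```python
# Minimal sketch: paired (repeated measures) t test with SciPy.
# The arrays below are hypothetical placeholders; replace them with the
# real pre/post scores for the same students, in the same order.
from scipy import stats

pre_scores  = [45, 60, 55, 70, 50]   # hypothetical pre-video test scores (%)
post_scores = [75, 80, 85, 90, 70]   # hypothetical post-video scores (%), same students

# Paired t test on the within-subject differences
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```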

But yeah, without a control group (preferably with randomisation), you really have no basis to claim that any improvement (even if statistically significant) is due to the video. This is especially a problem with performance tests - practice effects will likely be substantial (i.e. having done the test before will probably improve scores in and of itself). If you're keen to perform a meaningful analysis of the effect of the video, you need to randomise students to video or no-video conditions :eek:
 
#5
You could do a Sign Test, which is nonparametric. So you wouldn't have to worry about the normality assumption. There is also a 2-sample version of the Sign Test to test subgroups, but with only 29 total students, it probably won't have much power.
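
In case it helps, here's a minimal sketch of the one-sample Sign Test in Python, done by hand with an exact binomial test. This assumes SciPy >= 1.7 (for scipy.stats.binomtest), and the scores are hypothetical placeholders rather than the actual data:

```python
# Minimal sketch: Sign Test on paired pre/post scores.
# Count improvements vs. declines (ties dropped) and test against a 50/50
# split with an exact binomial test. Assumes SciPy >= 1.7 (binomtest).
from scipy.stats import binomtest

pre_scores  = [45, 60, 55, 70, 50]   # hypothetical placeholders; use the real paired data
post_scores = [75, 80, 85, 90, 70]

diffs = [post - pre for pre, post in zip(pre_scores, post_scores)]
n_pos = sum(d > 0 for d in diffs)    # students who improved
n_neg = sum(d < 0 for d in diffs)    # students who got worse
n = n_pos + n_neg                    # ties (no change) are excluded

result = binomtest(n_pos, n, p=0.5, alternative="greater")
print(f"improved: {n_pos}/{n}, p = {result.pvalue:.4f}")
```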