Help with data analysis for a physiotherapy bachelor thesis

Hi everyone! I'm new here and writing from Italy, so please forgive my English... I hope to be understood anyway.
I am graduating in physiotherapy and doing an experimental study for my thesis dissertation, a sort of small clinical trial. I'm writing here because I've begun experiencing some difficulties with the statistical analysis of the data I collected.
This is the experimental design: I recruited 30 subjects and randomized them into 3 groups: group 1, group 2 and a control group.
Then I measured each subject's ankle range of motion (ROM, which is how many angular degrees a joint moves). Next, a physiotherapist with 10 years of experience treated each subject according to the group they belonged to. Then I measured ankle ROM again. I did not know how each subject had been treated until all the experiments were done, when I put together the measurements and the subjects' allocations.
To summarize, I now have 3 groups of 10 subjects each. Groups 1 and 2 received a treatment that we hypothesized should improve range of motion (and we wish to know which one is more effective), while group 3 (control) received a placebo treatment which had no effect.
The analysis I made is, for each group, a paired two-tailed Student's t-test to see if the changes between pre- and post-treatment are statistically significant. Then I used an unpaired two-tailed Student's t-test to compare the groups' means of the differences between pre- and post-measurements. So I made group 1 vs 2, 1 vs control, 2 vs control... reaching statistical significance in all of those comparisons (p<0.001).
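(For anyone reading along, the two t-tests described above reduce to simple formulas. Here is a minimal Python sketch of the test statistics, using toy numbers rather than the actual thesis data; getting the p-values would additionally require the t distribution's CDF, e.g. from scipy.stats.)

```python
from statistics import mean, stdev
from math import sqrt

def paired_t(pre, post):
    """Paired t statistic on the within-subject differences (df = n - 1)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

def unpaired_t(x, y):
    """Two-sample Student's t statistic with pooled variance
    (df = n1 + n2 - 2)."""
    n1, n2 = len(x), len(y)
    sp2 = ((n1 - 1) * stdev(x) ** 2 + (n2 - 1) * stdev(y) ** 2) / (n1 + n2 - 2)
    return (mean(x) - mean(y)) / sqrt(sp2 * (1 / n1 + 1 / n2))

# Toy pre/post ROM values for one group (made-up numbers, not the real data):
pre  = [44.0, 46.0, 45.0, 47.0, 43.0]
post = [45.0, 48.0, 47.0, 49.0, 46.0]
print(round(paired_t(pre, post), 3))  # → 6.325
```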

BUT, reading some statistics tutorials on the net, I discovered that it is not correct to use Student's t-test the way I did to compare groups!! I found out I should use an ANOVA.
Well, I finally come to the point. Can I use an ANOVA to compare the differences that occurred in subjects after they received the treatment? Or do I have to use the absolute values? Because the means of the differences show that there is some substantial difference between the groups, but the absolute values do not, given that ankle ROM is a value that varies a lot between subjects.
Also, I would appreciate advice on how to carry out the ANOVA (software, spreadsheet, whatever?) or suggestions for any other kind of statistical test or tool that better fits my needs.

In a few hours I will attach to this thread an Excel spreadsheet with the measurements I took... I don't have them here with me now.

Thank you!!
Hi.... again
Here is the Excel spreadsheet with the data I was talking about, so all the talk makes more sense.

I was also wondering whether I should have posted this thread in "Epidemiology and Biostatistics". If a moderator is of the same opinion, could they please move it there?

Thank you!


Super Moderator
Hi Mattia,

I've moved the post to Epidemiology and Biostatistics as you requested. I wouldn't worry too much about which forum you place posts in, though - I think most of the regulars tend to use the Latest Posts feature, so new posts will get noticed regardless.
Thank you CowboyBear.

I'm trying to figure out something.
The biggest doubt I have is this: is it correct to use a one-way ANOVA that compares the changes that occurred in the subjects due to the treatment?
Little by little it seems I'm managing to get through it.
I tried 2 paths:
1. repeated measures ANOVA, which results in no statistical significance
2. one-way ANOVA, entering the difference (post - pre) as the values to analyse

Repeated measures ANOVA is based on two columns of values, pre and post... but, since ankle range of motion is a value with high inter-subject variability, groups 1 and 2 end up having the same values even if they started from different values.

Group 1: PRE 45.46 (4.49) POST 47.10 (3.98) DIFF 1.64 (0.74)
Group 2: PRE 45.38 (3.30) POST 46.05 (3.27) DIFF 0.67 (0.55)
Control: PRE 45.15 (5.26) POST 45.09 (5.28) DIFF -0.06 (0.26)

So, if I use a one-way ANOVA on the differences due to treatment, instead of a repeated measures ANOVA, I reach statistical significance because I cancel out the baseline differences in pre-treatment values.
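For reference, the one-way ANOVA on difference scores reduces to a ratio of mean squares; a minimal Python sketch (toy numbers, not the thesis measurements):

```python
from statistics import mean

def one_way_anova_F(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean([x for g in groups for x in g])
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Toy difference scores (post - pre), one list per group; made-up numbers:
groups = [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [3.0, 4.0, 5.0]]
print(one_way_anova_F(groups))  # → 3.0
```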

Am I correct? Can I do an analysis like this? Or am I wrong in some detail? I suspect I'm making some kind of mistake in the repeated measures ANOVA... :confused:

edit: and when should I use the Bonferroni correction?

Thank you!


Ninja say what!?!
Hi mattiazambaldi,

What you did with the t-test actually seems fine. It really depends more specifically on what your questions are. ANOVA would be used to compare the ROM between the three treatment groups, BUT if you do this, I would recommend only using the measurements from after treatment.

The t-test would be used to compare the effectiveness of each treatment individually. That is, to see if the treatment was really effective in improving the ROM.

One thing that I would like to note, though, is to have a solid analysis of the study groups before treatment was administered. If your three groups are not identical, or as similar as possible, it won't be plausible to compare the treatments to each other.

Thanks Link for your answer.
Your note makes sense, indeed. But that comes from the combined effect of the small sample size and the high individual variation in ankle ROM values. A bigger sample size would surely equalize the starting values (which are not so different in absolute terms, but they are when compared to the post-treatment changes I have to deal with).

My questions are:
is treatment 1 (group 1) better than treatment 2 in improving ROM?
is treatment 1 (group 1) better than a placebo in improving ROM?
is treatment 2 (group 2) better than a placebo in improving ROM?
so basically, 1 vs 2, 1 vs placebo, 2 vs placebo.

Are you saying that I actually can do this with an unpaired Student's t-test? Do I have to apply the Bonferroni correction then? And if so, how?
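On the Bonferroni question: the correction simply multiplies each raw p-value by the number of comparisons (capping at 1), or equivalently compares each raw p to alpha divided by that number. A minimal Python sketch with made-up p-values:

```python
def bonferroni(p_values):
    """Bonferroni adjustment: multiply each raw p-value by the number of
    comparisons, capping at 1. A comparison is significant if its adjusted
    p is below alpha (e.g. 0.05). Equivalently: compare raw p to alpha/m."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Three pairwise comparisons (made-up raw p-values, not the thesis data):
print(bonferroni([0.010, 0.030, 0.200]))
```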

Thank you very much.
Ok but, what kind of ANOVA?

One way ANOVA with the differences between post and pre treatment?

Or repeated measures ANOVA with absolute values?

I guess the first one... but I am not sure.


Super Moderator
To follow on from Link's suggestions, assuming your overall ANOVA is significant, you should then follow up with planned comparisons to test which of your groups are different.
Thanks bugman
but, still, I don't understand and keep asking the same question:
is it correct to use a one-way ANOVA with the differences between pre- and post-treatment?
e.g., with 5 subjects (Group, PRE, POST, DIFF):
1 41.78 43.36 1.58
2 48.14 49.04 0.90
1 40.02 41.96 1.94
3 43.08 43.16 0.08
2 46.16 47.02 0.86

In SPSS I use a table like this:
Group Value
1 1.58
2 0.90
1 1.94
3 0.08
2 0.86


Super Moderator
Since you are looking at repeated measurements on the same subjects, and therefore cannot assume independence, I would be inclined to use a repeated measures ANOVA, followed by planned comparisons.
Then in SPSS insert data like this?

group | pre treat measure | post treat measure
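For tools that want the data "long" (one row per measurement) rather than the wide layout above (one row per subject), the reshape is mechanical. A Python sketch, using the five example subjects posted earlier in the thread:

```python
# Wide format, one row per subject: (group, pre, post).
# These are the five example subjects posted above.
wide = [
    (1, 41.78, 43.36),
    (2, 48.14, 49.04),
    (1, 40.02, 41.96),
    (3, 43.08, 43.16),
    (2, 46.16, 47.02),
]

# Long format, one row per measurement: (subject, group, time, value).
long_rows = []
for subject, (group, pre, post) in enumerate(wide, start=1):
    long_rows.append((subject, group, "pre", pre))
    long_rows.append((subject, group, "post", post))

print(len(long_rows))  # → 10 (two rows per subject)
```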


I get very strange results... like p=0.70 or more... while comparing the differences I get p in the order of 0.001.

Sorry, maybe I am just dumb and should not have done an experimental project for my thesis. I should have done a review of the scientific literature :mad:


Super Moderator
No, don't be down on yourself. It's a tricky thing to get your head around!

But, yes, a literature review before embarking on research is an essential step in the process.

Your input looks fine.

So, your omnibus test is p=0.7 and the planned comparisons are p<0.05? Have I got this right?


Super Moderator
The biggest doubt I have is this: is it correct to use a one-way ANOVA that compares the changes that occurred in the subjects due to the treatment?
In general, using difference scores is not a very good idea. This article explains problems with the approach - the context the author is commenting on (consumer research) is different, but the problems of reduced reliability, spurious correlations, variance restriction and so on seem likely to apply here as well.

Repeated measures ANOVA seems to me to be the most appropriate choice, as others have said; if you chose to report the results of a differences-scores analysis as well, it would be important to discuss the limitations of the approach.
just... before starting the statistical analysis, I used to think it would be quick and easy with a few Student's t-tests and nothing else. But now I see that with 3 groups everything gets more difficult.

@bugman: I meant that the planned comparisons in the RM ANOVA give me results like p=0.70 or higher, while the one-way ANOVA using difference scores gives me p<0.05 (in the order of 0.001). I can't understand why there is such a difference... maybe because the difference scores are too small, compared to the absolute values and their standard deviations, to be detected in an RM ANOVA?
...that is why I asked if I can use only the difference scores... and that is not because I only want significant results, but p-values of 0.70 don't seem right either... I would like to have at least some result that makes sense, even if not statistically significant, and I think that by choosing the right test I can get that.

bugman, CowboyBear: you are very kind to help me. Thank you so much.
Here I attach the raw data of my experiment, so maybe it is easier for you to understand. I am using SPSS on its 15-day free trial.

now it's time to go to bed... it's 3.30 am here!


Super Moderator
@bugman: I meant that the planned comparisons in the RM ANOVA give me results like p=0.70 or higher, while the one-way ANOVA using difference scores gives me p<0.05 (in the order of 0.001).
Which effect are you looking at in your SPSS output when you see p=0.70? As I understand it, the effect that will be of most interest to you is the time*group interaction (not the main effect for group). Time*group indicates whether or not the rate of change was different between the three groups. Unless I'm doing something wrong (EDIT: looks like I am, given Bugman's results!), I'm finding that this effect is significant in your dataset. The attached graph from your dataset illustrates the interactive effect of time and group - there is a sharp increase in the DV for experimental group 1, a less steep increase for experimental group 2, and group 3 (control) stays pretty much the same.
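A quick way to see what the time*group interaction is about is to compute each group's mean change from the summary means posted earlier in the thread; whether those changes differ across groups is exactly what the interaction term tests. A Python sketch:

```python
# Group means reported earlier in the thread, as (pre, post):
group_means = {
    "group 1": (45.46, 47.10),
    "group 2": (45.38, 46.05),
    "control": (45.15, 45.09),
}

# Mean change per group. Unequal changes across groups are what the
# time*group interaction tests (the test itself needs the raw data,
# not just the means).
for name, (pre, post) in group_means.items():
    print(f"{name}: {post - pre:+.2f}")  # +1.64, +0.67, -0.06
```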


Super Moderator

I'm finding a strongly non-significant interaction effect.

Be careful of the scaling here, mattiazambaldi.

The first attached plot agrees with CowboyBear's - but look at the scale of the y-axis!

The small range is exaggerating the size of the effect. In the second plot, I have included 95% confidence intervals, which overlap in each group and time period - which agrees with my ANOVA output of no time*group interaction.
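For anyone wanting to reproduce 95% confidence intervals like those in the plot by hand: a minimal Python sketch, with the t critical value for df = 9 (n = 10 per group) hardcoded from a t table, and a made-up sample in place of the real data:

```python
from statistics import mean, stdev
from math import sqrt

def ci95(sample, t_crit):
    """95% CI for the mean: mean ± t_crit * SE, where t_crit is the
    two-tailed 5% critical value for df = n - 1 (taken from a t table)."""
    n = len(sample)
    se = stdev(sample) / sqrt(n)
    m = mean(sample)
    return (m - t_crit * se, m + t_crit * se)

# Made-up sample of n = 10 ROM values; t tables give ~2.262 for df = 9.
sample = [44.0, 46.5, 45.0, 47.0, 43.5, 45.5, 46.0, 44.5, 45.0, 46.0]
low, high = ci95(sample, t_crit=2.262)
print(round(low, 2), round(high, 2))  # → 44.51 46.09
```

If two groups' intervals overlap heavily at every time point, as in bugman's second plot, a significant time*group interaction would be surprising.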