What is better: higher number of participants or identical participants?

#1
Dear all,

I am a language teacher currently conducting a research project on vocabulary acquisition. The project used one pre-test, followed by treatment sessions and three post-tests (immediate, 10-day and 2-month). I am interested in vocabulary acquisition and retention performance over the three post-tests under the experimental and control conditions, which received different treatments.

The same participants took part in all tests and treatment sessions (n=68), but six participants were absent for the last post-test (n=62). In my opinion it is better to report and work with means, etc., for the 68 participants where possible (because a higher number of observations gives more reliable results) and with the lower number where necessary (i.e. in post-test 3). But I wonder whether it is permissible to compare results based on different participants like this, or whether I should discard the results of the six participants who missed post-test 3 and use only the results of the 62 throughout the study (i.e. for post-tests 1 to 3). Can anyone advise? Many thanks!
 
#3
Analysis

Basically, post-test scores under the experimental treatment are compared to post-test scores under the control treatment. After testing for significance between experimental and control scores on each test (using a t-test, which showed significant differences on all tests), I calculated the effect size (mean difference and Hedges' d) for all post-tests, as well as the confidence intervals for d.
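In case it helps others follow the calculation, Hedges' d with the usual small-sample correction and an approximate confidence interval can be sketched as below. This uses the independent-groups pooled-SD formulation; the function names are my own, and with a within-subject design a paired effect size based on difference scores may be more appropriate.

```python
import numpy as np

def hedges_g(x, y):
    """Hedges' d/g for two score vectors (independent-groups pooled SD)."""
    nx, ny = len(x), len(y)
    sx2, sy2 = np.var(x, ddof=1), np.var(y, ddof=1)
    # pooled standard deviation
    s_pooled = np.sqrt(((nx - 1) * sx2 + (ny - 1) * sy2) / (nx + ny - 2))
    d = (np.mean(x) - np.mean(y)) / s_pooled
    # small-sample correction factor J = 1 - 3/(4*df - 1), df = nx + ny - 2
    J = 1 - 3 / (4 * (nx + ny) - 9)
    return J * d

def g_ci(g, nx, ny, z=1.96):
    """Approximate 95% CI using the large-sample standard error of d."""
    se = np.sqrt((nx + ny) / (nx * ny) + g**2 / (2 * (nx + ny)))
    return g - z * se, g + z * se
```

The confidence interval here is only the common normal approximation; exact intervals based on the noncentral t distribution differ slightly.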

Since I am interested in the amount of forgetting (i.e. how much the scores fall under each condition from post-test 1 to post-test 3), I would like to be able to compare d across post-tests (it actually gets bigger). I also tested the significance of results under each treatment condition over the three post-tests: no significant changes occurred between post-tests 1 and 2, but significant changes did occur between post-tests 1 and 3 and between post-tests 2 and 3.

Finally, I analysed the data by participant, i.e. whether each participant scored higher (or the same) or lower on each test. This yielded the percentages of participants scoring higher, the same and lower under the experimental treatment compared to the control treatment. Again, I would like to be able to compare these percentages across tests.
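For what it's worth, the per-participant breakdown can be computed along these lines (a sketch assuming the two score vectors are aligned by participant; the function name is mine):

```python
import numpy as np

def outcome_percentages(exp_scores, ctl_scores):
    """Percentage of participants scoring higher / the same / lower
    under the experimental treatment than under the control treatment."""
    exp = np.asarray(exp_scores, dtype=float)
    ctl = np.asarray(ctl_scores, dtype=float)
    higher = np.mean(exp > ctl) * 100
    same = np.mean(exp == ctl) * 100
    lower = np.mean(exp < ctl) * 100
    return higher, same, lower
```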

As an illustration I pasted a simplified table of results below.

Thank you for your help.

Test   n    Mean experimental score (SD)   Mean control score (SD)   MD (mean difference)   Hedges' d

1      68   51.9% (18.2%)                  42.5% (22.3%)             9.3%                   0.46

2      68   53.6% (20.0%)                  39.9% (21.8%)             13.6%                  0.65

3      62   40.7% (18.1%)                  28.5% (17.6%)             12.2%                  0.68
 

Xenu

#4
For the percentages, there might be a problem in using all the participants, since the absent participants might not be missing at random.

When you test the hypothesis, I guess you do pairwise before-after comparisons? In that case, just remove the absent subjects from any comparison that includes the third post-test.
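In practice that amounts to pairwise deletion: keep all 68 for the test 1 vs test 2 comparison, and drop the six absentees only from comparisons involving test 3. A minimal sketch, assuming absences are coded as NaN and using SciPy's paired t-test (the helper name is mine):

```python
import numpy as np
from scipy import stats

def paired_comparison(a, b):
    """Paired t-test on two post-test score vectors, dropping any
    participant missing either score (pairwise deletion)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    mask = ~np.isnan(a) & ~np.isnan(b)  # participants present on both tests
    result = stats.ttest_rel(a[mask], b[mask])
    return result, int(mask.sum())  # test result and the n actually used
```

Reporting the n used for each comparison (68 or 62) alongside the result keeps it clear which sample each statistic is based on.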
 
#5
Thank you, Xenu. So you think that, in a table such as the one I produced above, it would not be misleading to read off the development of d across the three tests (i.e. 0.46 in test 1, 0.65 in test 2 and 0.68 in test 3) and to claim that the effect has increased over time?