
View Full Version : Method Comparison * T-test help

Higgs Boson
03-25-2011, 09:46 AM
Good day all,

I'm developing a new method of sample preparation for a common type of analysis performed in my workplace laboratory. I've chosen 59 different samples and run them all in duplicate on each method. I'm comparing the means of the two results obtained from each of the methods in MS Excel. I made the mistake of not taking a statistics course in university, so things like the t-test output in Excel are a little confusing to me.

I'm using Bland-Altman plots as my main means of displaying the difference/correlation/bias between the two methods, and the standard deviation of the ranges between the duplicates in each method to show their precision. I also figured the precision obtained between duplicates in the reference method would make a nice benchmark of precision to shoot for between methods. I hope I've explained that reasonably clearly.
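In case it's useful to anyone, the Bland-Altman quantities (bias and 95% limits of agreement) are simple to compute; a minimal sketch in Python with numpy, using made-up stand-in values rather than my real 59 results:

```python
import numpy as np

# Hypothetical per-sample results, one value per method
# (in practice these would be the 59 per-sample means).
reference = np.array([10.0, 12.0, 14.0, 20.0])
test      = np.array([11.0, 13.0, 16.0, 21.0])

diffs = test - reference          # per-sample differences
bias  = diffs.mean()              # mean difference (the bias)
sd    = diffs.std(ddof=1)         # sample SD of the differences
loa   = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(bias, loa)
```

The Bland-Altman plot itself is then just these differences plotted against the pair averages, with horizontal lines at the bias and the two limits of agreement.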

The t-test is something I'm told I should also include in my report in order to show the significance of the difference, but I'm not familiar with it, and what I could find elsewhere online hasn't provided me with much clarity. I assumed the paired t-test was the proper choice, given that the individual samples are not related and I am comparing each sample's result on one method to its result on the other. I'll paste my output from Excel here; if somebody would be so kind, please explain its significance to me and perhaps how I would phrase it in my report.

Here it is:

t-Test: Paired Two Sample for Means

                              Reference method   Test method
Mean                             44.63249988     45.80050293
Variance                        610.7968112     541.8444512
Observations                              59              59
Pearson Correlation              0.977400872
Hypothesized Mean Difference               0
df                                        58
t Stat                          -1.693469869
P(T<=t) one-tail                 0.047865435
t Critical one-tail              1.671552763
P(T<=t) two-tail                 0.095730871
t Critical two-tail              2.001717468
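For reference, the same paired test can be reproduced outside Excel, e.g. with Python's scipy; the arrays below are made-up stand-ins, not my real 59 pairs:

```python
from scipy import stats

# Made-up paired measurements (stand-ins for the 59 real pairs)
reference = [2.0, 4.0, 6.0]
test_m    = [1.0, 2.0, 3.0]

# Paired t-test on the per-sample differences
t_stat, p_two_tail = stats.ttest_rel(reference, test_m)
print(t_stat, p_two_tail)  # p_two_tail matches Excel's "P(T<=t) two-tail"
```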

Would it be helpful to also perform the t-test between the duplicates of the reference method for comparison?

Thank you for any input,
Higgs

gianmarco
03-25-2011, 10:33 AM
Hi!

The t-test is used to check whether the difference in mean value between two samples is statistically significant. It returns a statistic (t) and an associated probability (p). The latter tells you about the significance: p < 0.05 indicates that the difference in means is significant.

That said, I do not quite understand whether your two samples are dependent or independent. You speak of "samples not related", yet elsewhere you refer to the paired t-test.

As for the difference between the two, see:
http://udel.edu/~mcdonald/statttest.html
http://udel.edu/~mcdonald/statpaired.html

If the samples are independent, the unpaired t-test should be used.
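To illustrate the difference, here is a hypothetical sketch in Python/scipy with made-up numbers: on the same data, pairing can detect a small but consistent difference that an unpaired test would miss, because the pairing removes the large sample-to-sample spread:

```python
from scipy import stats

# Hypothetical paired data: method b reads slightly higher on every sample
a = [10.0, 20.0, 30.0, 40.0]
b = [11.0, 21.0, 31.0, 42.0]

t_rel, p_rel = stats.ttest_rel(a, b)   # paired: works on the differences
t_ind, p_ind = stats.ttest_ind(a, b)   # unpaired: compares the group means

print(p_rel, p_ind)  # paired p is small, unpaired p is large
```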

As for Excel, I am not familiar with its built-in statistical analysis. Another way to run the test is the free tool PAST, which provides a wide range of statistical analyses as well (just search for it on Google).

In any case, I see that Excel reports the two-tailed probability: if I am reading that output correctly, the difference is not significant at alpha = 0.05, since the reported p-value is 0.095.

Finally, about the t-test and its assumptions, there must be many earlier threads in this forum; you could have a look at them.

Hope this helps
Regards
Gm :wave:

Higgs Boson
03-25-2011, 11:05 AM
Hi,

Sorry, I worded that horribly. I'm almost certain the paired t-test is the correct test, as I have two results for each sample and I'm interested in the difference between them: Sample 1 via the reference method vs. Sample 1 via the test method, and so on down the line of samples. I said unrelated because Sample 1 and Sample 2 are independent of each other. The results are all naturally paired; it would make no sense, for instance, to have 58 samples by method 1 and 59 via method 2.

I've confirmed that the data is normally distributed. I believe that is one of the assumptions.
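For the paired t-test, the normality assumption applies to the paired differences rather than the raw values. One common way to check it is the Shapiro-Wilk test; a minimal sketch in Python/scipy, with a random stand-in array rather than the real data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in for the 59 paired differences (test minus reference)
diffs = rng.normal(loc=0.0, scale=1.0, size=59)

w_stat, p_value = stats.shapiro(diffs)
# A small p (< 0.05) would suggest the differences are NOT normal.
print(w_stat, p_value)
```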

I'll check out those links though, thank you.

Higgs.

gianmarco
03-25-2011, 11:11 AM
Hi!
Another link, from a "medical" website. Maybe that context is more familiar to you:

http://www.gpnotebook.co.uk/simplepage.cfm?ID=x20050323103508411760

Regards,
Gm

Higgs Boson
03-25-2011, 12:24 PM
Hi again,

I've browsed those links and feel I am using the t-test correctly but still find myself confused as to what the output values mean. What tells me if the t-test is finding the difference between the two methods to be significant or not?

Higgs

gianmarco
03-25-2011, 01:49 PM
Hi,
maybe I was not so clear in my earlier post. Sorry ;)

If I am reading Excel's output correctly, I suggest you pay attention to the two-tailed p-value: in a nutshell, it tells you the significance of the difference in mean values between the two samples.

A difference is significant when the probability value is smaller than a threshold: this is usually set at 0.05 (and usually denoted by the Greek letter alpha). If p is smaller than 0.05, the probability of getting the observed difference merely as a result of sampling variation is low. In this case (p < 0.05) the difference is said to be "significant".

When p is greater than 0.05 (as it seems to be in your case: 0.095), the difference is not significant.

I hope I have been clearer this time.

If you do not mind posting your dataset in Excel format, I would be happy to have a look at it using the tools at my disposal.

Regards,
Gm

Higgs Boson
03-25-2011, 02:27 PM
Hi,

Good info. How about the t Stat, which in my case is negative (is that significant?), and t Critical? What do I gather from them?

I'll attach a modified version of my results, just removing anything proprietary or confusing.

I really appreciate your efforts here.

Thanks,
Higgs

gianmarco
03-25-2011, 05:25 PM
Hi :wave:

First, I ran a paired t-test. The program warned me that the normality test failed; I went on to perform the test anyway. The results are in the PDF.
The comment I made in my earlier post still holds true: at alpha 0.05, the test shows that the difference in mean values between the samples is not significant. Besides, as you can see, the 95% confidence interval for the difference between means covers 0. That is to say, again, the difference is not significant, since it could also be 0 (that is, no difference at all).
Please note that the program reports the power of the test. Low power affects the reliability of the test; for more details see http://en.wikipedia.org/wiki/Statistical_power
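That confidence interval can also be recovered directly from the Excel output posted earlier in the thread; a quick sketch in Python/scipy using the posted numbers:

```python
from scipy import stats

n         = 59
mean_diff = 44.63249988 - 45.80050293      # reference minus test, from the Excel output
t_stat    = -1.693469869                   # "t Stat" from the Excel output
se        = mean_diff / t_stat             # standard error of the mean difference
t_crit    = stats.t.ppf(0.975, df=n - 1)   # ~2.0017, matches "t Critical two-tail"

ci = (mean_diff - t_crit * se, mean_diff + t_crit * se)
print(ci)  # the interval straddles 0, so not significant at alpha = 0.05
```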

Secondly, because of the warning about the normality assumption, I ran the non-parametric counterpart of the paired t-test, namely the Wilcoxon signed-rank test: see http://udel.edu/~mcdonald/statsignedrank.html
This test shows that the difference in median values between the samples is significant. See the attached PDF.
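A minimal sketch of the same kind of test in Python/scipy, on made-up paired data (not the real dataset):

```python
from scipy import stats

# Hypothetical paired data in which one method reads consistently higher
a = [2.0, 4.0, 6.0, 8.0, 10.0]
b = [1.0, 2.0, 3.0, 4.0, 5.0]

# Wilcoxon signed-rank test on the paired differences
w_stat, p_value = stats.wilcoxon(a, b)
print(w_stat, p_value)
```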

I hope I have not confused things too much.
I am unsure which test you should pick; maybe someone else in the forum can provide further help on this.

Hope this can be of some use,
regards
Gm

astro_girl
04-07-2011, 10:49 PM
I think if it's negative, it just means it falls on the negative side of the distribution curve, but let me check. This happened to me too for some of my t-values.

I'm sorry... I don't fully understand your initial question within the context of your data comparisons. What I can say is that I had three groups of data to compare. I wanted to compare (1) all three groups (A, B, C) at the same time, and (2) one group to another, two at a time.

So for comparison (1) I ran an ANOVA and a Levene test on A vs. B vs. C. The f- and t-tests were instrumental for comparison (2): A vs. B, A vs. C, and B vs. C. ANOVA is the multi-group equivalent of the t-test, and the Levene test is the multi-group equivalent of the f-test.

An ANOVA tests for shifts in LOCATION (population means) among the three groups. A t-test tests for shifts in LOCATION (population means) between two groups, so it provides more detailed information than the ANOVA. On the other hand, Levene tests for the equivalence of variation (population standard deviation) among the three groups, while the f-test does the same thing for two groups at a time and thus provides more detailed information than the Levene test results. Hopefully this at least helps to clarify what the t-test evaluates within the context of multiple comparison groups.
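For what it's worth, that workflow maps roughly onto Python/scipy as follows (the three groups here are hypothetical stand-ins):

```python
from scipy import stats

# Three hypothetical groups of measurements
A = [1.0, 2.0, 3.0]
B = [1.1, 2.1, 3.1]
C = [0.9, 1.9, 2.9]

f_stat, p_anova  = stats.f_oneway(A, B, C)  # one-way ANOVA: equal means?
w_stat, p_levene = stats.levene(A, B, C)    # Levene: equal variances?
print(p_anova, p_levene)
```

Pairwise follow-ups (A vs. B, etc.) would then use `stats.ttest_ind` for means and an f-test for variances, two groups at a time.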

All this should be done, by the way, AFTER plotting the data and confirming that it follows a normal distribution. The method I chose was to first compute the normal probability plots of the data, and then to compare the resulting Pearson product-moment correlation coefficient (i.e., the normal probability plot correlation coefficient, PPCC) to the corresponding critical PPCC value, which is a formal test of the hypothesis.
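scipy can produce that correlation coefficient directly via `probplot` (the `r` it returns is the PPCC); a sketch on a random stand-in sample rather than real data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(size=59)  # stand-in for one method's 59 results

(osm, osr), (slope, intercept, r) = stats.probplot(data, dist="norm")
# r is the normal probability plot correlation coefficient (PPCC);
# values near 1 are consistent with normality (compare to the
# critical PPCC for n = 59 for a formal test).
print(r)
```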
