t-test or f-test

#1
Hi all,

Apologies for the nature of my post, but I'm a relative beginner to statistical tests. What I would like to know is: if I have two sets of data (as I'm a microbiologist, the data sets are likely to be bacterial colony size measurements on two different agar plate types), why would I choose the t-test over the f-test, or vice versa?

From my understanding, the t-test is used when the standard deviations of the two sets of data are not significantly different from one another, and the f-test is used when the SDs are significantly different from one another. But my question would be: how do we define "significantly different from one another" without performing some kind of analysis? Any help would be appreciated. Go easy!

Thanks,

Craig
 

hlsmith

Less is more. Stay pure. Stay poor.
#2
Are you interested in comparing central tendency? If so, the distribution of the residuals and the sample size usually dictate whether you run a parametric or nonparametric test. The t-test is a parametric test used to compare two means. The f-test is not used for this specific purpose; it can be used as an overall test when you have 3 or more groups to compare, and then the t-test is used for pairwise comparisons.

So if you have two sets, you would examine the residuals to see whether the t-test or the Wilcoxon rank sum test would be appropriate. If you had more groups, you might use the f-test within ANOVA to initially compare all of the groups, provided the residuals were normally distributed or the sample size was large enough (sometimes a cutoff of n=30 is used). Then you might follow up with pairwise t-tests. Another test is typically used to examine the equality of the variances, which helps you decide which t-test result to use.
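To make that workflow concrete, here is a minimal sketch in Python using SciPy. The colony-size numbers are simulated stand-ins, not real data; the variance check uses Levene's test, which is one common choice for the "another test" mentioned above, and the 0.05 cutoff is just the conventional threshold.

```python
# Sketch of the workflow described above, using SciPy.
# The colony-size data are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
plate_a = rng.normal(loc=2.0, scale=0.4, size=30)  # colony sizes, plate type A
plate_b = rng.normal(loc=2.3, scale=0.4, size=30)  # colony sizes, plate type B

# Step 1: test equality of variances (Levene's test is fairly robust
# to departures from normality).
lev_stat, lev_p = stats.levene(plate_a, plate_b)

# Step 2: choose the t-test variant based on that result:
# equal_var=True gives the classic Student's t-test,
# equal_var=False gives Welch's t-test (unequal variances).
t_stat, t_p = stats.ttest_ind(plate_a, plate_b, equal_var=(lev_p > 0.05))

# With 3 or more groups, you would instead start with the ANOVA f-test,
# then follow up with pairwise t-tests if it is significant:
plate_c = rng.normal(loc=2.1, scale=0.4, size=30)
f_stat, f_p = stats.f_oneway(plate_a, plate_b, plate_c)
```

If the residuals looked clearly non-normal and the samples were small, you would swap `ttest_ind` for `stats.mannwhitneyu` (the Wilcoxon rank sum test) instead.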