1 vs. 2 sample t test > calculation of standard error

Hello there,

I have the following issue in understanding the concept:

> When you calculate the t value for a 1-sample t test, you divide the numerator by the standard error of the sampling distribution based on the sample.
> When you calculate the t value for a 2-sample t test, you divide the numerator by a combination of the standard errors of the sampling distributions of the two samples.

WHY does the 2 sample t test have two standard errors in the denominator? (Please don't answer "because there are two samples")

Wherever I search online, they always explain how the 2 sample t test is calculated, but not WHY ...

I attached a picture to make my question clearer ...

Many thanks in advance!



Less is more. Stay pure. Stay poor.
You are accounting for the combined variability of the two samples, so you can see how far the difference in means goes beyond the inherent variability of values within the two samples.
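To make this concrete, here is a minimal sketch in plain Python (the sample values are invented for illustration). It computes the Welch-style two-sample t statistic, where the denominator combines both squared standard errors:

```python
import math
from statistics import mean, variance

# Two hypothetical independent samples (values made up for illustration)
a = [5.1, 4.9, 5.6, 4.7, 5.3, 5.0]
b = [4.2, 4.5, 4.0, 4.8, 4.3, 4.6]

n1, n2 = len(a), len(b)
se1_sq = variance(a) / n1  # squared standard error of the mean of a
se2_sq = variance(b) / n2  # squared standard error of the mean of b

# Denominator combines BOTH standard errors: sqrt(SE1^2 + SE2^2)
t = (mean(a) - mean(b)) / math.sqrt(se1_sq + se2_sq)
print(t)
```

If you dropped one of the two SE terms from the denominator, t would be inflated: you would be pretending one of the two means is known exactly, when in fact both are estimates with their own uncertainty.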

Well, there are two samples - that is obviously important, right!!


You are subtracting two numbers, both of which are uncertain. The uncertainty in the difference combines the uncertainty in each. It turns out, if you do the maths, that these uncertainties combine Pythagoras-wise, if there is such a term: SE_diff = sqrt(SE1^2 + SE2^2). The basic idea is that the variance of a difference of independent quantities is the sum of their variances.
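You can check this "variance of a difference" fact directly by simulation. A quick sketch (the distributions and parameters are arbitrary, chosen only so the true variances are easy to add in your head):

```python
import random
import statistics

random.seed(42)
N = 100_000

# Two independent normal variables: Var(X) = 3^2 = 9, Var(Y) = 2^2 = 4
x = [random.gauss(10, 3) for _ in range(N)]
y = [random.gauss(4, 2) for _ in range(N)]

# The difference X - Y should have variance close to 9 + 4 = 13,
# even though the MEANS subtract, the variances ADD.
d = [xi - yi for xi, yi in zip(x, y)]
print(statistics.variance(d))
```

The same addition happens at the level of sample means, which is exactly why the two-sample t denominator is sqrt(SE1^2 + SE2^2) rather than a single standard error.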