Hi guys, I am analyzing some regression results. The p-value for the F-test is large, but at the same time the t-tests for some of the independent variables show that their coefficients are significantly different from 0. I thought the F-test result meant that I cannot reject the hypothesis that all coefficients are equal to 0? If so, how do I reconcile these two tests?
The F-test is a joint test of the overall performance of all the coefficients together. The t-test is for each of them separately.
For example: is the whole football team good? That's the F-test.
Then, to look at each player separately, do a t-test. So the overall team may not be significant, but some individual players are good, i.e. significant.
Remove the non-significant ones based on the t-tests. Actually, remove only the worst one (highest p-value on the t-test), run the regression again, and see what happens. Do it again and again. At some point, the F-test will be significant and all the t-tests as well.
Lecturers will always say that you shouldn't exclude an individually insignificant variable from a specification if the standard error of the regression goes down, R^2 goes up, and the F-test indicates joint significance upon including it in the model, of course assuming that this variable doesn't cause any violations of the Gauss-Markov assumptions.
However, there can be issues with the efficiency of the estimators if you spam the model with jointly significant variables.