A basic question. Please help...


When discussing large sample theory, the professor in my econometrics course said something that I found hard to grasp. He said something like: in large sample theory, any small/tiny deviation from *** (I don't remember what it is, maybe the assumptions???) will lead the null hypothesis to be rejected... Something like that. Could anyone please help explain this, based on your knowledge of large sample theory?

Also, when discussing bootstrapping he's been going on and on about it. Here is how I think about bootstrapping; please correct me if I'm wrong. Bootstrapping helps reduce uncertainty in obtaining estimates such as coefficients or standard errors. When we run a regression on a dataset, we obtain these statistics, but we're not sure whether they are reliable. So we resample the data to get more reliable information about the coefficients and other statistics. Am I right or wrong? I'm not really sure yet.
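To make my understanding concrete, here is a little sketch of what I think the procedure is (the data and numbers are completely made up, just for illustration): resample the rows with replacement many times, re-fit the regression each time, and use the spread of the re-fitted slopes as a standard error.

```python
import numpy as np

# Made-up data for illustration: y = 2 + 0.5*x + noise
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 2.0 + 0.5 * x + rng.normal(size=n)

def ols_slope(x, y):
    # Simple least-squares slope: cov(x, y) / var(x)
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Bootstrap: resample rows with replacement, re-fit, collect the slopes
boot_slopes = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)
    boot_slopes.append(ols_slope(x[idx], y[idx]))

point_estimate = ols_slope(x, y)            # slope from the original sample
boot_se = np.std(boot_slopes, ddof=1)       # bootstrap standard error of the slope
print(point_estimate, boot_se)
```

Is this roughly the idea, i.e. the point estimate still comes from the original sample, and the bootstrap mainly gives us a standard error / measure of uncertainty around it?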

Please help...

Thanks a lot


New Member
Well, in terms of large sample theory: when significance tests are conducted on an extremely large sample, it takes smaller effect sizes to come out significant.

For example, a correlation of .10 may be significant with a sample size of 10000, but not significant with a sample size of 10.
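You can see this directly from the usual t statistic for testing whether a correlation is zero, t = r * sqrt((n - 2) / (1 - r^2)): for a fixed r, the statistic grows roughly like sqrt(n), so the same tiny correlation crosses the significance cutoff once n is large enough. A quick sketch (the numbers just mirror the example above):

```python
import math

def corr_t_stat(r, n):
    # t statistic for H0: correlation = 0, with n - 2 degrees of freedom
    return r * math.sqrt((n - 2) / (1 - r**2))

# Same correlation r = 0.10, very different conclusions:
print(corr_t_stat(0.10, 10))      # ~0.28  -> far below the ~2.3 critical value
print(corr_t_stat(0.10, 10000))   # ~10.05 -> overwhelmingly "significant"
```

That is the sense in which, with a huge sample, even a tiny deviation from the null gets the null rejected: the test becomes powerful enough to detect deviations that may be practically negligible.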