large samples and resampling stats

First post here. Mods, please move this thread if it's in the wrong place.

I am working with a very large dataset (total n = 30,000+) of normally distributed measurement data on birds. With samples this large, virtually every traditional hypothesis test (e.g., ANOVA) comes back statistically significant. Rather than use an information-theoretic approach to model selection, I decided to run resampling statistics (bootstrap) to generate probabilities of the observed values under the null hypothesis.

First, does this approach seem reasonable? Second, can anyone provide any published examples of this application of resampling stats?
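For concreteness, the approach described in the post above might look something like the following sketch in Python/NumPy. All numbers here are invented for illustration (two groups, a small true difference); the OP's actual data and grouping are not shown in the thread.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example data: one measurement for two groups of birds.
group_a = rng.normal(52.0, 3.0, size=5000)
group_b = rng.normal(52.1, 3.0, size=5000)

observed = group_a.mean() - group_b.mean()

# Resample under the null hypothesis (no group difference) by pooling
# both groups and drawing bootstrap samples from the pooled data.
pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)

n_boot = 2000
null_diffs = np.empty(n_boot)
for i in range(n_boot):
    resample = rng.choice(pooled, size=len(pooled), replace=True)
    null_diffs[i] = resample[:n_a].mean() - resample[n_a:].mean()

# Two-sided p-value: fraction of null resamples producing a difference
# at least as extreme as the observed one.
p_value = np.mean(np.abs(null_diffs) >= np.abs(observed))
```

(Strictly speaking, resampling under a pooled null like this is closer to an approximate permutation test than to the usual bootstrap, which resamples each group as observed; both are common resampling approaches to this question.)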

The following only makes sense (perhaps) if I understood you correctly:

With a sample size that big, you are unlikely to have problems with sampling error, so you basically don't need inferential statistics anymore. Instead you can concentrate on the question of effect size (are the differences "large" or "small"?), which is more a question of biology (i.e. difficult) than statistics.
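To make the effect-size point concrete, here is a small sketch (all numbers invented) using Cohen's d, one common standardized effect-size measure. With tens of thousands of observations, a difference that is trivially small in biological terms can still be picked up by a significance test.

```python
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference, using the pooled SD as denominator."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) \
                 / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
x = rng.normal(50.0, 3.0, size=15000)
y = rng.normal(50.1, 3.0, size=15000)  # true d is about 0.03: tiny

# d lands near 0.03, far below the ~0.2 often called a "small" effect,
# yet a test on samples this large can still flag it as significant.
d = cohens_d(y, x)
```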

If your data come from normal distributions, then with that kind of sample size, bootstrapped confidence intervals should give you almost exactly the same numbers as the usual parametric estimates, so that may not help much :(
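A quick check of that claim, on simulated normal data with invented parameters: at large n, a percentile bootstrap CI for the mean and the textbook normal-theory CI land in essentially the same place.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(50.0, 3.0, size=20000)  # simulated normal data

# Textbook 95% CI for the mean (normal approximation).
se = x.std(ddof=1) / np.sqrt(len(x))
parametric_ci = (x.mean() - 1.96 * se, x.mean() + 1.96 * se)

# Percentile bootstrap 95% CI for the mean.
boot_means = np.array([
    rng.choice(x, size=len(x), replace=True).mean()
    for _ in range(2000)
])
bootstrap_ci = (np.percentile(boot_means, 2.5),
                np.percentile(boot_means, 97.5))

# With n = 20,000, the two intervals agree to roughly two decimal places.
```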