Most of these examples involve large volumes of data that I'm sure Fisher never envisioned when developing experimental design in agricultural fields.
From my limited experience (my 2 cents), I've seen both frequentist and Bayesian methods produce volumes of output from analyses on the same large data -- the thing is, with Bayesian methods there's more room for "tweaking" so that fewer "significant" things are output.
Good to hear someone say it: Fisher's tests were indeed designed exactly for those types of simple experiments. However, that wasn't my point - data from that kind of study is pretty objective, and so is the study design that dictates your test. It's the subjective nature of face recognition, spam filtering or statistical machine translation that makes a Bayesian approach more feasible. If you get into it you'll see that in these cases the more intuitive approach is to start with Bayes. That is what most of those examples have in common - not the fact that they involve large volumes of data. Certainly not: many of them have small-sample problems as well.
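To make the "start with Bayes" intuition concrete, here is a minimal toy sketch of a naive Bayes spam filter - my own illustration, not anyone's production system; the tiny training set and the Laplace smoothing are only there to make the posterior computation visible:

```python
from collections import Counter

def train(messages, labels):
    """messages: list of token lists; labels: 'spam' or 'ham' per message."""
    counts = {"spam": Counter(), "ham": Counter()}
    priors = Counter(labels)
    for tokens, label in zip(messages, labels):
        counts[label].update(tokens)
    return counts, priors

def p_spam(tokens, counts, priors):
    """Posterior P(spam | tokens) under the naive word-independence assumption."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    score = {}
    for label in ("spam", "ham"):
        total = sum(counts[label].values())
        score[label] = priors[label] / sum(priors.values())   # prior P(label)
        for t in tokens:
            # Laplace smoothing so unseen words don't zero out the posterior
            score[label] *= (counts[label][t] + 1) / (total + len(vocab))
    return score["spam"] / (score["spam"] + score["ham"])

counts, priors = train(
    [["cheap", "pills"], ["meeting", "tomorrow"], ["cheap", "watches"]],
    ["spam", "ham", "spam"],
)
print(p_spam(["cheap", "meeting"], counts, priors))  # posterior P(spam | words)
```

The whole filter is just Bayes' rule: P(spam | words) is proportional to P(words | spam) x P(spam), with the likelihoods and the prior estimated from whatever labelled mail you have.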
In any case, the argument that Bayesian methods are feasible for large observational datasets is beside the point: true born-again Bayesians have compelling arguments for using Bayes on small data as well. These arguments are so compelling that Bayes is becoming the standard in many drug trials. A major argument is that you should put as few patients at risk as possible, and correct application of Bayes apparently helps: you can make more complex inferences with less data - and thus put fewer human lives at risk.
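As a rough illustration of that small-sample argument (my own toy numbers, not any trial's actual protocol), a conjugate Beta-Binomial update shows how prior evidence plus a handful of new patients already yields a usable posterior interval:

```python
from scipy.stats import beta

# Hypothetical prior: roughly "8 responders out of 20" worth of earlier evidence
a_prior, b_prior = 8, 12

# A small new cohort: 4 responders out of 6 patients
responders, n = 4, 6
a_post, b_post = a_prior + responders, b_prior + (n - responders)

lo, hi = beta.ppf([0.025, 0.975], a_post, b_post)
print(f"posterior mean = {a_post / (a_post + b_post):.2f}, "
      f"95% credible interval = ({lo:.2f}, {hi:.2f})")
```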
A recent paper by Johnson makes the case that significance determined with classical tests only amounts to marginal evidence (especially with small datasets), and that the wide use of classical tests - where alpha levels of 0.05-0.01 are seen as significant - is a major contributor to the appallingly large proportion of studies in some fields whose results cannot be reproduced. He argues that applying uniformly most powerful Bayesian tests would greatly reduce this proportion.
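A quick back-of-the-envelope illustration of why a "significant" p-value is only marginal evidence - this is not Johnson's exact UMPBF calculation, just the well-known upper bound exp(z^2/2) on the Bayes factor that any point alternative can achieve in a two-sided z-test:

```python
from math import exp
from scipy.stats import norm

for p in (0.05, 0.01, 0.005):
    z = norm.ppf(1 - p / 2)      # two-sided critical value for this alpha
    max_bf = exp(z * z / 2)      # largest Bayes factor any point alternative can reach
    print(f"p = {p:<5}  z = {z:.2f}  max Bayes factor against H0 ~ {max_bf:.0f}")
```

Even this most favourable bound gives only about 7 at p = 0.05, around 28 at p = 0.01, and roughly 51 at p = 0.005 - so a result that just clears the usual 0.05 threshold never corresponds to strong Bayesian evidence.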
I suspect that another major contributor is that many classical tests are being applied to study designs for which they are not appropriate, so I am highly skeptical that 'standard' Bayesian tests built for ANOVA-type study designs will solve this - but that is again completely beside my point.
I simply do not agree with the notion that Bayes is not widely applied. You may not use it in your field, and you may not have seen it during your undergrad courses, but you're seeing its applications in your daily life and likely also using it daily - from your traffic-avoidance app to the collision-avoidance software in your new truck or the face recognition on your camera. Bayesian methods are already everywhere. That is my only point; the rest is semantics.