There is a balance here. To Lowpro's excellent points, the purpose of a pilot is not to prove the hypothesis to a measured statistical certainty. Depending on the patient population, a well-powered study runs north of 500 patients, and that ain't no pilot! There is also the matter of funding: pilots are less expensive to run and thus easier to get funded, though still not cheap.

The other matter is that to compute the required sample size to prove a hypothesis, you need to know alpha (the significance level) and beta (one minus the power), which are easy, but also the size of the treatment effect and its standard deviation. With all of those numbers in hand, the required sample size is simply a matter of plug and chug, plus an extra 10-15% to account for dropouts. However, if you have never done a human study with your intervention, how are you supposed to know the size of the effect and the range of variability? One purpose of the pilot is to calibrate those figures more precisely, as well as to figure out which markers and tests are most highly correlated with the outcome. It is one thing to try 50 markers on a small trial of 30 patients, but quite another to do it on 500 patients - the cost will eat you alive. Better to identify the 10 markers that are most indicative and collect only those on the larger trial. You also discover all the things you forgot about in the protocol when you try to analyze the pilot data.
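To make the plug and chug concrete, here is a minimal sketch of the standard two-arm sample-size formula for comparing means under the normal approximation. The effect size, SD, and dropout rate below are purely illustrative placeholders, not figures from any real protocol:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect, sd, alpha=0.05, power=0.80, dropout=0.10):
    """Patients per arm for a two-arm comparison of means.

    effect and sd are exactly the unknowns a pilot is meant to
    calibrate; alpha and power are the easy part you choose up front.
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)        # two-sided test
    z_beta = z(power)
    n = 2 * ((z_alpha + z_beta) * sd / effect) ** 2
    return ceil(n / (1 - dropout))    # inflate for expected dropouts

# Example: detect a 5-point change when the SD is about 12
print(n_per_arm(effect=5, sd=12))     # roughly 100 per arm
```

Note how the answer scales with the square of sd/effect: guess the effect half as large, or the SD twice as big, and the required n quadruples. That is why getting these two numbers even roughly right matters so much before committing to the big trial.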

So while a pilot will be many times smaller than what is needed to prove the point (unless you get extremely lucky), it has to be large enough to calibrate the unknown variables. Do 5 patients do it? Hardly. Is 200 overkill? Most of the time, yes. So it is a bit of an art form to pick the right number, but the sample size is never irrelevant. In my case I can't even look at the literature to get a ballpark estimate of what the effect and standard deviation might look like, hence my appeal for better ways to approach the problem.

Lowpro is cautioning against the human tendency to read too much into the tea leaves from a limited data set. When you see biotech stocks on Wall Street skyrocket on smallish Phase II studies, it is people who don't understand the purpose of the small trial extrapolating underpowered results and assuming that Phase III will come out the same way. That pushes the stock price higher for a while, but more often than not the results do not hold up and the company crashes when Phase III disappoints.