Hypothesis design

I have a question regarding hypothesis design, that is, how to properly assign H0 and H1.

For example, suppose I have the statement: some drug additive may be detrimental to the human body. I could choose either of two hypotheses as H0, "it is safe" or "it is not safe", and then run the test.

Which is better to choose in this case? Why?
Well, you know, statistics is funny.

If we want to prove that "I am alive", we prove it in reverse! That is, we first show that "I am dead" does not hold up; so "I am alive" is accepted.
In this example, H0 is "I am dead"; we reject that H0 in the end and conclude that H1, the alternative hypothesis, is correct.

Conclusion: whatever you want to prove, take it as H1, and its negation as H0.
If you want to prove that "drug usage is bad for health", then choose H0 as "drug usage is not bad for health".
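To make the convention concrete, here is a minimal sketch in Python with entirely invented data: H0 is "the additive is harmless" (a toxicity marker is unchanged), and what we want to prove, "the additive is harmful" (the marker increases), is H1. The one-sided z-test, the sample values, and the known-sigma assumption are all illustrative choices, not anything from the thread.

```python
import math

# Hypothetical sketch: testing whether a drug additive raises a toxicity
# marker. H0: the additive is harmless (mean change mu = 0);
# H1: the additive is harmful (mu > 0). All numbers below are made up.

def one_sided_z_test(sample, mu0=0.0, sigma=1.0):
    """Return the z statistic and one-sided p-value for H0: mu = mu0
    against H1: mu > mu0, assuming a known population sigma."""
    n = len(sample)
    mean = sum(sample) / n
    z = (mean - mu0) / (sigma / math.sqrt(n))
    # P(Z >= z) for a standard normal, via the error function.
    p_value = 0.5 * (1.0 - math.erf(z / math.sqrt(2)))
    return z, p_value

# Invented marker increases for 9 subjects.
sample = [0.8, 1.1, 0.5, 1.3, 0.9, 0.7, 1.0, 1.2, 0.6]
z, p = one_sided_z_test(sample)
print(f"z = {z:.2f}, p = {p:.4f}")
# A small p-value lets us reject H0 ("harmless") in favor of H1 ("harmful").
```

Note that the burden of proof sits on H1: we only claim harm if the data are sufficiently incompatible with harmlessness.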

Many thanks! But I still need more information. This is just an algorithm, which is more or less clear to me now, but what is the reason behind such an algorithm?
Does it have something to do with minimisation of the Type I error? What happens if we assign H0 and H1 in reverse, compared to the algorithm mentioned above?

What should we do in the case when the two hypotheses are equally plausible?

And sometimes the point of view also seems relevant (or is it not?).
Example: if a pharmaceutical company develops a pill and wants to prove it is effective, it takes that as H1, and H0 is then "the pill is not effective" (according to the algorithm). From the point of view of customers, who do not believe the company unless it proves the effect of the drug, they should take the statement "the pill is not effective" as H1, and its reverse as H0. Does that make sense?

So this part of statistics looks like alchemy rather than science to me. Only rigorous arguments are acceptable to me, and I have not been able to find them in any source; I have tried several books (~5) and a lot of the web.

[Sorry for disturbing with such dumb questions.]

It is because a useful result was proven using this methodology, which, incidentally, is called the Neyman-Pearson paradigm. The Neyman-Pearson lemma simplifies to the test mentioned above when the specifics are filled in. It is important to realize that the two got together to really set down a framework that people could trust. For example, they answered the question: is there any way to test this data that will detect departures from some mean with greater efficiency? They proved that there was not. Prior to that nobody had a clue; it truly was alchemy, I suppose.

It is too bad you do not have JSTOR access, or I could toss you a paper.

It is interesting that they had to fight the godfather of statistics at the time (Fisher) tooth and nail to gain ground on this issue. He was much more interested in what has the most evidence in its favor than in what to reject as unlikely, or, more specifically, in induction with a sense of variance.

Neyman-Pearson preferred deduction. If we assume the null, we deduce that the distribution of this statistic is P. If we observe that the realization of the statistic is unlikely under that distribution, we deduce that the null is not true.
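That deduction can be sketched in a few lines of Python. The coin-flip setting and every number below are invented for illustration: assume the null, derive the statistic's distribution under it (here by simulation), then check whether the observed realization is unlikely in that distribution.

```python
import random

# Hypothetical example of the deduction: H0 says the coin is fair.
# Under that assumption, the statistic "heads in 100 flips" has a
# Binomial(100, 0.5) distribution, which we approximate by Monte Carlo.

random.seed(0)

N_FLIPS = 100
observed_heads = 64          # invented experimental result

# Deduce the null distribution of the statistic by simulation.
null_draws = [sum(random.random() < 0.5 for _ in range(N_FLIPS))
              for _ in range(10_000)]

# p-value: how often a fair coin produces a result at least this extreme.
p_value = sum(h >= observed_heads for h in null_draws) / len(null_draws)
print(f"p = {p_value:.4f}")

if p_value < 0.05:
    print("Reject H0: the realization is unlikely under the null.")
```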

They were also responsible for the idea that the p-value is to be discarded after it has served its immediate purpose. Fisher liked levels of evidence, and prior to them, a p-value or approximate p-value was often all that was reported. Neyman-Pearson pushed the idea of rejection: either there is statistically significant evidence against the null or there isn't. Fisher disliked that.

If this discipline had evolved later, it might be more concerned with the levels of support represented by the p-value than with the black-and-white inference that set the environment for Neyman-Pearson to become so dominant. As it is, during its critical evolution, Fisher himself aided Neyman-Pearson when he set about making tables of p-values in a popular publication that would identify bounds for your p-value.

Neyman-Pearson followed along and pointed out how you could structure your thought, bound your p-values, and make conclusions that could not be denied, nor improved upon without different assumptions or data. Though I did look up (in Lehmann, quoting Neyman-Pearson) a tidbit:

"Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behavior with regard to them, in following which we insure that, in the long run of experience, we shall not be too often wrong."

This, what you are arguing about, is the set of rules they developed in order not to be too often wrong.
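A quick simulation (with made-up settings) illustrates that long-run guarantee: if H0 is in fact true and we always reject at level alpha = 0.05, then over many experiments we are wrong in roughly 5% of them. That is the "rule to govern our behavior" reading of a test.

```python
import math
import random
import statistics

# Invented setup: repeat an experiment many times with H0 true
# (population mean 0, sd 1) and count how often a level-0.05 two-sided
# z-test wrongly rejects. The long-run rejection rate should sit near 5%.

random.seed(1)
Z_CRIT = 1.96                # two-sided 5% critical value for a z statistic

def one_experiment(n=30):
    """Draw a sample with H0 true and test H0: mu = 0."""
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = statistics.mean(sample) / (1 / math.sqrt(n))
    return abs(z) > Z_CRIT   # True means we (wrongly) reject H0

rejections = sum(one_experiment() for _ in range(5000))
print(f"Type I error rate: {rejections / 5000:.3f}")  # close to 0.05
```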