I don't know about Bayesian approaches, but I was always taught you could never conclude the null was true. It is either rejected or not rejected.

It depends a little on the framework. In Fisherian NHST, and the "hybrid" method most people learn now, you can't conclude the null is true.

But in Neyman-Pearson NHST, you can "accept" the null hypothesis if p > alpha. That said, one isn't strictly saying that the null hypothesis is true, just that you have evidence to justify acting as if it were true (i.e., you have grounds for a decision to guide behaviour).
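The Neyman-Pearson "accept/reject" logic above can be sketched as a simple decision rule. This is a toy illustration, not a recommended workflow: the data, the alpha level, and the critical value are all made up (2.365 is the two-sided t critical value for alpha = 0.05 with 7 degrees of freedom).

```python
# A minimal sketch of a Neyman-Pearson decision procedure for a one-sample
# t-test, using only the standard library. All numbers are illustrative.
import math
import statistics

sample = [5.1, 4.9, 5.3, 5.2, 4.8, 5.1, 5.4, 4.9]
mu0 = 5.0          # H0: the population mean is exactly 5.0
t_crit = 2.365     # two-sided critical value, alpha = 0.05, df = 7

n = len(sample)
se = statistics.stdev(sample) / math.sqrt(n)
t = (statistics.mean(sample) - mu0) / se

if abs(t) > t_crit:
    print(f"Reject H0 (t = {t:.2f}): act as if the mean differs from {mu0}")
else:
    print(f"Accept H0 (t = {t:.2f}): act as if the mean is {mu0}")
```

With this sample, t is well below the critical value, so the rule "accepts" H0 -- in the Neyman-Pearson sense of licensing behaviour, not of proving the null true.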

In some Bayesian tests - e.g., some of the Bayes Factor tests being developed at the moment - it is possible to provide evidence to support a point null hypothesis. This is useful if you're doing something like testing for extrasensory perception, where the exact null hypothesis being tested might actually be true. But in Bayesian estimation more generally you typically specify a continuous prior probability distribution for the estimated parameters, which means you're implicitly saying that the probability that the null hypothesis is exactly true is zero.
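A toy version of a Bayes factor for a point null, in the spirit of the ESP example: H0 says the hit rate is exactly 0.5 (pure guessing), H1 puts a uniform Beta(1, 1) prior on the hit rate. The trial counts are invented for illustration; with a uniform prior the marginal likelihood under H1 happens to have the closed form 1 / (n + 1).

```python
# BF01 > 1 means the data favour the point null over the diffuse alternative.
from math import comb

n, k = 100, 53              # 53 "hits" in 100 guesses (made-up data)

m0 = comb(n, k) * 0.5**n    # P(data | H0): binomial likelihood at p = 0.5
m1 = 1 / (n + 1)            # P(data | H1): binomial marginalised over Uniform(0, 1)

bf01 = m0 / m1              # Bayes factor in favour of the point null
print(f"BF01 = {bf01:.2f}")
```

Here BF01 comes out around 6.7, i.e., the slightly-above-chance data actually support "no ESP" over the vague alternative by roughly 7 to 1 -- exactly the kind of evidence *for* a point null that NHST can't give you.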

That is why researchers set up the alternative hypothesis to represent what they really believe is true.

In obscure tidbits of information for the day: another way to use NHST is the "strong" form. I.e., you have a theory that makes a *quantitative* prediction about what the exact true value of the parameter should be (which might not be zero). You then specify this value as the null hypothesis, and see if you can find evidence to reject it. This use of NHST fits more with a Popperian approach to science (i.e., you specify a theory and then try to falsify it). I've never seen it done in practice - theories in psych are almost never specific enough to predict the exact value of a parameter. But apparently this approach has been used in physics.
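A sketch of what strong-form NHST looks like mechanically: the theory's exact predicted value becomes the null, and surviving the test means the theory was not falsified. The "theory" (a predicted value of 9.81) and the measurements here are invented for illustration, and 2.365 is again the two-sided t critical value for alpha = 0.05 with 7 degrees of freedom.

```python
# "Strong" NHST: the null hypothesis IS the theory's quantitative prediction,
# and rejecting it would falsify the theory. All numbers are illustrative.
import math
import statistics

predicted = 9.81    # the theory's exact prediction for the parameter
measurements = [9.79, 9.83, 9.80, 9.82, 9.81, 9.78, 9.84, 9.80]

n = len(measurements)
se = statistics.stdev(measurements) / math.sqrt(n)
t = (statistics.mean(measurements) - predicted) / se

t_crit = 2.365      # two-sided critical value, alpha = 0.05, df = 7
if abs(t) > t_crit:
    print(f"t = {t:.2f}: prediction rejected -- the theory is falsified")
else:
    print(f"t = {t:.2f}: failed to reject -- the theory survives this test")
```

Note the inversion relative to ordinary NHST: here a *non*-significant result is the outcome the theorist hopes for, which is part of why this Popperian usage is so rare in psychology.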