I read a statistician who had the following to say about power (paraphrased). First, has the study already been conducted? If yes, and you found a nonsignificant result, then power = 0. If yes, and you found a significant result, then power = 1.

I don’t know how to interpret the above statement: as agreement, irony, delight in the formulation, mockery, or disagreement?

I can’t dig up the paper I read to back up my paraphrase. But from this link I copy the following formulation:

“Here are two very wrong things that people try to do with my software:

Retrospective power (a.k.a. observed power, post hoc power). You've got the data, did the analysis, and did not achieve "significance." So you compute power retrospectively to see if the test was powerful enough or not. This is an empty question. Of course it wasn't powerful enough -- that's why the result isn't significant. Power calculations are useful for design, not analysis.”

To say it again: given that the result was not significant with these data, of course the observed power is low.

That was about post hoc power. But suppose someone had done a power calculation beforehand and found that 26 interviewees – out of 30 in total – would be enough. Does anybody believe that they would have interviewed just 26 persons? Of course they would have gone for all 30. In such a case the question is more one of do-or-don’t-do the investigation.
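Just to show what an a priori calculation of that kind looks like (this is a hedged sketch with made-up numbers – the effect size, alpha and power below are my assumptions, not anything from the actual study), here is a normal-approximation sample-size formula in plain Python:

```python
# Sketch of an a priori sample-size calculation (normal approximation).
# All numbers are illustrative assumptions, not from the study discussed.
import math
from statistics import NormalDist

def sample_size(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate n needed for a two-sided z-test on a standardized
    effect size. (An exact t-test calculation needs slightly more.)"""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = z(power)            # quantile matching the desired power
    return math.ceil(((z_alpha + z_beta) / effect_size) ** 2)

# To detect a standardized effect of 0.6 with 80% power at alpha = 0.05:
print(sample_size(0.6))  # → 22
```

The point stands either way: once such a calculation says 22 (or 26) is enough, nobody with 30 people available would stop short of interviewing all of them.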

But is power very relevant here? No! Not in such a study. The “margin of error” is relevant, though, and I would prefer to report the standard error (that is: the standard deviation divided by the square root of n).

Maybe the most important thing about such a study is that the staff get the chance to express themselves anonymously.

@Leehud74. I do hope that this thread – your thread – will not be hijacked for other discussions. I just did not want to make things unnecessarily complicated.

Yes, it was an interesting formulation, wasn’t it?