I agree with ondansetron and GretaGarbo.

To expand on their comments a little, "post-hoc power" (i.e., doing a power analysis where you just plug in the observed effect size, observed sample size, etc.) produces a "power" estimate that is simply a transformation of the p-value that you already observed. A p-value of p = .05 translates to post-hoc power = 50%. If you rejected the null, then post-hoc power will be >50%. If you failed to reject the null, then post-hoc power will be <50%.*

What this means is that if you already know the p-value for your analysis, then post-hoc power adds literally no new information. It is mathematically impossible to have a non-significant p-value and high post-hoc power, and impossible to have a significant p-value and low post-hoc power.
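To make the p-value-to-power mapping concrete, here is a small stdlib-only Python sketch for a large-sample, two-sided z test. The bisection inverse-CDF helper and the alpha = .05 default are my own illustrative choices, not anything from the original post:

```python
from math import erf, sqrt

def Phi(x):
    # Standard normal CDF
    return 0.5 * (1 + erf(x / sqrt(2)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    # Inverse normal CDF by bisection -- crude but fine for illustration
    for _ in range(100):
        mid = (lo + hi) / 2
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def posthoc_power(p, alpha=0.05):
    # "Post-hoc power" for a two-sided z test: plug the observed z
    # (recovered from the p-value) back in as if it were the true effect.
    z_crit = Phi_inv(1 - alpha / 2)
    z_obs = Phi_inv(1 - p / 2)
    return Phi(z_obs - z_crit) + Phi(-z_obs - z_crit)

for p in [0.001, 0.01, 0.05, 0.20, 0.50]:
    print(f"p = {p:.3f} -> post-hoc power = {posthoc_power(p):.3f}")
```

Running this shows exactly the pattern described above: p = .05 maps to power of essentially 50%, significant p-values map above 50%, and non-significant ones map below it, because post-hoc power is just a deterministic transformation of p.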

(* Note that this is slightly oversimplified: the post-hoc power value that corresponds to observed p = .05 is exactly 50% for large-sample z tests; for finite samples, this equilibrium power value is slightly greater than 50% in small samples and approaches 50% from above as the sample size grows. See Russ Lenth's excellent paper on the topic.)

> Also, some of the journals I hope to submit my results to ask that power be reported so that it can inform the design of future research.

Yes, but what they want is a prospective power analysis, not a post-hoc power analysis. Since you can't do a *truly* prospective power analysis (you can't go back in time), what you can do instead is present a power analysis for a range of small, medium, and large effect sizes (NOT just for your observed effect size) at your observed sample size, so that readers have an idea of what your a priori chances of detecting common effect sizes were. That would actually be informative, unlike post-hoc power.
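As a sketch of what such a range-of-effect-sizes table might look like, here is a stdlib-only Python example using the large-sample z approximation to two-sample power. The per-group n of 40 is a hypothetical placeholder for your observed sample size, and the d = 0.2/0.5/0.8 benchmarks are Cohen's conventional small/medium/large values:

```python
from math import erf, sqrt

def Phi(x):
    # Standard normal CDF
    return 0.5 * (1 + erf(x / sqrt(2)))

def prospective_power(d, n_per_group, alpha=0.05):
    # Normal approximation to two-sample, two-sided power:
    # noncentrality delta = d * sqrt(n / 2), compared to the critical z.
    z_crit = 1.959964  # Phi^{-1}(1 - .05/2)
    delta = d * sqrt(n_per_group / 2)
    return Phi(delta - z_crit) + Phi(-delta - z_crit)

n = 40  # hypothetical observed per-group sample size
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(f"{label} effect (d = {d}): power ~ {prospective_power(d, n):.2f}")
```

A table like this tells readers what the study could plausibly have detected at its actual sample size, which is the information the journals are really after. (For exact t-based power rather than the z approximation, a tool like statsmodels' `TTestIndPower` or G*Power would give slightly more accurate numbers.)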