Making claims about effect size

Hi,

I'm working with a very large sample and have realized that almost any blip in the data will register as p = .000 (i.e., p < .001) simply due to the sample size. Of course, there is a big difference between statistical significance and practical significance.
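
To illustrate (with made-up numbers, nothing from my actual data), here's a quick scipy sketch: a 2x2 table where the two groups differ by a single percentage point still produces an astronomically small p-value at n = 1,000,000, even though Cramér's V is only 0.01.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical data: group A is 50.5% "yes", group B is 49.5% "yes",
# with n = 1,000,000 observations in total.
table = np.array([[252_500, 247_500],
                  [247_500, 252_500]])

chi2, p, dof, _ = chi2_contingency(table, correction=False)
n = table.sum()
v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))  # Cramér's V

print(f"chi2 = {chi2:.1f}, p = {p:.2e}, V = {v:.3f}")
# -> chi2 = 100.0, p ~ 1.5e-23, V = 0.010: "significant" but negligible.
```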

I've been reading up on strategies for calculating effect sizes. Increasingly, APA journals are immediately returning articles that fail to include relevant effect-size information. I want to make sure that my claims are compelling to even the most nitpicky reviewer.

My question is this: how much latitude do we have with the guidelines for interpreting the meaning of effect size? I'm looking at a data set that suggests some fascinating links between two nominal variables, and Cramér's V is 0.23. According to the guidelines, this is just a moderate effect size. But if Cramér's V were 0.25, the guidelines would suggest a "moderately strong" effect size. (Note: I realize the difference between an effect and an association, and am just using the phrase 'effect size' to mean the strength of the association.)
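
For anyone following along, the formula behind those guideline numbers is V = sqrt(chi2 / (n * (min(rows, cols) - 1))), so for a fixed table shape the 0.23-vs-0.25 cutoff is just a rescaling of chi-squared. A tiny sketch with hypothetical numbers (not my data) to show how close the two labels sit:

```python
import numpy as np

def cramers_v_from_chi2(chi2: float, n: int, n_rows: int, n_cols: int) -> float:
    """Cramér's V from a chi-squared statistic and the table dimensions."""
    return float(np.sqrt(chi2 / (n * (min(n_rows, n_cols) - 1))))

# Hypothetical 3x4 table with n = 2,000; the chi-squared values are
# chosen to land exactly on the two guideline labels.
n, rows, cols = 2_000, 3, 4
for chi2 in (211.6, 250.0):
    print(f"chi2 = {chi2:.1f} -> V = {cramers_v_from_chi2(chi2, n, rows, cols):.3f}")
# -> V = 0.230 ("moderate") vs. V = 0.250 ("moderately strong")
```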

For obvious reasons, I would love to be able to claim a moderately strong association between the two variables. I know that Cohen expressed dismay at the way researchers were using his d benchmarks as a standardized template for evaluating meaningful effects. There is obviously some room for interpretation, but what sorts of things can help make the case for leaning in one direction or the other?
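
For what it's worth, one thing I've been toying with (I'm not claiming this is standard practice, and the table below is made up) is putting a percentile bootstrap confidence interval around V, so the argument rests on a range rather than on which side of 0.25 the point estimate happens to land:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

def cramers_v(table: np.ndarray) -> float:
    """Cramér's V for an r x c contingency table."""
    chi2 = chi2_contingency(table)[0]
    return float(np.sqrt(chi2 / (table.sum() * (min(table.shape) - 1))))

def bootstrap_v_ci(table: np.ndarray, n_boot: int = 2_000, alpha: float = 0.05):
    """Percentile bootstrap CI for V, resampling individual observations."""
    r, c = table.shape
    n = int(table.sum())
    # Expand the table into one cell index per observation, then resample.
    cells = np.repeat(np.arange(r * c), table.ravel().astype(int))
    vs = []
    for _ in range(n_boot):
        resampled = rng.choice(cells, size=n, replace=True)
        boot = np.bincount(resampled, minlength=r * c).reshape(r, c)
        vs.append(cramers_v(boot))
    return np.percentile(vs, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Made-up 3x3 table (not my data); its observed V comes out around 0.20.
table = np.array([[100,  70,  50],
                  [ 60, 100,  60],
                  [ 50,  65, 105]])
print(f"observed V = {cramers_v(table):.3f}")
lo, hi = bootstrap_v_ci(table)
print(f"95% bootstrap CI for V: [{lo:.3f}, {hi:.3f}]")
```

If the interval comfortably spans a guideline cutoff, it seems hard to justify hanging much on the "moderate" vs. "moderately strong" label either way.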

I realize that it's difficult to answer this question without context, but I'm wondering if others have seen excellent examples -- in any context -- of scholars making compelling interpretations of effect size in ways that acknowledge existing guidelines without using them as a straitjacket.

Whew! Hope that question made sense!