Comparison of linear and power law regression models

eis

I'm quite a newb at statistics, with only one college-level statistics course behind me. I apologize in advance if my terminology and/or understanding are in error.

My fundamental question is: how do I determine whether a given dataset is better fit by a linear model or by a power law model? This determination is central to a research project I'm attempting to complete.

It seems sensible to compare R^2 or adjusted R^2 values, but an issue arises: power law models are forced to pass through the origin, whereas linear models are not, which effectively handicaps the power law fits. I realize that an essentially brute-force approach, iteratively converging on a maximum R^2 value, could optimize the power law model and thus minimize that inherent "handicap", but such a computationally intensive (and, at my low level of skill, programmatically difficult) approach seems infeasible. (Of course, if there is a rather straightforward and efficient way to implement such an approach for hundreds of datasets, I'd love to know! A sketch of what I mean follows below.) Is there another way to fairly compare the goodness of fit of linear and power law models, given the handicap described above?
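To make the question concrete, here is a rough Python sketch (assuming NumPy and SciPy are available; the helper names like power_law and compare_models are just mine) of the kind of comparison I have in mind. It fits the power law by iterative nonlinear least squares with a free offset, so the curve is not forced through the origin. I have no idea whether this is a statistically fair way to make the comparison:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def power_law(x, a, b, c):
    # Power law with a free offset c, so the curve is not forced
    # through the origin (assumes x > 0)
    return a * np.power(x, b) + c

def r_squared(y, y_pred):
    # Coefficient of determination, computed the same way for both models
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def compare_models(x, y):
    # Ordinary linear regression: y = m*x + k
    lin = linregress(x, y)
    r2_lin = r_squared(y, lin.slope * x + lin.intercept)

    # Iterative nonlinear least squares for the power law;
    # p0 is a rough starting guess for (a, b, c)
    popt, _ = curve_fit(power_law, x, y, p0=(1.0, 1.0, 0.0), maxfev=10000)
    r2_pow = r_squared(y, power_law(x, *popt))
    return r2_lin, r2_pow

# Example on synthetic data; a real run would loop over each dataset
rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 50)
y = 2.0 * x**1.5 + 3.0 + rng.normal(0.0, 1.0, x.size)
print(compare_models(x, y))
```

Looping something like this over a few hundred datasets looks computationally cheap, but my worry about whether the two R^2 values are actually comparable still stands.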

On a related (and, I assume, much simpler) note: how exactly should I adjust the R^2 values before comparing them? I'm unsure whether the standard Ezekiel method applies to power law models, and, if so, how the number of predictors is counted in each model. (Like I said, I'm a newb!)
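For reference, the Ezekiel adjustment as I understand it is

adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)

where n is the number of observations and p is the number of predictors. My confusion is whether, for the power law, p should count only the single predictor variable, or also treat the fitted exponent (and any offset) as extra parameters.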

Kind regards,
eis