Idea (probably not new) about a metric for comparing forecast errors across settings

Hi there,

I had written a longer post, but was kicked off upon submitting it, so I will keep this one to a minimum...

What are your views on the following metric for comparing the forecast errors of forecasters working in very different settings:

|actual - forecast| / standard deviation of the actuals (computed from prior observations).

Some points in favour:

* When the variance of the actuals is small, forecasting is easier, so the same absolute error is 'punished' more heavily.
* The metric does not blow up when the actual values are close to zero, the way percentage error does.
* Large scaled errors in low-variance situations signal special events.
* It is (apparently) setting-neutral.
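
To make the definition concrete, here is a minimal sketch of how I imagine computing it (the function name, the use of the sample standard deviation, and the toy numbers are just illustrative choices, not part of the proposal):

```python
import numpy as np

def scaled_error(actual, forecast, prior_actuals):
    """Absolute forecast error scaled by the standard deviation of
    prior observations of the actual (illustrative sketch only)."""
    sd = np.std(prior_actuals, ddof=1)  # sample standard deviation of the prior actuals
    if sd == 0:
        raise ValueError("prior actuals have zero variance; the metric is undefined")
    return abs(actual - forecast) / sd

# Two forecasters with the same absolute error of 10, in very different settings:
stable_series   = [100, 102, 98, 101, 99]    # low-variance history
volatile_series = [100, 150, 60, 130, 80]    # high-variance history

print(scaled_error(actual=110, forecast=100, prior_actuals=stable_series))   # ~6.3, large scaled error
print(scaled_error(actual=110, forecast=100, prior_actuals=volatile_series)) # ~0.3, small scaled error
```

The same absolute miss of 10 looks much worse on the stable series, which is the behaviour I am after.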

Feedback is very much appreciated, and ideally some links to literature about this metric!

Regards,

BigBugBuzz