Thoughts on appropriate comparison tool

Greetings from a new joiner! Great forum!

I had a question about analysis. I have a decent background in analysis, though I have been "out of practice" for some time.

My question focuses on appropriate comparison of data, though for confidentiality reasons, I'll need to speak in generalities.

What would the best test be to compare percentage scores between 2 - 4 organizations? (i.e. Org. 1 scores 50% in a metric, Org. 2 scores 75%, etc.)

The populations the data is obtained from differ in both size and composition between organizations; more specifically, there is NOT a normal distribution. The sample populations are subsets of a subset of a normal population. Further, there is stratification within the sample population that varies between organizations (i.e. Org. 1's population may be 25% type A, 45% type B, 30% type C, while Org. 2's population could be 75% type A, 20% type B, and 5% type C). For accuracy's sake, the population sizes between organizations will likely vary from 50 to 175.

At present, the percentages are compared straight across, with each organization ranked accordingly. It is my contention that this is not a valid comparison, as the populations are neither homogeneous nor similar between organizations.

So, long story short, should a correction factor be applied to mitigate variance in the populations to allow for more accurate comparison?


TS Contributor
Looks to me like you're stuck in the classic "apples to oranges" comparison, and unfortunately there's no simple correction factor to sort it all out.
I agree and have said so to the powers that be, yet they insist on using this method to compare performance. It's hard to be motivated to achieve when the comparisons are neither valid nor reliable!

Any thoughts at all on improving comparison accuracy?


TS Contributor
Maybe report the metric for each "type" within each organization? Maybe then weight it (the overall %) based on the relative proportions of each type within each org?
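That weighting idea is essentially direct standardization: score each org's per-type rates against one common reference mix, so composition differences stop driving the ranking. A minimal sketch below, with entirely hypothetical per-type rates and a made-up reference mix (only the two example mixes come from the thread):

```python
# Per-type success rates for two orgs -- hypothetical numbers for illustration
rates = {
    "Org1": {"A": 0.60, "B": 0.45, "C": 0.50},
    "Org2": {"A": 0.80, "B": 0.40, "C": 0.30},
}

# Each org's own population mix (the example proportions from the thread)
mix = {
    "Org1": {"A": 0.25, "B": 0.45, "C": 0.30},
    "Org2": {"A": 0.75, "B": 0.20, "C": 0.05},
}

# A shared reference mix, e.g. the pooled composition across all orgs
# (assumed values here)
reference_mix = {"A": 0.50, "B": 0.325, "C": 0.175}

def overall_rate(rates_by_type, weights):
    """Weighted average of per-type rates under a given population mix."""
    return sum(rates_by_type[t] * weights[t] for t in weights)

for org in rates:
    crude = overall_rate(rates[org], mix[org])          # what's reported today
    std = overall_rate(rates[org], reference_mix)       # composition-adjusted
    print(f"{org}: crude={crude:.3f}  standardized={std:.3f}")
```

With these made-up numbers, Org2's crude rate looks far ahead largely because its population is 75% type A (the highest-scoring type), while the standardized rates are much closer. Ranking on the standardized figure compares organizations as if they all served the same population.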