[Measurement] What do we need to know about the differences for interval measurement?

CB

Super Moderator
[Warning: This isn't really a statistics question. I'm taking a punt that one of the others interested in psychometrics around here might know the answer to this.]

I've been working again on my off-again, on-again attempt to provide a guide to the whole controversy of admissible statistics.

Something's troubling me in representationalist measurement theory (i.e. the theory that measurement is about representing empirically observable relations amongst objects).

Specifically I'm wondering about the exact requirements for an interval scale. Obviously we need to make some kind of observations with respect to the differences between objects. That's what distinguishes an ordinal from an interval scale.

But do we need only to make observations of the type:
1) The difference between A and B is noticeably greater than the difference between B and C

or do we need to be able to observe ratios of differences, e.g.
2) The difference between A and B is twice the difference between B and C?

It seems to me that if we only need to make observations of type (1), then some non-linear transformations would preserve the information we have about the empirical relations (whereas the permissible transformations for an interval scale are theoretically limited to linear transformations [edit to clarify: by linear transformation I mean x' = ax + b, where x could be a vector but a and b are constants]).
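As a quick sanity check of my own (toy numbers, nothing canonical), a non-linear but monotonic transformation like squaring can preserve both the ordering of the objects and the ordering of the differences, while clearly not being of the form x' = ax + b:

```python
# Toy check: squaring (a non-linear but monotonic transformation) can
# preserve everything a type-(1) observation tells us.
A, B, C = 1.0, 2.0, 4.0          # original assignments (arbitrary example)
A2, B2, C2 = A**2, B**2, C**2    # non-linear transform x' = x^2

# The order of the objects is preserved...
assert A < B < C and A2 < B2 < C2
# ...and so is the ordering of the differences (C-B > B-A):
assert (C - B) > (B - A) and (C2 - B2) > (B2 - A2)

# But x' = x^2 is not of the form x' = a*x + b: fit a and b from the
# first two points, then check the third.
a = (B2 - A2) / (B - A)   # a = 3
b = A2 - a * A            # b = -2
print(a * C + b, C2)      # 10.0 vs 16.0, so no single (a, b) works
```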

Any takers? Sorry for brevity; I can flesh this out if people unfamiliar with the area are nevertheless interested.


spunky

Can't make spagetti
Re: [Measurement] What do we need to know about the differences for interval measurement?

I've been working again on my off-again, on-again attempt to provide a guide to the whole controversy of admissible statistics.
yeah... good luck with that

Something's troubling me in representationalist measurement theory (i.e. the theory that measurement is about representing empirically observable relations amongst objects).
i'm gonna have to say that i don't think i know a lot about this one in particular. if i remember correctly, both Classical Test Theory and the Latent Variable approach beat the crap out of it and left it for dead, which is why, if my memory serves me right, it's just slowly wasting away in some corner in downtown San Francisco :true story:

Specifically I'm wondering about the exact requirements for an interval scale. Obviously we need to make some kind of observations with respect to the differences between objects.
why not the relationship between objects as well? we're all about finding differences as much as we're about finding relationships in this business of data analysis.

It seems to me that if we only need to make observations of type (1), then some non-linear transformations would preserve the information we have about the empirical relations (whereas the permissible transformations for an interval scale are theoretically limited to linear transformations).
could you elaborate on this a little bit more, please? particularly on restricting the transformations to linear for the case of interval scales? it seems like the real juice of whatever you're trying to get at is here.

CB

Super Moderator
Re: [Measurement] What do we need to know about the differences for interval measurement?

yeah... good luck with that
Hee. It's not as ambitious as it sounds! I just feel like a lot of people have this vague idea that you can only do certain things with ordinal data, but resources aimed at everyday data analysts aren't very good at explaining the connection between measurement theory and admissible statistics. So I'd basically be saying stuff like:

1) If you just want to make inferences about relationships between your variables exactly as operationally defined in your study, without worrying about whether the variables actually measure the hypothetical constructs they're intended to, then you're an operationalist, and you can use whatever analysis you like (as long as its statistical assumptions are met).

2) If you want to make inferences about empirical relations amongst objects, you're a representationalist, and arguably you should only use a very restricted range of analyses with ordinal data (i.e. rank-based non-parametric tests)

3) If you want to make inferences about latent variables underlying your measurements, the parametric vs non-parametric debate is perhaps a bit redundant because you should really use actual latent variable analysis.

i'm gonna have to say that i don't think i know a lot about this one in particular. if i remember correctly, both Classical Test Theory and the Latent Variable approach beat the crap out of it and left it for dead, which is why, if my memory serves me right, it's just slowly wasting away in some corner in downtown San Francisco :true story:
Haha! I guess it depends on what you mean by beaten and left for dead. Representationalism is probably a lot less well known than the other two amongst most psychologists, but as a complete theory of how measurement works it's probably a lot more advanced. E.g., it led to the conjoint measurement model, showing how an attribute can be demonstrated to be quantitative (in the strict sense that Joel Michell uses the word) without a concatenation operation. It's covered a fair bit in Denny Borsboom's Measuring the Mind book.

could you elaborate on this a little bit more, please? particularly on restrcting the transofrmations to linear for the case of interval scales? it seems like the real juice of whatever you're trying to get at is here.
This goes back to good ol' S. S. Stevens. E.g. take the example of ordinal measurement. Say I have 3 objects, A B and C (these are objects, not attributes. They might be people, lumps of rock, whatever). With respect to some attribute, I have observed that object B is noticeably greater than object A, and that object C is noticeably greater than object B. I haven't observed anything about the differences between the 3 objects though.

I can then assign numbers to represent the relations amongst the 3 objects. My choices are broad: anything that demonstrates the observed relations (A < B < C) will do. I could use A = 1, B = 2, C = 3, but I could also use A = 0.01, B = 0.02, and C = 94354. It doesn't really matter. Any monotonic transformation of a scale that shows that A < B < C is just as good at representing the relations observed. And if I follow the logic of all this, I would only want to use a statistical test whose results would be invariant regardless of the numerical assignments I chose to use (as long as the assignments preserve the fact that A < B < C).
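To make that invariance concrete (toy numbers of my own): any monotonic re-assignment leaves the ranks, and hence any rank-based statistic, untouched, while a statistic like the mean changes completely.

```python
def ranks(xs):
    """Rank of each value within the list (1 = smallest); assumes no ties."""
    order = sorted(xs)
    return [order.index(x) + 1 for x in xs]

scale1 = [1, 2, 3]            # one admissible assignment for A < B < C
scale2 = [0.01, 0.02, 94354]  # another; a monotonic re-assignment of scale1

# Rank-based statistics can't tell the two scales apart...
assert ranks(scale1) == ranks(scale2) == [1, 2, 3]

# ...but the mean (and anything built on it) certainly can:
print(sum(scale1) / 3)  # 2.0
print(sum(scale2) / 3)  # roughly 31451.34
```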

That all makes sense to me in an abstract sense. But I'm less clear about interval scales. The distinction between ordinal and interval measurement has something to do with being able to make observations not only about one object being greater than another, but also about the differences between objects. Unlike ordinal measurements, the idea is that your choice of scale (/numerical assignments) with interval measurements is more restricted.

E.g. imagine I have observed that the difference between B and C is twice the difference between A and B. Then I could use a scale such that A = 1, B = 2, and C = 4. Or I could use A = 3, B = 6, and C = 12. Either way I have represented the information I have: that A < B < C, and further that C-B = 2*(B-A). There are a bunch of other numerical assignments I could use, but if I go outside a linear transformation, the assignments won't represent the relations observed amongst the objects. (This lines up with what Stevens said; only linear transformations preserve the empirical relations observed with an interval scale.)
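A quick numeric check of that (again, just toy numbers): a linear transformation leaves the ratio of the differences alone, while a non-linear but still monotonic one, like taking logs, does not.

```python
import math

def diff_ratio(a, b, c):
    """Ratio of the difference C-B to the difference B-A."""
    return (c - b) / (b - a)

A, B, C = 1.0, 2.0, 4.0                # observed: C-B = 2*(B-A)
assert diff_ratio(A, B, C) == 2.0

# Any linear transformation x' = a*x + b preserves the ratio:
lin = [3 * x + 5 for x in (A, B, C)]   # arbitrary a = 3, b = 5
assert diff_ratio(*lin) == 2.0

# A non-linear (but still monotonic) transformation does not:
logged = [math.log(x) for x in (A, B, C)]
print(diff_ratio(*logged))             # roughly 1.0, not 2.0
```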

But he wasn't all that clear about what we actually needed to observe about the differences. So say I've observed again that A < B < C, and I've also observed something about the differences. But this time I know only that the difference between B and C is greater than the difference between A and B. I don't know the ratio of the differences.

The situation seems different here because the numerical assignments that could represent the empirical relations observed are not just ones that are within a linear transformation of each other. For example I could use A = 1, B = 2, and C = 4; OR A = 1, B = 2, and C = 5. These two scales are not a linear transformation of one another, but either preserves the observed relations (that A < B < C, and that C-B > B-A). So do I have an interval scale? An ordinal scale? Or something in between?

spunky

Can't make spagetti
Re: [Measurement] What do we need to know about the differences for interval measurement?

Representationalism is probably a lot less well known than the other two amongst most psychologists
not just psychology; i'd go so far as to say that the social sciences don't even acknowledge its existence anymore. the first time i ever heard of it was actually just last semester in a G-theory course, where i believe the professor devoted a good... what... 2 minutes to mentioning it? and then he moved on. interestingly enough, i realize that you know it because of denny's book. lazar, jake and i talked about it... because of denny's book as well! so... yeah, there're not many fans of it out there. do you know if people use it in interpreting their scales and measures? i guess i just feel queasy if this is one of those "paradigm shifts" where you'd kind of need the bulk of social scientists to stop thinking about their tests and scales in terms of factors and coefficients of reliability and move on to thinking about them in some other way. trust me, we've got enough going on in the Bayesian crusade now when it comes to different ways to think about making inferences from your data

This goes back to good ol' S. S. Stevens.

stoopid Stevens... it's been 40 years since his book and we're still cleaning up the mess that he created

The situation seems different here because the numerical assignments that could represent the empirical relations observed are not just ones that are within a linear transformation of each other. For example I could use A = 1, B = 2, and C = 4; OR A = 1, B = 2, and C = 5. These two scales are not a linear transformation of one another, but either preserves the observed relations (that A < B < C, and that C-B > B-A). So do I have an interval scale? An ordinal scale? Or something in between?
Clarifying note: After having a brief chat with Dason, i realized that there is an ambiguity in my argument: i am assuming that the vector V = <0,0,1> is in the vector space where you are operating. i'm guessing i just assumed you were talking about the whole R^3 as vector space. but depending on how you define your vector spaces, my counter example may or may not work. so i guess i'd just need you to elaborate on that to see whether what i said is valid or not.

Clarifying note: CowboyBear had further specified that the specific linear transformation he's talking about is x'=ax + b in which case my counter-example doesn't work

uhm... CbBear... those *ARE* within a linear transformation of each other and that's very easy to show.

$$\left( \begin{array}{c} A=1\\ B=2\\ C=4\\ \end{array} \right)+\left( \begin{array}{c} 0\\ 0\\ 1\\ \end{array} \right)=\left( \begin{array}{c} 1\\ 2\\ 5\\ \end{array} \right)$$

so there you have it. it was straightforward to find a vector V = <0,0,1> that, when added to your original one, results in the one you needed. and adding vectors together is a linear transformation

That all makes sense to me in an abstract sense. But I'm less clear about interval scales. The distinction between ordinal and interval measurement has something to do with being able to make observations not only about one object being greater than another, but also about the differences between objects. Unlike ordinal measurements, the idea is that your choice of scale (/numerical assignments) with interval measurements is more restricted.

E.g. imagine I have observed that the difference between B and C is twice the difference between A and B. Then I could use a scale such that A = 1, B = 2, and C = 4. Or I could use A = 3, B = 6, and C = 12. Either way I have represented the information I have: that A < B < C, and further that C-B = 2*(B-A). There are a bunch of other numerical assignments I could use, but if I go outside a linear transformation, the assignments won't represent the relations observed amongst the objects. (This lines up with what Stevens said; only linear transformations preserve the empirical relations observed with an interval scale.)

But he wasn't all that clear about what we actually needed to observe about the differences. So say I've observed again that A < B < C, and I've also observed something about the differences. But this time I know only that the difference between B and C is greater than the difference between A and B. I don't know the ratio of the differences.
now, this is the thing that drives me NUTZ about Stevens' work, because he "created" these stoopid "typologies" of measurement scales over which people are now killing each other. those distinctions do not exist!!! to the best of my understanding, the problem Stevens (and the measurement people) were facing back in the day was that they needed to make sure social scientists received a decent-enough training in measurement while keeping the statistical requirements at a minimum. in reality (and i think we all learn this in any intro stats course) there are two kinds of random variables: discrete and continuous (excepting all the weirdo stuff that we never use in our fields). that's it: only discrete and continuous. now, they both require different statistical approaches (like discrete RVs have probability MASS functions vs the probability DENSITY functions of the continuous RVs and stuff), but this whole issue about whether there's a true zero or not, or whether they behave like numbers or not, was a way to bypass the need for social scientists to become more proficient in data analysis without the need to learn about its theoretical underpinnings. without Stevens' typologies, those ridiculous "decision trees" at the back of any research methods book would have to disappear... but then again, because we just *love* to make things simpler than they should be, we opened the door to this really dark and weird place where now, even 40 years later, we still keep on arguing over the same stuff.

i'm all for maybe trying to bring representational measurement theory back into the spotlight to see what it has to offer, but i'm a little bit reluctant to believe we can reconcile fabricated differences in statistical analysis that shouldn't even exist. yes, i know Stevens made them up in good faith, but i think we're already beyond the point of even acknowledging them. still, i highly doubt they'll *ever* go away, of course, because if they did, social scientists would have to face the oh-so-horrible task of actually learning some statistical theory.

CB

Super Moderator
Re: [Measurement] What do we need to know about the differences for interval measurement?

i guess i just feel queasy if this is one of those "paradigm shifts" where you'd kind of need the bulk of social scientists to stop thinking about their tests and scales in terms of factors and coefficients of reliability and move on to thinking about them in some other way. trust me, we've got enough going on in the Bayesian crusade now when it comes to different ways to think about making inferences from your data
Don't worry, I'm not trying to inspire a paradigm shift back to representationalist measurement. I think representationalism is interesting, but it's not how I think about measurement, I don't expect others to, and I don't really think it's how we should approach measurement.

The reason I'm talking about it and trying to improve my understanding of it is because I feel like everyday data analysts face a bunch of experts saying "measurement level matters! No parametric tests with ordinal data!", and then a bunch of others saying "measurement level doesn't matter! t-tests for everyone!". But no one ever refers back to the original argument for admissible statistics in the first place (that is, outside of articles and books aimed at people with a special interest in measurement theory; like Denny's book, or like this article by he who shall not be named).

Basically, to dispense with an argument - or show the conditions under which it can be ignored - surely you have to explain the original argument? I think that most psychologists implicitly take either a latent variable or an operationalist perspective to measurement; if they knew that the original argument for "admissible statistics" was predicated on a perspective to measurement that they don't share, it might help them sleep easier (and focus on more important things). But when the source and nature of the original argument is unclear, we get nowhere. In fact, where we end up is with popular articles claiming that we can ignore measurement levels because ANOVA is robust to distributional assumptions, despite measurement levels and distributional assumptions being completely different things.

to the best of my understanding, the problem Stevens (and the measurement people) were facing back in the day was that they needed to make sure social scientists received a decent-enough training in measurement while keeping the statistical requirements at a minimum...
From what I've read - and going from Stevens' article itself - I don't think this is quite right, at least as a description of Stevens' motivations. Stevens was trying to respond to the big measurement debate of the day, which was whether or not psychological measurement is actually measurement. Physical scientists said no, psychologists said yes, no one resolved anything. So he made up a very broad definition of measurement - any assignment of numbers according to a rule - but distinguished different scale types, acknowledging that psychologists don't often achieve the "top" two types of measurement. Psychologists loved this because they could now refer to what they did as measurement, though nothing had really changed. Boom, instant popularity.

Now later on, yep, people took to focusing hugely on the admissible statistics issue, which probably wasn't the main focus of his original article. And yes I agree that the use of measurement levels in decision trees became a way to make decisions about data analysis without knowing anything about statistical theory. I'm not sure that he envisioned this (though it's a crappy situation nonetheless).