Hi,
Please only post a thread once. I deleted your other thread. Thank you.
Hi,
When running an item-scale correlation, what does it mean when researchers have corrected for overlap?
Is there a way to do this in SAS?
What procedure are you referencing in particular?
Hi
A lot of papers report item-scale correlations (correlating each item score with the scale score) when trying to establish discriminant validity. However, a lot of them will have the words (corrected for overlap) underneath. I'm wondering what that procedure is. For example here: http://medwelljournals.com/fulltext/...i.2009.974.977 "item-scale correlation after correction for overlap"
Shirley
From William Revelle's R documentation for his 'psych' package:
"Three alternative item-whole correlations are reported, two are conventional, one unique. r is the correlation of the item with the entire scale, not correcting for item overlap. r.drop is the correlation of the item with the scale composed of the remaining items. Although both of these are conventional statistics, they have the disadvantage that a) item overlap inflates the first and b) the scale is different for each item when an item is dropped. Thus, the third alternative, r.cor, corrects for the item overlap by subtracting the item variance but then replaces this with the best estimate of common variance, the smc."
(Where 'smc' is the squared multiple correlation of each item with the remaining items, the quantity behind Guttman's lambda 6 reliability index)
It's just this general idea that if you correlate an item with the total score of a test without doing something about it, the variance of the total includes the variance of the item you're correlating, so the correlation is inflated. That is 'item overlap', and there are a few ways to correct for it.
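if you want to see the inflation with actual numbers, here's a minimal Python sketch (the 5-item "scale" and the trait-plus-noise setup are made up purely for illustration, not from any real instrument):

```python
import random
import statistics

def pearson(x, y):
    """Plain Pearson correlation from first principles (no libraries)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sx, sy = statistics.pstdev(x), statistics.pstdev(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) * sx * sy)

random.seed(1)
n_items, n_people = 5, 2000
# hypothetical data: each item = a common trait + its own noise
trait = [random.gauss(0, 1) for _ in range(n_people)]
items = [[t + random.gauss(0, 1) for t in trait] for _ in range(n_items)]

total = [sum(item[i] for item in items) for i in range(n_people)]
item0 = items[0]

# uncorrected: the total still contains item 0, so its variance inflates r
r_raw = pearson(item0, total)
# corrected for overlap (drop method): remove item 0 from the total first
r_drop = pearson(item0, [total[i] - item0[i] for i in range(n_people)])

print(r_raw > r_drop)  # True: the uncorrected correlation is inflated
```

with this setup the uncorrected correlation comes out noticeably higher than the item-dropped one, which is exactly the overlap effect.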
for all your psychometric needs! https://psychometroscar.wordpress.com/about/
uhm... the little blurb i posted tells you that? the two methods the 'psych' package uses are (1) the item-total correlation if item dropped (i.e. it removes the item you're correlating from the calculation of the total score and then correlates the scores on that item with the new total scores) and (2) the item-total correlation corrected for shared variance, where the item you're correlating stays in the total score but its variance is replaced by an estimate of the item's communality when the correlation is calculated (in R's case Guttman's lambda is used, but it could just as well be Cronbach's alpha or some other estimate of reliability). this last method is obviously not as straightforward as the item-total correlation if item dropped, but it is more appropriate.
there are other ways, but i'd say the item-total correlation if item dropped is the most common one (albeit not the best one).
it probably does, but i don't know how to get them. SAS has arcane and mysterious ways. the light of R should always guide you!
but seriously, even SPSS does the item-total correlation if item dropped, so SAS should be able to do it as well... some way... somehow... (PROC CORR with the ALPHA option reports it in the deleted-variable table, if memory serves)
Hi Spunky,
Another quick question I have for you. If I have a measure with 5 items. Two of the items are dichotomous or Yes/No. The other 3 are answered with a Likert scale from 1-4. If I run an item-total correlation with a Pearson correlation, is this correct? Do I need to take into account that 2 of the 5 questions have different response options?
generally, it is not recommended to run a Pearson correlation on discrete variables unless you're in the 5-7 response-option range. if your data is very skewed you'll need around 8-9 options.
what you'd need to do is calculate the polychoric/polyserial correlation matrix and work from there.
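to see why the Pearson correlation misbehaves on coarse categories, here's a Python sketch with simulated (hypothetical) data: two latent variables are bivariate normal, we chop each at the median into a yes/no item, and the Pearson correlation of the binary versions comes out well below the latent correlation. that attenuation is the bias the polychoric correlation is designed to undo.

```python
import math
import random
import statistics

def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    sx, sy = statistics.pstdev(x), statistics.pstdev(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) * sx * sy)

random.seed(3)
rho, n = 0.6, 20000
# latent bivariate normal pair with true correlation rho
x = [random.gauss(0, 1) for _ in range(n)]
y = [rho * a + math.sqrt(1 - rho**2) * random.gauss(0, 1) for a in x]

# dichotomize at the median (threshold 0), like yes/no items
xb = [1.0 if a > 0 else 0.0 for a in x]
yb = [1.0 if b > 0 else 0.0 for b in y]

r_latent = pearson(x, y)    # close to the true 0.6
r_binary = pearson(xb, yb)  # attenuated well below that

print(r_binary < r_latent)  # True
```

a polychoric estimate on the binary table would recover something near the latent value instead; the plain Pearson on the 0/1 scores cannot.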
yeah, it's not super clear. sorry.
i meant to say you need to have 5 to 7 likert-type response options (like "strongly disagree", "disagree", "neutral", "agree", "strongly agree" would be 5 response options) or 8 to 9 response options depending on your data
Hi,
In here you mentioned the polychoric/polyserial correlation matrix. So, I should run the polychoric/polyserial correlation between each item and the scale total?
Furthermore, from what I can remember, polychoric/polyserial correlations assume that my latent variables are normally distributed. If I have no clue how they're distributed, is it better to use a nonparametric or Spearman method? If that's the case, then what about Cronbach's alpha?
yes
have a click here and look for Bengt Muthen's Aug 2003 rant on why the normality (even continuity) assumption behind polychoric correlations is not really a *thing*.
uhm.. you could, potentially, use Spearman's correlation matrix, but a lot of psychometric analyses have not been extended to the context of Spearman's rho, so your analyses could end up being somewhat dubious.
(standardized) Cronbach's alpha is a function of the average item correlation from the inter-item correlation matrix. you can use the polychoric correlation matrix to find alpha (just make sure that matrix contains only the correlations among the items, not the item-total correlations)
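for concreteness, the standardized alpha formula is k*r_bar / (1 + (k-1)*r_bar), where r_bar is the mean off-diagonal correlation. a short Python sketch (the 3x3 matrix below is made up; it could just as well be a polychoric matrix):

```python
def standardized_alpha(R):
    """Standardized Cronbach's alpha from a square inter-item
    correlation matrix: k * r_bar / (1 + (k - 1) * r_bar)."""
    k = len(R)
    off_diagonal = [R[i][j] for i in range(k) for j in range(k) if i != j]
    r_bar = sum(off_diagonal) / len(off_diagonal)
    return k * r_bar / (1 + (k - 1) * r_bar)

# hypothetical inter-item correlations only: no item-total column in here
R = [[1.0, 0.4, 0.5],
     [0.4, 1.0, 0.3],
     [0.5, 0.3, 1.0]]

print(round(standardized_alpha(R), 3))  # 0.667
```

here r_bar = (0.4 + 0.5 + 0.3) / 3 = 0.4, so alpha = 3 * 0.4 / (1 + 2 * 0.4) = 2/3.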
Hi, another question I had was about calculating the item-total correlation and Cronbach's alpha with missing responses.
For example, say I have 4 questions. One subset of the sample will answer the first two, and another subset will answer the last two. The whole sample will answer all the other 16 questions on the survey. If I want to group those 4 questions as a measure, do I have to use multiple imputation to fill in the missing data and then calculate the item-total correlation and Cronbach's alpha? Thank you.