Do you mean you are following these 50 people at 4 timepoints? Or that at each timepoint you get 50 different people?
I have a sample of 50 respondents to a cross-sectional (identical) survey administered at four time points. The instrument asks respondents to report their confidence level (on a 5-point Likert scale) in performing 12 tasks.
I assume I could compute Cronbach's alpha at each time point and report whether certain items have correlated responses, right? But I was wondering whether there is a way to use all four time points and see if there are certain questions respondents are consistently not confident performing, i.e., patterns. I am not sure whether I could conclude that the responses reflect a comparable latent construct across items, or what my options are.
So far I have only collapsed the Likert scale to binary and looked at question-specific trends across the study period. Lastly, all respondents had a near-maximum score on all questions by the last time point.
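If it helps, Cronbach's alpha is easy to compute by hand from a respondents-by-items matrix at each time point. A minimal sketch (the data shape of 50 respondents x 12 items is taken from your description; the function name is mine):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                       # number of items
    item_vars = items.var(axis=0, ddof=1)    # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

You would call this once per wave, e.g. `cronbach_alpha(wave0_scores)`, where `wave0_scores` is the 50 x 12 matrix of Likert responses at baseline.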
Thanks!
Stop cowardice, ban guns!
for all your psychometric needs! https://psychometroscar.wordpress.com/about/
But like... are we really the same "person" at any other point in time? Whoa man... things just got philosophical.
I don't have emotions and sometimes that makes me very sad.
Spunky, it's exactly the same 50 people at each of the four time points. So: 50, 50, 50, 50. No study intervention, just the passage of time.
Dason, you are correct that we are not the same person at each time point. In my field we talk about how, the moment a study is over, time keeps moving, so everything is actually different; can you then generalize results to a new sample from the ever-changing population? But I don't care about that in this post. Though you are closer to the equator now, so your components are being irradiated and denaturing faster, so you aren't component-ly the same bot.
The thing is, it seems to me like you'd probably want to do a more exploratory type of analysis. Maybe plot the items over time and see whether they change, whether the trend flattens, or something like that?
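Something like this would get you the per-item trajectories to plot (a sketch assuming long-format data with columns `timepoint`, `item`, and `score`; the column names and function name are mine):

```python
import pandas as pd

def item_trends(df):
    """Mean score per item at each timepoint.

    Returns a table with one row per timepoint and one column per item,
    ready to pass to a plotting call such as item_trends(df).plot().
    """
    return df.pivot_table(index="timepoint", columns="item",
                          values="score", aggfunc="mean")
```

Flat lines near the scale maximum would show exactly the ceiling pattern you describe; items that start low and climb are the ones people were initially unconfident about.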
I don't think Psychometrics can offer any specific insight at this point given the kind of question you're asking.
Now, if you're making the claim that the latent construct changes over time and you would like to model it, then you would end up doing some type of structural equation model or item response theory... but at N = 50, well... I kind of think you're stuck with basic classical test theory analyses.
So do I need to run a factor analysis on the baseline data (time 0) to say, hey, people struggle with these 4 things? I guess I am also wondering whether certain people have low confidence on similar items: if I am no good at this item, will I also be poor on that one?
I am content with my current results, which are fairly descriptive, but I feel like there may be an opportunity to better describe an underlying pattern. Any input would be appreciated.
And more humidity in general, as well as bugs getting into your works. You should take out some more insurance to help provide for Gideon in case of catastrophic damage or the blue screen of death some day.
Factor analysis will not really tell you that. Factor analysis tries to answer the question "how many distinct (albeit related) latent abilities are needed to accurately explain these results?" It doesn't really factor difficulty into its initial conception.
That's why I was thinking that if you take the classical test theory route, you can use the proportion-correct total score as an indicator of whether learning is occurring. The basic idea is that if you see the proportion of correct answers increase over time, then you can assume people are becoming more proficient.
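Since you've already dichotomized the Likert responses, this is a one-liner per wave. A sketch (the function name and data shapes are mine, assuming one 0/1 respondents-by-items matrix per time point):

```python
import numpy as np

def proportion_correct_by_time(waves):
    """Overall proportion 'correct' (confident) per wave.

    waves: list of (n_respondents, n_items) 0/1 arrays, one per time point.
    A rising sequence is consistent with learning occurring.
    """
    return [float(np.asarray(w).mean()) for w in waves]
```

You could also compute this per item (`np.asarray(w).mean(axis=0)`) to see which of the 12 tasks drive the improvement.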
Again, N = 50 scares me for factor analysis. The number people have kind of agreed upon in the Psychometric world about sample needed for factor analysis is around N>200.
Plus, you mentioned towards the end that most people are getting things correct, right? ("all respondents had a near-maximum score on all questions by the last time point.") Items that cannot discriminate among people are bad for factor analysis. If everyone gets an item correct, you have very little variance, which translates into lower-than-expected correlations.
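You can check this directly with a corrected item-total correlation, the standard classical-test-theory discrimination index. A sketch (the function name is mine; items with zero variance are flagged as undefined rather than correlated):

```python
import numpy as np

def item_discrimination(responses):
    """Corrected item-total (point-biserial) correlation per item.

    responses: (n_respondents, n_items) 0/1 array. Each item is correlated
    with the total score over the *remaining* items. Items everyone answers
    the same way have zero variance, so no correlation exists (NaN).
    """
    X = np.asarray(responses, dtype=float)
    total = X.sum(axis=1)
    out = []
    for j in range(X.shape[1]):
        rest = total - X[:, j]          # total score excluding item j
        if X[:, j].std() == 0 or rest.std() == 0:
            out.append(np.nan)          # cannot discriminate: no variance
        else:
            out.append(np.corrcoef(X[:, j], rest)[0, 1])
    return np.array(out)
```

By the last wave, most of your 12 items would presumably come back NaN or near zero here, which is exactly why factor-analyzing that wave would be hopeless.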
SSPPUUNNKKYY, teach me what you mean here:
"if you take the classical test theory route, you can use the proportion-correct total score as an indicator of whether learning is occurring"
I had treated the outcome as binary (correct/not correct, in your lingo) and modeled time, which had a positive association, or what I deemed a positive trend. Is that similar to what you're referring to? Awe me with the Social Sciences here.