Likert scale statistical analysis

#1
Hi, I am doing research using a questionnaire which uses Likert scale ratings. There are 8 different hypotheses, and the questionnaire is set up so that 4 or 5 questions relate to each hypothesis. I don't really know how to analyse this data, i.e. which tests to carry out on it. I know I need to do a summative analysis, as there are 5 questions which ask very similar things, phrased positively and negatively.

Can someone help, please?
 
#2
You should not sum "scores" from ordinal variables, since they do not contain any information about distances (e.g. the steps are not necessarily equidistant).

What you can do is calculate median scores from the Likert-scale questions and compare those with non-parametric tests such as the Mann-Whitney U test. But it depends on what your research question is: whether you want to compare two or more independent groups, or study changes within a group, etc.
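For example, a minimal sketch of that idea in Python (the groups, item names and numbers are made up, just to show the shape of the analysis):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Made-up example: each row is one respondent, each column one of the
# related Likert items (1-5) that get summarised together.
group_a = np.array([[4, 5, 3, 4, 4],
                    [2, 3, 3, 2, 3],
                    [5, 4, 4, 5, 4],
                    [3, 4, 3, 3, 4]])
group_b = np.array([[2, 2, 3, 1, 2],
                    [3, 3, 2, 3, 3],
                    [1, 2, 2, 2, 1],
                    [2, 3, 2, 2, 2]])

# One median score per respondent, instead of a sum.
medians_a = np.median(group_a, axis=1)
medians_b = np.median(group_b, axis=1)

# Non-parametric comparison of two independent groups.
u, p = mannwhitneyu(medians_a, medians_b, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.4f}")
```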
 
#3
There are 2 research questions. The first is about which factors affect perceived ease of use of technology. I have 3 different factors to test: complexity, trialability and observability, and I have done a questionnaire which asks a number of questions for each, i.e. complexity asks 3, trialability 4 and observability 2. These questions use a Likert scale rating. What I want to do is firstly see how each factor affects perceived ease of use individually, and then, to answer the second question, see how perceived ease of use affects attitude toward the technology, so I need to combine the 3 different factors into one category to answer that one.

I hope this makes it clearer.
 
#4
In some fields, such as psychology and communication, it seems that researchers often get away with treating Likert scales as interval data.

Without starting a holy war, I wonder what other researchers think about this question. In practice, is it possible to treat data from well-constructed Likert scales as interval data? Will it play in Peoria?

Aaron
 
#5
There are 2 research questions. The first is about which factors affect perceived ease of use of technology. I have 3 different factors to test: complexity, trialability and observability, and I have done a questionnaire which asks a number of questions for each, i.e. complexity asks 3, trialability 4 and observability 2. These questions use a Likert scale rating. What I want to do is firstly see how each factor affects perceived ease of use individually, and then, to answer the second question, see how perceived ease of use affects attitude toward the technology, so I need to combine the 3 different factors into one category to answer that one.

I hope this makes it clearer.
So, you will give that questionnaire to a treatment group and to a control group? Or to a treatment group before treatment and after?

adelwich: Yes, researchers such as psychometricians not only get away with it, they often have to do so to get published in scientific papers. How can one treat a variable derived from sums of "scores" on ordinal scales (measuring, for instance, feelings) as an interval, equidistant and often normally distributed variable? It's more tradition than science.
 

spunky

Can't make spagetti
#6
Yes, researchers such as psychometricians not only get away with it, they often have to do so to get published in scientific papers. How can one treat a variable derived from sums of "scores" on ordinal scales (measuring, for instance, feelings) as an interval, equidistant and often normally distributed variable? It's more tradition than science.
Because the assumption is that such a score is the manifested version of a continuous (or interval, whatever you want to call it) latent variable... Now, I've always felt kind of iffy about Likert scales with only 4 or 5 points, but, if my memory is correct, I can point to one or two Monte Carlo studies which show that beyond, I believe, either 11 or 13 Likert scale points, analysing such scales as interval/continuous or as categorical data makes no practical difference in terms of results, stability of models, accuracy of standard errors, etc.
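Just as a toy illustration of what those studies do (my own quick sketch, not their actual design): simulate a continuous latent variable, chop it into k Likert points, and check how much of a known correlation survives the coarsening. With more scale points the observed correlation creeps back up towards the latent one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
latent_r = 0.5

# Two correlated continuous latent variables.
x = rng.standard_normal(n)
y = latent_r * x + np.sqrt(1 - latent_r**2) * rng.standard_normal(n)

def to_likert(z, k):
    """Cut a continuous variable into k ordered categories coded 1..k."""
    cuts = np.quantile(z, np.linspace(0, 1, k + 1)[1:-1])
    return np.digitize(z, cuts) + 1

# How much of the latent correlation survives the coarsening?
for k in (4, 5, 7, 11, 13):
    r = np.corrcoef(to_likert(x, k), to_likert(y, k))[0, 1]
    print(f"{k:>2}-point scale: observed r = {r:.3f} (latent r = {latent_r})")
```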
 
#7
Because the assumption is that such a score is the manifested version of a continuous (or interval, whatever you want to call it) latent variable... Now, I've always felt kind of iffy about Likert scales with only 4 or 5 points, but, if my memory is correct, I can point to one or two Monte Carlo studies which show that beyond, I believe, either 11 or 13 Likert scale points, analysing such scales as interval/continuous or as categorical data makes no practical difference in terms of results, stability of models, accuracy of standard errors, etc.
Well, you can make assumptions about everything, I guess. But is it an assumption that is testable? How will you know that the steps in the scale are equidistant (even if it measures some kind of underlying continuous variable)?
 
#8
If you have literature backing up that those questions (items) load onto those factors, then I would first calculate Cronbach's alpha for the three subscales and overall - this sort of backs up the literature. If you don't have literature (you designed this questionnaire), then you need to do a factor analysis. Both of these options help establish that your subscales hold together.
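If it helps, here is a small sketch of alpha computed by hand for one subscale; the item scores are invented, and it uses the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score):

```python
import numpy as np

def cronbach_alpha(items):
    """items: rows = respondents, columns = the items of one subscale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented "complexity" subscale: 3 items, Likert 1-5, one row per respondent.
complexity = [[4, 5, 4],
              [2, 2, 3],
              [5, 4, 5],
              [3, 3, 2],
              [4, 4, 4],
              [1, 2, 2]]
print(f"alpha(complexity) = {cronbach_alpha(complexity):.2f}")
```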

Then you can test your hypotheses. One thing you can do is calculate the mean for each factor.

If all of your data are Likert scale including the ease of use, then you may want to use correlation/regression.
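Something like this, assuming you have already averaged each factor's items into one score per respondent; the column names and numbers are invented for illustration:

```python
import pandas as pd
import statsmodels.api as sm
from scipy.stats import spearmanr

# Invented factor means: each factor's items averaged per respondent.
df = pd.DataFrame({
    "complexity":    [2.3, 4.0, 3.7, 1.7, 3.0, 4.3, 2.7],
    "trialability":  [3.5, 4.3, 4.0, 2.5, 3.8, 4.5, 3.0],
    "observability": [3.0, 4.5, 3.5, 2.0, 4.0, 4.5, 2.5],
    "ease_of_use":   [2.8, 4.2, 3.9, 2.1, 3.6, 4.4, 2.9],
})

# Rank-based correlation of each factor with perceived ease of use.
for factor in ["complexity", "trialability", "observability"]:
    rho, p = spearmanr(df[factor], df["ease_of_use"])
    print(f"{factor:>13}: rho = {rho:.2f}, p = {p:.3f}")

# Regression of ease of use on all three factors together.
X = sm.add_constant(df[["complexity", "trialability", "observability"]])
print(sm.OLS(df["ease_of_use"], X).fit().summary())
```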
 
#9
Lumhearts: Yes, I do have literature which supports all the hypotheses; previous studies tended to agree with them. Most use Cronbach's alpha initially, but they don't fully explain how they tested their hypotheses. The questionnaire was designed by me, but it has been adapted from questions used in those other studies.

Yes, all the questions use the Likert scale rating.
 

spunky

Can't make spagetti
#10
Well, you can make assumptions about everything, I guess. But is it an assumption that is testable? How will you know that the steps in the scale are equidistant (even if it measures some kind of underlying continuous variable)?
See Lumhearts' reply after yours for the answer... I couldn't have phrased it better.