Significance test for questionnaire

Hello everybody, I would be forever grateful for some guidance. I have been browsing the internet for two hours now without finding a spot-on answer, so I now turn to you folks.

I have been conducting a questionnaire for two different groups, one at location A and one at location B, with about 500 people in each group.

Now I want to test whether there is a statistically significant difference between the groups for each individual question.

The answer alternatives for all questions are 1-8, so the scale is ordinal.

I assumed that the only suitable test for this would be the Mann-Whitney test. So I ran the test in R and received the warning message "cannot compute exact p-value with ties".

My data and test for each question look like:
x y
1 3
3 2
5 2
. .
. .
. .

wilcox.test(df_data$KA,df_data$Sah, paired=FALSE)

Question 1 :
Why do ties affect the p-value? Is this anything I should be concerned about or take into account when I present the results?

Question 2 :
Can a one-sided test be conducted? I could only find two-sided tests in R.

Thank you very much!

With 500 subjects there is no problem with using a t test, because the sampling distribution of the means is so near normal as makes no difference. A more urgent problem is that if you have a lot of questions, there is a serious chance of a false positive unless you make the p-value threshold for significance correspondingly smaller with something like a Bonferroni correction. The easiest way is to do it all in Excel using the =TTEST() formula.
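If you stay in R, the multiple-testing correction Kat mentions is built in via p.adjust. A minimal sketch, assuming you have collected one p-value per question into a named vector (the names q1, q2, ... and the values below are placeholders, not your data):

```r
# Hypothetical p-values, one per questionnaire item; replace with the
# p-values from your own tests.
pvals <- c(q1 = 0.012, q2 = 0.048, q3 = 0.300)

# Bonferroni: multiplies each p-value by the number of tests (capped at 1),
# so comparing the adjusted values to 0.05 controls the family-wise
# error rate across all questions.
p.adjust(pvals, method = "bonferroni")

# Holm's step-down method controls the same error rate but is uniformly
# more powerful than plain Bonferroni, so it is usually preferable.
p.adjust(pvals, method = "holm")
```

Either way you report the adjusted p-values against your usual 0.05 cutoff.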

Cheers, Kat


I agree with Kat, especially about multiple testing. The reason you are getting the warning message is that ties cannot be counted for or against the null hypothesis, so the exact p-value cannot be computed; strictly, it could only be given as a range of values. R falls back to a normal approximation instead. This is only a problem when there are many ties relative to the sample size.
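Concretely, you can make the approximation explicit and silence the warning by setting exact = FALSE; R's normal approximation adjusts its variance for ties. A sketch using the same call from your post:

```r
# With ~500 subjects per group and an 8-point scale there are many ties,
# so request the normal approximation (tie-corrected) explicitly and
# keep the continuity correction:
wilcox.test(df_data$KA, df_data$Sah,
            paired = FALSE, exact = FALSE, correct = TRUE)
```

With samples of your size the approximation is very accurate, so this is nothing to worry about when presenting results.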

Have you seen the documentation for this function? You can get it by typing "?" in front of its name in the console (?wilcox.test). That will tell you how to do a one-tailed test. Though from your description I don't see why you would assume consistently higher values for one group than the other; unless you have strong reasons to believe so, the two-tailed test should be fine.
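For reference, the one-sided versions just add the alternative argument, which states the direction of the first sample relative to the second:

```r
# H1: values at KA are shifted higher than at Sah
wilcox.test(df_data$KA, df_data$Sah, paired = FALSE,
            exact = FALSE, alternative = "greater")

# H1: values at KA are shifted lower than at Sah
wilcox.test(df_data$KA, df_data$Sah, paired = FALSE,
            exact = FALSE, alternative = "less")
```

The default, alternative = "two.sided", is what you have been running so far.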