Sorry if my search of this forum failed to find a previous discussion of this issue. My simple understanding of surveys with voluntary responses is that one cannot validly calculate things like bias, validly adjust for any perceived bias, or determine a coefficient of variation for a population extrapolated from the responses, because the sample (the responses) is non-random.

My understanding seems reinforced by this thread: http://talkstats.com/showthread.php?t=2515&highlight=voluntary+survey

However, I continually see published survey results in which professors hammer away at statistical analyses of responses to voluntary surveys. A frequent analysis compares the responses from a survey conducted by phone with the responses collected by mail; by comparing the difference of the means (using a z-test), the authors arrive at a statement about the bias, or lack thereof, in each data-collection method.
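For concreteness, here is a rough sketch of the kind of calculation I mean. All the numbers are made up, and this is only meant to show the mechanics of the two-sample z-test these studies seem to apply, not anyone's actual data or method:

```python
import math
from statistics import mean, stdev

# Hypothetical responses to the same question, collected two ways.
phone = [3.1, 2.8, 3.5, 3.0, 2.9, 3.3, 3.6, 2.7, 3.2, 3.4]
mail  = [2.5, 2.9, 2.6, 3.0, 2.4, 2.8, 2.7, 3.1, 2.6, 2.9]

# Two-sample z statistic on the difference of means
# (large-sample normal approximation).
se = math.sqrt(stdev(phone) ** 2 / len(phone) + stdev(mail) ** 2 / len(mail))
z = (mean(phone) - mean(mail)) / se
print(f"difference of means = {mean(phone) - mean(mail):.2f}, z = {z:.2f}")
```

The mechanics are trivial; my question is whether the result means anything when neither sample was drawn randomly.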

In addition, I see attempts to calculate coefficients of variation in the nonrandom sample in order to construct a 95% confidence range for the population. Given that all responses were voluntary, I don't see how this is valid.
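Again for concreteness, the CV-and-confidence-interval calculation I keep seeing looks roughly like this (made-up numbers, standard normal-theory formulas, just to illustrate what is being done):

```python
from statistics import mean, stdev

# Hypothetical voluntary-survey responses (the nonrandom sample in question).
responses = [12.0, 15.5, 9.8, 14.2, 11.1, 13.7, 10.4, 16.0, 12.9, 11.8]

m = mean(responses)
s = stdev(responses)
cv = s / m                                     # coefficient of variation
half_width = 1.96 * s / len(responses) ** 0.5  # normal-theory 95% margin
lo, hi = m - half_width, m + half_width
print(f"CV = {cv:.3f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

The formulas themselves are textbook; what I question is whether the resulting interval says anything about the population when the respondents selected themselves.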

I am summarizing, of course, to make this post easier to follow, and I do not want to embarrass or "out" the author of any study, so I will not identify any specific study in which I've found this. My goal is to understand why this is done.

Are such attempts to use statistics complete nonsense? If so, why do I see them attempted by what appear to be tenured professors at reputable universities? I can't confirm that the studies were published in peer-reviewed journals, but I can confirm without a doubt that these professors performed such studies and analyses, and that the studies are findable on the internet.

My key concern is practical: using less-than-perfect data to make reasonable real-world decisions, not purist theory. Does that practical need make such use of statistics on voluntary results better than not attempting the calculations at all, or are such attempts pure nonsense no matter what?

Thanks for any insight!
