Statistical significance of observed incidence

#1
Lurker here who just signed up to ask a question I couldn't find an answer to in prior posts or through various other search strategies.

Here's my situation. I need to determine whether the incidence of a categorical phenomenon I have observed in a sample is significantly different from an accepted incidence of the phenomenon published in the literature. I would generally use a chi-square test in this situation, but my reference incidence is reported only as a percentage (31.6%), while the incidence in my sample is 35.8% with a sample size of less than 50. Basically, I know what I want to do, but I don't know what values to enter into the stats software for the reference. Since the reference incidence is reported as a percentage, I could use proxy values of 31.6 with a made-up N of 100, but I know the published incidence was determined from a data set in the hundreds of thousands, and my fear is that this would distort the calculation... though I can't articulate a specific reason why it would (the curse of being an amateur).

Any assistance/guidance would be much appreciated. Maybe I am completely off the mark and there is a more appropriate test to apply.

I have access to SPSS, SAS Enterprise Guide, and R, although I am at best minimally proficient with them. I must use the "output" from one of these in my project in order to satisfy the requirements.
 
#2
Hi Brutane, I assume you want to perform a goodness-of-fit test?

If that is the case, the chi-squared statistic equals Σ(Observed - Expected)^2 / Expected.
You know the observed counts, and the expected counts are based on the total of the observed counts.
For example, if N=39 and Observed=14, then the expected count is 0.316*39=12.324.

In the goodness-of-fit test, the total of the observed frequencies must equal the total of the expected frequencies.
You should use 2 groups ("incident" and "no incident").
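
For example, here is a quick sketch of the calculation in R, using the illustrative N=39 and Observed=14 from above (these counts are just an example, not your data):

observed <- c(14, 25)                        # "incident", "no incident"
expected <- c(0.316, 0.684) * sum(observed)  # 12.324 and 26.676
chi_sq <- sum((observed - expected)^2 / expected)
chi_sq                                       # the chi-squared statistic
pchisq(chi_sq, df = 1, lower.tail = FALSE)   # p-value with 1 degree of freedom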

If you don't have experience with the software you mention, you can use a simple online calculator as a reference to make sure you used the software correctly, and then use SPSS/SAS/R for the "output" (for example http://www.statskingdom.com/310GoodnessChi.html).

Does this answer your question?
 
#3
R, for example, is also easy to use, but it will give you the "output" without explanations.
You can also pass the probabilities directly instead of computing the expected counts yourself.

obs <- c(14, 25)                       # observed counts for "incident" and "no incident" (N = 39)
chisq.test(obs, p = c(0.316, 0.684))   # goodness-of-fit test against the literature proportions
 

ondansetron
#4
What is the actual sample size? You can bootstrap a p-value or a CI for the true proportion, assuming a null hypothesis of p0 = 0.316 with a two-sided alternative.

With a large sample (how large is large is always the real question) the z-distribution could be used for a proportion (or a difference in proportions), and it would be equivalent, since the square of a z statistic equals the chi-squared statistic. Bootstrapping would just allow you to set H0 equal to the literature value of p, and may be more appropriate if the sample is too small to make good use of the standard normal approximation. Someone can correct me if I'm mistaken, please.
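
Something like the following sketch in R would do it (the counts of 14 out of 39 are just the illustrative numbers from this thread, not necessarily your data; strictly speaking the first part is a parametric simulation under the null rather than a resampling bootstrap, but the idea is the same):

set.seed(123)                                  # arbitrary seed for reproducibility
n <- 39; x <- 14; p0 <- 0.316                  # illustrative sample and the literature value
obs_prop <- x / n
sim_props <- rbinom(10000, size = n, prob = p0) / n    # proportions simulated under H0
mean(abs(sim_props - p0) >= abs(obs_prop - p0))        # two-sided p-value

# A percentile bootstrap CI for the true proportion (equivalent to resampling the 0/1 data):
boot_props <- rbinom(10000, size = n, prob = obs_prop) / n
quantile(boot_props, c(0.025, 0.975))          # check whether the interval contains 0.316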
 
#5
obh,

OK, so I think this is what I want to do, but see if I'm interpreting what you are saying correctly.

So for my sample I would use the actual "observed" instances of the categorical value, and then use the published incidence from the literature (31.6%) to calculate the "expected" instances for my sample for "group 1." Then for "group 2" I would use the actual "observed" instances where the categorical value was absent/negative, and again use the published incidence of "absent" from the literature (68.4%) to calculate the "expected" absences/negatives for my sample.

If this is correct, then to answer your question... YES!! This is the answer I was looking for. Thank you!
 
#6
ondansetron,

I am not completely sure I understand your reply, but in any case my sample is very small, so given that your method requires a "large sample" I don't think it would apply to my situation.

Thanks for taking the time to reply though. I do appreciate it! I'm going to read more about the method you describe as my intention is to really learn from these projects rather than simply fill numbers into formulas or software.
 
#7
For every group you have the "observed" count and the "expected" count.

Group 1 - the category whose percentage you are calculating.
Group 2 - the complement of group 1 (the remainder up to 100%).

For example, if you calculate the percentage of sick people, group 1 is the sick people and group 2 is the healthy people.

For example, if N=39:

Group     Observed   Expected
Group 1      14         12.3
Group 2      25         26.7

What is your N?
 
#8
obh,

Perfect, I did interpret your original reply correctly.

Our N is only 28 for now, but we have a couple more weeks to collect data, so we are hoping to get closer to 50 by the end of the project.

Thanks once again for your assistance!

Brutane
 

ondansetron
#10
"Large" is a generally vague term but there exist rules of thumb for testing a proportion with z-test and chi-squared. In "large samples" z can be used to test for the true proportion, and will be equivalent to chi-squared. My point is this, you can use the chi-square or z-test to answer your question, and bootstrapping for either of those will be useful if you suspect your sample is not "large" enough (comparing the actual and bootstrap results will help you decide if sample size was "too small" for either.
 
#11
Great :)

As a rule of thumb, 30 should be okay for using z or chi-squared.
But that doesn't mean the sample is big enough to reject an incorrect H0.
The bigger the sample, the greater the power of the test to reject an incorrect H0.

When you planned your experiment, how did you calculate the sample size?
http://www.statskingdom.com/sampe_size_chi2.html
obh,

We used G*Power to calculate a sample size using an effect size of 0.3, alpha of 0.05, beta of 0.8, and one degree of freedom. The sample size was determined to be 88. Given the time constraints of the project and the scarcity of the subjects of interest, it would take too long to attain this sample size. Hence, we understand that our test will not truly be "significant" regardless of the outcome. We will explain this limitation but must accept it under the circumstances.
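
For what it's worth, the same calculation can be reproduced in R with the pwr package (a sketch assuming the package is installed; this is the R equivalent of the G*Power calculation, not the tool we actually used):

library(pwr)
pwr.chisq.test(w = 0.3, df = 1, sig.level = 0.05, power = 0.8)   # N comes out at roughly 88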

Thank you for all your help thus far.
 
#12
"Large" is a generally vague term but there exist rules of thumb for testing a proportion with z-test and chi-squared. In "large samples" z can be used to test for the true proportion, and will be equivalent to chi-squared. My point is this, you can use the chi-square or z-test to answer your question, and bootstrapping for either of those will be useful if you suspect your sample is not "large" enough (comparing the actual and bootstrap results will help you decide if sample size was "too small" for either.
ondansetron,

Thank you, I think I understand. As I replied to obh, we understand that our sample might be too small, but we need to report this value and then simply explain the limitations of the test we conducted and the chance of error in the result.
 
#13
Great Brutane,

It seems to be correct :)
You mean a power of 0.8 (beta of 0.2), which is what you actually used.

The power limitation will be relevant only if you do not succeed in rejecting H0.
In that case you will not know whether it is because there was no reason to reject H0, or because the test's power was too weak to reject it.
If you do succeed in rejecting H0 with the smaller sample, there won't be a significant limitation.
 
#14
Hi Brutane,

There are two different aspects of the sample size.
1. Can you use the specific test?
Since you don't know the standard deviation, if the sample size is not big enough (rule of thumb: 30) you may not use a test based on z or chi-squared, and you should consider different tests instead.
In your case, there is no such problem.

2. Will the test be powerful enough to identify a change of the given effect size from the expected value under H0, and reject H0 if such a change is identified?
In your case, you need a sample size of 88 to achieve a power of 0.8 (0.8035, to be exact).
That means the probability of rejecting an incorrect H0 will be 0.8 (there is still a probability of 0.2 that you won't reject an incorrect H0).
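
To make point 2 concrete, here is a small sketch in R with the pwr package (assuming it is installed), comparing the power at your current N of 28 from your earlier post with the planned 88:

library(pwr)
pwr.chisq.test(w = 0.3, df = 1, sig.level = 0.05, N = 28)   # power well below 0.8 with N = 28
pwr.chisq.test(w = 0.3, df = 1, sig.level = 0.05, N = 88)   # power of about 0.80 with N = 88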
 
#15
obh,

Thanks for all your help. Although our sample isn't the best, I feel much more confident that I am on the right track, and I feel like I learned yet another helpful bit about statistics that I can carry forward and continue to build on.