1. ## Multiple testing?

Hi all,

I have a question about some analyses that I did. I examined a large hospital admissions data set in which the data were grouped by areas 1, 2, and 3. The admissions were categorized by diagnosis chapter and then analyzed to look at increases in hospitalization rates over time in the area of interest compared to area 2 or 3. So all of the tests were independent of one another (i.e., the test for diagnosis chapter 1 comparing area 1 v area 2 was independent of the test for the same chapter comparing area 1 v area 3). Is there still an issue with multiple testing here? This was a very exploratory analysis to see if there were areas suggesting further research. Thanks

2. ## Re: Multiple testing?

I would call them pairwise comparisons and correct if I was running the tests.

So you aren't interested in 2 v 3?

3. ## Re: Multiple testing?

Originally Posted by hlsmith
I would call them pairwise comparisons and correct if I was running the tests.

So you aren't interested in 2 v 3?
Sorry, yes, 2 v 3 was also examined; each analysis aimed to determine whether the relative changes in slope differed for area 1 v 2, and so on. So if you're calling these pairwise comparisons, does that mean you would say yes, it would be considered multiple testing? PROC GENMOD was used in SAS (negative binomial regression) - so how would one correct for that?

4. ## Re: Multiple testing?

Yes, I would correct estimates and p-values.

I don't have the code memorized, but

alpha = 0.05/3 ≈ 0.01667 could work for a Bonferroni correction.

Or, if there were estimate statements, you can probably add an adjust=bon option.

Can you show us the code you actually used?
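To make the arithmetic behind that suggestion explicit: with three pairwise comparisons (1 v 2, 1 v 3, 2 v 3), Bonferroni just divides the familywise alpha by the number of tests. A minimal sketch:

```python
# Bonferroni correction for k pairwise area comparisons (here k = 3:
# area 1 v 2, area 1 v 3, area 2 v 3) at a familywise alpha of 0.05.
family_alpha = 0.05
k = 3

per_test_alpha = family_alpha / k        # 0.01667 threshold for each test
ci_level = 1 - per_test_alpha            # matching confidence level, ~98.33%

print(per_test_alpha, ci_level)
```

The 98.33% confidence level is why papers using this correction report 98.33% CIs alongside alpha = 0.01667.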


6. ## Re: Multiple testing?

Originally Posted by hlsmith
Yes, I would correct estimates and p-values.

I don't have the code memorized, but

alpha = 0.05/3 ≈ 0.01667 could work for a Bonferroni correction.

Or, if there were estimate statements, you can probably add an adjust=bon option.

Can you show us the code you actually used?
Thanks! Let's see... the code for the unadjusted models was as follows:

```sas
proc genmod data=data1;
  class area;
  model nsum = area period area*period / dist=NB link=log offset=logpop type3;
  estimate "Area 1 v area 2*period" area*period 1 -1 0 / exp e;
  estimate "Area 2 v area 3*period" area*period 0 -1 1 / exp e;
  estimate "Area 1 v area 3*period" area*period 1 0 -1 / exp e;
run;
```

Then, for the adjusted models, the code was similar, with the covariates added. I haven't used the 'adjust' option before; I tried adding it and it wasn't recognised - I'm not sure if it's only available on the lsmeans statement? I also added the alpha=0.01667 option and didn't get any errors, but the output still showed the estimates with 'alpha' listed as 0.05... not quite sure what's happening there.

7. ## Re: Multiple testing?

Yeah you should be able to find it in the documentation for Proc genmod. If it doesn't seem like you found it by morning, I will look up the exact code when I am at an actual PC.

8. ## Re: Multiple testing?

Originally Posted by hlsmith
Yeah you should be able to find it in the documentation for Proc genmod. If it doesn't seem like you found it by morning, I will look up the exact code when I am at an actual PC.
Thanks, I see that I had the alpha= option on the wrong line (it needed to go after each estimate statement to show in the output). It doesn't seem like adjust=bon works there. So just a (likely fairly basic) question about stating the methods: I read a paper that said 98.33% CIs were used because alpha was set at 0.01667 to protect against multiple comparisons, and that the overall significance level was thereby preserved at alpha = 0.05. When looking at the output of corrected p-values and 98.33% CIs, does the p-value cut-off remain at 0.05, or would I be looking for any p-values < 0.01667? I'm just unclear on that 'overall significance level preserved at alpha = 0.05' statement. Also, would corrected p-values be used only for those analyses with significant results, or do I just use corrected p-values for everything from the start? Thanks again!
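On the "preserved at 0.05" wording: there are two equivalent ways to state a Bonferroni rule, and they always agree, which is why papers can phrase it either way. A small sketch with a hypothetical p-value:

```python
# Two equivalent views of a Bonferroni correction across k = 3 tests.
k = 3
raw_p = 0.004  # hypothetical raw p-value from one of the three contrasts

# (a) shrink the per-test threshold, keep the raw p-value
significant_a = raw_p < 0.05 / k                 # compare against 0.01667

# (b) inflate the p-value (capped at 1), keep the familywise 0.05 threshold
adjusted_p = min(1.0, raw_p * k)
significant_b = adjusted_p < 0.05

print(significant_a, significant_b)  # the two decisions always match
```

So: compare raw p-values to 0.01667, or Bonferroni-adjusted p-values to 0.05, but not both at once. And the correction is applied to every test in the family from the start, not only to the ones that came out significant.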

9. ## Re: Multiple testing?

Also, you can always multiply the p-values by 3 (capping at 1) to get the correction, and if the output provides an estimate and SE you can just substitute roughly 2.394 for 1.96 and calculate the 98.33% CIs by hand/computer.

