I am putting together a statistics course about inference techniques to use when the predictor variable(s) are categorical (by which I mean, cannot be put into any meaningful order). I am including both Fisher's exact test for m×n tables (we are using R, which implements it) and the chi-square test. My understanding is that the two differences between these techniques are:

1) Fisher's exact test assumes sampling without replacement, while the chi-square test assumes sampling with replacement.

2) The chi-square test relies on a large-sample (normal) approximation.
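To make the second point concrete, here is a small comparison in Python's SciPy (we use R in the course, but SciPy makes a compact illustration; the table counts are made up, and the point is only that with small expected counts the two p-values can differ noticeably):

```python
# Compare the chi-square approximation with Fisher's exact test on a
# small 2x2 table (illustrative counts; assumes SciPy is installed).
from scipy.stats import chi2_contingency, fisher_exact

table = [[8, 2],
         [1, 5]]  # small cell counts, so the approximation is strained

# chi2_contingency applies the Yates continuity correction by default
# when the table is 2x2 (dof == 1).
chi2, p_chi, dof, expected = chi2_contingency(table)
_, p_fisher = fisher_exact(table)

print(f"chi-square p = {p_chi:.4f}, Fisher exact p = {p_fisher:.4f}")
```

Several of the expected counts here fall below 5, which is exactly the regime where the textbook rule of thumb says not to trust the chi-square approximation.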

My question is: given modern computational tools, and the fact that R, for instance, can implement Fisher's exact test for m×n tables, what is the argument for ever using the chi-square test for m×n tables?
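For what it's worth, the chi-square side of that comparison is cheap to run on an m×n table, together with the usual check that justifies the approximation. A sketch in SciPy (made-up 3×2 counts; in R the analogous call is chisq.test):

```python
# Chi-square test on an m x n (here 3x2) table, plus the rule-of-thumb
# check that all expected counts are at least 5, which is the standard
# justification for trusting the large-sample approximation.
from scipy.stats import chi2_contingency

table = [[20, 30],
         [25, 25],
         [15, 35]]  # illustrative 3x2 counts

chi2, p, dof, expected = chi2_contingency(table)
approx_ok = (expected >= 5).all()  # expected is a NumPy array

print(f"chi2 = {chi2:.3f}, p = {p:.4f}, dof = {dof}, "
      f"approximation ok: {approx_ok}")
```

As I understand it, the versions of SciPy I have used restrict fisher_exact to 2×2 tables, whereas R's fisher.test handles larger tables (falling back on simulation for big ones), so the practical trade-off seems to be computational cost versus the quality of the approximation.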

I suppose I could hypothesize a situation in which the sample size is large compared to the population, the sampling is done with replacement, and the sample is large enough that the normal approximation is good, but I suspect such situations occur very rarely in real life.

On the other hand, I am also familiar with the work on exact versus approximate confidence intervals for proportions, and with the finding that unless the sample size is small and the proportion is very close to 0 or 1, approximate confidence intervals are preferable. So it seems plausible that chi-square may in practice give better results.
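The proportion case I have in mind can be reproduced directly, e.g. in SciPy (assuming SciPy >= 1.7 for binomtest; the counts are made up), comparing the exact Clopper-Pearson interval with the approximate Wilson interval:

```python
# Exact (Clopper-Pearson) vs. approximate (Wilson) 95% confidence
# intervals for a binomial proportion; illustrative counts.
from scipy.stats import binomtest

k, n = 12, 40  # 12 successes in 40 trials (made-up data)
res = binomtest(k, n)

exact = res.proportion_ci(confidence_level=0.95, method="exact")
wilson = res.proportion_ci(confidence_level=0.95, method="wilson")

print(f"Clopper-Pearson: ({exact.low:.3f}, {exact.high:.3f})")
print(f"Wilson:          ({wilson.low:.3f}, {wilson.high:.3f})")
```

The exact interval is the conservative (wider) one here, which is the sense in which the approximate interval is often preferred.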

I am grateful for any help with this!

Eugenie