Hi, a big bag is emptied into X smaller bags. We are trying to detect errors that are clumped: in other words, if an error is detected in 1 of the X bags, we can be sure that all the other small bags containing errors are grouped around that initial sample. Is the sample size described in the following link the best I can get?

Can we incorporate the clumping of the errors to obtain a smaller sample size? Even just the name of the relevant methodology would let me continue my search for a solution.
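To make my intuition concrete, here is a rough sketch of why I think clumping should help (the numbers, function names, and the contiguous-clump assumption are my own illustration, not taken from the link). Under plain random sampling without replacement, the chance of missing every defective bag follows the hypergeometric distribution; but if the k defective bags sit in one contiguous clump, checking every k-th bag seems guaranteed to hit the clump with only about X/k checks:

```python
import math

def random_sample_size(N, k, conf=0.95):
    """Smallest random sample size n such that a random sample of n out of
    N bags contains at least one of k defective bags with probability >= conf
    (hypergeometric: sampling without replacement)."""
    for n in range(1, N + 1):
        # P(miss all k defectives) = C(N-k, n) / C(N, n)
        p_miss = math.comb(N - k, n) / math.comb(N, n)
        if 1 - p_miss >= conf:
            return n
    return N

def systematic_sample_size(N, k):
    """If the k defective bags are contiguous (clumped), any window of k
    consecutive positions contains exactly one multiple of k, so checking
    bags k, 2k, 3k, ... is guaranteed to hit the clump."""
    return math.ceil(N / k)

# Illustrative numbers only: X = 1000 small bags, a clump of 20 bad ones.
N, k = 1000, 20
print(random_sample_size(N, k))      # random sampling, 95% detection
print(systematic_sample_size(N, k))  # systematic sampling, exploits clumping
```

If this reasoning is right, the systematic plan needs far fewer checks than the random one, and it detects the clump with certainty rather than with 95% confidence. I just don't know what this approach is called or whether it is the standard way to handle clustered defects.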