Sampling Plan Help

#1
Hi! I'm new to the forum. Hopefully this is the right place to post for some assistance. I have a business partner who asked about control charts, but I think what he's really after is an acceptance sampling plan. I have done this in the very distant past, but it's just been too long... :( Here are the details; I'm hoping someone can help with some suggestions.

Business partner is over a QA function for completed contracts. His team checks each contract against 18 specific items, all go/no-go (e.g., was the attachment included, was the contract signer authorized for the dollar level, etc.). Any missing item is considered a defect. He has 4 separate reviewers on his team. To make sure the reviewers are consistent with each other, he wants to implement a monthly calibration in which all of them review the same 3 contracts, so he can see whether they flag the same defects on the same items. Each reviewer would therefore score a total of 3 x 18 = 54 checklist items.
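To make the setup concrete, here's how I've been picturing one month of calibration data: a row per checklist item, a column per reviewer. This Python sketch uses made-up numbers purely to show the layout and the kind of headline count he'd look at (everything in it is hypothetical):

```python
# Hypothetical layout of one month's calibration data (made-up values):
# 54 rows = 3 contracts x 18 checklist items, one column per reviewer,
# True = item passed, False = item flagged as a defect.
import random

random.seed(1)  # reproducible fake data, for illustration only

NUM_ITEMS = 54      # 3 contracts x 18 checks
NUM_REVIEWERS = 4

# Simulate mostly-agreeing reviewers: each item has a "true" result and
# each reviewer independently miscalls it 5% of the time.
truth = [random.random() < 0.9 for _ in range(NUM_ITEMS)]  # True = pass
ratings = [[(t if random.random() > 0.05 else not t) for _ in range(NUM_REVIEWERS)]
           for t in truth]

# Count items where all four reviewers gave the same call.
unanimous = sum(1 for row in ratings if len(set(row)) == 1)
print(f"All 4 reviewers agreed on {unanimous} of {NUM_ITEMS} items")
```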

He would like some statistically defensible rules for any actions he may want to take. For example, if the reviewers are consistent on 50 of the 54 checked items, is there a defensible basis for calling that good or bad? What if 3 reviewers are consistent on 50 of 54 items, but one is consistent on only 40 of the 54? Or if he runs the calibration for two or three months and one reviewer is consistently just a little bit worse than the others?
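From the reading I've done so far, multi-rater pass/fail agreement like this seems to be what Fleiss' kappa was designed for: it corrects the raw percent agreement for the agreement you'd expect by chance. A minimal sketch, assuming I'm remembering the formula correctly (please correct me if not):

```python
# A rough sketch of Fleiss' kappa for the calibration panel:
# N items, n raters per item, k categories (here 2: pass / defect).
def fleiss_kappa(counts):
    """counts[i][j] = number of raters who put item i in category j."""
    N = len(counts)                 # number of items (54 here)
    n = sum(counts[0])              # raters per item (4 here)
    k = len(counts[0])              # categories (2 here)

    # Observed agreement per item, averaged over items.
    p_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1))
                for row in counts) / N

    # Chance agreement from the overall category proportions.
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    p_e = sum(p * p for p in p_j)

    return (p_bar - p_e) / (1 - p_e)

# Toy example matching my question: 50 items with unanimous 4-0 calls,
# 4 items split 2-2 between the reviewers.
table = [[4, 0]] * 50 + [[2, 2]] * 4
print(f"kappa = {fleiss_kappa(table):.3f}")
```

Interestingly, with those toy numbers (50 of 54 items unanimous, the rest split 2-2) the kappa comes out around 0.31, which the usual Landis and Koch benchmarks would call only "fair" agreement, because nearly everything passes and chance agreement is already very high. That's part of why I'd like something more defensible than raw percent agreement.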

As for outcomes, he's really just talking about retraining or similar steps. I'm hoping there is a good example or template I can leverage. We started down the path of MSA (measurement systems analysis), but most of that appeared difficult to implement in a service environment. He really latched on to the concept of reproducibility, though: making sure that the reviewers are consistent with each other.

Thanks in advance for any help you can give!