I am not quite sure if the title is right, but here it goes. I am doing an internship at a manufacturing facility, working in the lab department. We have three microscopes, and I performed a Type 1 Gage Study on each to verify that the measurement system occupies no more than 20% of the part tolerance. My supervisor wants something he can give to our customer: when we measure any length, how accurate are we, and what percent range could we quote? For example: "We are 100% sure that this object is 15 mm plus or minus 1 percent." He wants this for the three microscopes at each magnification, regardless of which part is being measured.
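For context, the Type 1 study numbers I computed look roughly like this (a sketch using the common Cg/Cgk formulas with the usual K = 20%, six-standard-deviation convention; the reference value, tolerance, and readings below are made-up examples, not my actual data):

```python
import statistics

# Made-up example: 25 repeated readings of one reference part (mm)
readings = [15.001, 14.999, 15.002, 15.000, 14.998,
            15.001, 15.000, 14.999, 15.002, 15.001,
            15.000, 14.998, 15.001, 15.000, 15.002,
            14.999, 15.000, 15.001, 14.998, 15.000,
            15.001, 15.002, 14.999, 15.000, 15.001]

reference = 15.000   # certified value of the reference part (mm)
tolerance = 0.10     # total part tolerance (mm), i.e. +/- 0.05

mean_reading = statistics.mean(readings)
s = statistics.stdev(readings)          # repeatability (sample std dev)
bias = mean_reading - reference

# Cg: 20% of the tolerance vs. the 6*s spread of the gage
cg = (0.20 * tolerance) / (6 * s)
# Cgk: same idea, but penalized for bias
cgk = (0.10 * tolerance - abs(bias)) / (3 * s)
# Share of the tolerance consumed by the gage spread
pct_tol = 100 * 6 * s / tolerance

print(f"bias = {bias:.4f} mm, Cg = {cg:.2f}, Cgk = {cgk:.2f}, "
      f"{pct_tol:.1f}% of tolerance")
```

With these made-up readings the gage consumes well under 20% of the tolerance, so it passes the study.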

I performed a t-distribution hypothesis test to construct a confidence interval from repeated measurements of a particular object; to the extent that the measurement variation follows a normal distribution, I can construct an interval for the length. I thought this was the right approach, but my supervisor disagrees, and he does not have any idea of how to approach the problem either. Does anybody have any suggestions, reading material, or topics I could look into to perform this study or answer his question? Any help is appreciated.
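Concretely, what I did was along these lines (a sketch; the measurements are made up, and the t critical value is taken from a standard table rather than computed):

```python
import math
import statistics

# Made-up example: 10 repeated length measurements of one object (mm)
measurements = [15.02, 14.98, 15.01, 14.99, 15.03,
                15.00, 14.97, 15.02, 15.01, 14.98]

n = len(measurements)
mean = statistics.mean(measurements)
s = statistics.stdev(measurements)

# Two-sided 95% t critical value for df = n - 1 = 9,
# from a standard t table (t_{0.975, 9})
t_crit = 2.262

half_width = t_crit * s / math.sqrt(n)   # CI half-width (mm)
pct = 100 * half_width / mean            # as a percent of the mean

print(f"{mean:.3f} mm +/- {half_width:.4f} mm "
      f"({pct:.3f}% at 95% confidence)")
```

One thing that bothers me about quoting this to a customer: it only captures repeatability on a single object at one magnification, and "100% sure" can never come out of a confidence interval, only a stated confidence level.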