A Gage R&R study was done comparing two inspection tools against each other. These tools measure the section modulus of wood poles, which is used to calculate the SI and to decide whether or not we replace the wood.

The measurements from the tools were normalized, i.e. divided by the actual value. I have the mean and stdev of each tool's normalized readings from a sample of 15 poles, each measured 6 times.

How do I use these values to say that Tool A and Tool B, given their precision (using results from the case study), will be able to pick up X number of poles before failure (a normalized value less than 1)?

Mean_A = 0.79, Stdev_A = 0.25, for example. I can get the probability that a normalized value falls below 1, but how do I apply that to the actual number of failures in a network?
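One common way to turn the per-pole probability into a network-level count is to model the readings as normal with the reported mean and stdev, compute P(reading < 1), and then treat each pole as an independent trial so the expected number of flagged poles is N times that probability. A minimal sketch of that calculation, assuming normality and independence (the network size N below is a made-up placeholder, not from the study):

```python
from math import erf, sqrt

def p_below_threshold(mean, stdev, threshold=1.0):
    """P(normalized reading < threshold), assuming readings are
    normally distributed with the given mean and stdev."""
    z = (threshold - mean) / stdev
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Tool A figures from the post
p_a = p_below_threshold(0.79, 0.25)  # roughly 0.80

# Expected count of poles flagged as below threshold in a network,
# treating each pole as an independent Bernoulli trial (binomial model).
# N is a hypothetical network size for illustration only.
N = 1000
expected_flagged = N * p_a
```

Note the caveat: the 0.25 stdev mixes measurement precision with pole-to-pole variation, so this sketch answers "how many readings will fall below 1," which is only the same as "how many poles are truly failing" if the tool bias and repeatability are separated out first (which is exactly what the Gage R&R components let you do).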