Error band for calibration correction of an instrument

#1
I apologize if this is the wrong area to post this. It seems like it could fall into a couple different areas.

I have a question about calculating the expected error around a calibration curve for an instrument. I have a calibration curve for a load cell: it was measured at several repeated known loads, and then a 4th-order best-fit curve was applied, so I can now see visually and mathematically how it performs across the range. It is easy to use the equation to convert an indicated value to the most likely nominal "true load". What's throwing me is attaching an error to that value. For example, if I use that load cell to weigh something and the indicator reads 5001 lbs, from the curve I can calculate that the load is really 5004 lbs based on that indication. However, I know that is only the most likely value, and it should really be 5004 ± something. Calculating what that something is is throwing me for a loop.

I've been reading about confidence intervals vs. tolerance intervals vs. prediction intervals and think the distinction matters here. I don't need to get super detailed. Really what I need is a range so I can say "we can be confident that the true value is between these two values" (I understand that "confident" is relative, and I would lean on the side of being conservative as long as I can use some mathematical justification). I enjoy statistics and would love to learn the math behind this, but frankly I'm not sure where to look right now. I'm hoping someone out there can explain the concept I need in layman's terms so I can build on that by digging into the mathematics.
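For concreteness, here is a minimal sketch (Python/NumPy, with made-up calibration numbers) of one simple way to get that ± band: treat the 4th-order fit as an ordinary regression of the known applied loads on the indicated readings, and form an approximate prediction interval for a single new reading. The data, the 95% level, and the variable names are illustrative assumptions only; the replies below point to calibration (inverse) regression for a more careful treatment.

```python
import numpy as np
from scipy import stats

# --- hypothetical calibration data (replace with your own) ---
# 'applied' = known reference loads (lbs), 'indicated' = load cell readings (lbs)
applied   = np.array([0, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000,
                      0, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000], dtype=float)
indicated = np.array([2,  998, 1996, 2995, 3996, 4997, 5999, 7002, 8006,
                      -1, 1001, 1997, 2994, 3997, 4996, 6001, 7001, 8004], dtype=float)

degree = 4                                       # 4th-order fit, as in the original post
mu, sd = indicated.mean(), indicated.std()       # scale x to keep the polynomial well conditioned

def design(x):
    """Vandermonde design matrix in the scaled reading."""
    return np.vander((np.atleast_1d(x) - mu) / sd, degree + 1)

X = design(indicated)
beta, *_ = np.linalg.lstsq(X, applied, rcond=None)

n, p = X.shape
resid = applied - X @ beta
s2 = resid @ resid / (n - p)                     # residual variance
XtX_inv = np.linalg.inv(X.T @ X)

def predict_with_interval(x0, alpha=0.05):
    """Point estimate of the true load and an approximate (1 - alpha)
    prediction interval for a single new indicated reading x0."""
    v = design(x0)[0]
    y0 = v @ beta
    se_pred = np.sqrt(s2 * (1.0 + v @ XtX_inv @ v))   # prediction standard error
    t = stats.t.ppf(1 - alpha / 2, n - p)
    return y0, y0 - t * se_pred, y0 + t * se_pred

est, lo, hi = predict_with_interval(5001.0)
print(f"true load ~ {est:.1f} lbs, 95% prediction interval [{lo:.1f}, {hi:.1f}]")
```

The `1.0 +` term inside the square root is what makes this a prediction interval (the spread of a single new measurement) rather than a confidence interval for the fitted curve itself, which ties directly into the confidence vs. prediction vs. tolerance question above.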
 

FFA

New Member
#2
The errors you should be concerned about in this case are the random and systematic errors. The random error is measurable by taking repeated measurements.
I would suggest measuring the accepted calibration load (your calibration standard) 12 times. I would then calculate the relative standard deviation of these measurements and multiply it by 100 to get a percentage. This is your random percent error.
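As a sketch of that calculation (Python, with twelve hypothetical repeat readings of a 5000 lb reference; the numbers are made up):

```python
import numpy as np

# twelve hypothetical repeat readings of the calibration standard (5000 lb reference)
readings = np.array([5003, 5005, 5004, 5002, 5006, 5004,
                     5003, 5005, 5004, 5006, 5003, 5005], dtype=float)

# relative standard deviation x 100 = random percent error
random_pct_error = readings.std(ddof=1) / readings.mean() * 100
print(f"random percent error = {random_pct_error:.4f} %")
```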

The systematic error, sometimes referred to as bias, is in essence the absolute distance of the measured value from the accepted value of your calibration load. Take the sample mean of your calibration repeatability measurements and subtract the accepted calibration load value from it; the difference is your bias. I would divide this number by the accepted calibration load value to get your bias relative to the accepted value, and multiply by 100 to get a percentage. You can now take the random percent error and the systematic percent error and add them in quadrature to get your total calibration uncertainty as a percentage: TU = sqrt((R_error)^2 + (S_error)^2).
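Continuing the same hypothetical readings, the bias and the quadrature combination described above might look like this (sketch only; the accepted value and readings are made up):

```python
import numpy as np

accepted = 5000.0                                 # accepted value of the calibration load (lbs)
readings = np.array([5003, 5005, 5004, 5002, 5006, 5004,
                     5003, 5005, 5004, 5006, 5003, 5005], dtype=float)

random_pct = readings.std(ddof=1) / readings.mean() * 100   # random percent error
bias = readings.mean() - accepted                           # systematic error in lbs
systematic_pct = abs(bias) / accepted * 100                 # bias as a percentage of the accepted value

# add the two in quadrature: TU = sqrt(R^2 + S^2)
total_uncertainty_pct = np.hypot(random_pct, systematic_pct)
print(f"bias = {bias:+.1f} lbs, total calibration uncertainty = {total_uncertainty_pct:.4f} %")
```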

It is reasonably safe to assume this error estimate is valid for future measured items, provided that each measurement does not have a large additional bias associated with it. Such a bias can occur if you measure loads that are vastly different in composition and homogeneity from your calibration load.

It is common practice to incorporate a coverage factor into the calibration uncertainty to compensate for this. You might have heard the terms 2-sigma or 3-sigma error. It basically means taking the total calibration uncertainty and multiplying it by 2 or 3 to account for bias differences between the calibration load and the items being measured. The larger the coverage factor, the wider the reported interval and the greater the confidence that the true value lies within it.
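So, continuing the hypothetical numbers from the sketch above, applying a coverage factor is just a multiplication:

```python
total_uncertainty_pct = 0.09   # hypothetical total calibration uncertainty from the sketch above (%)
k = 2                          # coverage factor (k = 2 is the common "2 sigma" choice)
expanded_uncertainty_pct = k * total_uncertainty_pct
print(f"expanded uncertainty (k = {k}) = {expanded_uncertainty_pct:.2f} %")
```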
 

Miner

TS Contributor
#3
I believe that you are looking for calibration regression, also called inverse regression.


From Minitab: A large-sample approximation for a confidence interval around a point estimate of X is given on pp. 172-174 of Neter, Wasserman, and Kutner's 1985 text, Applied Linear Statistical Models.
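For a straight-line calibration Y = b0 + b1·X fitted by least squares, that large-sample approximation is usually given along these lines (stated here as a sketch, not a quotation from the text; Y_new is the new indicated reading):

$$
\hat{X}_{new} = \frac{Y_{new} - b_0}{b_1},
\qquad
\hat{X}_{new} \;\pm\; t_{1-\alpha/2,\,n-2}\,
\sqrt{\frac{MSE}{b_1^{2}}\left[\,1 + \frac{1}{n} + \frac{(\hat{X}_{new}-\bar{X})^{2}}{\sum_{i}(X_i-\bar{X})^{2}}\right]}
$$

For the 4th-order curve in post #1 there is no simple closed form, so the same idea is usually applied numerically, or the curve is inverted directly and an interval is computed as in the sketch under post #1.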