I have a question about calculating the expected error around a calibration curve for an instrument.

I have a calibration curve for a load cell. It was measured at a few different repeated known loads, and then a 4th-order best-fit curve was applied, so I can now see, both visually and mathematically, how it performs across the spectrum. It is easy to use the equation to convert between an indicated value and the most likely nominal “true load.” What's throwing me is attaching an error to that conversion. For example, if I use that load cell to weigh something and the indicator reads 5001 lbs, I can calculate from the curve that the load is really 5004 lbs based on that indication. However, I know that is only the most likely value, and it should really be 5004 ± something. Calculating what that “something” is is throwing me for a loop.

I've been reading about confidence intervals vs. tolerance intervals vs. prediction intervals and think the distinction is important here. I don't need to get super detailed. What I really need is a range so I can say, “we can be confident that the true value is between these two values.” (I understand that “confident” is relative, and I would lean on the side of being conservative as long as I can use some mathematical justification.)

I enjoy statistics and would love to learn the math behind this, but frankly I'm not sure where to look right now. I'm hoping someone out there can explain the concept I need in layman's terms so I can build on that by digging into the mathematics.
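To make my setup concrete, here's a minimal sketch of what I'm doing now. The loads and readings below are made up for illustration (my real data differs); it just shows the 4th-order fit mapping indicated value to most-likely true load:

```python
import numpy as np

# Hypothetical calibration data: known applied loads vs. indicated readings
# (illustrative numbers only, not my actual measurements)
indicated = np.array([0, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000], dtype=float)
true_load = np.array([0, 1002, 2003, 3003, 4005, 5004, 6006, 7005, 8008], dtype=float)

# 4th-order least-squares best fit: indicated value -> most likely true load
coeffs = np.polyfit(indicated, true_load, 4)
curve = np.poly1d(coeffs)

# Point estimate for an indication of 5001 lbs; what I can't work out is
# the "± something" that should go with this number.
print(curve(5001.0))
```

This gives me the single most-likely value fine; it's the interval around it (confidence vs. prediction vs. tolerance) that I'm unsure how to compute.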