Hi all. This is my first post so I hope it makes sense and is appropriate for this forum.

For years we have been setting the temperatures of our reflow ovens based on the engineers' experience rather than on direct temperature readings taken on the products, then making adjustments as needed. (Direct readings require destroying a product to embed thermal probes.) The engineers are trying to prove that, given their track record, the expense of performing direct readings is unnecessary.

Each product (circuit board) has 5 temperature parameters that must all fall within a specified range for the board to be considered acceptable. What we are thinking of doing is using the following equation (from a Minitab blog) to determine how many boards to run, then measuring the temperatures on each. The premise is that if the engineers get the temperature settings correct the first time, every time, across that sample, we will be able to say with X% confidence that the temperatures on subsequent runs will be correct without the need to measure and adjust.

n = Ln(1 - C)/Ln(R), where R (reliability) is the probability of an acceptable item and C is the desired confidence level.
For us, 95% confidence and 95% reliability: n = Ln(0.05)/Ln(0.95) ≈ 59 boards, all of which must pass.
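If I've read the formula right, it's the zero-failure "success run" sample size, and a quick sketch of the arithmetic looks like this (function name is mine):

```python
import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    """Zero-failure sample size: n = ln(1 - C) / ln(R), rounded up.

    If n items all pass, you can claim the stated reliability
    at the stated confidence level.
    """
    n = math.log(1.0 - confidence) / math.log(reliability)
    return math.ceil(n)

# 95% confidence of at least 95% reliability
print(success_run_sample_size(0.95, 0.95))  # 59 boards, zero failures allowed
```

Note how quickly n grows with the reliability you want to claim: demonstrating 99% reliability at 95% confidence takes 299 boards, not 59.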

This is basically the one-sample proportion calculation in Minitab.


Is this an appropriate approach? Is it really as simple as this?