Optimizing Burn-in Testing

I'm new to the field and am stuck on a problem. For burn-in testing of a machine, we cycle-test it 500 times. Sometimes it fails before that point, gets repaired, and then continues testing; sometimes it doesn't fail by the conclusion of the test. The question I've been tasked with is: is 500 cycles the right number? I think a better way to put it is: at what cycle number can we be 90/95/99% confident that all failures that will show up have occurred?

I've started collecting data on failures during cycle testing, but before I have real data to work with I've created a fake dataset to see whether I can generate meaningful results, and this is where I'm stuck. The dataset consists of the following failure times: 20, 42, 67, 104, 171, 194, and 451 cycles, plus 14 units that survived to 500 cycles (right-censored datapoints). Even excluding the censored datapoints I'm at a bit of a loss. I've plotted the failure times on Weibull plotting paper (from Weibull.com), and they seem close to a shape parameter (beta) of 1.0, which I believe suggests a constant failure rate; a beta of less than one would indicate a decreasing failure rate, which is what you would expect in a population that benefits from burn-in testing. I wasn't going to worry if the shape parameter didn't make complete sense, since these are arbitrarily selected datapoints. But having the points plotted on paper leaves me no closer to answering the question above.
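One way to move past the plotting paper (a sketch, not the only approach): fit the two-parameter Weibull by maximum likelihood, treating the 14 units that reached 500 cycles as right-censored, then read quantiles off the fitted distribution. The fraction p of eventual failures occurs by t_p = eta * (-ln(1 - p))^(1/beta). The script below uses the fake dataset from the post; the starting guesses for beta and eta are arbitrary, and no confidence bounds are computed.

```python
import numpy as np
from scipy.optimize import minimize

# Fake dataset from the post: 7 observed failure cycles,
# plus 14 units that survived the full 500 cycles.
failures = np.array([20, 42, 67, 104, 171, 194, 451], dtype=float)
censored = np.full(14, 500.0)

def neg_log_lik(params):
    """Negative log-likelihood for a Weibull with shape beta, scale eta."""
    beta, eta = params
    if beta <= 0 or eta <= 0:
        return np.inf
    # Observed failures contribute the log-pdf.
    ll = np.sum(np.log(beta / eta) + (beta - 1) * np.log(failures / eta)
                - (failures / eta) ** beta)
    # Right-censored units contribute the log-survival function.
    ll += np.sum(-(censored / eta) ** beta)
    return -ll

# Nelder-Mead tolerates the inf returned for invalid parameters.
res = minimize(neg_log_lik, x0=[1.0, 500.0], method="Nelder-Mead")
beta_hat, eta_hat = res.x
print(f"beta = {beta_hat:.2f}, eta = {eta_hat:.0f} cycles")

# Cycle count by which a fraction p of eventual failures would occur.
for p in (0.90, 0.95, 0.99):
    t_p = eta_hat * (-np.log(1 - p)) ** (1 / beta_hat)
    print(f"{p:.0%} of failures by ~{t_p:.0f} cycles")
```

Note that with this dataset most of the exposure is censored at 500 cycles, so the upper quantiles land well beyond 500 and are extrapolations; with real data you would also want confidence bounds on beta and eta before trusting them.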

Any thoughts on the problem or resources I can be directed to would be greatly appreciated.


TS Contributor
If you search using the terms "optimizing burn-in time," you will find a lot of good information. This article from ReliaSoft should give you a good overview, and there are a number of others at Weibull.com.