Is my continuous data relevant, or am I stuck with attribute data?

#1
I would like to validate a laser-welding process, which joins two metal components. We have a reliability/confidence requirement of 95% Reliability with 95% Confidence.

We did a process development experiment and pulled 20 subassemblies apart to failure. All of them exceeded the minimum strength requirement; however, the failure mode was always through the bulk of one of the components, not through the weld itself. The data passes normality testing.

The design engineer would like to determine the sample size for the final process validation testing using continuous data if possible, because the resulting sample size from a tolerance interval approach (using mean, standard deviation, and k-factors) is considerably smaller than the sample size for attribute data. Her rationale is that she is testing the strength of the subassembly, so it’s not necessary to know the strength of the weld.
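For reference, the tolerance-interval approach she has in mind uses a one-sided k-factor; I'm assuming an acceptance rule of the form xbar - k*s >= minimum strength. A minimal Python sketch of the standard noncentral-t k-factor calculation:

import numpy as np
from scipy import stats

def k_one_sided(n, coverage=0.95, confidence=0.95):
    # One-sided normal tolerance factor k for sample size n (noncentral-t method).
    z_p = stats.norm.ppf(coverage)        # normal quantile for the required coverage
    delta = z_p * np.sqrt(n)              # noncentrality parameter
    return stats.nct.ppf(confidence, n - 1, delta) / np.sqrt(n)

print(round(k_one_sided(20), 3))  # ~2.396: pass if xbar - 2.396*s >= the minimum strength

That is why a 95/95 claim can, in principle, be demonstrated with far fewer variables measurements than the 59 samples the attribute route requires.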

However, my view is that the mean and standard deviation of the data come not from the process being validated (laser welding), but from the process that manufactured the raw material of the component that breaks first, so they are not relevant to validating the laser-welding process. In the absence of any continuous data on the strength of the weld itself, we don't know the shape of its distribution (normal or not, skewed or not), its sample mean, or its sample standard deviation. All we know is that for all 20 subassemblies tested, the weld strength exceeded our requirement. My concern is that the weld-strength standard deviation may be large enough to occasionally produce a failure, and 20 subassemblies may not be enough to detect that. Since each weld is only known to be "good" or "passing", it should be treated as attribute data. So I would think that we are stuck with the attribute sample size, which is 59 subassemblies per lot for 95% reliability with 95% confidence.
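For reference, the 59 comes from the standard zero-failure (success-run) relationship R^n <= 1 - C, which gives n >= ln(1 - C)/ln(R). A quick Python check:

import math

def success_run_n(reliability=0.95, confidence=0.95):
    # Smallest zero-failure sample size n satisfying reliability**n <= 1 - confidence.
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

print(success_run_n())  # 59 subassemblies per lot for 95% reliability at 95% confidence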

The validation may require five lots: two for the OQ, to test the extremes of the process, and three for the PQ, to test the middle of the process while factoring in other sources of variation, such as different operators, material lots, equipment fluctuations, time of day, etc. That would be 5 x 59 = 295 subassemblies, whereas if we could test, say, 20 per lot (approximate; I'm not sure yet), that would be about 100 subassemblies.

Does anyone see a way to justify using the variable/continuous data? If so, what is the rationale? If not, how can I explain it to the engineer in a more convincing way?

Thank you!
 

rogojel

TS Contributor
#2
hi,
you can consider this as a test with no failures for your laser weld.
Here http://www.reliasoft.com/pubs/2010_RAMS_reliability_estimation_for_one_shot_systems.pdf you have a pretty general formula (9), which says essentially that the reliability of the weld, at the force where the part breaks for other reasons, is higher than (1 - CL)^(1/n), where CL is your confidence level and n is the sample size.
This generally gives quite large sample sizes, but you might want to take a look at it.
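For example, plugging your numbers into that bound (just an illustrative check):

# Lower reliability bound with zero failures: R >= (1 - CL)**(1/n)
for n in (20, 59):
    print(n, round((1 - 0.95) ** (1 / n), 3))
# 20 -> 0.861  (20 passing welds only demonstrate about 86% reliability)
# 59 -> 0.951  (59 zero-failure samples are needed to reach 95%/95%)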

regards

As for your question, I believe you are fully right: if another part fails first, there is no point in testing with a continuous approach, because you would be testing the part and not the weld.

regards
 

Miner

TS Contributor
#3
I don't see any options other than an attribute plan as long as weld strength is your response variable. A good weld should always be stronger than the base material. If you can demonstrate a relationship between the weld penetration and the weld strength (or weld failure mode), you could potentially use weld penetration as a surrogate measurement.
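A quick way to check for that relationship, if penetration and strength have both been measured on the same welds (a hypothetical sketch; the file and column names are placeholders):

import pandas as pd
from scipy import stats

# Hypothetical data set with one row per sectioned weld.
df = pd.read_csv("weld_penetration_study.csv")   # columns: penetration_mm, strength_N
fit = stats.linregress(df["penetration_mm"], df["strength_N"])
print(f"slope = {fit.slope:.2f}, R^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.4g}")
# A strong, physically plausible correlation (backed by engineering rationale)
# would support using penetration depth as a continuous surrogate response.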
 
#5
Thank you for your reply. Great point about weld penetration. We did that at a previous company: using a diamond saw and a potting-and-polishing setup, we were able to view cross-sections of the weld and measure penetration depth, which gave us continuous data. So that's a good option.