Eng-Tips: Intelligent Work Forums for Engineering Professionals
Best QC method for controlling failure rate in destructive testing

Status
Not open for further replies.

ProEpro (Mechanical)
Feb 5, 2002, US
We buy around a billion pieces of a product a year. The main quality criterion is break strength in a destructive test. We currently use AQL sampling and reject any lot with more than the threshold number of failures. Is this the current best practice for this type of inspection?

We want to reduce the defect rate, but I am unsure that lowering the AQL threshold is the right way to go. My experience is in QC of dimensions that are not destructively tested. With those, Cp/Cpk-type controls work very well. I don't feel they work as well in this situation, because we have an average value that is 6 sigma better than the minimum spec but still have a high defect rate. Raising the spec or the average has not helped. Does this mean I have non-normal data and need to use different statistics? What are the best measures to use in this situation?
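A quick simulation (all numbers hypothetical) shows how this can happen with non-normal data: if break strength is really a mixture of a strong bulk population and a small weak subpopulation, the bulk can sit 6 sigma above the minimum spec while the weak fraction drives a high defect rate, and Cp/Cpk computed from the bulk tells you nothing about it.

```python
import random

# Hypothetical mixture model of break strength, for illustration only.
# 90% of parts come from a healthy process whose mean is 6 sigma above
# the minimum spec; 10% come from a weak "infant mortality" subpopulation.
random.seed(1)
SPEC_MIN = 100.0  # assumed minimum break-strength spec

def draw_strength():
    if random.random() < 0.90:
        return random.gauss(190, 15)   # healthy: (190-100)/15 = 6 sigma margin
    return random.gauss(80, 15)        # weak: mostly below spec

samples = [draw_strength() for _ in range(100_000)]
mean = sum(samples) / len(samples)
defect_rate = sum(s < SPEC_MIN for s in samples) / len(samples)

print(f"overall mean strength: {mean:.1f} (spec min {SPEC_MIN})")
print(f"defect rate          : {defect_rate:.1%}")
```

Despite the healthy subpopulation's huge sigma margin, the defect rate lands near 9%, which is why raising the spec or the average does not help: the weak subpopulation has to be eliminated, not shifted.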

The cost of the product we are testing is low but the cost of testing and rejected shipments is high.

A visual or other inspection cannot be substituted for destructive testing.
 
What it means is that you are testing for a symptom.

You cannot cure the cause this way.

You need to figure out why the process allows faulty parts to be made and fix the process.
 
The defect rate is 0-10% on the majority of shipments. On 20% of shipments the failure rate is 25%. We cannot find the difference in performance without a destructive break test. Even when we are stable below 10%, we still get customer complaints about high failure rates.

You are both correct that the supplier needs to do a root cause analysis to reduce the failure rate. My problem is that I don't know if I am using the best QC and statistical tools to monitor this failure rate. What is the best type of specification to give the supplier as a goal in improving performance?

Thank you both. I know you are well-respected, long-time contributors to this site.
 
Play out the numbers a bit.

Imagine that you have an incoming batch of 10,000 parts.

You test 1% of them (100 parts) and 10% of those (10 parts) fail. You call this a "good batch".

Now you have 9,900 parts sent to customers and 10% of those are likely defective. That is 990 defective parts.

Out of 10,000 parts that you bought, your yield is only 8,910 good parts.

That's a "good batch"?
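The arithmetic above can be checked in a couple of lines:

```python
# Playing out the numbers from the example above.
batch = 10_000
tested = batch // 100          # 1% sample -> 100 parts, consumed by the test
observed_fail_rate = 0.10      # 10 of the 100 sampled parts break below spec

shipped = batch - tested                                  # parts sent to customers
expected_defective = int(shipped * observed_fail_rate)    # likely defective in the field
good_parts = shipped - expected_defective                 # net usable yield

print(tested, shipped, expected_defective, good_parts)
# -> 100 9900 990 8910
```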

Next, think about this quote:
ProEpro said:
The defect rate is 0-10% on the majority of shipments. On 20% of shipments the failure rate is 25%. We can not find the difference in performance without a destructive break test. Even when we are stable below 10% we still get customer complaints about high failure rates.

You are not stable. These numbers are an absolute indication that the process is not under control at all. Ever.

"we still get customer complaints about high failure rates."

How are failures in service detected? Customers are not doing a destructive break test, are they? This suggests there is another way failures could be detected.

Do parts fail within a typical time, or number of cycles or something? (infant mortality).
Do parts fail for some customers but not others? (difference in application environment)


 
OK, I agree that this is an INSANE failure rate for an established product. It also means that you are essentially throwing away a huge chunk of your profit per unit from warranty failures. That's just crazy. You have nothing whatsoever to do with the SPC; that can only be done at the manufacturer's end. You have no control and unfortunately, your thread title is showing that you don't really grasp that you are simply a customer, and as such, you cannot change the defect rate, only the manufacturer can do that. You cannot, and should not, EVER think that you can test your way to a better defect rate.

Where are all the customer returns? Have you done any failure analysis of them? As MJ suggests, your customers are able to tell that there's something wrong; either get their returned product, or actually go and talk to them to find out what failed in their eyes.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
I understand my situation all too well. It is this situation that is driving me to change the specification for the product so the supplier changes their processes. The only tools I have to get the process under control are the product spec, incoming inspection, and rejecting bad shipments. Is AQL the best way to specify failure rates?
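One way to judge whether an AQL plan is doing its job is its operating characteristic (OC) curve: the probability a lot is accepted as a function of its true defect rate. A sketch, using a hypothetical plan (sample 100 pieces, accept on 2 or fewer failures) and the binomial approximation to the hypergeometric:

```python
from math import comb

def accept_prob(p, n=100, c=2):
    """P(accept lot) for sample size n, acceptance number c, and true
    lot defect rate p (binomial approximation, fine for large lots)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Hypothetical plan: sample 100 pieces, accept if <= 2 failures.
for p in (0.01, 0.05, 0.10, 0.25):
    print(f"true defect rate {p:>4.0%}: P(accept) = {accept_prob(p):.3f}")
```

A plan like this accepts a 1%-defective lot about 92% of the time and almost never accepts a 10%-defective lot, so sweeping the curve over candidate (n, c) pairs is a concrete way to compare specs you might hand a supplier.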

The consumers use the product the same way I do in destructive testing. The ones that fail below spec are the ones that fail in use. It is a well-understood infant-mortality problem.
 
Care to divulge the nature of these parts, their purpose, and failure mode(s)?

-AK2DM

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"It's the questions that drive us"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 
You need to define some sort of causal relationship to the failures. Assuming a single-source supplier, is it chemistry related, processing related, or a combination thereof? Then you can tighten specifications and institute greater inspection at the source. You need to detect and eliminate the nonconforming product at the source, not after receipt.
 
"The only tools I have to get the process under control are the product spec, incoming inspection and rejecting bad shipments."

No, the biggest tool you, or your company, has is the pocketbook. Tell your supplier you are going to start charging them 100x the original cost of the part per warranty claim, or that you will find a new supplier that can hold the service failures down below 1% or .1% or whatever. The hidden costs of in-service failures can be huge if lost sales are factored in.
 
It seems to me that the type of testing being done here should actually be characterized as "proof testing" rather than quality control. This should be done at the manufacturer prior to shipment and be invisible to you as a customer. The manufacturer can then decide whether that is the most economic process or whether the manufacturing process needs to be improved. Expect the cost of the parts to you to increase by at least 20%. This would still be a great cost savings to you. If you are unwilling to accept any cost increase, then it is your company that is the cause of the problem, and the vendor is simply complying with your wishes.
 
It is a commodity fastener sold at retail.

We have a charge back system in place for rejects. I would like to add a bonus system for quality improvements but require better metrics.

Once I have the right metrics in place I can start moving business to the supplier with the best quality. Again I need confidence that the statistic I am using is the right one.

 
Did I just read something between the lines?

"Once I have the right metrics in place I can start moving business to the supplier with the best quality"

So are you receiving these piece parts from separate suppliers?
Supplier A is 0% rejection
Supplier B is 10% rejection and
Supplier C is 25% rejection?

 
The performance of the suppliers is close using the current metrics. However, I think one of them will respond better to incentives to improve performance. This will require some more frequent testing on our end. I want to make sure that I am getting the full value out of that investment in additional testing.

I wish I could trace failures back to a production lot. Sometimes I can at least identify which of the 100 possible SKUs it came from and thus identify a supplier.

Thank you everyone for the thoughts, discussion, even the criticism :)
 
We are also working with our current suppliers to open some factories. So we will both be very busy with PPAPs and monitoring performance of the product from these new factories. This is another reason I want to make sure I have the most appropriate metrics in place.
 
Instead of measuring the strength at failure, would it be possible to convert the test to a proof test, i.e., does the test article withstand a given load without failure? The test load would be something high enough to break an unsatisfactory fastener, but low enough not to damage a good one. That is not always possible, depending on your product design, but your design team should be able to answer the question.

If proof testing is appropriate, then you could automate it to test any arbitrary portion, or all, of your incoming parts.

If your sources' process is out of control, as it seems to be,
and you have zero influence over your sources' behavior, as it seems,
then proof testing 100 percent is the only way you can reduce the failure rate that is apparent to your customers.
... assuming that enough parts survive the proof test to keep you in business.
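The lot mix described earlier in the thread gives a rough feel for that throughput question; the percentages below are assumptions for illustration:

```python
# Rough throughput estimate for 100 percent proof testing, using the lot
# mix described earlier in the thread (assumed numbers, for illustration):
lot_mix = [
    (0.80, 0.05),   # ~80% of lots average ~5% below-spec parts
    (0.20, 0.25),   # ~20% of lots run at ~25% below-spec parts
]

# Blended defect rate across all incoming parts, and the fraction that
# would survive a proof test set exactly at the spec load.
expected_fail = sum(share * p for share, p in lot_mix)
survivors = 1.0 - expected_fail

print(f"blended defect rate : {expected_fail:.1%}")   # 9.0%
print(f"proof-test survivors: {survivors:.1%}")       # 91.0%
```

Roughly 9 parts in 100 would be consumed screening out the weak population, which is the extra cost the customer pays for a supplier process that is out of control.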





Mike Halloran
Pembroke Pines, FL, USA
 
Mike

That is a good option. We had tried it in the past but the supplier never bought off on it. However, I will give it another try.
 
"We had tried it in the past but the supplier never bought off on it. "

But it sounds like you have more than one supplier, so why can't you threaten to fire them unless they come up with a definitive approach? Do they not warranty their product? Maybe that's the REAL problem.

TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!
 
"We had tried it in the past but the supplier never bought off on it. "

Because you never stopped buying their crappy parts.

Letting a bad supplier dictate your business is a bad place to be.

Your acceptance criteria are too lenient.

Set them much tighter, like 1% failure acceptable.

If a batch fails then double the sample percent on the next batch, same 1% failure is acceptable.

Fail two batches in a row = contract cancelled.
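The scheme above can be written out as a simple set of switching rules; sample sizes and failure rates here are hypothetical:

```python
def run_scheme(fail_rates, base_pct=1.0, limit=0.01):
    """Apply the proposed rule to a sequence of incoming batches.

    fail_rates: observed sample failure rate on each batch, in order.
    Rule: a batch over the limit is rejected and the sample size doubles
    for the next batch; two consecutive rejects cancel the contract.
    Returns a log of (sample_pct_used, fail_rate, verdict) tuples."""
    pct = base_pct
    strikes = 0
    log = []
    for rate in fail_rates:
        if rate > limit:
            strikes += 1
            log.append((pct, rate, "REJECT"))
            if strikes == 2:
                log.append((pct, rate, "CANCEL"))
                break
            pct *= 2                 # tighten inspection on the next batch
        else:
            strikes = 0
            pct = base_pct           # passed: back to normal inspection
            log.append((pct, rate, "ACCEPT"))
    return log

# Example run: the fifth batch is the second consecutive reject.
for pct, rate, verdict in run_scheme([0.005, 0.03, 0.008, 0.05, 0.04]):
    print(f"sampled {pct:.0f}% | fail rate {rate:.1%} | {verdict}")
```

Note this only fixes the acceptance rule, not the measurement: the sample failure rate on a 1% sample is itself noisy, so the OC-curve view discussed earlier still matters when picking the limit.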



 