McLeod
Mechanical
- Jan 22, 2002
How does your company use confidence level in setting its reliability design inputs and test acceptance criteria?
Given that the confidence level approaches zero as the bounds approach the nominal Weibull curve, what do you consider a minimum confidence level to set as the acceptance criteria? 90%? 70%? 50%? Lower...? Do you use the nominal estimate as the criterion?
While we understand the meaning and application of confidence level in statistical analysis, we also understand that the most commonly used value of 95% is an arbitrary historical choice going back to R. Fisher. The main problem is twofold:
[ol]
[li]The sample sizes typically regarded as economically acceptable for tests of our more expensive products result in 95% confidence bounds that are rather wide in the area of interest.
[/li]
[li]The stress level applied by users in the field varies so widely that a high degree of statistical confidence about our simulation test results is probably not as meaningful as one might think. (It's analogous to trying to achieve 99% mesh convergence with a finite element model when your load case may vary by ±50%.)[/li]
[/ol]
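To put some numbers on point #1, here's a quick sketch of how the sample size and the chosen confidence level drive the gap between the lower confidence bound and the point estimate. It assumes the simplest case, an exponential life model (Weibull with shape = 1) and a failure-terminated test, where the classic chi-square bound applies; the function name and the specific r values are just illustrative.

```python
# Sketch: width of a lower confidence bound on mean life vs. sample
# size and confidence level. Assumes an exponential model (Weibull
# shape = 1) and a failure-terminated test with r failures and total
# test time T, where the textbook bound is theta_L = 2T / chi2(conf, 2r).
from scipy.stats import chi2

def lower_bound_ratio(r, conf):
    """Ratio of the one-sided lower confidence bound on mean life to
    the point estimate T/r, given r observed failures."""
    return 2 * r / chi2.ppf(conf, 2 * r)

for r in (3, 10, 30):
    for conf in (0.95, 0.90, 0.70, 0.50):
        print(f"r={r:2d}  conf={conf:.2f}  "
              f"bound/estimate = {lower_bound_ratio(r, conf):.2f}")
```

With only 3 failures, the 95% lower bound sits at roughly half the point estimate, which is exactly the "rather wide in the area of interest" problem; dropping the confidence level or adding failures pulls the bound up toward the nominal value.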
Given all that, I've identified several options:
[ol]
[li]Set a minimum confidence level for reliability design inputs and test criteria at 90%. Everyone will just have to live with the higher costs associated with larger sample sizes and/or longer test times, or use more conservative (lower) reliability targets and claims.[/li]
[li]Allow any confidence level to be used, as long as it's stated in the design inputs and the test protocol before testing is initiated. Levels less than 70% will probably sound less rigorous than just using the nominal estimate.[/li]
[li]Omit confidence levels from the design inputs and test criteria and use the nominal estimate for pass/fail decisions, but require that 90% confidence bounds still be reported for informational purposes and a gut check for Quality and management.[/li]
[li]Write the design inputs and test acceptance criteria such that the lower 90% confidence bound is the limit for a clear pass, the nominal estimate is the limit for a clear fail, and in between is a yellow caution zone that triggers a management review and pass/fail decision. In practice, this would probably be equivalent to option #3.[/li]
[/ol]
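For what it's worth, option #4's three-zone criterion is simple enough to write down as decision logic. This is just a hypothetical sketch to make the zones concrete; the function and variable names are mine, not from any protocol.

```python
# Hypothetical sketch of option #4's three-zone acceptance criterion:
# clear pass if the lower 90% confidence bound meets the reliability
# target, clear fail if even the nominal (point) estimate misses it,
# and a yellow caution zone in between that triggers management review.

def classify(nominal_estimate, lower_90_bound, target):
    if lower_90_bound >= target:
        return "pass"      # target demonstrated with 90% confidence
    if nominal_estimate < target:
        return "fail"      # even the best estimate falls short
    return "review"        # yellow zone: management pass/fail decision

print(classify(0.97, 0.92, 0.90))  # pass
print(classify(0.88, 0.80, 0.90))  # fail
print(classify(0.95, 0.85, 0.90))  # review
```

Written this way, it's clear why it tends to collapse into option #3 in practice: unless management routinely fails units in the yellow zone, the nominal estimate becomes the effective pass/fail line.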
Comments?