

emissions failure probability 1


harmony (Civil/Environmental)
Dec 11, 2002
Hi,
I am looking at the probability of emission failures during a European emissions test. Could anyone please explain the following text:
With a minimum sample size of 3, the sampling procedure is set so that the probability of a lot passing a test with 40 % of the production defective is 0,95 (producer's risk = 5 %) while the probability of a lot being accepted with 65 % of the production defective is 0,1 (consumer's risk = 10 %).

Thanks,

Derek
 

Can you explain a little bit more about "looking at the probability of emission failures during a European emissions test"?

My automatic interpretation of that bit doesn't click at all with everything that follows it.

A.
 
You'll probably have more luck on the statistics forum.

I assume that all the cars in the sample have to pass in each scenario.



Cheers

Greg Locock

Please see FAQ731-376 for tips on how to make the best use of Eng-Tips.
 
Given a bit of mulling time, I think I'm beginning to understand. Someone has been a bit creative here.

The second statement is a fairly standard statement of the operating characteristics of an industrial batch sampling scheme - you decide how many samples you're going to test from each batch, how many of these have to pass for you to pass the whole batch and how many of them have to fail for you to fail the whole batch.

Depending on the decisions you make, the scheme will be more or less good at telling the difference between a good batch and a bad one.

What you normally do is define the percentage of a batch which would have to be defective for you to want to be pretty sure the sampling scheme failed it, and the percentage you could accept and would want to be pretty sure the sampling scheme let through. You then calculate the probabilities that the scheme will come up with the wrong answer given a population at each of those percentages (knowing what you do about process variation). That gives you two measures, the producer's and consumer's risk levels, which define how good the scheme is at telling good and bad batches apart.
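
To put numbers on that, here is a minimal sketch of the two risk calculations for a simple single-sampling plan. The plan size, acceptance number and quality levels below are made up for illustration; they are not taken from any standard.

```python
from math import comb

def prob_accept(n: int, c: int, p_defective: float) -> float:
    """P(at most c defectives in a sample of n): the binomial CDF."""
    return sum(comb(n, k) * p_defective**k * (1 - p_defective)**(n - k)
               for k in range(c + 1))

# Hypothetical plan: sample 10 items per batch, accept the batch if at most 2 fail.
n, c = 10, 2
good_quality, bad_quality = 0.05, 0.30   # assumed "acceptable" and "unacceptable" defect rates

producers_risk = 1 - prob_accept(n, c, good_quality)   # good batch wrongly rejected
consumers_risk = prob_accept(n, c, bad_quality)        # bad batch wrongly accepted
print(f"producer's risk = {producers_risk:.3f}, consumer's risk = {consumers_risk:.3f}")
```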

I think the cute thing that's been done here is to take the same theory, but apply it to a different scenario. Instead of taking the various items of a production batch as the population, I think what they've done is to take successive instances of a test being carried out on the same vehicle.

A vehicle is presented for emissions testing. A smoke probe is stuck up its exhaust pipe and it's revved right up to the governor a specified number of times. Each of these is a sample drawn from the population of "every time the driver stamps on the pedal".

Associated with the test is a set of rules telling you how to turn this series of measurements into a pass or a fail.

If you then say that you want to be sure of failing any vehicle which is smoky on 65% or more of its accelerations, and want to be sure of passing any vehicle which is only smoky on 40% or fewer of accelerations, you can use the same processes as the industrial statistician does to find out how effective your sampling scheme is.

In this case, the scheme designer is saying that he thinks he will pass only 10% of vehicles that are so smoky they should have failed, and will only fail 5% of vehicles which are clean enough that they should have passed.

A sampling regime that did many more accelerations would be much less prone to getting the wrong answer, but likely to destroy rather more engines in the process - hence the compromise.
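
To illustrate that trade-off, the sketch below keeps a crude acceptance rule of "pass if no more than half the accelerations are smoky" (an assumed rule, not any regulation's) and shows how much more sharply the scheme separates a 40%-smoky vehicle from a 65%-smoky one as the number of accelerations grows.

```python
from math import comb

def p_pass(n: int, p_smoky: float) -> float:
    """P(at most n//2 smoky accelerations out of n) under a binomial model."""
    c = n // 2
    return sum(comb(n, k) * p_smoky**k * (1 - p_smoky)**(n - k) for k in range(c + 1))

for n in (3, 6, 12, 24, 48):
    print(f"n={n:2d}:  P(pass | 40% smoky)={p_pass(n, 0.40):.2f}   "
          f"P(pass | 65% smoky)={p_pass(n, 0.65):.2f}")
```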

I believe the current methodology in the UK is to perform three accelerations and to average the readings from these. If the mean is below a given threshold, the test passes. Otherwise, another acceleration is carried out, and the average of the last three accelerations is checked against the threshold. If this fails, another acceleration is done, and the vehicle is only declared a failure if you haven't got a good "average of the last three" after you've done six accelerations.

At first sight, this scheme looks like one with a sample size of four, with up to three defectives being acceptable. In fact, I'm not sure this works out because the samples, being averages with measurements in common with one another, are heavily cross-contaminated, rather than being "drawn at random from the population" as the theory demands.
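
One way around the cross-contamination problem is to simulate the actual procedure rather than lean on the binomial theory. Here is a rough Monte-Carlo sketch of the rolling "average of the last three" rule; the smoke distribution, the limit and the three example engines are all invented for illustration.

```python
import random

def mot_smoke_test(true_mean, sigma, limit, max_accels=6):
    """Pass as soon as the mean of the last three smoke readings is at or
    below the limit; fail if that never happens within max_accels accelerations."""
    readings = []
    for _ in range(max_accels):
        readings.append(random.gauss(true_mean, sigma))
        if len(readings) >= 3 and sum(readings[-3:]) / 3 <= limit:
            return True
    return False

random.seed(1)
trials = 100_000
limit, sigma = 1.5, 0.4                 # hypothetical smoke limit (1/m) and shot-to-shot scatter
for true_mean in (1.0, 1.4, 1.8):       # hypothetical clean, borderline and smoky engines
    passes = sum(mot_smoke_test(true_mean, sigma, limit) for _ in range(trials))
    print(f"true mean {true_mean:.1f}: pass rate {passes / trials:.3f}")
```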

Hope that helps with the question you had in mind. Apologies if I've gone off on a long tangent.

A.

 
No, I'm pretty sure that an OEM would have to test several cars. For instance, we have to test to a statistical limit for drive-by noise in Australia. That is, we have to test sufficient cars to be confident that the mean + 3 sigma level is below the legal limit. So, if our cars are very quiet and uniform, then we don't have to test many.
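
Something along these lines, with the readings and the limit invented for illustration:

```python
from statistics import mean, stdev

noise = [71.2, 70.8, 71.5, 70.9, 71.1]   # hypothetical drive-by readings, dB(A)
legal_limit = 74.0                        # hypothetical limit, dB(A)

upper = mean(noise) + 3 * stdev(noise)
print(f"mean + 3 sigma = {upper:.1f} dB(A):",
      "OK" if upper <= legal_limit else "test more cars or fail")
```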



Cheers

Greg Locock

Please see FAQ731-376 for tips on how to make the best use of Eng-Tips.
 
Greg,

In the context of a homologation test (or even of production assurance testing), I agree with you completely.

If only because of the percentages involved (40/65% would be barking for a type-test), I think the intended context might actually be the mandatory periodic emissions testing of in-service vehicles (Plating/MoT testing in the UK - don't know what the Aus equivalent is).

It looks to me as if sampling theory might be being used to analyse how effective the test design is at cutting through the "within vehicle" variability to get a quick opinion on the condition of the engine (instead of its usual role of cutting through the "between vehicle" variability to get a quick view of batch quality).

Derek, can you clarify the context?

A.
 
The context of the test is conformity of production (CoP). A vehicle that has been homologated (European drive cycle) has to show ongoing compliance (Type 1 tailpipe emissions). Initially, mass-production vehicles are selected for test to show the process is in compliance; this is done with a min 3 / max 32 statistical analysis which throws up a pass / fail / test-another-vehicle scenario. After that the manufacturer must demonstrate continued compliance for the life of the vehicle.
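
For what it's worth, here is a rough sketch of how a min 3 / max 32, pass / fail / test-another-vehicle decision can be driven. It uses a generic Wald sequential probability ratio test on the fraction of failing vehicles, with the percentages and risks quoted at the top of the thread; it is not the directive's actual formula.

```python
from math import log

P_GOOD, P_BAD = 0.40, 0.65        # "acceptable" and "unacceptable" lot failure rates (from the quote)
ALPHA, BETA = 0.05, 0.10          # producer's and consumer's risks (from the quote)
UPPER = log((1 - BETA) / ALPHA)   # fail the lot once the log-likelihood ratio exceeds this
LOWER = log(BETA / (1 - ALPHA))   # pass the lot once it drops below this
MIN_N, MAX_N = 3, 32

def cop_decision(results):
    """results: iterable of booleans, True meaning that vehicle failed its test.
    Returns 'pass', 'fail' or 'undecided' (no verdict within MAX_N vehicles)."""
    llr = 0.0
    for n, failed in enumerate(results, start=1):
        llr += log(P_BAD / P_GOOD) if failed else log((1 - P_BAD) / (1 - P_GOOD))
        if n >= MIN_N:
            if llr <= LOWER:
                return "pass"
            if llr >= UPPER:
                return "fail"
        if n >= MAX_N:
            break
    return "undecided"

print(cop_decision([False] * 5))   # five clean vehicles -> 'pass'
print(cop_decision([True] * 10))   # a run of failures -> 'fail'
```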
Hope that helps.
 
.... which is far closer to what Greg was describing than I thought it was going to be. The percentages still seem odd to me (but then maybe that's just another reason why my employer has to pay premium prices for everything).

A.
 
So harmony, what part of the stats don't you understand? The underlying maths is the binomial distribution, if you need to look it up.

Cheers

Greg Locock

Please see FAQ731-376 for tips on how to make the best use of Eng-Tips.
 