Eng-Tips: Intelligent Work Forums for Engineering Professionals

Decimal place inspection 2

Status
Not open for further replies.

weabow (Industrial) · US · May 2, 2005
If I have a drawing that reads 1.105 +/- .001 and the inspection is done with a precision instrument that shows the result to be 1.1063, is this nonconforming?
What if the result is 1.1065, or 1.1069? Where is the line drawn? Do you stop at the 3rd decimal place as the drawing shows, or if an inspection result shows the dimension should be rounded to the next number, should that be done? I'd appreciate some input and opinions.
 
weabow-
Imagine if the "precision instrument" showed only three digits to the right of the decimal point instead of four. Rest assured that the instrument would automatically round the answer to the nearest third digit. Rounding conventions certainly apply and are universally applied as follows:
1.1063 rounds to 1.106 (conforming)
1.1065 rounds to 1.106 (conforming)
1.1069 rounds to 1.107 (non-conforming).
If the fourth digit is 5 and the preceding digit is even, then the five is rounded down. If the fourth digit is 5 and the preceding digit is odd, then the five is rounded up (the "round half to even" convention). For example,
.1225 rounds to .122
.1875 rounds to .188
and so forth.
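As a quick way to check the examples above: Python's decimal module implements exactly this round-half-to-even convention, so the table can be reproduced directly (passing readings as strings avoids binary floating-point artifacts in values like .1225):

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_half_even(reading: str, places: str = "0.001") -> Decimal:
    """Round a reading to the drawing's decimal places, half-to-even."""
    return Decimal(reading).quantize(Decimal(places), rounding=ROUND_HALF_EVEN)

# 1.1063 -> 1.106, 1.1065 -> 1.106, 1.1069 -> 1.107
# 0.1225 -> 0.122, 0.1875 -> 0.188
for reading in ["1.1063", "1.1065", "1.1069", "0.1225", "0.1875"]:
    print(reading, "->", round_half_even(reading))
```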

Hope this helps!




Tunalover
 
Thank you Tunalover, but I guess I wasn't completely clear. Rounding numbers isn't what I'm having difficulty with; it's a debate over inspection readings. If the drawing says 1.105 +/- .001 and the instrument reads 1.1068, I would automatically round up. However, other engineers here believe that if the drawing says 1.105 +/- .001 you don't read beyond the 3rd decimal place, period.
I was hoping for some specification rules that could help me resolve this.
Thanks!
 
Weabow, I don't have a definitive answer for you, but given that the tolerance is +/- .001, I would have to assume that the mating part is similarly closely toleranced. I assure you that when the engineers did their tolerance stackup study (they did do a study and didn't just pick a tolerance out of the air, right?) they assumed 1.106 exactly was the upper limit. (I would never assume a tolerance added onto a stated tolerance based on the number of decimal places shown.) Anything larger than that could invalidate the study. Suppose the following scenario:
The design tolerance study assumes:
the pin is 1.105+/-.001
the hole is 1.108 +/-.001

that means we could have a clearance range of .0050 max (1.109 - 1.104) and .0010 min.

Now what happens when the pin that measured 1.1063 is used in this assembly?
You have a clearance range of .0007 min to .0027 max.
This is assuming that the mating part is not similarly undersized. (Remember, O.D.'s tend to be on the high side and holes tend to be on the low side due to the machinists trying to give themselves the best chance of turning out parts that meet tolerance.) I would contend that .0007 clearance may be just a tad too tight for proper operation depending on the function of the part.
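As a sanity check, the clearance arithmetic above can be sketched in a few lines (a minimal Python sketch using the stated limits):

```python
def clearance(hole_dia: float, pin_dia: float) -> float:
    """Diametral clearance between a hole and a pin."""
    return hole_dia - pin_dia

# design limits: pin 1.105 +/- .001, hole 1.108 +/- .001
pin_min, pin_max = 1.104, 1.106
hole_min, hole_max = 1.107, 1.109
print(f"design clearance: {clearance(hole_min, pin_max):.4f} min, "
      f"{clearance(hole_max, pin_min):.4f} max")   # .0010 min, .0050 max

# the pin that measured 1.1063
print(f"oversize pin:     {clearance(hole_min, 1.1063):.4f} min, "
      f"{clearance(hole_max, 1.1063):.4f} max")    # .0007 min, .0027 max
```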

The part in my opinion is non-conforming. It may still be fit for use depending on the function of the part (which may mean that the engineer should revisit his choice of tolerances), but until that is reviewed, I would not assume that to be the case.
 
weabow,
The rule in measurement is that the measuring device must have resolution 10 times finer than your smallest tolerance. Thus the ability to read to .0002 is required by your +/- .001 (.002 total) tolerance.

The gage error is normally given as 1/2 of your smallest unit. The reading error is also 1/2 of your smallest unit. If your gage had a resolution of .0002, your gage error would be .0001 and the reading error .0001. The combined error is the square root of the sum of the squares of the errors, which calculates to .00014.

When a part has a tolerance of 1.105+/-.001 the theoretical maximum and minimum size including errors would be 1.10386/1.10614. Your part is out of tolerance if the gage reads 1.1065. You have a valid argument if the gage reads 1.1061 but nothing larger.
 
According to ANSI Y14.5M-1994 (and 1982), Paragraph 2.4 on page 25

"All limits are absolute. Dimensional limits, regardless of the number of decimal places, are used as if they were continued with zeros.

Examples:
12.2 means 12.20...0
12.0 means 12.00...0
12.01 means 12.010...0

To determine conformance within limits, the measured value is compared directly with the specified value and any deviation outside the specified limiting value signifies nonconformance with the limits."
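Taken literally, the Y14.5 rule reduces conformance to a direct comparison against the absolute limits, with no rounding of the measured value; a minimal sketch:

```python
def conforms(measured: float, nominal: float, tol: float) -> bool:
    """Absolute-limits check per Y14.5: compare the measured value directly
    against the limits; the limits behave as if continued with zeros."""
    return nominal - tol <= measured <= nominal + tol

print(conforms(1.1058, 1.105, 0.001))  # True
print(conforms(1.1063, 1.105, 0.001))  # False: 1.1063 exceeds 1.106
```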


Drk
 
BillPSU: You wrote: If your gage had a resolution of .0002 your gage error would be .0001 and the reading error .0001. The combined error is the square root of the sum of the squares of the errors, which calculates to .00014.

I guess I don't know what I'm doing because if I add .0001 to .0001 and square it I get: .0141421. If I square that I get .1189205. I must be reading what you've written incorrectly. Can you clarify?
 
weabow

I didn't make myself clear but the following should clear it up.

(.0001^2 + .0001^2)^.5 = .00014
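That is the standard root-sum-square (RSS) combination of independent error contributions; as a sketch:

```python
import math

def combined_error(*errors: float) -> float:
    """Root-sum-square combination of independent error contributions."""
    return math.sqrt(sum(e * e for e in errors))

gage_error = 0.0001     # half of the .0002 resolution
reading_error = 0.0001  # half of the .0002 resolution
print(f"{combined_error(gage_error, reading_error):.5f}")  # 0.00014
```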
 
weabow,

If what you are trying to measure is that critical, then in addition to the gage error that BillPSU points out, you should also consider operator error by conducting a formal gage R&R (repeatability and reproducibility) study. For key dimensions, a total gage R&R of <10% of tolerance is, as a rule of thumb, considered adequate. There are various types of studies that allow you to delve into the different factors involved in total gage error. Some statistical software programs provide R&R study modules, or you could set up your own study.
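As a rough illustration only (the real AIAG MSA procedure uses the average-and-range or ANOVA method), a %GRR-style figure can be sketched from repeated readings per operator; the function and the data below are made up for the example:

```python
import statistics

def percent_grr(readings_by_operator: list[list[float]], tolerance: float) -> float:
    """Very rough %GRR sketch: repeatability (EV) pooled from each operator's
    spread, reproducibility (AV) from the spread of operator means, combined
    by root-sum-square and expressed as a 6-sigma share of the tolerance band.
    Illustrative only, not the full AIAG MSA average-and-range method."""
    ev = statistics.mean(statistics.pstdev(r) for r in readings_by_operator)
    av = statistics.pstdev([statistics.mean(r) for r in readings_by_operator])
    grr = (ev ** 2 + av ** 2) ** 0.5
    return 100.0 * 6.0 * grr / tolerance

# hypothetical repeat readings of one part by two operators
readings = [[1.1051, 1.1052, 1.1051], [1.1053, 1.1052, 1.1053]]
print(round(percent_grr(readings, 0.002), 1))  # ~24.5, i.e. a marginal gage
```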


I have found publication MSA-3 to be a useful resource.

Regards,
 
Here's one that really happened -

I watched a QC inspector measure a large diameter with a Tape Measure (wind-up scale if you live in Alabama). After carefully moving the tape back and forth (I assumed looking for the max dimension) he made an entry on the inspection report form of 62.4375".

When I questioned this, I was told that he was using a "Calibrated" tape (he showed me the sticker on it) and that the decimal of 7/16 is equal to 0.4375, which is what he put on the form.

I started looking for the Candid Camera crew but alas, this was just the standard procedure for parts bigger than the available micrometers........

Racing ... because other sports only require one ball...
 
There is a rule we use in the gage business called the "10% rule". It says that to have proper discrimination in my gage, my resolution should be no larger than 10% of my total tolerance. Nowadays, since we need to pass a GR&R study, I find 5% works better. Applying that thinking to your situation, I would recommend an indicator with a resolution of .0001. Since this resolution is needed for proper discrimination, a reading of 1.1061 would be out of tolerance and the part should be rejected. I hope this helps.
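The 10% (or 5%) discrimination rule is just a fraction of the total tolerance band; a trivial sketch:

```python
def required_resolution(total_tolerance: float, fraction: float = 0.10) -> float:
    """Gage discrimination rule of thumb: resolution should be no larger
    than some fraction (10%, or 5% for GR&R headroom) of the tolerance band."""
    return fraction * total_tolerance

band = 0.002  # 1.105 +/- .001
print(round(required_resolution(band), 6))        # 0.0002
print(round(required_resolution(band, 0.05), 6))  # 0.0001
```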
 
1.105 +/- .001

If you measure anything outside of the range 1.104-1.106, the part is no good. There is no square-this divided by the cube-of-that. Bottom line: if you measured 1.1063, you are out of tolerance. If the engineer intended 1.1063 to qualify as a good part, the tolerance would have allowed for it. If someone made the part for me I would return it, and I would receive a 100% full refund and possibly a deviated price for the replacement, provided I still wanted to buy from them.

That is the equivalent of saying that if the contract said make it 12 feet +/- 1 foot and you measure 13.2 feet, the job is no good.

The opposite end of the spectrum, if I require a press fit between parts and I require the shaft diameter to be 1.030 +/- .0002 and the part is 1.031 it is no good period.

Fill what's empty. Empty what's full. And scratch where it itches.
 
aamoroso
You are absolutely correct in your statements; however, there are other considerations when you actually measure the part. Let's take your 1.0300 +/- .0002, using a micrometer with a .0001 vernier on the thimble. The measurement reads 1.0302; the part is good. Now measure the same part on a super mike with .00001 discrimination. The part reads 1.03025; the part is bad. The problem is the discrimination of the measuring device and the accuracy with which a human reads the device. Even the 1.03025 reading has inaccuracy in it.
 
Precision measurements get expensive. Yes you may have a tool that has high precision and an acceptable gage R&R but how accurate is its calibration? What are the environmental conditions it and the part being measured are in? What is the generational traceability of the standard used to calibrate or verify the gage vs the appropriate national standard?

All rhetorical questions but they should show that really the only absolute is whether or not the part actually works as intended. The accuracy of any measurement has grey areas that really end up providing you only with various levels of confidence in your result. This level of confidence tends to be inversely proportional to the tolerance spread.

Regards,
 
BillPSU

I agree that the issue may be splitting hairs (as my boss loves to say), but the fact of the matter is that if the part is beyond the allowable tolerance, and that can be substantiated by the end user, then the part is not good.

The error of measurement is the manufacturer's problem to deal with, not the customer's. If I design a shaft to be .999 +/- .001 and a bore to be 1.001 +/- .001, my parts should always go together, with a line-to-line fit at worst. If the shaft measured 1.0002, I would have a worst-case interference of .0002; if I wanted that, I would have asked for that. Try this scene: a block is toleranced 36" +0/-1". Now try getting a block that is 36.02" through an opening that is 36". Just because it required measuring with calipers to define 36.02 does not make the part OK.

Fill what's empty. Empty what's full. And scratch where it itches.
 
BTW, I gave you a star for your enthusiasm and efforts on that. The more I think about it, the more examples I can come up with. Ultimately, though, if it is OK to pass product that is clearly measured beyond the allowable limits, the next answer would be that the designer or customer should include more decimal places in the tolerance; then you will charge more to make the part and they will buy it elsewhere.

PSE, as for the calibrations: if a company performing precision measurements has calibrations done to anything less than a NIST-traceable cert, they are gambling. The point is, the manufacturer has to eat the margin of error, not the customer; the customer is not claiming they can do the job, the mfg is.

Fill what's empty. Empty what's full. And scratch where it itches.
 
aamoroso,

I agree with you that the manufacturer is responsible for controlling their measurement errors such that the customer still gets conforming parts. My point is to show that even when you read the number off your gage, you have to realize that it is not an absolute. I have had situations where I needed to let parts sit in my controlled measurement environment for a day or so before measuring them and handle them carefully lest the heat from my hands distort the results (measurements were in the nanometer range).

As for traceability to a national standard, each generation of "master" that is removed from the actual standard contains measurement uncertainties that can stack. NIST has their standard, from there they use working standards for their gages (first generation error). If you are lucky, you might be able to get your master read directly from these gages. This would only give you a first and second generation error. If you have working masters based off the NIST traceable... (as you can see it can go on and on). You also need to environmentally control the gages and references so that you can retain confidence in the result. Reading a reference at 25-30 deg C will not give the same results as when it was NIST certified at 20 deg C. As a result, you can encounter situations where two areas are claiming they are both NIST traceable yet are getting different measurement results from the same parts.

Regards,
 
All good points, PSE. You raise a fair concern about environments, but in general, quality control and tolerance verification should be assumed to take place in controlled environments at 68 °F (20 °C), with calibrated controls, unless otherwise required by the design.

Fill what's empty. Empty what's full. And scratch where it itches.
 
