
Capability Studies on Positional Tolerances (RFS)

Status
Not open for further replies.

Brandy7

Automotive
Apr 27, 2007
33
Capability Studies on Positional Tolerances
Position with RFS (regardless of feature size) can be used but why not Maximum Material Condition (MMC)?
 
If one has a positional tolerance at MMC, it reflects a virtual condition boundary and not the centers per ASME Y14.5M-94.

What would affect the virtual condition boundary? Certainly the size of the feature and its location (position) from true position, but also the shape of the feature. We could play with stats on the centres but we could not include shape.

One could have a calculated tolerance (actual to virtual) and have a centre of a round hole (as an example) just inside the calculated tolerance. We still may not get a checking pin in this hole once we set the part in a checking fixture. Why? The shape of the hole may not be round.
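The boundary arithmetic above can be sketched in a few lines; the sizes below are made up purely for illustration:

```python
# Boundary arithmetic for a hole positioned at MMC; all sizes are hypothetical.

def virtual_condition_hole(mmc_size, pos_tol):
    """Fixed gauge boundary: MMC size minus the positional tolerance."""
    return mmc_size - pos_tol

def available_tolerance(actual_size, mmc_size, pos_tol):
    """Stated tolerance plus the hole's departure from MMC."""
    return pos_tol + (actual_size - mmc_size)

print(virtual_condition_hole(10.0, 0.5))               # 9.5 checking-pin diameter
print(round(available_tolerance(10.3, 10.0, 0.5), 3))  # 0.8 calculated (actual to virtual)
```

As the post notes, this centre-based arithmetic still says nothing about the hole's form, so a checking pin can fail to enter even when the numbers pass.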

If the secondary and tertiary datums were also at MMC, you are in a real mess now.

Use checking fixtures for positional tolerances at MMC. Call them "Major Functional Characteristics" and keep away from faking the stats if you can.

Dave D.
 
Capability predictions can be done for both position tolerances with constant value specification limits (RFS) and position specifications with variable specification limits (MMC / LMC).

The problem with doing these predictions with variable specification limits is that the basic formulas for Cpk-Ppk are designed only for constant value limits. That problem can be fixed by changing the equation to include the variable portion of tolerance and its inherent variation. The variable portion of tolerance is a distribution itself. It is actually the distribution for size and if that distribution is considered properly in relation to the distribution for the position deviation then the interference of the two distributions represents the “probability of a defect” for a variable limit tolerance.

If you were to make a histogram for a position deviation distribution you would likely see a skewed distribution with greater frequencies of smaller deviations crowding the lower boundary (zero position) and fewer frequencies of larger deviations approaching or even exceeding the USL. To make that histogram reflective of a variable tolerance you would need to show the distribution for size on the same graph. To do that you must recognize that the (zero position) boundary corresponds to the “virtual condition size”, the USL corresponds to the size limit that represents the minimum variable tolerance, and the position tolerance extends to the value for size that represents the maximum variable tolerance or the “resultant condition”.

When you align the legend values for size and position next to each other you will see that the values ascend and descend correspondingly for holes with tolerances specified at MMC and shafts at LMC. Conversely, the values ascend and descend in opposite directions for holes at LMC and shafts at MMC.

If you have done the histogram properly you will see two distributions side by side, one for position (typically skewed) and one for size (typically normal). The area of the intersection of those two distributions reflects the probability of a defect for a “variable limit” tolerance and if both distributions were “normal” it could be accurately estimated by using the classic reliability formula for strength vs. stress.
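The classic strength-vs-stress reliability estimate just mentioned can be sketched in a few lines. Assuming (only for illustration) that both the available tolerance and the position deviation are normal, the interference of the two distributions is the normal tail beyond their separation in combined sigmas. The numbers here are invented:

```python
import math

def p_defect(mu_tol, sd_tol, mu_dev, sd_dev):
    """Normal-tail overlap of the tolerance (strength) and deviation (stress)
    distributions: Phi(-z) with z = (mu_tol - mu_dev)/sqrt(sd_tol^2 + sd_dev^2)."""
    z = (mu_tol - mu_dev) / math.hypot(sd_tol, sd_dev)
    return 0.5 * math.erfc(z / math.sqrt(2))

# Invented example: mean available tolerance 0.80 (sd 0.05) vs. mean
# position deviation 0.40 (sd 0.10), all diametral values in mm:
p = p_defect(0.80, 0.05, 0.40, 0.10)
print(f"probability of a defect ~ {p:.1e}")
```

In practice the position deviation is skewed rather than normal, which is exactly why the post calls the normal-normal case only an estimate.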

I posted a presentation that I gave at a conference last year to another forum. It is toward the end of the 2nd page of comments, and it explains this process better with pictures (ppkmmc.pdf) and you will find two spreadsheets there (ppkmmcxy.xls & ppkmmc.xls) that graph and perform these calculations with your data.

BTW

Dave, I agree with some of your comments and disagree with others.

You are correct when you say that “a positional tolerance at MMC reflects a virtual condition boundary” and incorrect when you say “not centers”. Read on in the standard… section 5.3.2.1 (a) & (b).

Again you are correct when you caution that the capability analysis does not address “Datum Shift.” To correctly address datum feature tolerance modifiers, all features identically referenced from the same “mobile” datum reference must be considered simultaneously. Therefore the freedom to shift the datum reference cannot be applied in unique magnitudes and directions to the various individual features. Attribute “go position gages” (when they encompass all features that have these simultaneous requirements) prevent this freedom from being applied improperly. To do it analytically, from size and coordinate data, for numerous features that each have unique amounts of variable position tolerance is, as you say, “a real mess”.

The reason that Attribute “go position gages” are undesirable for manufacturing is that with current customer requirements for quality or capability demonstration the number of samples required for an attribute check to demonstrate conformance to the specification is astronomical. If a customer expects that the probability of a defect reflects 1.33 Cpk then there can be no more than 1 defect in 31,574 parts and if that requirement is 1.67 Cpk then there can be no more than 1 defect in 3,488,555 parts. Consider the sample sizes required for these frequencies with some minimum repetition of non-conformance to make valid predictions. That is why a continuous data capability study is preferred for these tolerances.
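Those defect frequencies follow directly from the one-sided normal tail (Cpk = z/3), which a couple of lines can verify:

```python
import math

def parts_per_defect(cpk):
    """One defect per N parts implied by a one-sided Cpk (z = 3 * Cpk)."""
    z = 3.0 * cpk
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # normal tail beyond the limit
    return 1.0 / tail

print(round(parts_per_defect(4 / 3)))  # roughly 31,600 parts per defect
print(round(parts_per_defect(5 / 3)))  # roughly 3.5 million parts per defect
```

These match the 1-in-31,574 and 1-in-3,488,555 figures quoted in the post.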

Paul F. Jackson
 
Paul:

Loved the presentation that you sent me. It is kind of neat, but I still do not agree with you. I have worked in the quality field forever, though, so you know where I am coming from.

You are correct that 5.3.2.1 reflects positional at MMC using a centre line, but the note at the bottom indicates that where shape may not be perfect (extreme form deviation) "the surface interpretation shall take precedence". Are the holes perfectly round? If not, then the surface condition supersedes centres. Are they confirmed prior to the study?

Sure we can calculate the actual diametrical tolerance zone at MMC. It is simply the difference between the actual diameter and the virtual condition diameter.

Are +3 and -3 the same??? If the true position is 0, then each reading is 3 units away from it, but are they the same? No. In statistics we look for repetition (an estimate of the standard deviation) and then the aim of the process.

Datum holes referenced at MMC are, in most cases, changed to RFS so that we can measure the position of each feature. I don't think that is correct either. We lose tolerance.

How do we reflect the position of the holes relative to primary datum A? Do we check each hole at the bottom or top and report the worst case situation? No, we don't. We are only interested in the X and Y axes.

If the holes were slots, how would the centres confirm the orientation? They don't.

I love stats, and used in the correct situation they really do help, but on positional at MMC it's more show than go. You've got to have a Ppk of at least 1.67 or greater, so everyone does.

Dave D.
 
So Brandy7 now you have insight into why we can’t predict capability with variable tolerances! It’s complicated.

The problem that you proposed still lingers without common resolve. Those that understand dimensioning and tolerancing see finite limits and can articulate their boundaries with all due respect. They know that if you just relied upon “functional gauging” your problems would be solved. Those that understand uncertainty see variation as infinite and can prove that if you employ their equations with all due caution you can predict conformance. They know that if you just analyze the data according to their equations your problems will be solved.

The problem that I see is that few statisticians and software developers understand dimensioning and tolerancing well enough to realize that the limits can themselves be variable and few quality practitioners realize that a sometimes significant portion of tolerance is being ignored when they compute capability. Without popular recognition that there is a problem there is no need for a cure. If you study and employ the resources that I suggested you will still find yourself in a predicament because you, as a producer, will find yourself attempting to convince your customer that his analysis of the data is flawed. Good luck!

I developed a method to include “datum shift” in the statistical analysis by considering each feature’s residual or remaining tolerance in a simultaneous requirement pattern, and then iterating shifts and rotations of the DRF respective of the actual datum feature size limits to maximize the smallest residual, but as you can see it is even more complicated. If we can’t convince statisticians and software developers that measurements can have “variable tolerance limits,” and we can’t convince GD&T people that “attribute gauging” is a highly inefficient way to control the quality of a production process, then why pursue more complicated stuff?

There is another dilemma that you should be aware of… it is that variable tolerance modifiers are often applied to features that they shouldn’t be. You can test whether an application of a variable tolerance modifier is functional by answering a simple question. Does the function worsen as this feature is permitted to deviate from its ideal location or orientation? If the answer is YES then the modifier should be RFS, and if NO then MMC or LMC may be appropriate. From assembly to gauging to clearance stack calculations many will claim when and where MMC or LMC should be used, but I have found that often the modifiers are selected more out of habit to support “attribute gauging” than from a critical analysis of function. “Functional gauges” are just “attribute gauges” when the tolerance modifiers don’t reflect the functional liabilities of variation. So when “bonus” tolerance is ignored in a capability analysis for a feature that shouldn’t functionally have a variable tolerance, that is a good thing. Conversely, when it is applied in the capability analysis for a feature that shouldn’t functionally have a variable tolerance, that is a bad thing.

Paul
 
Thank you all for the explanations and comments. As a Design Checker I try not to use MMC as much as possible, but the suppliers are always trying to get every bit of bonus or extra shift tolerance. I give it to them on clearance holes, etc. In addition to your test question ("Does the function worsen as this feature is permitted to deviate from its ideal location or orientation?"), I will use RFS on all Cpk-Ppk or critical characteristics. I will keep adding to this list as we go along. Shift I usually do not use because it really is not "Design Intent," but suppliers are always after that little extra tolerance. Any comments or additions to the list would be appreciated.
Thanks Brandy
 
Brandy:

There is a reason why the suppliers are asking for MMC on positional tolerances and also on the datums if the datums are holes.

They want to develop a checking fixture which simulates the assembled state in the worst possible case. Years ago, we used to try out nonconforming products on the assembly line and if they worked, the HOLD tag was taken off and the parts were shipped. Unfortunately, we didn't know how the mating parts were made.

Now, the checking fixture is the mating part made as the worst possible state.

If you have holes as datums, what goes in those holes? If it is a tapered pin locating on the datum holes, then go RFS. I don't think so though. I think that there are bolts or shafts that go in the holes and they are cylindrical in shape.

If that is the case, then the datum holes should be referenced at MMC to simulate the assembly. If we make the datum holes larger, there would be accrued tolerances just like on the assembly line.

There is no "hocus - pocus" here. The gauge now is made to simulate assembly. Once a gauge is made, then the feature would be checked on the shop floor hourly by the Operator.

Dave D.
 
Brandy and Dave,

I am not saying that it is inappropriate to use variable tolerance modifiers; I am just saying that their use should reflect the functional liabilities to variation. Let me give you a couple of examples to illustrate my point.

One of the first persons to inquire about variable tolerance capability was a black-belt who was trying to solve a piston leakage problem on a transmission clutch cylinder assembly. The outer cylinder surface is part of a roll-formed (grobed) sheet metal shell. It is welded to an inner turned hub. When assembled there is an inner OD cylinder surface on the hub and an outer ID cylinder surface on the shell that will function with a flat aluminum piston that has both inner and outer seals. Typically the specification has one of the cylinder surfaces serving as the datum feature for the position of the other. Both cylinders have size tolerances and unfortunately they historically have MMC modifiers for both the datum feature and the measured feature. The gauge that they had to verify conformance was sized to the maximum inner cylinder diameter and the virtual condition of the outer cylinder diameter.

The black-belt called me and asked how to predict conformance of the tolerances with continuous data. I told him that if this was simply a static fit problem with no functional consequences then he could use the formulas and predict conformance just like the attribute gage that they had. But… I told him that the seals react differently to the problem. If the I.D. size on the outer cylinder was big and the O.D. size on the inner cylinder was small, allowing maximum variable tolerances, then the seals could experience vast differences in static pinch from one side to the other. I told him that the use of MMC modifiers with these positional tolerances was inconsistent with the function of the assembly.

To further exacerbate the issue he cited one of my earlier papers where my equations claimed that when there is a one-to-one relationship between the datum feature and the referenced feature, both with variable tolerance modifiers and no other simultaneous requirements, then the datum shift can be included in the equation with its mean bonus and squared stdev (variance). It’s true but it is too susceptible to misuse if the conditions are not proper.

The callout with its variable tolerances insured assembly of the piston without stacked physical interference but it totally discounted function. They were running that stupid “functional gauge” as a containment measure until the print got fixed by removing the variable tolerance modifiers. Then they finally focused on the problem.

One more example to illustrate LMC…which many designers feel is un-gauge-able and therefore shouldn’t exist.

In a transmission valve body there is a separator plate with precision orifices/holes that serve as a connection between the labyrinths of the upper and lower bodies (the hydraulic equivalent to thru circuit connections of a double sided electronic circuit board). Some of the holes permit clamping bolts to pass thru and the others permit pressurized oil to pass thru. The ones that permit the fasteners to pass thru are variable tolerance problems that match MMC conditions. If the hole is bigger and more off location it will still permit the fastener to pass thru without interference. The others (the oil connection orifices) are sized to allow maximum or regulated flow without encroaching on the boundaries of the labyrinth. One problem is position at MMC while the other is position at LMC.

The supplier has a precision punch die that installs all holes and orifices simultaneously. If all of the holes are toleranced with MMC modifiers (which they commonly are) then the “functional gauge” will look exactly like the die that produces the holes (at their virtual conditions) and if he runs his punch sizes big they will be less likely to break, more likely to fit the “functional gage” and “life is good.”

Function however is compromised when “bigger-is-better” orifices are permitted additional location tolerance because they are “bigger”: THEY ARE MORE LIKELY TO INTERFERE WITH THE LABYRINTH WALLS AND GASKET BOUNDARIES! The designers have to predict the interference in their stacks, which reduces the possible size of the orifices, just because they are toleranced with MMC modifiers that permit “functional gauging.” What a crock!

I counseled them to use the appropriate functional modifiers and they said that it couldn’t be gauged. I told them that the attribute gauge would look exactly like the gasket with its orifice boundaries at their virtual condition (an overlay if you will). Why must we use an overlay attribute gauge? Because we cannot predict capability of variable tolerances with continuous data! Let’s fix this problem by predicting conformance of variable tolerances with continuous data.

One more (I’m on a roll) a boring tool and subsequent reamer fashion a coaxial series of cylinders (lands) in that same valve body. The position tolerance for every land is 0.01mm @ MMC, the datum for the measurement is the axis of all of the coaxial lands at MMC. The size tolerance for each of the stepped lands is 0.02mm. These specifications are typical of automotive valve bores. 30 microns total tolerance from the virtual condition to the resultant condition. THIS IS A CLASSIC VARIABLE TOLERANCE PROBLEM. Most of the OEM suppliers use a plug gauge (resembling the bore tolerances at their virtual conditions). But there is bore distortion from fastener clamp loads and bending from surface warpage. The question is??? What size should the reamers be set to…to insure uninhibited free movement of the valves with bore distortion while limiting leakage for calibration stability? You will find that optimization calculation in the spreadsheets and presentation materials that I referenced earlier.

Paul

BTW, I am retired but looking for work if you need assistance write me. mailto:spcandgdtman@yahoo.com
 
Capability Studies on Positional Tolerances (RFS)
To the Next Level

The same question as above, except at the next drawing level: the sub-assembly drawing. I have a detailed part (Leg) to attach to a Base part (welded). I will use a reference hole dia. (9) to which I will attach a GD&T position callout of 0.5, but I cannot make it MMC because Ref. (9) does not have a tolerance specified at this drawing, and thus no bonus tolerance, because it is detailed on the lower-level drawing. Do I have to live with RFS or can I legally add a tolerance to the reference dia. (9±0.02)? It is now a reference dim. with a tolerance and also fully dimensioned on the detail with a tolerance. Can you give this same hole an MMC bonus twice, once on the detail and once on the sub-assy? Hope you can follow what I'm getting at. Thanks
 
Bonus tolerance?? That does not exist in the ASME Y14.5M-94 standard.

What goes in the 9 mm diameter hole? Is it a tooling hole or does it have a function? If something cylindrical goes in this hole at assembly, then I would suggest MMC on the datum. If it is a tooling hole, then RFS would be sufficient.

If I were looking at this drawing and it had a positional tolerance at MMC referencing a 9 mm dia. hole on another part at MMC, I would make a checking fixture simulating assembly. One would have to go to the detail drawing of the mating part to find the tolerance of the 9 mm diameter hole but that is OK. There is nothing wrong with that.

Note: I have seen Design people arguing with Mfg. Engineering or Quality who want MMC on the drawing. Sometimes the Design person would say something like "I will put MMC on the drawing but the dia. tolerance of 0.4 is now 0.3". That is just plain wrong.

Dave D.
 
Brandy, you mentioned, "I have a detailed part (Leg) to attach to a Base part (welded)". I suggest that you start a new thread or search the archives for threads that discuss "inseparable assemblies". I would suspect that there are some pretty stout opinions in this forum on how inseparable assemblies ought to be toleranced!

In the fundamental rules of the dimensioning and tolerancing standard Y14.5 you'll find statements that say that "Each necessary dimension of an end-product shall be shown" and "Dimensions and tolerances apply only at the level where they are specified". Look carefully at paragraphs C and N.

Many believe that a reference dimension must only refer to a toleranced dimension on the same drawing. Others think that if a tolerance that is stated at a sub level is restated on an "inseparable assembly" level, then the feature is "dual dimensioned".

Start another thread to talk about these issues; this is not the "same question as above".

Paul
 
dingy2 (Mechanical) Dave D.

Y14.5 members are tossing the bonus term around; we all use it. As far as I can tell it can be traced back to Lowell Foster (one of the fathers of GD&T) in the late 50s or early 60s. He used it in seminars. Although it is not found in the standard, it is a handy term. I will continue to use the term because people seem to grasp the concept quicker when I do. Thanks for all your comments.
 
I agree. While the term "bonus tolerance" may not be in the standard, the concept is, and is quite useful.
 
Use RFS vs. MMC for CAPABILITY
MMC should only be used on clearance holes such as screw holes, where you can do a simple attribute gage check to determine if the part will fit its mating part. This is good for determining part acceptance to the print; however, it is not good for determining capability. The sample size would be astronomical to do a capability study based on attribute data.

Capability must be calculated based on variable data RFS. Even when a datum is referenced at MMC, the CMM fixture must locate on the datum feature RFS to insure proper measurement repeatability. The GM, DCX, and FORD gage standards state that all CMM holding fixtures must use RFS locating pins for datum features regardless of how they are delineated on the print.

When doing RSS statistical stacks, five requirements must be met to insure the validity of the stack-up:
1. The process must be in control
2. The spread of the process must equal the tolerance range
3. The process must be centered to the specification range: if you multiply the RSS answer by 1.5, then your process center can vary from the specification center by 1.5 standard deviations (sigma)
4. There must be at least 5 dimensions
5. The dimensions (tolerances) must be independent: bonus and shift are not independent tolerances; they are dependent on the size of the feature.

Having at least 5 independent tolerances is a requirement based on the laws of probability to insure sound statistical results. Therefore, all capability studies will have two separate calculations: one for the size of the feature, and a separate calculation for the true position tolerance RFS.

The purpose of a capability study is to determine the capability of the process and not part acceptance. In the processing of a typical machined part the hole size is independent of its location and its location is independent of its size. A part may have good capability for its size but not for its location. The size of a hole is only affected by the size and wear of its cutting tool, not the location of the cutting tool. Therefore, if the location is out, we will then want to make the necessary adjustments to the process to bring the location of the hole into control without affecting the size of the hole. Remember "the purpose of a capability study?"
Also, MMC is only used to determine part acceptance and therefore, should only be verified on an attribute gage, never on a CMM. All CMM inspections of parts should treat datums as RFS and never calculate Datum Shift based on MMC for part acceptance. Parts that reference datums at MMC are not legitimate applications for MMC.

One more point to be aware of: all CMM software incorrectly calculates the Cp, Cpk, Pp, Ppk, range, and standard deviation for true position. Quality engineers should calculate the variation for each axis to get reliable data.
Submitted on behalf of friend:
Quality Analyst-GD&T/Inspection/Gaging
ASME GD&T Professional-Senior Level

 
Brandy:

Your friend is wrong. Using the X and Y axes does not reflect a diametrical tolerance zone, but at least it does give direction. It also does nothing about the primary datum.

There is no PP or Cp on a unilateral tolerance since there is no lower spec. limit.

Thank goodness the Japanese automakers are not trying to do this. This costs huge $$$$ but we can get pretty data. Of course it is not valid but looks good. It is called the "automotive game" and is played by Ford, GM and Chrysler suppliers.

No more comments from me on this subject.

Dave D.
 
Brandy,

As I told you before “It’s complicated.” There are a lot of well respected authorities in the industry and SPC has come a long way in scrutinizing the product that comes from our processes but the problem of applying predictive statistics to “variable tolerance limits” is rarely understood by experts.

Where to start? Some of what your friend says is true, some of what he says is irrelevant to this problem, and some of what he says is false.

MMC, LMC, and RFS should be declared on product designs according to the features’ function (specifically its functional liabilities to variation from its ideal size, location and/or orientation). I explained this earlier with a few examples.

Attribute gauging is a proper way to check to “variable tolerances” but highly inefficient in determining the “probability of a defect” because as your friend says “The sample size would be astronomical to do a capability study based on attribute data.”

GM, DCX, and FORD have gauging policies (they may refer to them internally as standards but they are not national standards). There is a national gauging standard B89, but it doesn’t address SPC. I debated a corporate technical expert, ad-nauseum, who insisted that the individual coordinates of a position achieve a capability ratio of 1.0. He said that if the individual “in process” coordinate variation could not demonstrate capability to .707 of the diameter position tolerance (the square zone within the circle) then the position capability of 1.33 or higher could not be achieved. If the individual coordinate variation in a process is assumed to be equivalent then there may be some validity to his claim but when it is not (it never is) his policy punishes the process owner by demanding coordinate capability to specifications that were never specified. The tolerance specification is circular… if there is less variation in Y more variation can be tolerated in X.
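The .707 figure is just the square inscribed in the circular zone, and a few lines show both where it comes from and why demanding it punishes the process owner: a point can fall outside the square yet still conform to the diametral tolerance. T here is a hypothetical tolerance value:

```python
import math

T = 0.5                                # hypothetical diametral position tolerance
half_square = T / (2 * math.sqrt(2))   # per-axis limit; the square spans ~0.707*T

# The worst corner of the inscribed square lands right on the circular zone:
diametral = 2 * math.hypot(half_square, half_square)
print(abs(diametral - T) < 1e-9)       # True: the square just fits the circle

# A part can violate the per-axis limit and still conform to the diametral zone:
dx, dy = 0.2, 0.0                      # dx exceeds half_square (~0.177)
print(2 * math.hypot(dx, dy) <= T)     # True: all the deviation is in X
```

This is the point about the circular specification: less variation in Y leaves room for more variation in X, which per-axis capability demands ignore.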

I believe that too much emphasis is placed on capability demonstration in-lieu of process control and variability reduction! The emphasis is largely levied by buyers and their quality arms to insure the “goal posts” of the contract. It does little to make product quality better. I think that the advice given to Asian producers (Dr. Deming) after the war with emphasis “variability reduction” has benefited their business over the long haul.

Tolerance stacks are irrelevant to the question that you originally asked. Whether they are linear, 2D, or 3D, they predict the probability of clearance/interference at a chosen place of the component or assembly. To respond to the five requirements I would say:
1. RSS stacks are typically done in the design stage before there is a process! Process control is indeed necessary for generating reliable process capability predictions but not for design stacks.
2. That is how stacks are done, either in an RSS calculation or in a Monte-Carlo scenario, by varying each of the contributors from its specification high limit to low limit, typically as a normal (Gaussian) distribution.
3. The RSS stack result can be compared to any pre-determined level of risk. If by 1.5 your friend means the “safety factor” that Arthur Bender suggested in his 1968 SAE paper, then the probability of the predicted clearance/interference would reflect +/- 4.5 sigma. If by 1.5 he means the Six Sigma long-term capability factor (touted by Dr. Mikel Harry) to account for process drift, then he is confusing process capability predictions with stack predictions. One analysis uses all of the design tolerance; the other uses the actual process data.
4. He says that at least 5 contributors (dimensions) are required to perform the analysis. This is typical advice for a stack analysis but the variation in the result can be heavily influenced if there are big differences in the tolerance amounts of the contributors so some yank and adjust up or down depending on the difference in the % contribution from each of the contributors. All of this has nothing to do with process capability!
5. Coordinates are independent if they can be controlled independently, just as size is independent if it can be controlled independently by adjusting the size of the tool used to drill, punch, EDM, ream, burnish, swage, or otherwise produce the hole! Size determines the amount of variable position “bonus” tolerance that there is, so if size is independent so is the “bonus tolerance”; they are one and the same.

Your friend says “The purpose of a capability study is to determine the capability of the process and not part acceptance.” This is false! The purpose of a capability study is to predict the “probability of a defect” of the process variation to the product specifications. “In process” predictions of these capability estimations are valid “if and only if” the features’ variation in relation to the specification at the end of the process has not changed! Most manufacturers estimate capability at individual process steps to prevent defects from propagating further in the process, therefore preventing “value added” losses from occurring. If the semi-finished features do not directly relate to the product specification at a given step in the process, surrogate specifications, sometimes with surrogate datum references, are created by the process owner to perform the capability estimation. These “in-process” capability estimations protect the process owner. The end-process capability estimations protect the customer. If the differences in the two are negligible then the “in-process” assumptions are valid, and when they are not, they’re not!

He says that “all CMM software incorrectly calculates the CP, CPK, PP, PPK, range and standard deviation for true position.” Typically CMM software performs a least-squares regression or “best fit” on the data collected for a chosen feature shape or form, although there are optional solutions available like Max Inscribed, Min Circumscribed, etc. It is the add-on statistical analysis software that performs the process capability analysis, and if the parameters are chosen improperly, as they can be with any other software, the prediction can be erroneous. I disagree with your friend that Cpk and Ppk indices are done incorrectly when the tolerance and datum shifts are expressed as RFS, but do agree when they are specified MMC or LMC. The Cpk and Ppk indices often are, or definitely can be, calculated correctly when the parameters (i.e. subgroups and skewed distribution functions) are employed correctly. But I do agree with your friend when he counsels that the individual coordinates and the size should be monitored and controlled separately. Furthermore, I agree with him when he says that the Cp and Pp are incorrectly calculated for geometric location or orientation tolerances. That is what my presentation and spreadsheets explain how to fix.
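For the constant-limit (RFS) case, the usual repair is to compute the diametral deviations yourself and report only a one-sided Ppu, since a unilateral tolerance has no lower limit. A minimal sketch with invented coordinate data, assuming normality for simplicity even though position deviations are typically skewed:

```python
import statistics as stats

def position_deviations(xy_deviations):
    """Diametral position deviation 2*sqrt(dx^2 + dy^2) for each part."""
    return [2 * (dx * dx + dy * dy) ** 0.5 for dx, dy in xy_deviations]

def ppu(deviations, usl):
    """One-sided upper performance index. There is no LSL for a unilateral
    position tolerance, so Pp/Ppl are undefined here. Treating the skewed
    deviation distribution as normal is a simplifying assumption."""
    mu = stats.fmean(deviations)
    sd = stats.stdev(deviations)
    return (usl - mu) / (3 * sd)

# Invented (dx, dy) deviations from basic location, in mm:
data = [(0.02, -0.01), (0.05, 0.03), (-0.03, 0.02), (0.01, 0.04), (0.00, -0.02)]
devs = position_deviations(data)
print(round(ppu(devs, usl=0.25), 2))  # one-sided Ppu against a dia. 0.25 zone
```

A real study would use a skewed distribution fit (and, for MMC/LMC, the variable-limit method described above) rather than this normal shortcut.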

I’ll bet that he hasn’t downloaded or studied the presentation “Ppkmmc.pdf”. It explains how to examine the potential capability “the potential Ppu” of a RFS position tolerance by re-computing the position deviations when the means of the individual coordinate distributions are adjusted to their basic targets. Its greatest benefit comes from the process optimization calculations that show how to minimize defects for variable geometric tolerances by targeting feature size! The method is best illustrated with the spreadsheets by analyzing a variable position tolerance that is specified “Zero at MMC.” You can do that by setting the proper size limit to the virtual condition and putting a zero in the cell for the position tolerance.

Tell him to contact me mailto:spcandgdtman@yahoo.com I would be happy to explain further.

Paul F. Jackson
 
This is a most excellent thread. Thank you Paul Jackson for your very educational posts and your calculation spreadsheets.


Brandy7's friend wrote:

"One more point to be aware of, all CMM software incorrectly calculates the CP, CPK, PP, PPK, range and standard deviation for true position. Quality engineers should calculate the variation for each axis to get reliable data."

Hello Brandy, could you have your friend explain this statement a bit further? I am curious as to the root of the perceived "incorrect calculation" - is it based in wrong statistics formulae (or approach) in ALL the various softwares, or is it based in the common least-square measurement calculation methods of "raw" actual measured features that most CMMs use by default?

Thanks in advance!

FYI, we never use the CMM softwares for stats - we take raw output from them (usually set to least-squares but occasionally max/min) and feed it into a stand-alone reporting software (GDM-3D by Dimensional Control Systems inc) for all statistical calculation and reporting.

- Josh Carpenter, CMM Jedi
 