Measurement Value, Error Range, and Distribution


WhiskeyHammer (Mechanical)
Sep 26, 2013
I'm preparing to do some statistical analysis that relies on instrumentation with a known ±%, specifically optical chronographs and pressure gauges. My question is about how the measurement values are distributed across the error range: is it random, or is it normally distributed?

The type of distribution seems important to the confidence of the final result and to the testing methodology. A random distribution suggests that my confidence will be limited no matter the sample size, so I could use the bare minimum, whereas a normal distribution suggests that my confidence would increase with the sample size, so I would benefit from a much larger sample.
 

I think you are mixing up the terms. "Normal" is one form of a random distribution.

TTFN
faq731-376
7ofakss

Need help writing a question or understanding a reply? forum1529
 
i'd express it a little differently (FWIW) ... you're talking about population distributions ... normal or uniform.

but i think you're confusing "measurement error" and "population distribution" ... measuring "with a known ±%" is an error band for the instrument around the reading; a normal distribution is more about what the operator would read (it depends on the standard deviation).

Quando Omni Flunkus Moritati
 
Might I suggest that a Monte Carlo sim of your experiments with your assumed error distributions will answer your question and teach you much.

Cheers

Greg Locock


New here? Try reading these, they might help FAQ731-376
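
As a concrete (and purely illustrative) sketch of the kind of Monte Carlo sim Greg is suggesting, in Python: assume a hypothetical true value of 100 and a gauge error with a standard deviation of 1 (both made-up numbers), and watch how the scatter of the averaged result shrinks as the number of readings per experiment grows.

import numpy as np

rng = np.random.default_rng(0)

TRUE_VALUE = 100.0   # hypothetical true pressure/velocity
SIGMA = 1.0          # assumed standard deviation of the gauge error
N_TRIALS = 10_000    # simulated repeats of the whole experiment

for n_readings in (3, 10, 30, 100):
    # each row is one experiment: n_readings noisy measurements, then averaged
    readings = rng.normal(TRUE_VALUE, SIGMA, size=(N_TRIALS, n_readings))
    means = readings.mean(axis=1)
    print(f"n = {n_readings:3d}: spread (std dev) of the averaged result = {means.std():.3f}")

The spread of the average falls off roughly as 1/sqrt(n), which is the sense in which more samples buy more confidence; swapping the normal draw for a uniform one gives the same qualitative behaviour.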
 
I probably should have included an illustrative example of what I'm talking about. I didn't realize my technical knowledge was so subpar.

Say I have an instrument that measures with an error of plus-or-minus 2. When making a measurement, I get a value of 100. According to the plus-or-minus error, my actual value exists somewhere in the range of 98-102. What I want to know is whether the actual value has an equal chance of existing at any point in the range (a constant uniform distribution - which I called random) or a higher chance of existing at the measured point and a diminished chance of existing at the extremes (what I referred to as normal).
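
To put rough numbers on the two readings of that ±2 band (using only the values from the example above, and taking σ = 1 if the spec is treated as roughly ±2σ), here is a small Python sketch comparing how much probability each assumption puts close to the indicated value:

import math

reading, half_width = 100.0, 2.0
sigma = half_width / 2.0          # assumed: +/-2 taken as roughly 2 sigma

def normal_cdf(x, mu, s):
    return 0.5 * (1.0 + math.erf((x - mu) / (s * math.sqrt(2.0))))

# probability that the true value lies within +/-0.5 of the reading
p_uniform = 1.0 / (2.0 * half_width)   # flat density 1/4 over [98, 102], so 0.25 for a width-1 slice
p_normal = normal_cdf(reading + 0.5, reading, sigma) - normal_cdf(reading - 0.5, reading, sigma)

print(f"uniform over [98, 102]: P(|true - 100| < 0.5) = {p_uniform:.2f}")   # 0.25
print(f"normal, sigma = 1:      P(|true - 100| < 0.5) = {p_normal:.2f}")    # ~0.38

Under the uniform reading every slice of the band is equally likely; under the normal reading the probability bunches up around the indicated value, which is exactly the distinction being asked about.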
 
In the narrow scope of your question: no, it's not uniform; the probability is generally normally distributed. I think you are reading too much into the specification. ±x is simply equivalent to something like ±2σ; it's neither an absolute limit nor a characterization of the distribution as being uniform.

TTFN
faq731-376
7ofakss

Need help writing a question or understanding a reply? forum1529
 
WH,

The +/- % specification with the instrument is its measurement uncertainty - it's an evaluation of the accuracy and precision that the instrument CAN achieve, usually expressed at the 95% confidence interval. The method used to determine this spec should be available from the instrument manufacturer, including the distribution function they used (there are many more possible PDFs than just normal or uniform, although a normal distribution is often assumed).

For your specific example...if the 95% confidence interval is +/- 2 and the TRUE value is known to be 100, then you should observe a measurement between 98 and 102 for 95% of the observations you make. Obviously, if it is normally distributed you will get a diminishing number of observations the further you get from 100.

It should be noted that there are a lot of factors influencing the observed results that the quoted measurement uncertainty doesn't take into account (calibration, linearity, test method, operators, etc.). A measurement systems analysis would be needed to assess the actual distribution and confidence limits of your observed results.
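
A quick numerical check of that 95% statement, again assuming a true value of 100 and σ = 1 so that ±2 corresponds roughly to the 95% interval (assumed numbers, for illustration only):

import numpy as np

rng = np.random.default_rng(1)
true_value, sigma = 100.0, 1.0          # assumed values, matching the example above
readings = rng.normal(true_value, sigma, size=100_000)

inside = np.abs(readings - true_value) <= 2.0
print(f"fraction of readings within 98..102: {inside.mean():.3f}")   # ~0.954

# counts thin out away from the true value, as described
for lo, hi in [(99.5, 100.5), (100.5, 101.5), (101.5, 102.5)]:
    frac = ((readings >= lo) & (readings < hi)).mean()
    print(f"fraction in [{lo}, {hi}): {frac:.3f}")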
 
isn't there a different statistical test to say "what is the range of true states given a (or several) measurements of the property?"

i mean there is a true pressure being read, does it Have to fall within the error band of a measurement ? (i don't think so)

if you read 99 and have an error of +-2, then you are confident that the value is between 97 and 101. how confident ? this would help set up the normal distribution ('cause if you had 90% confidence, wouldn't this mean 90% of the population is within these bounds ? and so you could determine the standard deviation from normal population distribution statistics).

Quando Omni Flunkus Moritati
 
isn't there a different statistical test to say "what is the range of true states given a (or several) measurements of the property?"

>> There is no statistical "test" other than collecting and histogramming the data. A uniform distribution is not necessarily that distinguishable from a Gaussian distribution unless you collect gobs of data.

i mean there is a true pressure being read, does it Have to fall within the error band of a measurement ? (i don't think so)

>> What value you measure is dependent on a number of error sources, only one of which is the instrument. Even the actual pressure, which is the result of billions of atoms colliding with each other and with the instrument's sensor element, actually follows a Poisson distribution, which is related to "shot" noise; its standard deviation is numerically equal to the square root of the number of individual atomic collisions detected by the sensor. In most cases this results in a signal-to-noise ratio that's in the millions, so the pressure can be considered to be a constant. But for other quantities, and even for low-pressure sensing, the statistical distribution of the actual measurand is significant. In the case where the measurand's noise is significant, the standard errors need to be appropriately combined to get the effective standard error. Given that, a measured value should be within ±2σ essentially 95% of the time. However, that's not to say that there might not be a measurement that's, say, 4σ from the mean value. Of course, the mean value itself has a standard error as well, since it is the average of a bunch of noisy measurements whose standard error is the root sum of squares of the individual standard errors.

if you read 99 and have an error of +-2, then you are confident that the value is between 97 and 101. how confident ? this would help set up the normal distribution ('cause if you had 90% confidence, wouldn't this mean 90% of the population is within these bounds ? and so you could determine the standard deviation from normal population distribution statistics).

>> That appears to be a circular rationale. If you have "an error of ±2" then you, or someone, has already done the analysis, or fudged it. Verification of a measurement, which is what a calibration lab essentially does, simply means that you are comparing your instrument with a "golden" instrument whose noise is substantially lower than that of yours. Assuming that everyone upstream has done their jobs, there is confidence that given an input, the measurement is within ±2σ 95% of the time. However, given a single measurement data point, you cannot absolutely be sure that this single value is within ±2σ of the true value. Only by taking multiple measurements could you achieve confidence in the measured value.


Much of this is highly dependent on how tight the requirement is, and how much test accuracy ratio (TAR) you have. The same ±2 applied to 1000 psi would automatically confer a certain level of confidence that your measurement is reasonably good, as compared to getting a reading of 5 psi.

TTFN
faq731-376
7ofakss

Need help writing a question or understanding a reply? forum1529
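
Two pieces of arithmetic from the reply above, sketched with made-up numbers: the shot-noise signal-to-noise ratio for a large count, and the root-sum-of-squares combination of independent standard errors.

import math

# Shot (Poisson) noise: standard deviation ~ sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
# N is an arbitrary large count, chosen only to show the "millions" scale.
N = 4e12
print(f"SNR for N = {N:.0e} detected events: ~{math.sqrt(N):,.0f}")   # ~2,000,000

# Combining independent error sources: root sum of squares.
sigma_instrument = 1.0      # assumed instrument standard error
sigma_measurand  = 0.3      # assumed measurand (shot-noise) standard error
sigma_effective  = math.sqrt(sigma_instrument**2 + sigma_measurand**2)
print(f"effective standard error: {sigma_effective:.3f}")             # ~1.044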
 
i was trying to get a handle on what confidence the instrument maker gave (with the error band) ... with that i think you can do some statistics on the population. of course it may just reflect that the instrument reads in 4 psi increments.

Quando Omni Flunkus Moritati
 
Instrument makers rarely give you that much information.

TTFN
faq731-376
7ofakss

Need help writing a question or understanding a reply? forum1529
 
seems odd that "Instrument makers rarely give you that much information." ... what does the error range mean (if it's anything more than reading error, ie 1/2 the smallest unit)?

and then how do you do what the OP wants ? (whatever that is, but it sounds like some statistical analysis on a bunch of readings)

Quando Omni Flunkus Moritati
 
You do what Greg suggested, or contract the instrument mfgr. to do so.
 
I agree with Greg regarding a Gage R&R study, as it can show the contributions of operator variation, gage variation, and within-part variation. A second aspect is calibration of the gage itself, based upon how many "generations" the calibration master used for the device is removed from a national standard. The fewer generations removed, the better.
 
"what does the error range mean (if it's anything more than reading error (ie 1/2 the smallest unit)"

It means whatever the original person who wrote it meant. However, those figures are generally derived from analysis, not from direct measurements. The analyses would typically assume Gaussian distributions, but there's no guarantee that they did. Truly proving accuracy is an expensive proposition, since it requires measurement and source standards that must, themselves, have been proven to be substantially more accurate than the instrument being proven. In the US, there is essentially a three-tier system: NIST maintains and proves primary standards and verifies secondary standard accuracies; calibration laboratories and other interested parties maintain secondary standards; and common users generally get a tertiary level of accuracy. Unless you are willing to pay someone, like NIST, to perform a complete test or analysis, you only get some sort of ± value on a datasheet, with possibly a couple of other constraints, but nothing further than that.

"ie 1/2 the smallest unit"

This, in itself, is not even a valid concept, since 1/2 the smallest unit is actually the precision, not the accuracy, i.e., it's an indication of the ability of the system to distinguish two adjacent values as distinct. However, accuracy is often presented as some multiple of the smallest resolution unit. As an example, the basic accuracy of one instrument is specified as 0.0035 + 0.0005 (% of measurement + % of range), where 0.0035 represents 35 multiples of the minimum resolution; on top of that are a fraction of the measurement itself, which represents the shot noise, and a fraction of the total range, which usually represents an overall nonlinearity in the measurement process.
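
Worked through for a hypothetical reading, that sort of spec just adds the two terms. The 5 V reading and 10 V range below are made-up numbers; only the coefficients come from the spec quoted above.

reading = 5.0        # hypothetical reading, volts
full_range = 10.0    # hypothetical range, volts

pct_of_reading = 0.0035 / 100.0   # 0.0035 % of the measurement
pct_of_range   = 0.0005 / 100.0   # 0.0005 % of the range

uncertainty = pct_of_reading * reading + pct_of_range * full_range
print(f"accuracy band: +/- {uncertainty * 1e6:.0f} uV")   # 175 + 50 = 225 uV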

So, the basic instrument noise component is probably Gaussian, as it represents the sum total of all the noise processes within the instrument itself, and being an aggregate almost automatically results in a Gaussian distribution, even if the individual components are uniform or whatever. This is how one can use the sum of 6 uniform distribution noise sources in Excel to create a Gaussian noise source. The fraction of the measurement probably represents the shot noise of the measurand, which is most likely to be a Poisson distribution. The fraction of the measurement range is neither, since nonlinearity is actually fully deterministic, but is typically treated as a Gaussian distributed quantity.
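
That uniform-sum trick is easy to check in Python instead of Excel; this sketch sums six uniform(0, 1) sources, centres the result, and shows it already behaves very nearly like a Gaussian:

import numpy as np

rng = np.random.default_rng(2)

# sum of 6 uniform(0, 1) sources, centred; mean 0, std dev ~ sqrt(6/12) ~ 0.707
summed = rng.uniform(0.0, 1.0, size=(200_000, 6)).sum(axis=1) - 3.0

print(f"mean = {summed.mean():.3f}, std dev = {summed.std():.3f}")
within_2sigma = (np.abs(summed) <= 2.0 * summed.std()).mean()
print(f"fraction within 2 std devs: {within_2sigma:.3f}")   # close to the ~0.95 of a true normal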

TTFN
faq731-376
7ofakss

Need help writing a question or understanding a reply? forum1529
 
rb1957,

The error range stated will be either type A or type B (in the ISO standard at least)...

Type A - the error and its distribution are estimated from direct measurements. This sort of thing is done in a calibration lab by men in white coats - these test conditions are difficult to reproduce.

Type B - the error and its distribution are estimated using "best scientific judgment" based on knowledge of the errors associated with the inputs. In this case the distribution will be assumed Gaussian, triangular, or uniform, depending on what is known about the input factors. A uniform distribution, for example, often just means that nothing is known about how the error is distributed, so any value within the range is assumed to be equally probable (a quick sketch of these conversions follows at the end of this post).

Either way, if whatever the OP is measuring is sensitive to these errors, then it would be better to analyze a set of measured results rather than rely on the quoted accuracy (assuming that collecting this data is not too difficult, time consuming or expensive).

As stated above, a gage R&R will give you a level of confidence in your test processes - this should be part of a wider measurement system analysis (MSA), which will also take bias, linearity and stability into account. Bias and linearity at least can be determined from the calibration documents.
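
For the Type B case, the usual GUM-style conversion from a quoted ± bound to a standard uncertainty depends on which of those distributions is assumed. A short sketch, using the OP's ±2 as the bound:

import math

a = 2.0   # quoted half-width, e.g. the OP's +/-2

# standard uncertainty u under common Type B assumptions
u_normal_95  = a / 1.96            # bound treated as a 95% interval of a normal
u_triangular = a / math.sqrt(6.0)  # triangular distribution over +/-a
u_uniform    = a / math.sqrt(3.0)  # uniform (rectangular) distribution over +/-a

print(f"normal (95%): u = {u_normal_95:.2f}")    # ~1.02
print(f"triangular:   u = {u_triangular:.2f}")   # ~0.82
print(f"uniform:      u = {u_uniform:.2f}")      # ~1.15

The uniform assumption gives the largest standard uncertainty, which is why it is the conservative default when nothing is known about the error's shape.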
 