
Measurement accuracy 4


sysengineer (Electrical), Feb 16, 2012
I have a calculation I need to make based on a value of recorded pressure but need to know the error in the measurement. This could be measured either by Bourdon tube or by diaphragm/piezoresistive pressure sensor.

Typically for pressure sensors the manufacturer will publish an accuracy as a percentage of span, which can be as low as 0.02%. A typical sensor may drift during its operational life and therefore needs to be recalibrated against test equipment of a given, traceable accuracy. In this application they are re-calibrated if, during a 5-point check over the sensor span, the measurement is out by more than 1% of reading.

Assuming the sensor or gauge has been re-calibrated against traceable test equipment before the measurement is taken, how do you work out the best-case accuracy of the sensor? (Worst case is assuming 1% of reading or worse.)

Also, if the manufacturer publishes an accuracy of 0.02% of span, does this apply only when first shipped, or is it always 0.02% accurate?

Is the accuracy of Bourdon tubes any different?
 

I would say that 0.02% of span is the absolute best the instrument can achieve. The actual accuracy depends on the last calibration. The usual rule of thumb is that the calibration equipment must be ten times better than the accuracy to which you are calibrating. For example, if the calibration equipment used to calibrate the instrument is rated at 0.05%, then at best the instrument will have an accuracy of 0.5%. I might also add that reading an instrument to that level of accuracy is a task in and of itself. Usually the conventional 4-20 mA signal will be inadequate.
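
As a rough sketch of that rule of thumb (the 10:1 ratio and the 0.05% figure come from the paragraph above, everything else is illustrative):

    # Rough sketch of the rule of thumb above: the calibrated instrument can be
    # trusted to no better than (ratio x calibrator uncertainty). Illustrative only.
    def best_case_accuracy_pct(calibrator_pct, ratio=10.0):
        return calibrator_pct * ratio

    print(best_case_accuracy_pct(0.05))   # 0.05% calibrator, 10:1 rule -> 0.5% at best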
 
Bourdon tubes are subject to fatigue, jumping gears, needles falling off. Recalibrating them on a regular basis was important. I used to be a Measurement Engineer and I've reviewed thousands of calibration reports. Before digital equipment the reports always had a significant difference between the "as found" columns and the "as left" columns.

When digital instruments were first introduced in the 1980s they drifted as much as the old analog equipment had. That started to change by the early 1990s, and in the last few years that I looked at those reports the occurrence of adjustments to digital pressure (or temperature or dP) instruments had fallen to nearly zero. These things are amazing.

The manufacturer's uncertainty number is a percentage of "calibrated span". It is simple to re-span these devices. If you've calibrated a pressure instrument 0-100 psig, then 0.05% is ±0.05 psi. Re-span that instrument to 0-10,000 psig (assuming it is able to handle that kind of pressure) and 0.05% becomes ±5 psi. I've found that as long as it can be calibrated, its percent-of-span uncertainty is still about the same.
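
A quick sketch of that span arithmetic (numbers are the ones quoted above):

    # Uncertainty quoted as a percentage of calibrated span, expressed in pressure units.
    def span_uncertainty_psi(span_low_psig, span_high_psig, pct_of_span):
        return (span_high_psig - span_low_psig) * pct_of_span / 100.0

    print(span_uncertainty_psi(0, 100, 0.05))     # +/-0.05 psi on a 0-100 psig span
    print(span_uncertainty_psi(0, 10000, 0.05))   # +/-5 psi on a 0-10,000 psig span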


David Simpson, PE
MuleShoe Engineering

In questions of science, the authority of a thousand is not worth the humble reasoning of a single individual. Galileo Galilei, Italian Physicist
 
The gauge selection guide should mention that the accuracy spec is always at 'reference conditions'. Temperature shift away from the reference condition produces an error.

For example:

Ambient conditions
The normal ambient temperature range for WIKA pressure gauges is -40°F to +140°F (-40°C to +60°C) for dry or silicone-filled gauges and -4°F to +140°F (-20°C to +60°C) for glycerine-filled gauges. The error caused by temperature changes is +0.3% or -0.3% per 18°F rise or fall, respectively.

The reference temperature is 70°F (20°C). The correction is for the temperature of the gauge, not the temperature of the measured medium.

The better manufacturers will spell out temperature limits, for instance:

Permissible temperature
Ambient: -40 ... +60 °C
Medium: +100 °C maximum

Short term, intermittent maximum medium temperature limits
(Optional instrument glass window required)
260 °C - without liquid filling
130 °C - gauges with glycerine filling
(WIKA Data Sheet PM 02.10 10/2008)


So temperature effects have a serious impact on gauge accuracy.
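
A minimal sketch of that temperature effect, using the ±0.3% per 18 °F figure from the WIKA wording above (treating it as a simple linear shift from the 70 °F reference, added on top of the reference-condition accuracy, is my assumption):

    # Additional gauge error due to ambient temperature away from the 70 F reference,
    # per the +/-0.3% per 18 F figure quoted above. Simple linear model; adding this
    # on top of the reference-condition accuracy is an assumption, not a WIKA formula.
    def temperature_error_pct(gauge_temp_f, ref_temp_f=70.0, pct_per_step=0.3, step_f=18.0):
        return abs(gauge_temp_f - ref_temp_f) / step_f * pct_per_step

    print(temperature_error_pct(124.0))   # gauge 54 F above reference -> about 0.9% extra error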

For electronic transmitters, the same caveat for accuracy at reference conditions is spelled out.

In addition, the smart transmitters (with very wide turndowns) maintain the reference accuracy spec over some limited turndown, but when the turndown exceeds a break point, the data sheet provides a calculation for the accuracy at higher turndowns. The calculation also includes a temperature effect.

The calculations for temperature effect, static pressure effect and turndown are spelled out by the major players:

[Images: excerpts from manufacturers' data sheets showing the accuracy calculations for temperature effect, static pressure and turndown]
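
The exact formulas differ by manufacturer (they were in the images above), but the general shape is the reference accuracy up to some turndown break point, then a term that grows with turndown. A purely hypothetical sketch, with made-up coefficients rather than any vendor's numbers:

    # Hypothetical example of the usual form of a smart-transmitter accuracy spec:
    # reference accuracy up to a turndown break point, then a term proportional to
    # (upper range limit / calibrated span). Coefficients are invented, not from a data sheet.
    def transmitter_accuracy_pct(url_psi, span_psi, ref_acc=0.04, break_turndown=10.0, slope=0.005):
        turndown = url_psi / span_psi
        if turndown <= break_turndown:
            return ref_acc
        return ref_acc + slope * (turndown - break_turndown)

    print(transmitter_accuracy_pct(url_psi=3000.0, span_psi=100.0))   # 30:1 turndown -> 0.14%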


My experience mirrors zdas04's - analog transmitters drifted very noticeably and smart transmitters have reduced drift to negligible values. In fact, one does not really 'calibrate' a smart transmitter with a pressure source the way transmitters were ranged in the analog days. If a smart transmitter is so far out of whack that it needs a wet cal, it probably needs its pressure body replaced. (There is a trim function for the analog output.)
 
Thank you for your responses.

I've had a read through a typical reference manual for a pressure transmitter and it says pretty much the same as what danw2 is referring to. The only point to add is that there is also a calculation to establish what the calibration interval should be, based on a target uncertainty value under continuous service conditions, i.e. temperature, static pressure, etc.

The Bourdon tube is slightly harder to figure out. Clearly the calibration of a Bourdon tube gauge cannot be any better than the test equipment it is calibrated against, but, for example, if the accuracy of the test equipment is known, how can you work out the accuracy of the gauge it is calibrating? Again, the manufacturers publish an accuracy tolerance, but a gauge can only be as good as the equipment used to calibrate it, correct?
 
>how can you work out the accuracy of the gauge it is calibrating?

One gets the accuracy/uncertainty of a given gauge by determining the maximum deviation of its readings from a reference standard calibrator at several test points after any cal adjustments are made.

Convention calls for a calibrator (the reference standard) to be 4x more accurate than the device being calibrated. For Bourdon tube gauges, that's easy to achieve because their inherent uncertainty is greater than smart transmitters'.

One tests the Bourdon tube gauge with a pressure source, with the calibrator connected to the same pressure source. At various points (25%, 50%, 75%, 100%, whatever) the readings of both the Bourdon tube gauge and the calibrator are recorded. Better gauges might have an adjustment for span and another for zero. After any adjustments, subtract the reference and test measurements and determine what the maximum difference is as a percentage of full scale. That's the gauge's 'accuracy' (actually 'uncertainty', but the common term accuracy is in wide use).
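
In other words, something like this sketch (test points are made up):

    # Gauge 'accuracy' (really uncertainty) from a multi-point check: the largest
    # deviation from the reference calibrator, expressed as a percent of full scale.
    def gauge_accuracy_pct_fs(reference_psig, gauge_psig, full_scale_psig):
        worst = max(abs(g - r) for r, g in zip(reference_psig, gauge_psig))
        return 100.0 * worst / full_scale_psig

    reference = [25.0, 50.0, 75.0, 100.0]   # calibrator readings
    gauge     = [25.2, 50.1, 74.8, 100.3]   # gauge readings after any adjustments
    print(gauge_accuracy_pct_fs(reference, gauge, full_scale_psig=100.0))   # 0.3% of full scale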

The table shows a calibration for a 4-20 mA device, but the concept is the same: just substitute "Calibrator reading (ideal)" for "Output current (ideal)" and "Gauge reading (measured)" for "Output current (measured)".

[Image: example calibration table for a 4-20 mA device]


The expectation is that one buys a new gauge and if it hasn't been dropped on the floor, it will be within the stated accuracy.

Theoretically, after some period of use, a calibration is done, possibly with adjustments, to determine the suitability of the gauge for continued use. This is not uncommon for industries complying with regulatory requirements or for those testing for regulatory compliance. But in much of the process world gauges have been supplanted by transmitters that are far more robust and nearly driftless. The gauges that remain are rarely calibrated in my experience (USA). The days of an instrument tech repairing and calibrating 4" process gauges are merely a memory to me.
 
Thanks again. If it is possible to adjust the reading of the instrument to match that of the reference standard calibrator, does this make the accuracy of the instrument equal to the accuracy of the reference standard calibrator?
 
Yes, at the conditions under which the calibration was performed (temperature) and assuming all the cal points match, not just one of several.

The 'accuracy' is taken as the worst uncertainty value, +0.188% in the table above.

Why the emphasis on getting a less accurate device to calibrate to the same level of uncertainty as a standard?
 
I'm just trying to understand how to calculate the accuracy of a sensor that can be trimmed to match the reading of a reference calibrator for either Bourdon tubes/analogue pressure sensors or smart transmitters.

From what I've learnt, the same principles apply to all three. In summary, the manufacturer's quoted accuracy applies over the given turndown when 'as new', then the instrument drifts throughout service, requiring recalibration at some interval. If during a calibration test there is an error between the instrument and the reference, the sensor is trimmed accordingly and the accuracy then becomes equal to that of the reference?

If that statement is correct then I'm happy.
 
> the sensor is trimmed accordingly and the accuracy then becomes equal to that of the reference?

No. But now I think I see where you're coming from.

Calibration is costly: labor plus the cost of maintaining certified standards. On mechanical gauges in particular, a span adjustment might well be iterative or require several attempts to get it close. Pointer adjustments on the better gauges are quick and easy, but messing with internals is time-consuming.

So most calibrations are not calibration adjustments at all; calibration is the process of documenting the magnitude of the error and confirming that the error is less than a declared acceptable error. The declared acceptable error is determined by the application where the measurement is being made, not by the measuring device itself.

Typically, during a 'calibration', if the deviation at any given point is less than the required accuracy for the application (say 0.5%), that value is recorded and it's on to the next point, because the device falls within the required accuracy; adjustments are made only if it's something easy like an offset adjustment.

Hence, an 'as found' cal might well be the 'as left', if the instrument's max deviation is less than that required for the application.
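
As a minimal sketch of that pass/fail logic (the 0.5% tolerance is just the example figure above, declared by the application):

    # As-found check described above: the device 'passes' and no adjustment is made
    # if every point's deviation is within the tolerance declared by the application.
    def as_found_passes(reference, readings, full_scale, tol_pct=0.5):
        return all(abs(m - r) / full_scale * 100.0 <= tol_pct
                   for r, m in zip(reference, readings))

    # True here, so 'as found' simply becomes 'as left'.
    print(as_found_passes([25, 50, 75, 100], [25.2, 50.1, 74.9, 100.3], full_scale=100))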

In the table above, if the requirement was plus/minus 0.2%, there were no adjustments made; someone signed a sticker, licked it, stuck it on the unit and was on to the next unit.

The idea is not to make a lesser device as good as a better device; it is to check the performance and see that it meets some minimum uncertainty spec determined by the application the instrument is going into.

 
We should be EXTREMELY careful about the wording we're using, and what we think the wording means. Technically, calibration means correcting the scaling and offset error of the instrument by taking a large number of measurements and adjusting the instrument so that the MEAN of the measurements matches the calibration standard. However, if the instrument is fundamentally "noisy," i.e., its repeatability is poor, then even with the scaling error corrected, you may still have a significant error in any given individual measurement. So, a poor-accuracy instrument may have both scaling and repeatability problems, of which calibration only fixes the former, not the latter.

Additionally, said instrument may have poor linearity, and for a cheap instrument, there may not be any knobs that can be adjusted to correct for the nonlinearity during calibration.

Therefore, using a cheap instrument and calibrating it with a top-end calibrator does not mean that you will get better accuracy. There's a reason lousy accuracy and low cost go together: if all cheap instruments could be made accurate simply by calibration, then high-end instruments would have no market. As the old saw says, "You can't make a silk purse out of a sow's ear."
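
A small numerical sketch of that point (purely illustrative numbers): correcting the mean against a calibrator removes the offset, but the scatter of individual readings, i.e. the repeatability, is untouched.

    # Illustrative only: a noisy instrument with a systematic offset. 'Calibration'
    # here removes the mean error, but the spread of individual readings is unchanged.
    import random, statistics

    random.seed(1)
    true_value = 100.0
    readings = [true_value + 2.0 + random.gauss(0.0, 0.5) for _ in range(50)]   # +2 offset, 0.5 sigma noise

    offset = statistics.mean(readings) - true_value      # what calibration corrects
    corrected = [r - offset for r in readings]

    print(statistics.mean(corrected) - true_value)       # ~0: systematic error removed
    print(statistics.stdev(readings), statistics.stdev(corrected))   # identical scatter remains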

TTFN


Of course I can. I can do anything. I can do absolutely anything. I'm an expert!
 
IRStuff is correct, the big problem is in the terminology. Error is the difference between the measured value and the true value. Accuracy refers to the ability to produce a measurement that closely agrees with the true value of the parameter you're measuring (unfortunately, that is not how many manufacturers use the term in their literature). Uncertainty is the estimate of the error in a measurement due to both systematic and random effects, and that is typically what manufacturers incorrectly call accuracy.

Calibration does NOT affect the uncertainty in the reading of an instrument. Random uncertainty is a function of random scatter in the measurement, i.e., its ability to have good repeatability. Calibration corrects for systematic error, and cannot correct for random error, and thus cannot change the random uncertainty estimate. You should be able to calibrate any instrument to achieve really good agreement with the standard being used, and this can account for instrument drift due to time or temperature, etc. You're removing systematic error when you do this, but you can never reduce the uncertainty in your measurement by calibration. For example, assume you have a temperature instrument that has an uncertainty reported by the manufacturer of ±1 °F, and you put it into a calibration oven with an uncertainty no greater than ±0.25 °F (I agree that the convention requires the standard's uncertainty to be at least four times smaller than the instrument's). When the oven is set to 120 °F as measured by its own sensor and your instrument reads 122 °F, your measurement is 122 °F ± 1 °F. If you recalibrate your instrument while it's in the oven to read 120 °F, your measurement is 120 °F ± 1 °F. You have made the measurement more accurate, but you have not changed its uncertainty.

Random uncertainty is typically estimated (after as much systematic error as possible has been removed by calibration) by taking N repeated measurements at a single value of the parameter being measured, then calculating the sample standard deviation Sx and dividing it by √N to determine the standard error of the mean, which is a Type A standard uncertainty estimate (standard uncertainties have a confidence level of 68%). If you need a better estimate of Type A uncertainty, you usually take more measurements; however, due to the √N in the denominator, the returns diminish after a while as N grows. A Type B standard uncertainty estimate is developed from sources other than calculation, e.g., manufacturer information, handbooks of physical parameters, etc. The total standard uncertainty in a measurement is usually estimated by determining the systematic standard uncertainties not able to be removed by calibration, estimating the random standard uncertainties through experimentation and calculation, then using the RSS combination of the systematic and random standard uncertainties to determine the combined standard uncertainty. The total uncertainty estimate for a certain confidence level is then found by multiplying the combined standard uncertainty by a coverage factor k related to the confidence level. The coverage factor k for a required confidence interval is usually determined by using Student's t values if N < 30, or by using Gaussian coverage factors if N is 30 or more. For example, a coverage factor of 2 is usually chosen to achieve an uncertainty estimate with a confidence level of 95%, since this corresponds to two standard deviations if using Gaussian coverage factors.
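
As a sketch of that procedure (the readings and the Type B value are invented; k = 2 assumes Gaussian coverage, so strictly a Student's t factor would be used for this small N):

    # Type A standard uncertainty from repeated readings, combined by RSS with a
    # Type B (e.g. data-sheet) standard uncertainty, then expanded with coverage factor k.
    import math, statistics

    readings = [120.1, 119.8, 120.3, 120.0, 119.9, 120.2, 120.1, 119.7]   # repeated measurements
    u_type_a = statistics.stdev(readings) / math.sqrt(len(readings))      # standard error of the mean
    u_type_b = 0.25                     # standard uncertainty assumed from manufacturer data

    u_combined = math.sqrt(u_type_a**2 + u_type_b**2)
    u_expanded = 2.0 * u_combined       # k = 2 -> roughly 95% confidence (Gaussian assumption)
    print(u_type_a, u_combined, u_expanded)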

xnuke
"Live and act within the limit of your knowledge and keep expanding it to the limit of your life." Ayn Rand, Atlas Shrugged.
 
OK, understood, but surely it should still be possible to put a figure on the accuracy if the sensor can be trimmed to match a reference? That is, even if under test the two devices produce the same results.
 
Sure, you would perform a repeatability test. Typically, the total error would be the scale factor error and repeatability error, either added or RSS'd. Again, two devices that are scale factor corrected are not necessarily the same accuracy, unless they have the same repeatability error as well.
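
For instance, a one-line sketch of the two combinations mentioned (values are illustrative):

    # Combining a residual scale-factor error with a repeatability error, either by
    # straight addition (conservative) or by RSS, as described above. Illustrative values.
    import math

    scale_error_pct   = 0.10   # % of span left after trimming against the reference
    repeatability_pct = 0.15   # % of span, from a repeatability test

    print(scale_error_pct + repeatability_pct)                      # worst-case sum: 0.25%
    print(math.sqrt(scale_error_pct**2 + repeatability_pct**2))     # RSS: about 0.18%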

TTFN


 
I don't know how many times I can describe this. The individual measurement error is a combination of a scaling/offset error, which is systematic, and the repeatability, which is random. The reference device can only supply a sufficiently accurate stimulus to correct the scaling/offset to within the scaling/offset/repeatability of the reference, and the correctability of the UUT scaling/offset and its repeatability. The reference device has absolutely nothing to do with the repeatability error of the UUT.

TTFN


 