How does one Interpret Unexpected Test Results?


RichGeoffroy (Materials)
Apr 30, 2004


Don’t Trust the Numbers. If the data doesn’t look right, then don’t believe it! All too often, laboratory data is taken as gospel. The data comes back from the lab, the values are averaged, and the person writing the report “makes up” some plausible explanation to support the results.

Granted, some pretty unusual and surprising results occur in science, some of which may fly in the face of reasonable, commonly-accepted principles --- but these are uncommon occurrences. More often than not, something is wrong, and consequently the data may not be reliable.

First, check the input data (dimensions, equipment settings, etc.). A simple entry error can alter the test results significantly. Next, recheck the calculations carefully. If the calculations are performed automatically, performing a sample calculation “by hand” might be appropriate. A word of caution here --- it’s very easy to make the same mistake over and over again if you’re not paying attention to the details.
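
As a minimal sketch of what such a hand check might look like (every number here is hypothetical, and the tensile-stress arithmetic is just one example of an automated calculation worth reproducing):

```python
# Hand-check of an automated tensile-strength calculation for one
# specimen. All values are hypothetical, for illustration only.

peak_load_N = 1250.0         # peak load reported by the test frame, N
width_mm = 12.70             # measured specimen width, mm
thickness_mm = 3.18          # measured specimen thickness, mm
machine_reported_MPa = 31.0  # value printed by the instrument software

# Stress = load / cross-sectional area; 1 N/mm^2 = 1 MPa.
area_mm2 = width_mm * thickness_mm
stress_MPa = peak_load_N / area_mm2

print(f"Hand-calculated: {stress_MPa:.2f} MPa")
print(f"Machine report:  {machine_reported_MPa:.2f} MPa")

# A large disagreement usually points to a bad input (wrong dimensions,
# wrong units, wrong geometry in the method file), not a material effect.
if abs(stress_MPa - machine_reported_MPa) / machine_reported_MPa > 0.02:
    print("Discrepancy > 2% --- check inputs and method settings.")
```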

One thing I always try to do is to go back and inspect the individual specimens (it’s a good idea to identify and preserve each individual specimen and all its pieces after testing). A close examination of the specimens may reveal some potential explanation for the unexpected results.

Sampling error can be another source of inaccuracy. We often mistakenly assume that the material we are evaluating is homogeneous and/or isotropic (having the same mechanical properties in different directions). Be concerned with specimen orientation, location of exposed surfaces, skin layers, non-uniform blending, and simple variability from specimen to specimen. If all else fails, go back and question the customer as to exactly what was sent.

Make sure the test equipment is functioning properly, is set at the proper settings, and that the equipment has been properly calibrated. If possible, perform a cursory check with a known standard --- stick a thermometer in the bath, hang a weight on the test cell, etc.

Sample preparation can sometimes be a source of variability. Voids, contamination, or other discontinuities can act as defects in the specimen. Chips or nicks in the edges of specimens may result in stress concentrations which can significantly affect the results of a test. Be aware of anisotropic behavior that may arise from machine direction, filler/reinforcement orientation, or crystalline orientation. Examine the cross-section of the specimen for stratification or possibly even delamination.

Remember, all data has error --- naturally-occurring variability. Some materials exhibit more variability than others. Don’t misinterpret normal variability as a significant difference between samples. Examine the data --- not just the averages. Is the average skewed by the results of one or two specimens? Resist the temptation to “throw out” outliers unless there is substantial evidence that the data is erroneous. Outlier data presents a rare view of the uncontrolled variability that can reasonably be expected to occur, although its validity in the context of a controlled test program may require some interpretation. I always prefer to include all test data in a report, identifying outliers and explaining why I chose to include or exclude a value in my interpretation. That allows others to use the data in some other way to either confirm my conclusion or support an alternative hypothesis.
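
One way to follow that advice in practice is to flag outliers rather than discard them. A minimal sketch, using Tukey’s 1.5×IQR fence on hypothetical specimen values:

```python
# Flag --- but do not discard --- potential outliers using Tukey's
# 1.5*IQR fence. The specimen values are hypothetical.
import statistics

values = [42.1, 43.5, 41.8, 42.9, 55.2, 42.4, 43.0, 41.5]  # e.g. MPa

q1, _, q3 = statistics.quantiles(values, n=4)
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr

flagged = [v for v in values if not low <= v <= high]
kept = [v for v in values if low <= v <= high]

print(f"Mean of all data:       {statistics.mean(values):.2f}")
print(f"Mean excluding flagged: {statistics.mean(kept):.2f}")
print(f"Flagged as outliers:    {flagged}")
# Report both means and the flagged points, and let the reader judge
# whether 55.2 is an error or a rare view of real variability.
```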

In this age of computers, most of our test equipment is becoming “computerized” to make testing more reliable and efficient. However, we often unquestioningly rely on the “calculated” output without a clear understanding of how, or under what conditions, the computer actually calculates the data. In some situations, transitions can occur outside the detection range set in the program, which causes the computer to select the “best” point --- as opposed to the correct one. Sometimes filters are used to “average out” data variability --- the results can appear quite different from another test which did not utilize the computer filtering. Certain electronic instruments in the vicinity of the test equipment can cause a power drain or surge which can alter the electronic data being recorded. This electronic noise can be erroneously interpreted by the computer as specific events. There are many advantages to computerization --- but don’t blindly accept the results without question. Understand how the instrument performs its computations and on what basis it makes its determinations, and always be alert to conditions which may produce erroneous electronic data.
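
A small synthetic example of the filtering effect (the signal, noise level, window, and threshold are all made up): a causal moving average, of the kind an instrument might apply in real time, shifts where the software “sees” a step transition.

```python
# Synthetic illustration: a moving-average filter can shift where the
# software detects a transition. Signal and threshold are made up.
import numpy as np

rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(50), np.ones(50)])  # true step at 50
noisy = signal + rng.normal(0.0, 0.05, signal.size)

# Trailing (causal) moving average, as applied by real-time filtering.
window = 9
kernel = np.ones(window) / window
smoothed = np.convolve(noisy, kernel, mode="full")[: noisy.size]

threshold = 0.5
raw_event = int(np.argmax(noisy > threshold))          # ~index 50
filtered_event = int(np.argmax(smoothed > threshold))  # several points later

print(f"Transition in raw data at index:      {raw_event}")
print(f"Transition in filtered data at index: {filtered_event}")
# The filter trades noise rejection for distortion near sharp events;
# know which version of the data your instrument actually reports.
```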

Don’t get fooled by the right answer. Review the raw data to ensure that everything looks right. Technicians quickly learn what the acceptable results are expected to be for a particular test --- this talent comes with proficiency. The technician, however, should be aware of the often unconscious compulsion to get the “right” answer. The technician should resist any attempt to interpret the numbers during the testing; rather, he/she should merely report the findings, while at the same time maintaining a constant vigil for any out-of-the-ordinary occurrences which may later give some insight into unexplained variability.

Anyone can develop data; however, it takes a sound understanding of the test equipment, the process, and the material behavior to properly interpret the results and provide a suitable answer to the problem.




Rich Geoffroy
Polymer Services Group
POLYSERV@aol.com
 
Well done.

Already printed with a copy going to our lab

Regards
pat pprimmer@acay.com.au
eng-tips, by professional engineers for professional engineers
Please see FAQ731-376 for tips on how to make the best use of Eng-Tips Fora.
 
An excellent, clear, and well-constructed post.

I'd like to share an example which demonstrates that resolution of these problems can require a degree of tact and diplomacy, especially when you are the supplier and it is the client's laboratory.

Almost invariably, when I visit a site to commission a viscosity system I have designed, there is an apparent discrepancy between the laboratory and the system.

The options are:
a) the system is wrong, the lab is right
b) the lab is wrong, the system is right
c) both are wrong
d) either or both may have multiple failings

Process viscosity measurement has been extremely difficult to do well, especially in refineries and petrochemical plants where an "analytical" measurement is required (viscosity at a reference temperature), and not the far simpler "behavioural" measurement (viscosity at the process temperature).
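
To make the distinction concrete: the "analytical" value is typically obtained by referring the at-temperature measurement back to the reference temperature along a viscosity-temperature line such as the Walther/MacCoull relation underlying the ASTM D341 charts. A minimal sketch --- the calibration points, slope, and process values below are all hypothetical:

```python
# Refer a "behavioural" viscosity (at process temperature) back to a
# reference temperature via the Walther relation of ASTM D341:
#     log10(log10(nu + 0.7)) = A - B * log10(T),  T in kelvin, nu in cSt.
# All numeric values below are hypothetical.
import math

def walther_z(nu_cst):
    """Walther transform of kinematic viscosity (valid for nu > ~0.3 cSt)."""
    return math.log10(math.log10(nu_cst + 0.7))

def fit_slope(t1_c, nu1, t2_c, nu2):
    """Fit the Walther slope B from two lab calibration points."""
    x1 = math.log10(t1_c + 273.15)
    x2 = math.log10(t2_c + 273.15)
    return (walther_z(nu1) - walther_z(nu2)) / (x2 - x1)

def to_reference(nu_meas, t_meas_c, t_ref_c, b):
    """Shift a measured viscosity to the reference temperature."""
    z_ref = walther_z(nu_meas) + b * (
        math.log10(t_meas_c + 273.15) - math.log10(t_ref_c + 273.15)
    )
    return 10 ** (10 ** z_ref) - 0.7

# Hypothetical calibration for this product grade: 32.0 cSt at 40 C,
# 5.4 cSt at 100 C.
b = fit_slope(40.0, 32.0, 100.0, 5.4)

# Behavioural measurement: 12.0 cSt at the 68 C process temperature.
# Analytical value: the same fluid referred to 50 C.
print(f"Viscosity at 50 C reference: {to_reference(12.0, 68.0, 50.0, b):.1f} cSt")
```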

This means that many processes are controlled at steady state, and it is often the plant operator who will run the test. Many control rooms are well supplied with excellent equipment such as a Herzog capillary viscometer in a temperature-controlled bath with automatic sample processing and cleaning. All to ASTM D445.

Since the system I am commissioning is "novel" and they have used this excellent laboratory instrument for some years, the balance of opinion is on the side of this instrument.

This is not a true laboratory measurement. It is a measurement taken to control a process. By the time the measurement is complete, the circumstances it relates to are long past. So precision in sample handling is often lost. I have witnessed the sample being collected and seen the waxes precipitate as it cools, then seen the operator draw off fluid from the top layer without heating and re-mixing the sample. I have also seen hot samples taken in open containers.

In these cases the process measurement now approaches the potential accuracy of the laboratory itself.
The sample-taking and handling procedures need to be re-defined.

Guides such as API 554 and API 555 are invaluable. These recommend sample-taking standards and the statistical approach to laboratory measurement.
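
In that statistical spirit, a quick way to decide whether a lab-versus-analyzer discrepancy is even worth chasing is to compare it against the combined stated precision of the two measurements. A rough sketch --- the precision figures are placeholders, to be replaced with the repeatability/reproducibility published for the method in use (ASTM D445 states them for the laboratory side):

```python
# Is a lab-vs-analyzer discrepancy real, or within stated precision?
# The precision figures below are placeholders; substitute the values
# published for your method (e.g. ASTM D445 for the laboratory side).
import math

lab_value = 46.2      # cSt, laboratory result
system_value = 44.9   # cSt, online analyzer result

lab_precision_pct = 1.5  # placeholder 95% reproducibility, % of value
sys_precision_pct = 2.0  # placeholder analyzer uncertainty, % of value

# Combine the two independent uncertainties in quadrature.
allowance = math.hypot(lab_value * lab_precision_pct / 100.0,
                       system_value * sys_precision_pct / 100.0)

diff = abs(lab_value - system_value)
print(f"Difference: {diff:.2f} cSt, allowance: {allowance:.2f} cSt")
if diff > allowance:
    print("Exceeds combined precision --- worth investigating.")
else:
    print("Within combined precision --- neither is shown to be wrong.")
```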

Talking to one of the oil majors, they explained that an external test house was causing them much grief because of a lapse they discovered in the sample handling (the heat and re-mix problem).

In the field of bunker fuel testing, the independent test houses often provide impeccable analysis. But as a measure of the real situation it can be fundamentally flawed, as there is often a very real problem associated with obtaining "representative" samples.

In these cases it requires tact and reason to find the problem and present or expose it to the client without causing offense.


JMW
Eng-Tips: Pro bono publico, by engineers, for engineers.

Please see FAQ731-376 for tips on how to make the best use of Eng-Tips Fora.
 