The "ping" test measures tube stiffness and compares it to a baseline.
It is a valuable technique.
Another approach is to take a sample, measure its density in the lab, and compare it with the online measured value.
If they agree, then the assumption is that the mass calibration is also good. (This evolves from, or is similar to, the Solartron air-point tests, which determine whether the tube has corroded or been coated. It is easy enough to do air-point tests on density meters, but not so easy on mass meters, so testing the density accuracy is a good alternative. The Solartron density meters are now Emerson Micro Motion density meters.)
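The lab-versus-online comparison above amounts to a simple tolerance check. A minimal sketch, with purely illustrative densities and an assumed 0.1% relative acceptance limit (in practice the limit would come from the meter's specification):

```python
def density_check(lab_density, online_density, tolerance=0.001):
    """Compare a lab reference density with the online meter reading.

    tolerance is a relative limit (0.1% here, illustrative only).
    If the two agree within it, the tube is assumed free of corrosion
    or coating and the mass calibration is taken to be good.
    """
    relative_error = abs(online_density - lab_density) / lab_density
    return relative_error <= tolerance

# Example: lab sample at 998.2 kg/m3 vs online reading of 998.6 kg/m3
print(density_check(998.2, 998.6))   # within 0.1% -> passes
print(density_check(998.2, 1000.0))  # ~0.18% off -> fails
```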
No arguments with anything you say, lacajun, just some clarifications.
Except, actually, at the time Rosemount bought Fisher and dumped the Fisher Exac meter, the Exac was considered superior to the D type both structurally and in performance.
The structure was more flexible, giving either lower drive energy or a better signal-to-noise ratio (the electronics were crude in those days), and it placed far less stress on the welds; the D types carried residual torsional stresses at the weld points and could suffer weld fatigue due to the geometry.
For single-phase liquids (and that is all the D type handled at the time), the Exac helical design was superior because it didn't require a flow splitter. It also had lower headloss.
There are various factors that contribute to what makes one meter better than another, not just its accuracy. (Don't neglect what seem like minor design points; a great deal of work went into flow splitter designs, especially in the early days when there was no sophisticated signal processing to fall back on.)
Exac was an early victim, the subject of a major patent war resolved only by the Rosemount purchase; later on, a profusion of new mass meter manufacturers sprang up with different tube configurations.
Another new meter to show to advantage was the Schlumberger M Dot.
This was also a twin-tube design, but its tubes were again more flexible, with less stress on the welds.
No accident this: when setting out to design a competing meter, you try to find solutions to the known problems of the entrenched technology.
I didn't like the electronics though (you may have seen my post about Ryvita) but the sensor was excellent.
For a while there, Micromotion seemed to be following the more common product cycle: introduce a new technology, then hang on to the first-generation design beyond the point where it is sensible, i.e. when other manufacturers recognise its weaknesses and introduce "me too, but better" designs. There are two ways to go: milk the product to the end of its days and bow out of the market, or invest in R&D and leapfrog back ahead again.
In terms of today's technologies, compared with the Exac of that era, today's meters undoubtedly perform very well. Doubtless, if the Exac had remained in production and received similar investment over a similar period, it too would be much improved.
What I was saying was that this historical view might have influenced me to think it would have been better, but in reality it just doesn't seem to offer any advantage when it comes to slug flows. Once you introduce a sizeable air pocket into the flow, it replaces a like volume of liquid and makes a big difference to the mass flow rate at that point. Each pocket passing through a meter tube represents a step change in the mass flow, which may differ between the tubes if the flow does not split equally (or, in an Exac, between successive sections of the flow tube). You then have to contend with the two tubes trying to operate at different frequencies and with differing mass-flow steps changing through the tubes.
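A back-of-envelope illustration of that step change, assuming the gas pocket simply displaces a like volume of liquid (illustrative densities; slip and compressibility effects ignored):

```python
def mass_flow_with_void(volumetric_flow, liquid_density, gas_density, void_fraction):
    """Mass flow when gas occupies `void_fraction` of the tube volume.

    Assumes the gas displaces an equal volume of liquid, as described
    above; slip and compressibility are ignored in this sketch.
    """
    mixture_density = (1 - void_fraction) * liquid_density + void_fraction * gas_density
    return volumetric_flow * mixture_density

q = 0.01          # m3/s, illustrative
rho_liq = 1000.0  # kg/m3, water-like liquid
rho_gas = 1.2     # kg/m3, air at ambient conditions

full = mass_flow_with_void(q, rho_liq, rho_gas, 0.0)
slug = mass_flow_with_void(q, rho_liq, rho_gas, 0.05)
print(f"{full:.2f} kg/s -> {slug:.2f} kg/s")  # a 5% void cuts mass flow by ~5%
```

Each pocket is therefore a step of several percent in instantaneous mass flow, and if the split between twin tubes is unequal, each tube sees a different step at a different moment.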
Signal processing has only gone so far at this time.
Foxboro developments for entrained air and in-situ verification have been conducted at Oxford University; details can be found here:
There are several papers on SEVA (self verification) on the site.
Most users will be extremely happy with their coriolis meters and for good reasons.
What happens, though, is a variation of the Peter Principle: sooner or later they will be pushed into applications where they initially fail.
This isn't a bad thing; on the contrary, it is a good thing when done purposefully.
Conservative companies stick to what they know, to the tried and tested and they stay away from "novel" applications.
They may do this because their product has become a bit of a cash cow and anything that adds to costs is to be avoided; novel applications carry a high cost of sales and after-sales support.
In go-ahead companies, you keep pushing the envelope.
You try new applications.
You expect some extra costs at the outset, but once you have gone through the learning curve and know how to make it work, you simply sell into similar applications. In some cases you may need to invest in further R&D.
Every once in a while, you over-reach.
You may not know it till you fail, but if you don't try you never know what you can and cannot do.
But the more challenging the application, the more we move into an area of substantially increasing costs.
This means you start to look long and hard at the market and try to work out if the returns are there.
Failures are a sign that a company is trying to push the envelope. Usually it will be because there is a significant market, the problems are anticipated, and there is some confidence that they will be solved.
One such market is multiphase metering.
Look at the figures in the Neftemer paper; well head metering is a huge market.
Get it right and you can clean up.
However, whatever the technology, this is a market where total multiphase meter sales are so far very low, and coriolis has access to only a portion of it.
There is a long road ahead.
What you then do is look for similar but simpler markets to try and recoup some of the costs - and maybe find the same sort of problems there as in the original market.
So when any technology fails at new and challenging applications, it is not a negative.
It doesn't detract from the technology's success in general applications. Failures are a sign that the company is trying to push the boundaries and evolve even better technologies.
So there are two paths forward: the coriolis people will either accept defeat and leave the application alone, or they'll spend the money and solve the problems.
The reason to leave an application alone is if the returns do not promise to repay the costs.
Or if there is a better and simpler way to do it.
So no one says coriolis is not a good technology, and it is getting better, but it is not axiomatic that it is always the best choice, nor that it will never fail in some applications.
In MP metering, the size of the market and the installed base suggest the race is still wide open.
JMW