istvanb
- Nov 16, 2014
Folks,
First, excuse me if I'm asking something obvious; in my defense, I'm more of a software guy.
So I have a test stand with a 1000A and a 50A current transducer (I'll call them CTs; both are Hall-effect sensors). I need to measure currents up to 900A, but I also need to measure low currents, below 10A, accurately. The plan was to measure the high currents with the 1000A CT (0.5% accuracy) and the low currents with the 50A CT (1% accuracy).
What I have noticed, though: if I take a couple hundred readings with my data acquisition device, average those samples, and then subtract that average from every reading I take during my process (so basically a software offset), then the low-current readings from the 1000A CT and the 50A CT correlate extremely well. Each run of my process takes less than 10s, so I can easily do this offset before every run.
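In case it's clearer in code, here is a minimal sketch of that zeroing step in Python; the DAQ read is simulated with made-up offset and noise numbers, since the real reading would come from whatever driver my acquisition device uses:

```python
import numpy as np

rng = np.random.default_rng(0)

def read_current_amps():
    # Stand-in for one raw reading of the 1000A CT output.
    # Simulated as a fixed zero offset plus noise; in reality this would
    # be a call into the data acquisition hardware's driver.
    true_current = 0.0       # circuit de-energized while zeroing
    sensor_offset = 0.8      # made-up offset error, in amps
    noise = rng.normal(0.0, 0.3)
    return true_current + sensor_offset + noise

# Step 1: with the circuit de-energized, average a couple hundred samples.
zero_offset = np.mean([read_current_amps() for _ in range(200)])

# Step 2: during the (sub-10s) run, subtract that offset from each raw reading.
def corrected(raw_reading):
    return raw_reading - zero_offset

print(f"estimated zero offset:  {zero_offset:.3f} A")
print(f"corrected zero reading: {corrected(read_current_amps()):.3f} A")
```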
I have verified the correlation across several CTs and it holds for all of them, and I am confused, because generally speaking an instrument should not be expected to read accurately in the bottom 1% of its range.
I have tried to understand what I am seeing, and my explanation is this: I take the offset reading in a state where I know exactly how much current is in the system (0A, since the circuit is not energized). So by offsetting by this value I can achieve essentially 0.00% error at 0A. That reading combines all the error sources (offset, linearity, gain, temperature, etc.). As I start reading values close to this point, those errors start to change, but the change across 1% of the instrument's range (0-10A on a 1000A unit) is so small that I can practically achieve very good accuracy in this region.
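Putting rough numbers on that reasoning (with my assumption, which may not match how the datasheet states the spec, that the 0.5% figure is relative to full scale, that zeroing removes the fixed offset part, and that the rest scales roughly with the reading):

```python
# Back-of-the-envelope error budget near 10 A on the 1000 A CT.
# Assumption (mine, not the datasheet's wording): the 0.5% spec is of
# full scale, and after zeroing only the reading-proportional terms
# (gain/linearity) matter, plus whatever drifts during the <10 s run.
full_scale = 1000.0   # A
spec = 0.005          # 0.5%
reading = 10.0        # A, the low-current region of interest

worst_case_at_face_value = spec * full_scale   # 5.00 A if the spec is taken as-is
reading_proportional_part = spec * reading     # 0.05 A if those terms scale with reading

print(f"spec taken at face value:            {worst_case_at_face_value:.2f} A")
print(f"reading-proportional part near 10 A: {reading_proportional_part:.2f} A")
```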
If this is true, then I can drop the 50A CT from the system and just use the 1000A one with a software offset.
Again, my theory may not be accurate.
Can you explain what's going on? Your help is really appreciated!
Thanks,
Istvan