RyreInc
Electrical
- Apr 7, 2011
We are running a cyclical test that involves turning on a 12VDC 100W incandescent lamp (8A nominal, up to ~20A inrush) for 3s every 8s. The primary switch for this is a MOSFET inside a control board.
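For reference, the nominal figure follows directly from the lamp rating, and the inrush is higher because a cold tungsten filament has much lower resistance than a hot one:

$$I_{\text{nom}} = \frac{P}{V} = \frac{100\ \mathrm{W}}{12\ \mathrm{V}} \approx 8.3\ \mathrm{A}$$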
The lamp circuit resistance is measured between cycles with a milliohm meter: a 23A-rated electromechanical contactor (Allen-Bradley 100-C23EJ10) switches out the power circuit, and a smaller relay switches in the measuring circuit. Both the positive and negative legs of the lamp are switched. The contactor was programmed to stay switched in whenever the resistance was not being measured, so the MOSFET did the actual lamp switching.
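For concreteness, the original sequencing looked roughly like this. It's a minimal sketch in Python, not the actual control program; the `TestIO` class and its methods (`set_contactor`, `set_mosfet`, `set_measure_relay`, `read_milliohms`) are hypothetical stand-ins for the real control-board interface.

```python
import time

class TestIO:
    """Hypothetical stand-in for the control-board I/O; replace with
    the real driver. Each method just prints the action here."""
    def set_contactor(self, on):     print(f"contactor {'IN' if on else 'OUT'}")
    def set_mosfet(self, on):        print(f"mosfet {'ON' if on else 'OFF'}")
    def set_measure_relay(self, on): print(f"measure relay {'IN' if on else 'OUT'}")
    def read_milliohms(self):        return 0.0   # placeholder reading

def original_cycle(io):
    """One 8 s cycle as originally programmed: the MOSFET makes and
    breaks the lamp current; the contactor only carries it."""
    io.set_mosfet(True)            # lamp ON via MOSFET (contactor already in)
    time.sleep(3.0)                # 3 s on
    io.set_mosfet(False)           # lamp OFF via MOSFET

    io.set_contactor(False)        # switch out the power circuit
    io.set_measure_relay(True)     # switch in the milliohm meter
    r = io.read_milliohms()        # lamp circuit resistance
    io.set_measure_relay(False)
    io.set_contactor(True)         # contactor back in, carrying only
    time.sleep(5.0)                # remainder of the 8 s period
    return r

original_cycle(TestIO())
```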
After about 150k cycles we started seeing lamp faults (detected by current sensing) more and more frequently, each of which would stop the cycle. Taking no action except restarting the cycle allowed the test to carry on for anywhere between a dozen and a few hundred additional cycles before the next fault.
Finally I changed the program so that the contactor was doing the actual switching rather than the MOSFET. Since then the test has run uninterrupted for over 60k additional cycles and counting.
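The change amounted to the following, reusing the hypothetical `TestIO` helpers from the sketch above: the MOSFET is simply held on, and the contactor makes and breaks the lamp current itself.

```python
def modified_cycle(io):
    """Same 8 s cycle after the change: the contactor makes and
    breaks the lamp current itself (MOSFET simply held on)."""
    io.set_mosfet(True)            # MOSFET held on throughout
    io.set_contactor(True)         # lamp ON: contacts see the ~20 A inrush
    time.sleep(3.0)                # 3 s on
    io.set_contactor(False)        # lamp OFF: contacts break the ~8 A load

    io.set_measure_relay(True)     # power circuit already out; measure
    r = io.read_milliohms()
    io.set_measure_relay(False)
    time.sleep(5.0)                # remainder of the 8 s period
    return r
```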
I am aware of wetting current, i.e. that there is a minimum current needed to break through the oxide layer of non-noble metal contacts. 8A should be plenty, let alone the additional inrush transient. I believe, but could not verify, that there is also a minimum voltage for wetting to work, but it would seem that if no current is flowing due to oxide then there would be 12V across that contact, which is plenty to break through the oxide.
So why is the contactor more reliable when it is doing the switching, rather than just the carrying, of the lamp current?