flickstar
Electrical
- Oct 10, 2001
- 16
Ladies and Gents,
I have recently been using a power distribution system simulation package and have noticed some weird occurrences in fault simulations. I am hoping someone may be able to shed some light on reasons for the results I am seeing.
I am finding that my fault currents vary depending on downstream load. Some of my observations are:
- Induction motor loads contribute to the fault current. I know this does happen in practice; however, I have always thought that such currents are on the order of the motor starting current, and this is not what I am seeing in my simulations.
- Static loads (of constant impedance) increase the fault current at the source. I have no explanation for this.
- Capacitor loads actually decrease the fault current at the source. I would have thought that capacitors would feed into a fault.
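For reference, here is the rough check behind my expectation for the motor contribution. The assumption (mine, not the package's) is that at fault inception the motor behaves as a source behind its locked-rotor impedance, so its contribution should be about its starting current. All numbers are illustrative, not from my actual system:

```python
# Rough per-unit check of an induction motor's initial fault contribution.
# Assumption: the motor acts as a source behind its locked-rotor impedance,
# so its contribution is roughly the starting (locked-rotor) current.

def motor_fault_contribution_pu(v_prefault_pu, locked_rotor_multiple):
    """Approximate initial motor fault contribution, in multiples of
    motor full-load current (per unit on the motor base)."""
    # Locked-rotor impedance in pu = V / I_start = 1 / multiple (at 1.0 pu V)
    z_locked_rotor = 1.0 / locked_rotor_multiple
    return v_prefault_pu / z_locked_rotor

# Illustrative motor: starting current of 6x full-load current
contribution = motor_fault_contribution_pu(v_prefault_pu=1.0,
                                           locked_rotor_multiple=6.0)
print(contribution)  # ~6x full-load current, i.e. about the starting current
```

So for a motor with a 6x starting current I would expect roughly a 6 pu contribution on the motor base, which is why the larger values from the simulation surprise me.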
I figured I could get some consistent results by disconnecting all load, but I still get higher-than-expected results. Using the source impedance, my trusty calculator gives the per-unit fault level as FL = 1/Xs. However, the simulation package, with only the source in service, gives a result about 4% higher.
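For concreteness, here is the hand calculation, along with the only hypothesis I can come up with for the 4%: perhaps the package assumes a pre-fault voltage above 1.0 pu. The Xs value and the 1.04 pu figure below are both my own guesses for illustration, not documented settings of the package:

```python
# Per-unit fault level from the source impedance alone.
# Xs is illustrative; the 1.04 pre-fault voltage is my guess at what the
# package might be assuming, not something I've confirmed in its settings.

xs = 0.1                      # source reactance, pu on the system base
fl_hand = 1.0 / xs            # my calculator result: FL = 1/Xs = 10.0 pu

v_prefault = 1.04             # hypothetical pre-fault voltage in the package
fl_package = v_prefault / xs  # 10.4 pu, i.e. 4% above my hand result

print(fl_hand, fl_package)
```

If the package really does drive the fault from a pre-fault voltage of 1.04 pu rather than 1.0 pu, that would account for the discrepancy exactly, but I have no evidence that this is what it is doing.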
As someone who is regularly given results from this package to use for calculating protection settings, I am worried about an overestimation of fault currents. How can I guarantee my settings are sufficient to detect a fault when I don't have much room to play with between load current and fault current?
If anyone can provide any insight into why I am seeing the above results, or point out any error I have made, it would be greatly appreciated.
Cheers!