
Relevance of 3% voltage drop on the branch circuit vs. a total of 5%.

Status
Not open for further replies.

cuky2000 (Electrical), Aug 18, 2001
The NEC recommends a maximum total voltage drop (VD) of 5% across feeders and branch circuits combined, and 3% across the branch circuit alone, implying 2% on the load circuit.
Since most loads can operate efficiently at 95% of the nominal nameplate voltage, what is the relevance of meeting the 3% voltage drop on the branch circuit?
 
A 3% voltage drop means roughly a 6% decrease in motor torque if the load served is an induction motor, since torque varies with the square of the voltage; a 5% drop means roughly a 10% decrease in output torque. The effect is most significant at starting, when the large current draw produces a much larger voltage drop. If you have a NEMA Design B motor with a pull-up torque of 140% (the lower end of the range) and your feeder is designed for a 3% VD at rated load amps, that 3% VD becomes an 18% VD at 600% locked-rotor amps. The motor then develops only about 67% of its rated torque (0.82^2 ≈ 0.67), and it will be hard for the motor to pull up to speed, as 140% × 67% is roughly 94%. You are fine if your motor has a pull-up torque of 190%: 190% × 67% ≈ 127%.
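The arithmetic above can be sketched as a toy calculation (not code from the thread; the 6x locked-rotor current multiple, the torque ∝ V² relation, and the 140%/190% pull-up torques are the assumptions stated in the post):

```python
# Toy calculation: voltage drop grows with the starting current, and
# induction-motor torque varies roughly with the square of the voltage.
def torque_at_start(vd_running_pct, lra_multiple=6.0, pullup_torque_pct=140.0):
    """Estimate available pull-up torque (% of rated) during starting."""
    vd_start_pct = vd_running_pct * lra_multiple   # 3% at FLA -> 18% at 600% LRA
    v_pu = 1.0 - vd_start_pct / 100.0              # per-unit terminal voltage
    return pullup_torque_pct * v_pu ** 2           # torque ~ V^2

print(round(torque_at_start(3.0), 1))                           # 94.1
print(round(torque_at_start(3.0, pullup_torque_pct=190.0), 1))  # 127.8
```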
 
The total voltage drop may not exceed 5%.
Of that 5%, no more than 3% may be on branch circuits.
If the voltage drop on the branch circuits is less than 3%, the percentage voltage drop on the feeders may be increased, as long as the total does not exceed 5%.
If the voltage drop on the feeders is less than 2%, the voltage drop on the branch circuits is still limited to 3%, and the total voltage drop will then be less than 5%.
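These rules reduce to a one-line check (a sketch; the 3% branch and 5% total figures are the NEC-recommended, non-mandatory values discussed in this thread):

```python
# Sketch of the split rule stated above: branch VD capped at 3%,
# combined feeder + branch VD capped at 5%.
def vd_split_ok(feeder_pct, branch_pct):
    """True if the feeder/branch voltage-drop split meets the 3%/5% recommendation."""
    return branch_pct <= 3.0 and (feeder_pct + branch_pct) <= 5.0

print(vd_split_ok(2.0, 3.0))  # True: the classic 2%/3% split
print(vd_split_ok(1.0, 3.5))  # False: branch over 3% even though the total is under 5%
print(vd_split_ok(2.5, 2.5))  # True: feeder may exceed 2% while branch stays under 3%
```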

--------------------
Ohm's law
Not just a good idea;
It's the LAW!
 
Hello Parchie, we understand the operating efficiency. The one issue I am not sure about is why we must maintain 3% on a branch circuit if we meet the maximum voltage drop at the load side. Let's illustrate with another example:

- Branch circuit (transformer to main distribution panel): 4% VD (exceeds 3%)
- Load circuit (main distribution panel to load): 1% VD
- Total voltage drop from source to load: 5% (meets the maximum %VD threshold)
 
In my opinion, the law is ANSI C84.1-2020. For a rated voltage of 120/240 V, the utility may deliver power at the Range A minimum service voltage of 114/228 V, while the Range A utilization range bottoms out at 108/216 V, a margin of 114 - 108 = 6 V. But if you receive 114 V, that is already a 120 - 114 = 6 V (5%) drop from nominal. A further 2% on the feeder and 3% on the branch is another 5%, so if we take 120 V as rated, only about 90% reaches the end.
If you get 114 V at the service entrance, you have to assure the furthest motor 108 V (sometimes 104 V).
Because sometimes, if there is a short-duration problem with power delivery, Range B applies as the minimum, and the standard then permits a service voltage of 110 V. In that case 120 - 104 = 16 V (104/120 = 86.7%).
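The arithmetic in that post, laid out step by step (the 114 V Range A service minimum on a 120 V base is the figure quoted above; drops are taken as percentages of the 120 V nominal, as the post does):

```python
# Worst-case stacking of utility service tolerance and wiring voltage drop.
nominal = 120.0
service_min_range_a = 114.0                  # Range A minimum service voltage (from the post)

service_drop_pct = (nominal - service_min_range_a) / nominal * 100
print(round(service_drop_pct, 1))            # 5.0 -> utility may already deliver 5% low

wiring_drop_v = nominal * 0.05               # 2% feeder + 3% branch, on the 120 V base
v_at_load = service_min_range_a - wiring_drop_v
print(v_at_load)                             # 108.0
print(round(v_at_load / nominal * 100, 1))   # 90.0 -> only ~90% of nominal at the load
```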
 
Because it is the code.

Hi Waross, I'm not sure about the Canadian code, but in the US the NEC is not mandatory on voltage drop, and electric utilities are legally exempt from following the NEC. The voltage issue is not a code violation but rather an efficiency issue.

Quoting one reference: "Contrary to common belief, the NEC does not mandate a maximum voltage drop. It merely suggests, in the Fine Print Notes to 210.19(A), 215.2(A)(4), 230.31(C), and 310.15(A)(1), that you adjust for voltage drop when sizing conductors. We need to remember that Fine Print Notes are recommendations, not requirements [90.5(C)]."


From the efficiency standpoint, if the load operates in the acceptable range within -5% of the equipment nameplate voltage, I still have difficulty understanding the rationale for maintaining a 3% VD on the branch circuit, considering the load operates satisfactorily even when the branch circuit exceeds 3%.
 
Every piece of equipment is assigned a rated utilization voltage per NEMA. As an example, 120 V equipment operating on a 120 V nominal 2-wire single-phase system has:
1) Nominal utilization voltage = 115 V
2) Maximum utilization voltage = 126 V
3) Minimum utilization voltage = 108 V
The total 5% is based on the above limits.
Hence, it is the designer's responsibility to ensure these voltages are always maintained at the terminals of the utilization equipment.
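As a quick sanity check of that band (a sketch; the 108 V and 126 V limits are the NEMA figures quoted in the post, on a 120 V nominal system):

```python
# Check a service voltage and total wiring voltage drop against the
# NEMA utilization band quoted above.
NOMINAL = 120.0
UTIL_MIN = 108.0   # minimum utilization voltage (from the post)
UTIL_MAX = 126.0   # maximum utilization voltage (from the post)

def terminal_voltage_ok(service_v, total_vd_pct):
    """Is the voltage at the equipment terminals inside the utilization band?"""
    v_term = service_v * (1 - total_vd_pct / 100)
    return UTIL_MIN <= v_term <= UTIL_MAX, round(v_term, 1)

print(terminal_voltage_ok(120.0, 5.0))   # (True, 114.0)  nominal service, full 5% drop
print(terminal_voltage_ok(114.0, 6.0))   # (False, 107.2) low service, drop over budget
```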
 
Keeping the branch-circuit voltage drop within 3% helps ensure proper functioning of electrical equipment and protection from damage, and it keeps the design in line with recognized guidelines such as those in the NEC (National Electrical Code).
 
Codes are supposed to be based on practical reasoning, so simply stating that we have to do something because the code says so is not a sufficient explanation for me. The NEC is regularly revised to remove sections that no longer have solid justification. Until I understand the reasoning behind a section of code, it reads like an impossible-to-memorize list of random statements; once I understand the reasoning, that section becomes part of my intuitive world view.

cuky - I had assumed the 3%/2% split was typically included so that the design of the branch circuit was somewhat independent of the load circuit. Although the FPN states that this split is one possible way to achieve reasonably efficient operation, it leaves open the possibility that other alternatives are available for reasonably efficient operation.

In most cases I assume the actual voltage drop is lower than the code-calculated voltage drop due to lighter loading. I would be curious how the actual voltage in a building designed for a 4%/1% split would compare with one designed for a 3%/2% split. Designing an installation with an alternative 4%/1% split might leave a trap for a future engineer during a renovation project if they assume a new load circuit can have a 2% voltage drop.

Real-world efficiency is also impacted by other variables like harmonics, phase voltage imbalance, ambient temperature, and utility-side voltage variations. I suspect the overall efficiency would be poor if every factor were simultaneously at the extreme of its allowable range.



 
@Baconforlife,
The reason I see is that main feeder lines are seldom changed or upgraded, while the conductors serving the final loads may be repurposed or have loads added to existing panels. That is why the people before us decided it was practical to set a 2% limit on the VD from the source to the distribution board, and chose a more lenient 3% VD limit for the downstream circuits.
 
It's a guideline. You follow it when you don't do any detailed analysis.

But if you're checking all the loads and determine that the voltages reaching the loads are good, then that's also fine.
 
Maybe I'm working with different codes, but from the dim and distant past I remember some building design software, Cymap (long gone), with an electrical package. It allowed you to modify the size of the main feeder cable, which in turn would modify the size of the feeders; an increase in the size of the main feeder might let you drop the size of all of the branches. So this was all about cost saving. 2% on the main feeder seems fairly generous to me. On your side of the pond I'm thinking you are working with a few less volts than we do in the UK.
 