Eng-Tips is the largest engineering community on the Internet

Intelligent Work Forums for Engineering Professionals

Chilled water coil performance: 10 deg F delta-T versus 12 deg F

Status
Not open for further replies.

BronYrAur

Mechanical
Nov 2, 2005
798
I have a building that was designed and has been operating under a 10 deg F delta-T on the air handler chilled water coils for a long time. For lack of a better explanation, it has been "ordered" that the AHU coil settings be changed to 12 deg F delta-T.

Is this a big deal? Will I see a significant change in performance? I assume the cooling capacity will go down a little, but any idea how much?

Right now the average EAT DB/WB is around 82/68 deg F, LAT DB/WB is around 54/53 deg F, EWT is 45 deg F, and LWT is 55 deg F. They are wanting to let the LWT float up to 57 deg F.
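For a rough feel for the flow change, the usual water-side heat balance Q (Btu/h) ≈ 500 × GPM × ΔT can be sketched in Python. The 100-ton coil load below is an assumed example for illustration, not a figure from this thread:

```python
def gpm(load_btuh, delta_t):
    """Chilled water flow (GPM) for a given coil load and water delta-T,
    using the common approximation Q (Btu/h) = 500 * GPM * dT."""
    return load_btuh / (500.0 * delta_t)

load = 100 * 12000            # assumed 100-ton coil load, Btu/h
g10 = gpm(load, 10.0)         # flow at a 10 F delta-T
g12 = gpm(load, 12.0)         # flow at a 12 F delta-T
print(g10, g12)               # 240.0 GPM vs 200.0 GPM
print(round(100 * (1 - g12 / g10), 1))  # about 16.7 percent less flow
```

Same load, wider delta-T: capacity need not drop if the coil can actually produce the wider delta-T; the flow simply goes down.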
 
Just curious -- why would they do that? If you keep the same chilled water flow rate, you actually get more capacity.

 
A coil is always selected with some "fat" -- it depends on the size of your coil.
Is there only one coil in the building? Are all the coils changing to 12F?

Operating at a 12F delta-T is fine if the coil area, number of rows, etc. can deliver the capacity at 12F DT; check against the manufacturer's coil selection.
Also, make sure your coil water velocity does not change too much (it should stay between roughly 3 and 5 FPS) as the GPM is reduced at the coil balancing valve (the only way you can get a 12F DT). Too low a GPM could result in laminar flow in the coil and air build-up in the system.
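That tube-velocity caution can be rough-checked with a short sketch. The 0.555 in tube ID (typical for 5/8 in OD coil tubes) and the 2.5 GPM-per-circuit figure are assumptions for illustration, not data from this thread:

```python
import math

def tube_velocity_fps(gpm_per_circuit, tube_id_in):
    """Water velocity (ft/s) in one coil circuit; 448.831 GPM = 1 ft^3/s."""
    area_ft2 = math.pi / 4.0 * (tube_id_in / 12.0) ** 2
    return (gpm_per_circuit / 448.831) / area_ft2

v10 = tube_velocity_fps(2.5, 0.555)                # assumed flow at 10 F delta-T
v12 = tube_velocity_fps(2.5 * 10.0 / 12.0, 0.555)  # same coil, rebalanced for 12 F
print(round(v10, 2), round(v12, 2))                # roughly 3.3 vs 2.8 ft/s
```

In this assumed case the 12F flow drops the velocity just below the 3 FPS guideline, which is exactly the check being recommended.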

Now, one could change all the coils in a building, get more capacity from the available CHW flow at the pump, and add a few fan coil units here and there at 12F DT -- and there you have it: you have accommodated a small building extension with just a setpoint change.
 
My guess is that they are trying to get more capacity out of the chillers, which were designed for a 12 deg delta-T originally, or there is a 'low delta-T' problem in the building.

I'd consider a pressure-independent control valve on the chilled water coils to help maintain the 12 deg F delta-T. It is easy to overpump a chilled water coil.
 
The chillers are being replaced, and the in-house spec calls for 12 deg DT on any systems that are modified.

Sounds like I shouldn't see much difference at the AHUs.
 
It depends very much on how you control the cooling coil -- is it an ordinary two-way valve?

As Chris said, a low delta-T problem is the explanation that comes to mind for why this is being implemented at all.

In general, a larger delta-T means lower flow and a higher average coil surface temperature, both of which tend to push coil capacity slightly lower.

The real dependency is more complex, however, and can only be taken from the manufacturer's specs, but small differences probably cause negligible changes.
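One way to put a rough number on that "slightly lower capacity" effect is a counterflow LMTD comparison using the temperatures from the original post. This is a dry-bulb-only sketch that ignores the (large) latent load, so it is only indicative:

```python
import math

def lmtd(dt1, dt2):
    """Counterflow log-mean temperature difference."""
    return (dt1 - dt2) / math.log(dt1 / dt2)

# Air 82 -> 54 F against water 45 -> 55 F (10 F dT) or 45 -> 57 F (12 F dT)
lmtd_10 = lmtd(82 - 55, 54 - 45)
lmtd_12 = lmtd(82 - 57, 54 - 45)
print(round(lmtd_10, 1), round(lmtd_12, 1))     # about 16.4 vs 15.7 F
print(round(100 * (1 - lmtd_12 / lmtd_10), 1))  # roughly a 4-5 percent smaller driving dT
```

With the coil UA unchanged, a few percent less driving temperature difference is consistent with "small differences probably cause negligible changes."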
 
If you want to change the system dT, you will need to replace the chilled water coils. The chilled water valve will modulate to control temperature, and the required chilled water flow depends on the coil characteristics, i.e. fins per inch, number of rows, tube diameters.

If you don't change the coils and raise the system dT setpoint to 12F, you will just end up with low return water temperatures (i.e. the same temperature as you get now).

Unfortunately, the laws of physics are stubborn, won't listen to "orders", and will do as they have done for the last 13 billion years or so (I think that's how old the universe is?).
 
Hi BronYrAur and all,

I thought I'd pitch in since I have experience with expanding delta-T on existing coils. First, I'd have to ask where the 12-degree figure came from. Was it chosen by an engineer who evaluated the coils and chillers, or by a bright young MBA who read an article in a trade magazine?

Regardless of the answer, it's best if the coils were originally chosen for 12-degree (or even better 15-degree) delta T. However, in each case that I have experienced, it made little difference in the cooling effect at the zone level -- even going from 10 degree to 15 degree delta T. The only real change was greatly reduced pumping energy.

I'll respectfully disagree with Chris, I prefer not to employ pressure-independent valves. All they really do in a hydronic system is raise the required pump pressure (sometimes by as much as 10 feet H2O) and waste some energy. With a completely pressure-dependent variable flow system, the only adverse effect will be some overflow at startup after setback, but it is limited very well by the pump curve -- and it resolves itself rapidly, as soon as the nearest zones cool down into their control ranges.

So, go for two-way valves (with enough 3-way valves to maintain minimum required flow through the pump), reduce chilled water supply temperature five degrees, and reduce the pump speed to get the required wider delta T.

Energy minimization philosophy: Make the water as cold as you can (within the constraints of humidity control and equipment limits), and pump as little of it as possible, with as few permanent restrictions (such as balancing valves or pressure-independent control valves) as possible.

Works the same on the air side too.

Good on ya, and I'm willing to hear others' opinions (and reasoning) for the pressure-independent argument. My statements are based on implementing what I described in over 20 large systems (one of them a major university) over the years. All of them worked just fine, and electric bills were slashed considerably. You're still doing the same amount of cooling, just not pumping as much water. More savings are available in the primary loop, too -- but that's a different issue.
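The pumping-energy claim can be roughed out with the pump affinity laws, assuming the pump speed is trimmed so the system rides a fixed system curve with negligible static head (real systems with control valves and static head save less):

```python
def pump_power_ratio(flow_ratio):
    """Ideal affinity-law power ratio for a speed-trimmed pump on a
    fixed system curve: P2/P1 = (Q2/Q1)**3."""
    return flow_ratio ** 3

r = pump_power_ratio(10.0 / 12.0)  # flow cut going from 10 F to 12 F delta-T
print(round(100 * (1 - r), 1))     # about 42.1 percent less pumping power, ideal case
```

The cubic relationship is why a modest widening of the delta-T can slash the electric bill even though the cooling delivered is unchanged.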

Good on all of ya!

Goober Dave
 
The decision to switch to 12 deg DT probably wasn't made by an engineer or a bright young MBA; it was made by government bureaucrats. This is a government facility.

When chillers are replaced, the standard "spec" is to switch to 12 deg DT and make the system variable flow where possible. That is, variable primary only in lieu of primary/secondary.



 
BronYrAur,
I have yet to see such a technical issue decided by a bureaucrat; maybe the bureaucrat was advised by a consultant, or by his maintenance guy. But I doubt that a bureaucrat actually made such a decision.

But my post today is for DrWeig (Sorry Dr, but I am in disagreement with your statement)

I wonder, Dr, how your chiller energy changed when you reduced the CHW temperature -- did you monitor that? Your reasoning of supplying water as cold as you can and pumping as little of it as you can goes against the ASHRAE 90.1 requirement for chilled water temperature reset.
As long as the dew point is satisfied, one should supply water as warm as possible and run as wide a delta-T as possible, in my opinion.

You say it works on the air side too. Sure, but there are minimum airflows involved to maintain air changes, ventilation, smoke control, etc. that make cold-air systems not feasible in most places. Too-cold air will result in very low CFM/SF, and you will be in reheat mode for most of the building -- again, you'd be in violation of the ASHRAE prohibition on simultaneous heating and cooling.
 
Hi cry22,

All good points.

I'll preface these answers with the note that I've dealt with existing systems that were significantly oversized. In the properly-sized case, your methods may indeed be the best.

As to the chiller energy, we compared the slight rise in power (in some cases none) to the reduction in pumping energy, and it worked out better in our cases. This is mostly because they were existing systems that were significantly oversized. That slight rise in load was offset by putting the chiller at a more efficient operating point. You could do the same thing by allowing the return temperature to rise instead.

In a new system, close sizing of chiller versus load may very well make your point the better choice. If you look in the ASHRAE Applications handbook, there is a discussion (either chapter 35 or 40, can't remember) of the optimum pumping-versus-chilling tradeoff. When both are sized properly, there's a simple calculation to determine when to switch priorities. When they're both significantly oversized, the pump is easier to re-size (shave the impeller, add a variable speed drive, etc.).
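That pump-versus-chiller tradeoff can be roughed out as follows. The ~2 percent kW/ton penalty per degree of colder supply water is a common rule of thumb, and the 500 kW chiller / 75 kW pump sizes are assumed examples; an actual comparison needs manufacturer data:

```python
def chiller_penalty_kw(base_kw, deg_f_colder, pct_per_degf=0.02):
    """Assumed rule of thumb: ~2% more chiller power per deg F colder supply."""
    return base_kw * deg_f_colder * pct_per_degf

def pump_savings_kw(base_kw, flow_ratio):
    """Ideal affinity-law pump savings at reduced flow."""
    return base_kw * (1.0 - flow_ratio ** 3)

penalty = chiller_penalty_kw(500.0, 5)        # 5 F colder supply on a 500 kW chiller
savings = pump_savings_kw(75.0, 10.0 / 15.0)  # 10 F -> 15 F delta-T on a 75 kW pump
print(round(penalty, 1), round(savings, 1))   # 50.0 vs 52.8 kW in this assumed case
```

With these assumed numbers the two roughly cancel, which is why the answer is so sensitive to how oversized the existing pump is.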

There is also the extended-capacity range of the chillers to consider. When condenser water supply is driven lower, the most efficient point of operation may occur at a higher part-load ratio. In this case, moving the chiller load up a bit (colder chilled water) can make the penalty much smaller.

As to my philosophy versus 90.1, we can still do some degree of reset in order to satisfy the code, but there is a provision in the standard to prove that an alternate solution gives better results than the prescribed method. It would be a fun debate with a building official!

On the air side, you're good again -- if the VAV boxes are pressure-independent and/or ventilation air comes through the cooling system, you'll have to give up on extra-cold air or use reheat (which 90.1 does not permit in almost all cases). The buildings in my experience were all converted to pressure-dependent type (shutoff VAV), and outdoor air was a dedicated system. I think this is the very best choice.

Oh, and just for the record, the DR part of my handle is just my first two initials, not a designation for doctor.

I appreciate the kind demeanor of your post -- I've been flamed pretty heavily before when posting my opinion on balancing valves and pressure-independent stuff... I'm a long-time fan of Gil Avery's school of thought (many ASHRAE Journal articles fighting against the balancing-valve and pressure-independent camp). It's never been definitively tested. Wish I had the funds to do so...

Good on ya,

Goober Dave

 
Goober Dave, what is the pressure-independent valve you are talking about? Is that a flow-limiting valve like Frese sells? I don't really see the point of them either when you can get a zone dP control valve for $250 (50mm) and then use a stat valve at each terminal, plus pick up constant authority on the control valves at part system loads.
 
Hi Waramanga,

Belimo makes a line of pressure-independent control valves, as do several other manufacturers nowadays. Belimo's is the most popular.


They're a good technology for keeping the return-from-setback overflow and imbalance (all control valves wide open) from causing problems when that protection is necessary. I believe it's just not necessary in most applications, and the PI valve adds delta-P to the system. I've done the simulations on a hydronic cooling system with Memphis weather and load profiles. You get about 10 to 15 percent extra pumping power (based on a typical centrifugal pump curve) in the morning coming off setback, but only for 60 to 90 minutes, and only on the hottest Memphis days. For the remainder of the runtime, system delta-P is lower by 5 to 10 feet H2O -- a saving that far outweighs the cost of the short period at higher power.
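The cost of that extra delta-P can be estimated with the standard water horsepower formula. The 1000 GPM flow, 7.5 ft of added head, and the efficiency figures below are assumptions for illustration:

```python
def pump_kw(gpm, head_ft, pump_eff=0.70, motor_eff=0.93):
    """Pump input power from the water horsepower formula:
    bhp = GPM * head(ft) / (3960 * pump_eff); 1 hp = 0.746 kW."""
    bhp = gpm * head_ft / (3960.0 * pump_eff)
    return bhp * 0.746 / motor_eff

extra = pump_kw(1000.0, 7.5)  # assumed PI-valve delta-P on a 1000 GPM system
print(round(extra, 2))        # about 2.17 kW of continuous extra pump power
```

A couple of kW running all season is the kind of standing penalty being weighed against the short startup-overflow periods.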

So, PI valves have their place in the business -- just not in the most prevalent occupancies.

In any event, it's best to consider the entire system's optimal point, including cooling tower fans, chiller, all the pumping, and the air side. It's a sort-of hairy math problem to solve in an automation system controller, but it can be done...

Good on ya,

Goober Dave
 
If this is for a federal project, then FEMP-designated product standards are mandatory, as is the public law requiring a 40-year life cycle cost analysis complying with NIST Handbook 135.

ASHRAE does point to specific system curves for optimizing the system, which for my interests is pretty much IPLV kW/ton. That will probably take at least a year of trending on condenser temperatures, tonnage, and kW/ton.

Following the FEMP-designated standards, when you go to large centrifugals, until the last few years only one vendor could meet the full and part loads required for federal procurement, and the FAR includes a requirement to document the LCCA for optimal selection. I believe that is a major reason why you will find so many R-123 machines on federal projects. I had to address that issue 20 years ago; it's still the same today.

No arcane matters, just LCCA and meeting minimum FEMP-designated standards. As a federal employee who has just finished going through that process, I hope you are doing so now if working on a federal contract.

I don't agree with you on all points, DR, as the situation I was in (undersized constant primary/constant secondary plant, oversized AHU coils) differs from yours. I'm looking at about 30 central AHUs that are single-fan, dual-duct. The cold duct is sized for NFPA requirements, and the coils are sized for that flow (oversized, unless we go to dual-fan dual-duct and carry the NFPA 92A requirements on the cold deck). The FAR requirements, however, are the same.

I've contracted out the hydronic calculations, as a drawback to being a federal engineer is that we cannot (within a couple of years) get software loaded. At that point, the LCCA will determine whether we go variable primary or variable plus booster. Pump energy will drive that. If we go variable primary, then pressure-independent control valves are needed to operate against variable pressure in order to meet temperature setpoints. For critical areas, such as ORs, we need to maintain both humidity and temperature. Might not be the situation you are facing. If conditions were always the same, HVAC could be made into a video game and we'd be replaced by teenagers.
 
Thanks for the info, mauricestoker --

I'll have to defer to your experience on most of these topics. I'll look more closely at it when I get the next facility to work on!

Good on all of you,

Goober Dave
 
Maurice

Do you happen to know why it is that the government requires a 40-year LCC nowadays?

Maybe it is because NIST is in Maryland, and Maryland happens to be a big proponent of geothermal energy? Geothermal does not require replacement of primary equipment within a 40-year LCC, thus always winning a 40-year LCC when compared to a conventional system (local influence over a Federal mandate?).

It used to be 20 or 25 year LCC. How come it turned into a 40-year requirement?

Does anyone have any input out there about why a 40-year LCC? Why not 35? Or 50?

Just like George Kennedy asking Paul Newman in "Cool Hand Luke": Luke, why did you have to say 50 eggs? Couldn't you say 35? Or 37? 'Cause it sounded like a good round number.

Thanks
 
The requirement is from the Energy Independence and Security Act of 2007, PL 110-140. I don't think NIST had the major say in this, as their mandatory software, BLCC, still came with the provision of maximum 20-year life cycle even after effective date.

I think it has an even bigger impact for solar. Easier to fudge on NISTIR 85 utilities escalation/inflation numbers at 40 years out. We recently installed minor solar PV when the LCCA went from 25 to 40.
 