We were in the same position; we finally installed a new chiller on the roof and tied it into the existing piping to the CRAC units with shut-off valves, so we could use the central plant system in an emergency. Consider that virtually all data centers have more cooling equipment than is needed, for two reasons: first, the need for reliability, and second, the practice of sizing equipment for future loads. Most facilities install N+1 redundancy for every five units, and many data centers are designed for future growth. The result is 25% to 50% more air handling capacity than is needed. This sounds good, but it's not. Combining excess capacity with return temperature and humidity (T & H) control inevitably produces elevated supply temperatures, because the only ways to modulate cooling capacity are to vary the volume of air or its temperature. In a data center with a room set point of 72°F, a 54°F design supply temperature, and 50% excess cooling capacity, the result will be an average supply temperature of 62°F.
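The arithmetic behind the elevated supply temperature can be sketched with a simple proportional model: with excess capacity, each unit needs only a fraction of its design temperature differential to meet the load. This is a hypothetical illustration (the function name and model are assumptions, not from the article); it lands near, but not exactly on, the 62°F figure cited, which may reflect field measurements rather than an idealized calculation.

```python
def average_supply_temp(room_setpoint_f, design_supply_f, excess_capacity):
    """Estimate the supply temperature when cooling capacity exceeds the load.

    With 50% excess capacity (excess_capacity = 0.5), each unit needs only
    1/1.5 of its design temperature differential to carry the actual load,
    so return T & H control drives the supply temperature up.
    """
    design_delta = room_setpoint_f - design_supply_f         # e.g. 72 - 54 = 18°F
    required_delta = design_delta / (1.0 + excess_capacity)  # 18 / 1.5 = 12°F
    return room_setpoint_f - required_delta                  # 72 - 12 = 60°F

# Room set point 72°F, design supply 54°F, 50% excess capacity:
print(average_supply_temp(72, 54, 0.5))  # 60.0 — close to the 62°F the article cites
```

The point of the sketch is the direction of the effect, not the exact number: any excess capacity under return-air control pushes the average supply temperature well above the 54°F design value.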
The final result is loss of humidity control and an increase in localized hot spots. The single biggest contributor to the inefficiency of CT is by-pass air. We define by-pass air as cooling air that returns to the ACU without doing any work; that is, air that never passes through the electronic equipment and so never gets warmer by extracting heat.
Consider that most computers and other electronic equipment use front-to-back airflow to dissipate the heat generated in the box, and 100% of the power into the computer is turned into heat. A typical computer will elevate the air temperature between 30°F and 40°F from the intake to the exhaust.
If the return air is only 10°F to 20°F warmer than the supply, then a significant amount of the supply air is mixing with the hot air coming from the back of the cabinets, "diluting" the hot air before it returns to the ACU without doing any work to extract heat. In a typical data center, the by-pass airflow rate is an astounding 60%.