Many computer rooms and server closets start out with a single server and grow quickly as more capacity is added. New servers and routers are typically installed in existing server racks, or new rack cabinets are brought into the facility to extend capability. The same can happen within established server rooms and datacentres. Sometimes the extra capacity was designed in from day one; sometimes it is engineered into the available space to meet an additional requirement rather than as part of a refurbishment programme.
Wherever your facility is in terms of its design, working life, scale and complexity, it is vital to know your cooling requirements and to monitor and manage them continually. Rapid heat build-up within a server rack or room can lead to equipment failure and represents a potentially catastrophic fire risk. Just as important, within a computing environment cooling can account for around 40% of total energy consumption: the more precisely the cooling requirement is managed, the better the energy efficiency.
Heat is a form of energy and results from the power drawn by the various devices and systems within the server room or datacentre environment. In a server, heat is generated principally by the central processing units (CPUs), electronics and power supplies. For cooling calculations, the power input to the server in Watts or kilowatts is the total heat to be removed by the cooling system.
Other systems within the computing environment also generate heat because they are not 100% efficient. As stated by the Conservation of Energy, energy can neither be created nor destroyed but only changes from one form to another. UPS systems, for example, are not 100% efficient and will add to the cooling load depending on their loading and the state of battery charge.
To calculate the cooling load it is therefore important to estimate the total load in Watts or kilowatts to be cooled. Other aspects to consider include room size, rack arrangement and air flow.
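The estimate described above can be sketched in a few lines. This is a simplified illustration, not a sizing tool: the 95% UPS efficiency and the example loads are assumptions chosen for demonstration, and a real calculation would also account for lighting, people and solar gain.

```python
# Sketch: estimating the total heat load to be removed, in kilowatts.
# The UPS efficiency and example figures below are illustrative assumptions.

def cooling_load_kw(it_load_kw, ups_efficiency=0.95, other_loads_kw=0.0):
    """Total heat to remove, in kW.

    Power drawn by IT equipment is ultimately dissipated as heat, so the
    IT load in kW equals the IT heat load in kW. A UPS that is, say, 95%
    efficient dissipates the remaining fraction of the power it passes.
    """
    ups_losses_kw = it_load_kw * (1.0 / ups_efficiency - 1.0)
    return it_load_kw + ups_losses_kw + other_loads_kw

# Example: 20 kW of servers behind a 95%-efficient UPS, plus 1 kW of
# lighting and other room loads.
total = cooling_load_kw(20.0, ups_efficiency=0.95, other_loads_kw=1.0)
print(f"Cooling load: {total:.2f} kW")  # ~22.05 kW
```

The point of the UPS term is that the losses scale with load: a lightly loaded UPS at poor efficiency can add a surprisingly large share of the total heat.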
For more information on the Conservation of Energy and Laws of Thermodynamics visit:
The ideal temperature range for an IT closet, computer room, server room or datacentre is 18-27˚C, with a relative humidity of 45-50%. If UPS systems and their lead acid batteries are installed within the server room or data hall, the recommended temperature range is 20-25˚C.
This revised range is recommended because lead acid UPS batteries are sensitive to high temperatures. Whilst performance in terms of discharge capacity rises with temperature, design life reduces. The rule of thumb is that design life halves for roughly every 10˚C sustained above the 20˚C rated ambient. This means that at 30˚C, nominal 5-year and 10-year lead acid batteries can be expected to last only 2-3 and 4-5 years respectively.
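The rule of thumb above can be expressed as a simple halving formula. This is a rough sketch of general industry guidance only: the 10˚C halving interval and 20˚C reference ambient are assumptions, and real service life depends on the specific battery chemistry, charge regime and manufacturer data.

```python
# Sketch of the rule of thumb: lead acid design life roughly halves for
# every 10 degC sustained above a 20 degC rated ambient. Both figures are
# generic assumptions, not values from any specific datasheet.

def derated_life_years(design_life_years, ambient_c,
                       rated_c=20.0, halving_interval_c=10.0):
    """Estimated service life at a sustained ambient temperature."""
    excess = max(0.0, ambient_c - rated_c)
    return design_life_years * 0.5 ** (excess / halving_interval_c)

# A 10-year design-life battery held at 30 degC:
print(round(derated_life_years(10, 30), 1))  # 5.0 years
# The same battery at 40 degC:
print(round(derated_life_years(10, 40), 1))  # 2.5 years
```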
Whilst heat kills lead acid batteries, the same is not true of lithium-ion batteries. Using a lithium-ion UPS within server rooms and datacentres allows a wider operating temperature range. Most IT electronic systems will operate up to 40˚C without derating. ASHRAE therefore recommends higher temperature ranges, up to 30˚C, to improve energy efficiency where the higher ambient does not inhibit work by engineers and technicians.
For more information see: https://tc0909.ashraetcs.org/documents/ASHRAE_TC0909_Power_White_Paper_22_June_2016_REVISED.pdf
Energy efficient cooling doesn’t only deliver lower running costs. The more efficient the cooling, the lower the load placed on air handlers, fans, air conditioners, heat exchangers and even liquid cooling systems. There are several steps that can be taken to improve energy and cooling efficiency:
The most important step towards managing cooling systems in the most energy efficient and effective way is to install environment monitoring. This type of system typically sits within a server rack as an IP-enabled device for remote monitoring. Sensors connected to the monitoring device measure temperature and humidity within the server rack(s) and the room; additional sensors can be installed for smoke, fire, water leakage and security. Should a sensor register a reading above a set threshold, the environment system will issue an alarm via SMS text alert or email to specific users and to a building management or data centre infrastructure management (DCIM) system.
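The threshold-and-alarm logic described above can be sketched as follows. The sensor names, threshold values and the `notify()` stub are illustrative assumptions; a real device would push events over SNMP, email or a BMS/DCIM integration rather than printing.

```python
# Sketch of environment-monitor alarm logic: compare sensor readings
# against configured thresholds and raise an alert for each breach.
# Sensor names and limits below are illustrative assumptions.

THRESHOLDS = {
    "rack_temp_c":  {"max": 27.0},
    "room_temp_c":  {"max": 27.0},
    "humidity_pct": {"min": 45.0, "max": 50.0},
}

def check_readings(readings):
    """Return a list of (sensor, reading, reason) tuples for breaches."""
    alarms = []
    for sensor, value in readings.items():
        limits = THRESHOLDS.get(sensor, {})
        if "max" in limits and value > limits["max"]:
            alarms.append((sensor, value, f"above {limits['max']}"))
        if "min" in limits and value < limits["min"]:
            alarms.append((sensor, value, f"below {limits['min']}"))
    return alarms

def notify(alarms):
    # A real deployment would send SMS/email or post to a BMS/DCIM
    # endpoint here; this stub just prints.
    for sensor, value, reason in alarms:
        print(f"ALARM: {sensor}={value} ({reason})")

notify(check_readings({"rack_temp_c": 31.5, "humidity_pct": 48.0}))
# prints: ALARM: rack_temp_c=31.5 (above 27.0)
```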
Greater cooling precision can be achieved within a server room or datacentre using containment, in the form of either cold aisle or hot aisle containment. Each dictates how the server cabinets are arranged and may require additional air flow management chambers and channels. A cold aisle arrangement delivers conditioned air into the server racks, with the hot air exhausting into the room, giving a higher ambient for people working within it. The objective of a hot aisle arrangement is to capture the hot exhaust air and channel it directly back to the air conditioning unit.
Each arrangement can cater for multiple rows of server racks. In-row air conditioners or end-of-row cooling units can also help to achieve a greater degree of precision cooling within the server environment. As with less sophisticated cooling, it is important to ensure that a suitable environment monitoring system is installed.
Whether you operate an IT server closet, computer or server room, or an Edge datacentre, knowing your cooling requirement is important. Without a suitable cooling plan, temperatures can rise rapidly due to environmental or load changes, and hot-spots quickly build up within server racks. Redundancy and disaster recovery must be built into any cooling plan to avoid fire and downtime in the event of a catastrophic cooling failure. It is worth bearing in mind that in a high-power-density server rack drawing 25-30kW, a cooling failure can lead to CPU meltdown within 15 minutes. That’s not always enough time to wheel in a portable AC unit.
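A back-of-envelope estimate shows why the window is so short. The sketch below heats only the room air and ignores the thermal mass of equipment, walls and raised floors, so real rooms warm more slowly; the room volume and heat load are assumptions for illustration.

```python
# Sketch: worst-case air temperature rise after a total cooling failure,
# treating the room as sealed and heating only the air. The room size and
# load are illustrative assumptions; equipment and building thermal mass,
# which slow the rise in practice, are ignored.

AIR_DENSITY = 1.2   # kg/m^3, air at roughly 20 degC
AIR_CP = 1005.0     # J/(kg*K), specific heat capacity of air

def air_temp_rise_per_minute(heat_load_kw, room_volume_m3):
    """degC per minute of air temperature rise in a sealed room."""
    air_mass_kg = AIR_DENSITY * room_volume_m3
    return heat_load_kw * 1000.0 * 60.0 / (air_mass_kg * AIR_CP)

# A 25 kW rack in an assumed 50 m^3 room:
rate = air_temp_rise_per_minute(25.0, 50.0)
print(f"{rate:.1f} degC/min")  # ~24.9 degC/min for the air alone
```

Even allowing for the thermal mass the sketch ignores, the result makes the point: with tens of kilowatts and no cooling, the margin is minutes, not hours.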
Over the last 30 years we have seen data processing move from centralised datacentre locations with plug-in terminals to decentralised on-site server rooms operating standalone IT networks. The latest trend is to push the decentralised concept even further using Edge computing and connection to hyperscale datacentres via Cloud-based applications.