The layout of your server room or datacentre facility will impact energy efficiency and operational costs. Even if the room is ideal in shape and size, how all your IT, power and cooling systems are arranged within the space must be optimised. So what are the key server room layout characteristics to consider when planning for a new facility, an upgrade or complete refurbishment?
If you use a raised access flooring system, this will be based on a standard floor tile size of 600×600mm. When we approach a server room layout, our design team uses a grid based on this floor tile size to map out the entire floor area. Any irregularly shaped walls and plinths can also be drawn onto the grid to provide a 2D plan view.
It is then possible to draw the required equipment arrangement onto this plan view grid. With measured heights, including floor plenums and suspended ceilings, and noting any additional pipework and ventilation, it is possible to calculate the volume of air within the room and generate a 3D drawing using a suitable CAD modelling system.
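As a rough sketch of the air-volume calculation described above, working from the 600×600mm tile grid (all room dimensions below are illustrative assumptions, not figures from any specific project):

```python
# Rough room air-volume estimate from the 600 x 600 mm floor-tile grid.
# All dimensions are illustrative assumptions for a hypothetical room.

TILE = 0.6  # standard raised-floor tile size in metres

tiles_long = 20   # tiles along the room length (12 m)
tiles_wide = 10   # tiles across the room width (6 m)

floor_to_ceiling = 3.0   # measured height in metres
floor_plenum = 0.45      # raised-floor void depth in metres
ceiling_void = 0.35      # suspended-ceiling void in metres

floor_area = (tiles_long * TILE) * (tiles_wide * TILE)  # m^2
room_air_volume = floor_area * (floor_to_ceiling + floor_plenum + ceiling_void)

print(f"Floor area: {floor_area:.1f} m^2")
print(f"Total air volume: {room_air_volume:.1f} m^3")
```

In practice the CAD model would subtract the volume occupied by plinths, racks and ductwork, but a grid-based estimate like this is a useful starting point.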
The biggest physical infrastructure within the room will be the server racks, and getting their arrangement on the floor layout right is critical to the plan. If there are two or more rows, these may be arranged into hot-aisle/cold-aisle arrangements. Where servers and IT devices are to be spread between several server racks (to allow for future growth), it is important to ensure that the racks and cabinets are sealed using blanking plates and other measures for sound airflow and efficient cooling.
With the placement of the server rows finalised, it is then possible to consider how to cool the critical IT systems and the server racks. Options include computer room air conditioners (CRAC units) for large facilities, in-row precision cooling, or wall-mounted air conditioners for smaller server rooms. Aisle containment and chilled-water doors for server cabinets may also be considered from a cooling perspective.
The point here is one of heat management and how to provide the right volume of cooling to the server racks, and within them, to avoid hot spots. A poorly specified cooling system can lead to heat build-up, higher operational costs and poor energy efficiency.
Other layout considerations include the fact that the air conditioning system within the room must be connected to an outside heat exchanger or chiller. This will require vents and pipework into the room as well as cold and hot airflow channels.
With a 3D modelling system it is possible to model the airflow to find the optimum arrangement. If an existing site is being upgraded or refurbished, thermal imaging cameras can help to identify hot and cold areas and airflows within the room.
Cooling system designs cannot, however, be finalised until thought has been given to electrical power and cabling. No electrical device is 100% efficient; the energy lost is released into the room in the form of heat (and some noise) and adds to the overall cooling load.
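To illustrate how electrical losses feed the cooling load, here is a simple sketch. The load and efficiency figures are assumptions chosen for the example, not measured values:

```python
# Illustrative heat-load estimate: electrical losses become heat in the room.
# The load and efficiency figures below are assumptions for this sketch.

it_load_kw = 40.0          # power drawn by the IT equipment
ups_efficiency = 0.95      # assumed double-conversion UPS efficiency

# UPS losses are dissipated into the room as heat
ups_loss_kw = it_load_kw / ups_efficiency - it_load_kw

# Virtually all IT power also ends up as heat the cooling system must remove
total_heat_load_kw = it_load_kw + ups_loss_kw

print(f"UPS losses: {ups_loss_kw:.2f} kW")
print(f"Cooling load (before lighting and people): {total_heat_load_kw:.2f} kW")
```

Even a 95% efficient UPS adds around 2kW of heat at a 40kW load, which is why power protection equipment must be included in the cooling calculation.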
There will be some form of uninterruptible power supply, and possibly a standby power generator, to provide secure power to the server room or datacentre. The UPS system may be a centralised system supplying the distribution and sub-distribution boards within the room, or a decentralised power protection plan. The decentralised approach relies on multiple UPS systems, deployed either within server racks or at the end of a rack row, from where they provide power through power distribution units (PDUs) to their connected loads.
Both power and IT cable routes into the facility, and to the various critical infrastructure systems, server racks and network devices, are mapped next into the design. Overhead trunking may be used, as well as rack cable management and underfloor plenums.
Most server rooms and datacentres cannot be powered down safely without the risk of server failure on reboot and other associated problems. The most commonly chosen runtime for a UPS in a server room layout is 10–30 minutes. This allows enough time to ride through most power outages and the start-up of a local standby power generator.
A generator should be up to full power within 1–2 minutes, and the extra time provides a safety margin for a hand-crank or quick fault-find. It’s known as ‘sneaker time’ as someone must run down to the generator to find out why it has not started; normally a failed starter battery or a breaker left open from a maintenance visit. Issues such as this can be mitigated by monthly generator test routines, including ‘black start’ operations.
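The runtime-sizing logic above can be sketched as a simple check. The timings are illustrative assumptions in line with the ranges mentioned in the text:

```python
# Sketch: check that UPS battery runtime covers generator start-up plus
# a 'sneaker time' safety margin. All timings are illustrative assumptions.

ups_runtime_min = 15.0        # battery autonomy at the design load
generator_start_min = 2.0     # expected time for the generator to reach full power
safety_margin_min = 10.0      # allowance for manual intervention if it fails to start

required_min = generator_start_min + safety_margin_min
runtime_adequate = ups_runtime_min >= required_min

print(f"Required runtime: {required_min:.0f} min, "
      f"available: {ups_runtime_min:.0f} min, adequate: {runtime_adequate}")
```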
UPS systems can also be installed in a parallel/redundant arrangement and with power sources from separate ‘A’ and ‘B’ supplies and even separate distribution transformers. The issue here is one of how far to incorporate an Uptime Institute Tier-level into the server room or datacentre design. Most server rooms opt for a Tier 1-2 with datacentres running from Tier 2-4 (for more information visit: https://uptimeinstitute.com/tiers).
Modular systems provide an option here: installing an extra module achieves N+1 resilience, often at a lower cost than a redundant configuration of two monoblock UPS. Modular UPS also make it easier to scale capacity, provided the UPS frame has enough room for extra modules at a later date.
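A minimal sketch of the N+1 sizing calculation, using an assumed module rating and design load purely for illustration:

```python
import math

# Sketch of N+1 modular UPS sizing. The module rating and design load
# below are illustrative assumptions.

load_kw = 80.0      # design load to be supported
module_kw = 25.0    # rating of each UPS module

n = math.ceil(load_kw / module_kw)   # modules needed to carry the load (N)
n_plus_1 = n + 1                     # one extra module for redundancy

print(f"N = {n} modules, N+1 = {n_plus_1} modules "
      f"({n_plus_1 * module_kw:.0f} kW installed for a {load_kw:.0f} kW load)")
```

The same frame could later accept further modules for capacity growth, which is the upscaling advantage mentioned above.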
There are several ways to monitor temperature within a server facility. Most intelligent PDUs have accessory options allowing them to be installed with temperature and humidity monitors. Alternative systems include dedicated environmental monitors to which several sensors can be connected, including temperature, humidity, water, smoke, access, power and fire.
It is important in the overall layout to ensure complete room and rack coverage for monitoring. This helps to avoid potential ‘blind spots’ where heat or moisture can build up leading to the potential for either a catastrophic failure or erratic server or IT peripheral operation.
Certain components are also very heat sensitive. The ideal temperature for a server facility is around 20˚C. This provides not only a comfortable working environment for engineers but also helps to protect sensitive assemblies. The lead acid batteries within a UPS require a working ambient of 20–25˚C in order to meet their working life expectations.
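An environmental monitoring system essentially classifies each sensor reading against alarm thresholds. A minimal sketch of that logic follows; the alarm limits are illustrative assumptions, while the 20–25˚C battery band comes from the text:

```python
# Sketch of an environmental alarm check. The low/high alarm limits are
# illustrative assumptions, not vendor-specified thresholds.

def check_reading(sensor: str, temp_c: float,
                  low: float = 18.0, high: float = 27.0) -> str:
    """Classify a rack-inlet temperature reading against alarm limits."""
    if temp_c < low:
        return f"{sensor}: {temp_c:.1f} C - LOW alarm"
    if temp_c > high:
        return f"{sensor}: {temp_c:.1f} C - HIGH alarm"
    return f"{sensor}: {temp_c:.1f} C - OK"

for sensor, temp in [("rack-A1 inlet", 21.4), ("rack-B3 inlet", 29.2)]:
    print(check_reading(sensor, temp))
```

In a real deployment, readings from every monitored rack position would feed a system like this, which is why full room and rack sensor coverage matters.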
With a concentration of high-power drawing servers and other critical systems (UPS and cooling) there is always the potential for a catastrophic failure. As well as fire and smoke detection it is important to consider a complete fire suppression system installation. This may be an insurance company requirement and therefore mandatory. Typical fire suppression solutions include complete room systems with options available including PAFSS and individual fire protection trays for server racks.
Lighting should not be overlooked. LED lighting can provide a highly energy-efficient solution for refurbishment projects, not only in terms of the increased lumens provided: fluorescent lighting adds to the heat load as well as to energy usage. Thought must also be given to emergency lighting in the event of a power failure.
Safe working is another aspect that must be built into the room layout. Whilst energy efficiency and space optimisation are important, so is health & safety. Engineers, IT staff and site visitors may regularly work in the room and should be given enough space and facilities to do so. Access for maintenance and system swap-out must also be considered to improve on-site maintenance plans and reduce repair and replacement times.
As well as safe working, physical access security is also important. Most rooms will be connected to the building access control system. This may be able to make use of wireless locks which today can be installed into server racks to provide an enhanced level of physical security and protection for colocation datacentres or server rooms with multiple tenants and systems.
There are several common aspects to consider for any server room or datacentre layout, and there are also bespoke arrangements that are site specific. Our projects design team use server room checklists and will run through several discovery loops to ensure we provide the most energy efficient and secure layout for your server room or datacentre environment project, whether this is a new build, system upgrade or refurbishment. Overall design concepts are generated for approval to agreed specifications and the datacentre design standard EN 50600 before moving into the budgetary and costing phase. For more information, please contact our projects team.
There is a growing demand for bespoke on-site server rooms as many businesses implement a blend of office-based and remote working for their employees. Whilst some of these organisations’ applications have moved to the Cloud, they have locally based services that employees can only access through office-based servers.
Very often IT closets, computer and server rooms are overlooked when it comes to cooling and environment monitoring and yet they can experience rapid heat build-up. One of the biggest issues is deciding how to calculate the actual cooling requirements and then how best to deliver this into the relatively small and confined spaces.
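As a first-pass illustration of the cooling calculation for a small space, the sketch below converts an assumed equipment load into a heat figure in both watts and BTU/hr (the unit most small air conditioners are rated in). The load figures are illustrative, and a real survey would also account for fabric and solar gains:

```python
# Rough cooling-requirement sketch for a small IT closet or server room.
# Load figures are illustrative assumptions; a proper survey would also
# include heat gains through walls, windows and ventilation.

equipment_w = 3500   # total rated draw of servers and switches, in watts
lighting_w = 150     # room lighting
people_w = 100       # sensible heat from one occupant (assumed)

total_w = equipment_w + lighting_w + people_w
btu_per_hr = total_w * 3.412   # 1 W is approximately 3.412 BTU/hr

print(f"Estimated heat load: {total_w} W (~{btu_per_hr:.0f} BTU/hr)")
```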