Once upon a time, ‘data center’ was really just a fancy name for a server room. Now, data centers are big business — and huge consumers of power and other resources. Indeed, their hunger for power means that they now account for a significant proportion of the world’s carbon emissions. The rising cost of power has also led some operators to site new data centers where electricity is cheapest — near hydroelectric dams or geothermal power stations, for instance.
Not everyone can relocate to Iceland or wherever in search of carbon-neutral electricity, though. Whether it is because you need low latency or simply the ability to easily visit your servers, the chances are you will want to use or build a local data center.
Put simply, for most users the data center decision used to be tactical: “This stuff has to live somewhere.” Now, though, it is strategic: “Where it lives is important.”
Fortunately, there is still a lot that can be done within the data center to save power. I was reminded of this recently in a meeting with Equinix and its engineering partners, who are currently building Equinix’s sixth London data center (called LD6). Here are seven significant steps they are taking that you could usefully take too:
1) Design for free cooling and/or indirect air cooling. It can be hard to retrofit if you are using an existing building, but in locations where it is climatically appropriate, getting ambient air in is a big win. “We are seeing facilities designed to A2 class — that removes all refrigeration and runs hotter, at a maximum of 32C, for a better PUE,” says Ross Henderson, Equinix’s technical facility director. “Air transfer allows more free-air cooling. For LD6, we only expect to need mechanical cooling on 50 days a year. And the mechanical system consumes significantly less energy than chilled water.”
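To see why fewer mechanical-cooling days translate into a better PUE (total facility energy divided by IT equipment energy), here is a back-of-the-envelope sketch. All the load figures are hypothetical and are not Equinix numbers; only the 50-days-a-year figure comes from the article.

```python
# Back-of-the-envelope PUE estimate: PUE = total facility energy / IT energy.
# All load figures below are hypothetical, for illustration only.

IT_LOAD_KW = 1000          # average IT equipment load (hypothetical)
OVERHEAD_KW = 150          # lighting, UPS losses, fans: always present (hypothetical)
MECH_COOLING_KW = 400      # extra draw when mechanical cooling runs (hypothetical)

def annual_pue(mechanical_days: int) -> float:
    """Blend free-cooling and mechanical-cooling days into an annual PUE."""
    free_days = 365 - mechanical_days
    it_energy = IT_LOAD_KW * 24 * 365
    facility_energy = (
        (IT_LOAD_KW + OVERHEAD_KW) * 24 * free_days
        + (IT_LOAD_KW + OVERHEAD_KW + MECH_COOLING_KW) * 24 * mechanical_days
    )
    return facility_energy / it_energy

print(round(annual_pue(50), 2))   # ~50 mechanical-cooling days a year, as at LD6
print(round(annual_pue(365), 2))  # mechanical cooling year-round, for comparison
```

With these made-up numbers, 50 mechanical days a year gives a PUE of about 1.2 against 1.55 for year-round chillers — the shape of the saving, if not the exact figures.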
2) Build as little as possible on site. The new Equinix data center is being largely prefabricated off-site by engineering contractor Laing O’Rourke. It will arrive in nine-by-three-meter chunks — the exact size varies by project depending on the site’s transport infrastructure, says Laing project leader Zak Carroll. The result is 70 percent less labor needed on site (though more is needed at Laing’s factory, of course), and a 30 percent faster build program.
3) Aim for efficiency under normal load, not under some abstract set of conditions. This will affect your PUE calculations because you want practical efficiency, not the theoretical efficiency that you would get if you fully loaded all the racks — which in the real world you will not do. As Greg Metcalf, a consulting electrical engineer at Arup who specializes in data center design, explains: “These buildings typically don’t run efficiently at under 1000W/m2, so you have to design for that, that’s to say for efficiency at a lower density.” In the case of LD6, for example, this means supporting 1200W/rack without extra cooling.
4) Similarly, make sure your engineers have a handle on real average power density, not a theoretical maximum. Greg Metcalf again: “In 2002, our advice was to design for 1kW/m2 and plan for upgrading to 1.5kW/m2. Things haven’t changed that much! The likes of Google, Microsoft and Facebook can max out their equipment, and they design for that, but it doesn’t really happen in a co-lo – it seems all bright and shiny to design for high-density, but it’s not really the average.”
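The gap between nameplate and realistic density in the two points above is easy to see with a little arithmetic. A minimal sketch, with a hypothetical hall (rack count, floor area and per-rack figures are all invented for illustration; only the 1200W/rack echoes LD6):

```python
# Sanity-check design density against the realistic average, not the nameplate max.
# Hypothetical hall: these figures are illustrative, not from the article.

RACKS = 500
FLOOR_AREA_M2 = 2000       # white space, including aisles (hypothetical)
NAMEPLATE_KW_PER_RACK = 8  # what the PDUs could deliver (hypothetical)
AVG_KW_PER_RACK = 1.2      # realistic average draw (cf. LD6's 1200W/rack)

def density_w_per_m2(kw_per_rack: float) -> float:
    """Convert a per-rack load into floor power density in W/m2."""
    return kw_per_rack * 1000 * RACKS / FLOOR_AREA_M2

print(density_w_per_m2(NAMEPLATE_KW_PER_RACK))  # design-to-max: 2000.0 W/m2
print(density_w_per_m2(AVG_KW_PER_RACK))        # realistic average: 300.0 W/m2
```

A plant sized for the 2000W/m2 nameplate case would spend its whole life running at a small fraction of that — well below the roughly 1000W/m2 point at which, per Metcalf, these buildings run efficiently.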
5) Sliding doors can yield 95 percent of the benefit of enclosure without most of the drawbacks. Tempting as it is to fully enclose the racks and aisles for tighter control of airflow, this costs more and complicates fire suppression. Instead, use sliding doors at each end of an aisle, plus overhead sensors that tell the fans to back off when they detect cold air escaping.
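The fan-trimming logic described above can be sketched as a simple control step. This is a guess at the idea, not Equinix’s actual control system: the thresholds, step size and speed limits are all hypothetical.

```python
# Minimal sketch of the fan-trim loop: overhead sensors detect cold air
# escaping the aisle and the controller backs the fans off.
# All thresholds and step sizes here are hypothetical.

ESCAPE_THRESHOLD_C = 2.0    # overhead air this much colder than ambient => leakage
FAN_STEP = 0.05             # trim fan speed in 5% steps
MIN_SPEED, MAX_SPEED = 0.3, 1.0

def adjust_fan_speed(speed: float, overhead_temp_c: float, ambient_c: float) -> float:
    """Back fans off when overhead air is colder than ambient (cold-aisle leakage)."""
    if ambient_c - overhead_temp_c > ESCAPE_THRESHOLD_C:
        speed -= FAN_STEP   # cold air spilling over the aisle: slow the fans
    else:
        speed += FAN_STEP   # no leakage detected: restore airflow headroom
    return min(MAX_SPEED, max(MIN_SPEED, speed))
```

Run once per sensor-polling interval, this nudges fan speed down whenever cold supply air is being wasted over the top of the aisle, and back up when it is not.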
6) Employ people directly, rather than through agencies or contractors, in order to win their commitment. As Ross Henderson explains: “We employ people directly, because it gives them ownership of their work. Attitude is more important than skills — skills can be taught.”
7) Lastly — and this one works anywhere — use more energy-efficient equipment. It is an easy win and a double saving, because the less energy your equipment burns, the less heat it produces and the less cooling it needs.
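The double saving compounds neatly: every watt trimmed at the server also removes the facility overhead (cooling, power distribution) needed to support it, so at the site level a watt of IT load costs roughly PUE watts. A quick sketch, with a hypothetical PUE:

```python
# The "double saving": a watt of IT load costs roughly PUE watts at the site
# level, so trimming IT load saves cooling and distribution overhead too.
# The PUE figure is hypothetical.

PUE = 1.5                   # hypothetical facility PUE
HOURS_PER_YEAR = 24 * 365

def annual_kwh_saved(it_watts_saved: float, pue: float = PUE) -> float:
    """Site-level annual energy saved by trimming IT load, overhead included."""
    return it_watts_saved * pue * HOURS_PER_YEAR / 1000

print(annual_kwh_saved(100))  # trimming 100W per server -> 1314.0 kWh/year each
```

At a PUE of 1.5, a 100W saving per server is worth 150W at the meter — over 1,300kWh a year, per server.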