When the New York Times reported on July 20, 2011, that the federal government planned to close 800 data centers by 2015, you might have thought it was new news.

Federal agencies and those in the federal information technology community, however, have been grappling with the plan for more than a year–and in particular, with an ambitious set of energy efficiency requirements.

Those requirements are part of the Federal Data Center Consolidation Initiative, which calls for reducing the number of federal data centers from the 2,094 on record in July 2010 to just 800 by 2015.

The data centers left standing, and new ones to be built, must also meet new levels of energy efficiency–or Power Usage Effectiveness (PUE) goals. The goals are part of broader federal energy objectives to “promote the use of Green IT by reducing the overall energy and real estate footprint of government data centers.”

Achieving significant improvements in energy usage, however, is not as simple as it might seem. That’s why federal IT officials are taking a closer look at the National Renewable Energy Lab, which has been demonstrating practical ways to dramatically lower data center power usage.

The National Renewable Energy Lab (NREL) is a Department of Energy national laboratory operated by the Alliance for Sustainable Energy, LLC. It is one of 17 DOE national research laboratories and the only one dedicated to renewable energy and energy efficiency research and development. It is also unique in its ability to collaborate with industry and university partners, carry the resulting scientific discoveries through product development, and accelerate the commercialization of renewable energy products.

In June 2010, NREL opened its new Research Support Facility (RSF), the world’s most energy-efficient LEED Platinum/net-zero energy building. LEED is an international building certification system developed by the U.S. Green Building Council. This 220,000-square-foot office space is not only home to 824 NREL staff but also serves as a living lab and a model for future buildings. It is also the new home of the NREL data center.

The Powers of PUE

Instrumental in the data center’s design and operations is Chuck Powers, NREL manager of the Infrastructure and Operations Group. Powers heads up planning and development of IT Strategy and Sustainability for the Laboratory, the latest position in a 20-year NREL career.

Since the facility opened in 2010, power requirements have dropped 81 percent compared with the legacy data center, resulting in annual utility savings of $320,000 and an annual reduction in carbon dioxide emissions of nearly 5 million pounds, according to Powers.

“Basically energy is used in three different areas,” Powers said during my tour of the NREL facility. “Most of the energy is used to cool the data center; then there is the energy used for our power management systems and finally the equipment itself.”

PUE measures how efficiently energy is used. Powers explained it is the industry-standard metric for the energy efficiency of data centers. PUE is the ratio of the total energy needed for cooling, power systems and equipment to the energy needed just for the equipment (cooling + power + equipment, divided by equipment, equals PUE).

Typical PUE ratios are about 3:1, according to analysis by Gartner. That means for every watt spent powering equipment, two watts are spent on cooling and power management systems. A perfect PUE is 1:1.

“The PUE for NREL’s legacy data center was estimated to be 3.3 to 1,” explained Powers. “In contrast, the measured PUE for NREL’s RSF data center is 1.13 to 1. That’s really low compared to data centers worldwide.”
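
To make the arithmetic concrete, here is a minimal sketch of the formula. The kilowatt breakdowns are hypothetical, chosen only so the results land on the 3.3 and 1.13 figures Powers cites:

```python
def pue(cooling_kw: float, power_kw: float, equipment_kw: float) -> float:
    """PUE = total facility energy / IT equipment energy."""
    return (cooling_kw + power_kw + equipment_kw) / equipment_kw

# Hypothetical loads sized to match the article's two data points:
print(pue(cooling_kw=180, power_kw=50, equipment_kw=100))  # 3.3  (legacy estimate)
print(pue(cooling_kw=9, power_kw=4, equipment_kw=100))     # 1.13 (measured at the RSF)
```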

Because of this low PUE, Powers is helping DOE showcase the NREL data center as an example of how DOE is providing energy management leadership to government and industry. He recently spoke at the 2011 Government Information Technology Executive Council Summit, where he described the Research Support Facility’s “Lean Mean (Data Center) Machine” story to 350 government IT executives.

“NREL was the little darling,” beamed Powers. “We told our story and we were able to live up to all the hype that was created.”

Natural Air Conditioning

Driving the data center’s low PUE is the cooling system, one that Powers described as “absolutely phenomenal.” This ultra-energy-efficient cooling system takes advantage of the Golden, Colorado climate and uses direct outside air and evaporative cooling techniques to manage air flow.

Underneath the RSF is a labyrinth of concrete that stores thermal energy–cool air during the summer nights and warm air during the winter days. Air circulates through that concrete maze constantly.

Powers said the system “represents (the) biggest reduction in energy footprint for our data center.” The net result: chilled water is required only 33.5 hours per year for air conditioning, and waste heat from the data center is reused to warm the building.

“For 99.5 percent of (the) entire year we are not using air conditioned cooling; 70 percent of the air to cool the data center comes from the outside,” explained Powers.

Chilly Colorado night air enters the RSF through an air intake structure nicknamed “the football.”

“It travels through four air handlers that take the air, process it and send (it) to the data center. For 30 percent we use evaporative cooling; all we are doing is spraying water on filters that help cool the air without using compressors,” he said.

Powers stressed that what is different about the data center is how the team manages air flow.

“A very important best practice in data center energy efficiency is to make sure to direct the right amount of air efficiently to the right places and not over provision air.”

Because air management systems were part of the RSF design, not only is the data center able to use both hot and cold aisles, “but we also did hot aisle containment; we are able to extract the air out of the hot air containment system and reuse it throughout the building,” Powers said.

“That is unique (by) making productive use out of the waste heat of the data center. We have two air handlers that actually extract the hot air out of the data center and use it to heat the building.”

The four air handlers are powered by electric motors. They channel air through tunnels throughout the structure and produce cooling for the data center 99.5 percent of the year.

“The data center was designed for an ‘n+1 configuration’,” Powers said. “We can completely cool the fully loaded data center with three units; if we need to do maintenance we can take one offline and do service.”

Powers said in addition to the hot-aisle, cold-aisle configuration, the hot aisle containment system was “the single best thing we did; (and now) this is serving as part of the building’s heating system.”

20:1 Virtualization

Another essential factor in reducing the data center’s energy footprint is virtualization–using blades to replace legacy servers.

There are two pods, each containing 20 racks for servers and storage, with total capacity to support 400 kW worth of equipment. The center now draws about 102 kW for equipment, power management and cooling–about 25 percent of capacity for the entire data center.
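
A rough cross-check, assuming the measured 1.13 PUE applies to the 102 kW total draw, shows how that utilization figure follows from the numbers above:

```python
total_draw_kw = 102   # equipment + power management + cooling (from the article)
measured_pue = 1.13   # measured RSF PUE (from the article)
capacity_kw = 400     # equipment capacity across both pods (from the article)

it_load_kw = total_draw_kw / measured_pue  # ~90 kW of actual IT equipment load
utilization = it_load_kw / capacity_kw     # ~23%, close to the "about 25 percent" cited
print(f"IT load ~{it_load_kw:.0f} kW, utilization ~{utilization:.0%}")
```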

Managing the energy usage of the IT equipment itself is a combination of activities, ranging from good cable management to optimizing air flow, using energy-efficient equipment and making better use of existing resources, explained Powers.

“For two years prior to breaking ground on the data center, we began to replace equipment and get ready and buy blades. Out of the box they are much more efficient. They carry the workload that used to run on 29 one-application servers. Our actual ratio for virtualization is 20 to 1.”

To put this in context, only one or two legacy servers remain that haven’t been virtualized. Each consumes 302 watts; the same workload, loaded onto a blade server, consumes 10.75 watts. That is a significant reduction in the energy footprint.
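
The per-workload arithmetic, a quick sketch using only the figures quoted above, makes the point starkly:

```python
legacy_watts = 302         # one single-application legacy server
virtualized_watts = 10.75  # the same workload's share on a blade server
ratio = 20                 # NREL's actual virtualization ratio

print(f"Per-workload reduction: {1 - virtualized_watts / legacy_watts:.0%}")  # ~96%
# Twenty legacy servers' worth of work consolidated onto one blade:
print(f"{ratio * legacy_watts} W -> {ratio * virtualized_watts:.0f} W")  # 6040 W -> 215 W
```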

When asked whether users are concerned about data co-mingling on the virtualized servers, Powers said users come to his team with requirements, and all the team has to do is meet the service-level agreement (SLA).

State-of-the-Art Power Systems

Power systems aren’t sexy, but they are low-hanging fruit that can produce big energy savings for every government data center operator.

Without going through all the math, Powers explained that the RSF data center’s new UPS is 97 percent energy efficient, scalable, has built-in redundancy and is not over-provisioned.

“We are effectively saving 37 kilowatts on a 100 kilowatt load over the legacy UPS; it is very simple and very inexpensive to do,” Powers counseled. “It is low hanging fruit; simply replace older UPS with a state of the art UPS and have them loaded properly.”
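
Powers does not give the legacy UPS’s efficiency, but a back-of-the-envelope sketch can recover it from the two figures he cites:

```python
it_load_kw = 100.0  # the load Powers cites
new_eff = 0.97      # new UPS efficiency (from the article)
savings_kw = 37.0   # savings over the legacy UPS (from the article)

new_input_kw = it_load_kw / new_eff          # ~103 kW drawn to serve the load
legacy_input_kw = new_input_kw + savings_kw  # ~140 kW implied legacy draw
print(f"Implied legacy UPS efficiency: {it_load_kw / legacy_input_kw:.0%}")  # ~71%
```

An efficiency in that range is plausible for an older UPS running well below its rated load–exactly the over-provisioning Powers warns against.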

Powers acknowledged that he and the RSF design team had the luxury of using the latest technologies because it was new construction. However, he said many of the best practices demonstrated in the NREL data center can be applied in existing data centers.

As an example, Powers said in today’s data center you don’t need to be on a raised floor.

“You can put in a hot aisle containment system using chillers; they do nothing but take chilled water, produce cooling and blow it (in) front of the racks. The hotter you can make your hot aisle containment, the more efficiently those chillers work. This doesn’t require new construction and anyone can do (it) with some investment.”

Measuring Up

PUE measures how well an organization has optimized its energy use for data center cooling and power systems. It does not take into account efforts to optimize energy use for servers, storage and network infrastructure running within the data center – all of which are necessary to achieve true energy efficiencies.

That’s why lowering the government’s collective PUE can translate into literally billions–that’s right, billions–of dollars in energy savings government-wide.

But PUE is only one measure. To see whether an overall energy plan is working, Powers said, comparing “watts per user for total data center power consumption” provides a more comprehensive evaluation of overall data center energy efficiency.

There is no doubt that the RSF energy plan is working. The new data center measures just 42 watts per user, compared with 217 watts per user in the legacy data center.
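
Notably, the two metrics tell the same story; a quick check using the figures above:

```python
legacy_wpu = 217  # watts per user, legacy data center
new_wpu = 42      # watts per user, RSF data center

# ~81%, matching the power reduction cited at the top of the article
print(f"Reduction: {1 - new_wpu / legacy_wpu:.0%}")
```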

Proof that no matter what the measure, the new NREL RSF data center measures up.

Learn more about the NREL data center at www.nrel.gov.