CHICAGO -- When users at the Data Center Decisions conference were asked if they planned to build a new data center within the next year, a sea of hands went up. Unfortunately, when the 300 or so attendees were asked if their companies are aggressively investigating "green" infrastructures, few hands went up.
Building a new data center from scratch is the perfect opportunity to go green, and it is up to the data center managers to take charge of the issue, said Matt Stansberry, site editor for SearchDataCenter.com, who delivered the keynote address on Tuesday, Oct. 22, titled "Energy-efficient computing in the 21st century."
"Green may be a fad, but energy efficiency isn't. There are low-cost and no-cost things you can do in your data center today to save energy," Stansberry said.
William Baxter, an attendee and the associate director of IT infrastructure at Chicago-based UBS Services USA LLC, said the company is in the process of building a 6-megawatt data center in the Northeast to gain space. Baxter attended the conference to find out about new technologies UBS can employ to add efficiencies and density, which is more of a concern than being green.
"We have adopted many blade servers for density, but we are already going to need a new data center in about a year in addition to the one we are constructing now," Baxter said.
For many data center administrators, Baxter's dilemma is a common one. New applications are quickly adopted, then new hardware is necessary to run those applications. "Equipment isn't any more energy hungry than it was 20 years ago; there is just more of it," Stansberry said. "Companies don't think twice about rolling out new applications, which requires more hardware."
In addition, with global businesses running e-commerce, Web 2.0 and user-generated content applications that never sleep, the demand for uptime has multiplied.
"MySpace doesn't hold your typical mission-critical data, but when that site went down for half a day, it made international headlines," Stansberry said. "You can bet it is backed up today."
If that isn't reason enough to pursue green initiatives, environmentally conscious consumers are emerging. Ultimately, these users are going to demand that companies practice environmentally sound policies, which could include energy-efficient IT operations, Stansberry said.
What is a green data center, exactly?
The Leadership in Energy and Environmental Design (LEED) rating system from the U.S. Green Building Council (USGBC) is the green building standard. But LEED is not designed for data centers, and at this point USGBC doesn't have plans to address the omission, Stansberry said.
Because LEED was created with office buildings in mind, the standard creates "perverse" design incentives for data centers chasing certification credits: adding bike racks to reduce vehicle emissions earns credit, for example, while infrastructure changes that save power -- like server virtualization -- earn none, Stansberry said.
Standards for data center hardware are inevitable, though. The Green Grid, a consortium of IT companies and professionals, along with the Uptime Institute Inc. and the Environmental Protection Agency (EPA), is lobbying the USGBC to develop a LEED-DC standard, Stansberry said.
Comparative metrics for servers are being created by the EPA and SPEC, and by 2008, Energy Star ratings -- the same stickers that get slapped on air conditioners and refrigerators -- will apply to 1U and 2U servers.
A little energy savings goes a long way
In data centers, 10% of the power draw goes to electricity transformers/UPS; 12% goes to air movement; at least 25% goes to cooling; 50% goes to IT equipment and the last 3% goes to lighting and so forth, according to New York-based EYP Mission Critical Facilities Inc.
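The EYP breakdown implies concrete numbers for any given facility. As a rough sketch, here is that breakdown applied to a hypothetical 1-megawatt data center (the 1 MW figure is an assumption for illustration, not from EYP):

```python
# Back-of-the-envelope application of the EYP power-draw breakdown to an
# assumed 1 MW facility (the 1 MW total is hypothetical, not from EYP).
breakdown = {
    "transformers/UPS": 0.10,
    "air movement": 0.12,
    "cooling": 0.25,
    "IT equipment": 0.50,
    "lighting and misc.": 0.03,
}

total_kw = 1000  # assumed 1 MW facility
for category, share in breakdown.items():
    print(f"{category}: {share * total_kw:.0f} kW")

# The published shares account for the full facility draw.
print(f"total share: {sum(breakdown.values()):.0%}")
```

Note that only half the power reaches IT equipment; the rest is overhead, which is why the cooling and distribution fixes below matter.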
To cut back on the areas of greatest use, a data center can make short-term, low-cost changes today.
Server virtualization is the most obvious way to become more energy efficient. "Virtualization is the best way to save," Stansberry said. "With VMware, you can collapse 5 to 10 operating systems onto a single machine, cluster them for failover protection and use fewer servers."
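As a back-of-the-envelope sketch of why consolidation saves energy, the following assumes 10 lightly used standalone servers collapsed onto one larger virtualization host; all wattages and the 10:1 ratio are illustrative assumptions, not figures from the article:

```python
# Rough estimate of energy saved by consolidating standalone servers onto
# one VM host. Wattages are illustrative assumptions, not measured values.
servers_before = 10          # standalone boxes, one OS each
watts_per_server = 400       # assumed average draw per box
watts_per_host = 600         # assumed draw of one beefier VM host

before_kw = servers_before * watts_per_server / 1000
after_kw = watts_per_host / 1000
hours_per_year = 24 * 365

saved_kwh = (before_kw - after_kw) * hours_per_year
print(f"before: {before_kw} kW, after: {after_kw} kW")
print(f"saved per year: {saved_kwh:,.0f} kWh")
```

Under these assumed numbers, consolidation cuts the server draw from 4 kW to 0.6 kW, and the saving compounds again in cooling, since every watt of IT load must also be cooled.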
Though it sounds counterintuitive, increasing voltage also saves power. Servers gain 2% to 3% in efficiency on 208-volt (V) power distribution versus 120 V. The boxes are rated to handle 100 V to 250 V and will adjust to 208 V automatically, he said.
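A 2% to 3% gain sounds small, but it compounds over a year of continuous operation. A quick sketch of what it is worth, assuming a hypothetical 100 kW of server load:

```python
# Annual energy saved by a 2%-3% distribution-efficiency gain, for an
# assumed 100 kW of server load (the load figure is hypothetical).
server_load_kw = 100
hours_per_year = 24 * 365

for gain in (0.02, 0.03):
    saved_kwh = server_load_kw * gain * hours_per_year
    print(f"{gain:.0%} gain: {saved_kwh:,.0f} kWh/year")
```

Since the change is largely a matter of power distribution rather than new hardware, it is one of the "low-cost" items Stansberry described.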
To gain power efficiencies, buy servers whose power supplies carry an 80 Plus rating, meaning they convert alternating current (AC) power to direct current (DC) power at 80% efficiency or better, he said. Within the industry, there is also a debate over whether AC or DC power distribution is preferable. DC-powered data centers run 10% to 20% more efficiently than AC-powered ones, but there are hurdles, Stansberry said.
"You can't just go around moving servers, and DC power equipment can cost 20% to 40% more," Stansberry said. "There are also fewer companies rolling out this offering."
Then, of course, simply shutting down systems that aren't in use saves power, and some software, such as Cassatt Corp.'s Active Power Management, is designed to shut down idle systems automatically. Windows Server 2008 will also have power-saving features that are much slicker than those in Windows Server 2003. Due out in February, the new function lowers the voltage going to the CPU and is enabled out of the box, Stansberry said.
But powering down makes IT nervous, and an informal user poll at a conference session revealed skepticism about the power-down features. Many users said that even if available, they would not use the feature.
Cooling denser data centers
Another major power draw is cooling, and there are efficient offerings here as well.
By 2011, in-rack and in-row cooling will emerge as the predominant cooling strategy for high-density equipment, and in-server cooling technologies, like the one provided by SprayCool Inc., will be adopted in 15% of leading server products, Stamford, Conn.-based research firm Gartner Inc. predicts.
High-density cooling systems have been shown to be more efficient than raised-floor cooling but are also more expensive and harder to operate. Liquid cooling technologies like SprayCool offer more efficient cooling than air ever could; water is 3,500 times more efficient than air at removing heat, Stansberry said.
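The 3,500-times figure can be sanity-checked from textbook material properties by comparing the volumetric heat capacity of water and air. A short sketch (the property values are standard room-temperature figures, not from the article):

```python
# Compare how much heat water and air can carry per unit volume per degree,
# using textbook room-temperature properties.
water_cp = 4186       # J/(kg*K), specific heat of water
water_density = 1000  # kg/m^3
air_cp = 1005         # J/(kg*K), specific heat of air
air_density = 1.2     # kg/m^3

water_vol_cp = water_cp * water_density  # J/(m^3*K)
air_vol_cp = air_cp * air_density        # J/(m^3*K)

ratio = water_vol_cp / air_vol_cp
print(f"water carries ~{ratio:,.0f}x more heat per unit volume than air")
```

The ratio works out to roughly 3,500, consistent with the figure Stansberry cited.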
Liquid cooling makes data center managers nervous, though. A poll of SearchDataCenter.com users indicated that only 7% use liquid cooling, and 65% said they would never use it. But "liquid refrigerants are more efficient than water," Stansberry said, and refrigerant "doesn't leak as a puddle on your data center floor; it leaks as a gas, which should take away some of the fear."
While green computing is a new trend in the U.S. and clearly not a priority for most data center managers at this year's Data Center Decisions conference, implementing a few of these changes adds up to bottom-line savings.
"If you work incrementally at the things you can do today, you will be ahead of the game," Stansberry said.
Let us know what you think about the story; email Bridget Botelho, News Writer.