In the coming year, an ambitious research team led by the New York State Energy Research and Development Authority (NYSERDA), Clarkson University, AMD and Hewlett-Packard plans to build a pair of data centers that will generate 100% of their own power via solar panels and wind turbines, forgoing a link back into the utility grid. To hedge against intermittent power problems, they’ll shift compute loads between the two facilities.
“Rather than shift energy to the data center, [we will] shift the compute to where the energy is located,” said Steve Kester, director of government relations and regulatory affairs at AMD.
In a first phase of the project, the group will work with renewable energy specialists to model wind and solar patterns and develop predictive algorithms that can detect increases and decreases in the availability of power, said Bryan Berry, the lead for NYSERDA’s green data center research program.
That done, the project calls for a pair of uber-efficient HP Performance-Optimized Datacenters (PODs), one in windy upstate New York and another slated for a yet-undetermined sunny location such as West Texas.
Because the data centers won't rely on the utility grid, it will be possible to place them in remote locations with no transmission lines, Berry said. The only connectivity they will require is fiber-optic communications lines to coordinate the shift of compute from one POD to the other.
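In broad strokes, the decision the project's predictive algorithms must make, choosing which POD gets the compute based on forecast renewable power, might look something like the sketch below. This is purely illustrative; the site names, forecast figures and threshold are hypothetical assumptions, not the project's actual algorithm.

```python
# Illustrative sketch of energy-aware load shifting between two PODs.
# All names, numbers and the threshold here are hypothetical.

POWER_THRESHOLD_KW = 50.0  # assumed minimum power needed to host the load

def forecast_power_kw(site):
    """Stand-in for a wind/solar forecast model for a site.

    A real model would draw on weather data and the predictive
    algorithms described in the article; these are made-up figures.
    """
    return {"ny_wind_pod": 72.0, "tx_solar_pod": 18.0}[site]

def choose_site(sites):
    """Shift the compute load to whichever POD is forecast to have
    the most renewable power, provided it clears the threshold."""
    best = max(sites, key=forecast_power_kw)
    if forecast_power_kw(best) >= POWER_THRESHOLD_KW:
        return best
    return None  # neither site can host the load right now

print(choose_site(["ny_wind_pod", "tx_solar_pod"]))
```

With the made-up forecasts above, the windy upstate New York POD wins; if neither site cleared the threshold, the scheme would have nowhere to put the load, which is exactly the redundancy concern raised below.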
Off the grid, off their rockers?
This is a novel idea. With one notable exception, there are practically no examples of off-grid data centers -- and certainly none that distribute their load in response to energy availability.
Data center operators that tout the use of renewable energy usually claim to do so through the purchase of renewable energy credits (RECs), or by buying some portion of their energy through a utility that generates renewable energy, said John Stanley, research analyst for data center technology and eco-efficient IT at The 451 Group. Only a handful of data centers generate some portion of their energy directly.
Among them are Datapipe Managed Hosting, whose Somerset, N.J., data center claims 100% renewables through the purchase of RECs, and Emerson Network Power, which generates 100 kW of power with solar panels on its St. Louis, Mo., facility.
But even a bona fide off-the-grid data center operator said he's not surprised by the meager number of data centers relying on renewables. Phil Nail is CTO at AISO.net, a virtual hosting provider in super-sunny Romoland, Calif., that generates all its own energy from an acre of solar panels, plus propane-powered backup generators.
“Most data centers are big, and they have a very specific way of doing things, and there’s a lot of red tape,” Nail said. For most data centers, the hoops you need to jump through to go off the grid probably aren’t worth the effort, he said.
Follow the moon, follow the sun and wind
This new project’s idea of shifting compute between data centers is also raising some eyebrows.
People sometimes refer to the idea of shifting compute between data centers to take advantage of utilities' off-peak power rates as "follow the moon," but with only two data centers, this is something different, said Mark Bramfitt, a Sonoma Valley, Calif., consultant focused on data centers and energy efficiency.
"Only a few players have that kind of global presence [to make follow the moon work]," said Bramfitt. "The Googles and eBays of the world could potentially do this," but not much of anyone else.
The goal for this project isn't to chase off-peak utility rates, but to tap into available renewable energy. Solar energy is obviously available only during the day, and wind tends to pick up in the evenings, but Bramfitt worried there would be a good chance that neither site could generate sufficient power.
“I would look for much more redundant infrastructure than just two sites,” he said.
Further, shifting compute loads between data centers has its limitations, said Stanley of The 451 Group. A central question is whether shuffling compute between locations can be done reliably.
After all, “the whole reason we have data centers is to support reliable IT services. If you can’t do that, you have to wonder why do it at all,” Stanley said.
One wild card will be the network connectivity that runs between the sites. Less than optimal connectivity will hamper the kinds of workloads that can run there, said Jonathan Eunice, principal IT advisor at Illuminata Inc. in Nashua, N.H. Suitable workloads might include weather simulations, economic forecasting and some rendering apps -- compute-intensive applications with low bandwidth needs, and a high tolerance for latency, he said.
Thinking outside the grid
While this project is clearly bleeding edge, the need to push the data center energy-efficiency envelope is clear, said NYSERDA’s Berry.
"The need for data centers is exploding and won't slow down any time soon," Berry said -- especially in New York state. There, data center power consumption represents 3% of the state's total energy use, and is doubling every three to five years. With that as the backdrop, this is an incredibly promising and interesting project, he said.
Whether average data center operators can benefit from this project is unclear, but it does signal a shift in data center thinking. "Now that we're sensitized to the idea of efficiency, we realize that we've been doing a really poor job in the data center," said Illuminata's Eunice. Going forward, "The real impact will be when it becomes social-level thinking and [the research] affects lots and lots of data centers."