PALO ALTO, Calif. -- Hewlett-Packard Co. (HP) has a vision for the data center of the future, and it is using its own massive data center consolidation project as the proving ground for its ideas.
In May 2006, the company announced plans to consolidate 85 worldwide data center facilities down to six -- three locations, each with a mirrored disaster recovery site. Upon completion, HP's data center real estate will shrink to 300,000 square feet, down from 500,000 today.
HP plans to save $1 billion annually as a result of the consolidation, which it will execute in a phased approach over three to four years. At the time of the initial announcement, HP was breaking ground on its new facilities, but it was tight-lipped on specifics. Several months later, HP executives offered more details on the project's strategy at a roundtable discussion on its corporate campus with SearchDataCenter.com.
Application appetite is put on a diet
An enterprise data center is often required to support and deliver thousands of applications: some are multiple versions of the same software, others are different products that do essentially the same thing, and still others are forgotten legacy programs just taking up space.
Yet application consolidation has typically received little attention in data center consolidation projects. Russ Daniels, vice president and chief technology officer (CTO) of HP Software and HP Adaptive Enterprise, stressed the importance of this step.
Daniels said HP had to ask a series of questions to reduce the application portfolio in its data centers: "Do we actually have a business case for that application? Can we consolidate it with others? Do we have 10 applications all doing the same thing? Which one are we going to standardize on?"
Application consolidation is a huge cost saver in licensing, management resources and hardware support, according to Daniels.
Homogenized hardware strategy
But application support decisions move beyond picking off duplicate and unused applications. The next step for Daniels is to define which applications will run on the specified architecture.
Part of HP's plan is to trim the kinds of hardware it's willing to support in its data centers. Daniels said the systems infrastructure is defined by the data center experts, not the users. This allows data center managers to make infrastructure decisions in a two-step process:
- Make a design choice on what system capacity to bring into that data center
- Choose applications to target those hardware patterns already designed
"There's a very strong gate to bringing in legacy applications that can't run in the new architecture," Daniels said. "You can present application support to users as a service with a price tag, if they force the issue on legacy, nonconforming apps."
For HP's own systems, it'll be running C-Class blade servers for x86, midrange Integrity machines and storage hardware. Database utilities are going to be built on Superdomes. There will be four or five hardware models in total.
Lin Nease, CTO, HP Business Critical Systems, said, "By limiting the architecture [choices] you might lose some optimization, but you gain in standardization. By choosing what's available on the menu, you have a much more efficient supply. It's like Southwest Airlines choosing only 737s."
Nease also explained that HP's idea of hardware modularity is changing. Instead of deploying servers as a single piece of equipment, Nease said HP will be using pod designs.
These pods are a collection of systems designed for a specific purpose, wired once, sharing capacity and resources across the unit. "Let them age, and eventually unplug them all at the same time," Nease said.
HP is currently working on several pod designs, including models optimized for running virtual machines, J2EE applications, shared databases, Web hosting and more.
Physical environment: Work smarter, not harder
Despite the smaller footprint and higher server density of its new facilities, HP plans to stick with basic raised-floor air cooling. According to Sharad Singhal, distinguished technologist at HP, delivering airflow properly will spare HP from having to adopt high-density cooling technologies.
The new data centers will use computational fluid dynamics models of the airflow to equipment, with temperature sensors not just on the ceiling but also at the intake and exhaust of each chassis. Adjustable floor vents will deliver air where it needs to go.
"Walking into a typical data center, you need a jacket, but not ours," Singhal said. "We understand where the hot spots are occurring and reallocate cooling to those areas."
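The idea of shifting cooling toward measured hot spots can be illustrated with a toy sketch. This is purely illustrative -- the rack names, thresholds, and proportional-allocation scheme below are assumptions for the example, not HP's actual control system:

```python
# Toy sketch: split a fixed airflow budget across racks in proportion to
# each rack's temperature rise (exhaust minus intake), so the hottest
# racks receive the most cooling. Illustrative only, not HP's system.

def reallocate_cooling(readings, total_cfm):
    """readings: {rack_id: (intake_temp_c, exhaust_temp_c)}
    total_cfm: total raised-floor airflow available
    Returns {rack_id: cfm_allocated}."""
    # Use the temperature rise across each chassis as a heat-load proxy.
    deltas = {rack: max(exhaust - intake, 0.0)
              for rack, (intake, exhaust) in readings.items()}
    total_delta = sum(deltas.values())
    if total_delta == 0:
        # No measurable load anywhere: distribute airflow evenly.
        share = total_cfm / len(readings)
        return {rack: share for rack in readings}
    return {rack: total_cfm * d / total_delta for rack, d in deltas.items()}

if __name__ == "__main__":
    sensors = {"rack-a1": (18.0, 30.0),   # 12 C rise: hot spot
               "rack-a2": (18.0, 24.0),   # 6 C rise
               "rack-a3": (18.0, 18.0)}   # idle
    print(reallocate_cooling(sensors, 9000.0))
```

In this sketch, a rack with twice the temperature rise gets twice the airflow; a production system would instead drive the adjustable vents from the CFD model and live sensor data.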
While routing data center cooling to where it's needed isn't new, it is noteworthy that HP doesn't plan to adopt any next-generation liquid cooling, even as others involved in major greenfield data center projects plan for new cooling strategies. HP also sells its own liquid cooling technology.
Cooling efficiency is just part of the energy savings HP plans to reap from this project. Data center site selection was very important. Telecom and power costs were the No. 1 issues, and the U.S. is where HP found the cheapest rates. While HP has offices all over the world, developing countries lack the existing infrastructure for cheap data centers. And Texas' ability to produce its own cheap, reliable power made it the right place for two of the three locations, Austin and Houston; Atlanta is the third.
Analysts said the move could be a model for how other businesses can run their data centers in the future. But a project glitch could spell public relations disaster for the company.
Let us know what you think about the story; e-mail: Matt Stansberry, Site Editor