Achieving a more efficient data center design

Data centers have trouble with growth. There never seems to be enough power, cooling or space to tackle the computing demands you have now, and the problem only gets worse when you project computing needs over the next few years. But it may not be necessary to foot the bill for a brand new data center build. Technologies are appearing that can change the face of data center design and help IT administrators meet future needs without starting from scratch. Let's examine a few of those technologies and what they mean for IT departments.

Data center growth paths
There are three basic approaches to data center growth: horizontal growth, vertical growth and technology refresh. Horizontal growth is straightforward hardware addition: if you need another server, you buy one and put it on the floor. That also means providing the additional power, cooling and space the scale-out demands. You can see how this paradigm drives energy costs up and eventually necessitates an investment in new data center facilities. Traditional horizontal growth is the least desirable and most expensive path, and there are better options available.

Instead, consider the appeal of vertical scale-up. Remember that most data centers don't fill racks or blade chassis completely. This mitigates system weight and hot spot problems within the data center, but it also wastes an enormous amount of physical space, accelerating the need for a larger facility. Experts such as David Cappuccio, managing vice president and chief of research at Gartner Inc., urge IT planners to maximize the use of rack space. Adequate power, cooling and floor loading are still concerns, but maximizing the use of physical space can extend the data center's life for years and defer an enormous capital expense.

Also consider accelerating technology refresh cycles. Virtualization can vastly extend the service lifetime of your servers, but hanging on to aging hardware isn't always advisable. New servers pack more computing resources into the same form factor while using far less power than current systems. The net result is that savings in power and cooling can more than pay back the cost of the more capable server, and chief technology officers are re-evaluating the wisdom of delaying the next refresh. In many cases, the upgrade can be performed in place by simply migrating virtual machines (VMs) off an old server, replacing it in the rack, and then migrating the VMs back to the new machine with little (if any) noticeable downtime, as the sketch below illustrates. This is another strategy that can vastly extend the facility's service life.
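For illustration, here is a minimal sketch of that drain-and-replace pattern for KVM hosts managed through libvirt. The host URIs are hypothetical placeholders, and this is not a prescribed procedure; a production migration would also need shared storage and proper error handling.

```python
# Minimal sketch: live-migrate every running VM off an aging KVM host
# so the hardware can be swapped, using the libvirt Python bindings.
# The host URIs are hypothetical, for illustration only.
import libvirt

OLD_HOST = "qemu+ssh://old-server.example.com/system"  # hypothetical
NEW_HOST = "qemu+ssh://new-server.example.com/system"  # hypothetical

src = libvirt.open(OLD_HOST)
dst = libvirt.open(NEW_HOST)

# Live-migrate each running guest so workloads stay up while the
# old chassis is emptied and replaced.
for dom in src.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
    print(f"Migrating {dom.name()} ...")
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```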

Cappuccio cites the example of a 2,700-square-foot data center with 1,200 physical servers at a rack density of 60% and a 10% growth rate. According to Cappuccio, that data center would need 2,200 servers and 5,200 square feet of facility space in just seven years if it stuck to a traditional growth path. By comparison, Cappuccio said that increasing the rack density to 80% to 90% would support that data center's growth needs for five years, and would only require another 1,000 square feet of space in seven years (assuming that adequate power and cooling are available). He also said that a technology refresh approach starting at 60% rack density would support the data center's growth needs for seven years within the same space footprint, as rack use grew to about 90%.
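The projection is simple compound growth. The sketch below reconstructs the rough arithmetic; the servers-per-rack figure is an assumption for illustration, and the compounded totals land slightly above the rounded 2,200-server figure cited above.

```python
# Rough reconstruction of the compound-growth arithmetic behind the
# projection above. Only the growth rate and rack densities come from
# the article; the 30-servers-per-full-rack figure is an assumption.
GROWTH = 0.10                 # 10% annual server growth
SERVERS_NOW = 1200
SERVERS_PER_FULL_RACK = 30    # assumption for illustration

def project(years, density):
    """Servers after `years` of growth, and the racks needed at the
    given rack density (fraction of each rack populated)."""
    servers = SERVERS_NOW * (1 + GROWTH) ** years
    racks = servers / (SERVERS_PER_FULL_RACK * density)
    return round(servers), round(racks)

# Traditional path: seven years of growth absorbed at 60% rack density.
print(project(7, 0.60))   # ~2,338 servers -> ~130 racks
# Denser path: the same growth at 90% density needs roughly a third
# fewer racks, deferring the need for new floor space.
print(project(7, 0.90))   # ~2,338 servers -> ~87 racks
```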

When the need for additional space becomes unavoidable, businesses have other options for data center growth. Outsourcing is becoming more popular, allowing less-critical workloads to run with a public cloud or Infrastructure as a Service (IaaS) provider. This mitigates the need for more hardware, energy and cooling, and lets businesses shift some capital expenses to operating expenses. Modular (or containerized) data centers offer another alternative to new facility construction. Containers fit easily adjacent to existing data centers, include their own servers and cooling, can be stacked vertically for additional growth, can be delivered in weeks rather than years and offer very aggressive power usage effectiveness (PUE). Cappuccio suggests that some containers can approach a PUE of 1.05.
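PUE is the ratio of total facility power to the power actually delivered to IT equipment, so a PUE of 1.05 means just 5% of the power budget goes to cooling, power distribution and other overhead. A quick illustration, with hypothetical kilowatt figures:

```python
# PUE = total facility power / IT equipment power.
# The kW figures below are hypothetical, for illustration only.
def pue(total_facility_kw, it_load_kw):
    return total_facility_kw / it_load_kw

print(pue(2000, 1333))  # ~1.50: a typical legacy facility
print(pue(1050, 1000))  # 1.05: a container with only 5% overhead
```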

Watch the data center design trends
While it is certainly possible to extend the working life of many existing data centers, there will come a time when the current facility is simply not adequate to meet the business’ growth needs, and a new build will become inevitable. This is the time when a business can take advantage of some of the most creative and innovative data center design trends to ensure the longest and most cost-effective service life. Cappuccio outlines several trends to consider:

Scale vertically first and then scale horizontally. You've seen the potential benefit of vertical scaling: make maximum use of available rack space before deploying and populating another rack or chassis. This isn't always possible in an older facility where power, cooling or floor load ratings are inadequate, but it's an absolutely essential design requirement when building a new facility or retrofitting an existing structure.

Build or retrofit for high-density deployments. This builds on the idea of vertical scaling by choosing racks and other infrastructure hardware that optimize hardware layouts and cooling. Examples include APC's InfraStruxure architecture, which promises a 25% increase in power and cooling capacity with a 15% smaller footprint, and IBM's high density zone, which touts rapid deployment of high-density hardware in existing data centers at up to 35% less cost than a site retrofit.

Build several density zones. Data center design does not have to be an all-or-nothing proposition, and an increasing number of new designs incorporate multiple density zones. For example, rather than build an entire data center with aggressive (and expensive) cooling systems, consider dedicating one portion to high density, another to medium density, and another to low density (for legacy systems or low-priority test and development platforms). These can sometimes be constructed as separate rooms, but density zones can also be built within the same room if containment and air handling are designed properly. This is a means of mitigating overall costs.

Build multi-tiered data centers. There is no reason to build a Tier 4 data center to host test and development systems. Similarly, putting mission-critical transactional operations in a Tier 2 data center would probably be unwise. So Cappuccio suggests considering a multi-tiered data center design strategy where applications can be deployed on the corresponding “tier” of performance, reliability and data protection. The concepts of multiple tiers and density zones are often used together to some extent, depending on particular business goals.

Use free cooling and reuse heat. Locate and design new data centers to maximize the use of free cooling resources like cold outside air or water. When free cooling technologies are deployed together with higher ASHRAE temperature setpoints in the data center, the energy savings can be enormous over the lifetime of the facility. Computing equipment built to future ASHRAE Class A3 and A4 standards may need little (if any) mechanical cooling.

Build small and build often. Organizations with distributed data centers are best positioned to benefit from this advice. The goal is to maximize the use of physical space with vertical scaling and efficient cooling technologies, which should reduce the overall size (and expense) of each facility. When multiple facilities are involved, it's much more practical to stagger the construction or retrofit of those facilities than to attempt a massive multi-site project. The use of data center containers or other prefabricated data center designs can lower costs and speed deployment.

More workloads in less space
Efficiency and scalability are the watchwords for modern data centers, and CTOs must consider how to operate more workloads with less hardware, power and cooling, yet get more viable life from the data center facilities they build. Cappuccio predicts that by 2016, 60% of new data centers will be about 40% smaller than current facilities, yet support 300% more workloads for the business. There are many driving factors, including the need to rein in rising energy costs, the threat of carbon taxes, the desire for social responsibility and the need to address regulatory pressures.
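It's worth spelling out the workload density that prediction implies: 300% more workloads means four times the count, and about 40% smaller means 0.6 times the floor space, so workloads per square foot would rise by a factor of roughly 6.7. A quick check of the arithmetic, using a hypothetical baseline:

```python
# Workload density implied by the prediction: 40% smaller facilities
# (0.6x the space) running 300% more workloads (4x the count).
# The baseline figures are hypothetical, for illustration only.
baseline_workloads = 1000
baseline_sqft = 5000

new_workloads = baseline_workloads * 4.0  # "300% more"
new_sqft = baseline_sqft * 0.6            # "about 40% smaller"

old_density = baseline_workloads / baseline_sqft
new_density = new_workloads / new_sqft
print(new_density / old_density)  # ~6.67x workloads per square foot
```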

This was first published in November 2011
