
A data center consolidation plan 800 years in the making

In this Q&A, learn how an 800-year-old institution plans to put 200 server rooms from more than 100 self-governing departments under one roof and one management strategy.

Any data center consolidation plan can be tough to pull off, but try unifying 200 server rooms and scattered IT operations from an 800-year-old institution into a modernized data center.

Ian Tasker, data center manager at Cambridge University, intends to do just that. The data center consolidation plan pulls operations from independent colleges and departments into a modularly designed and up-to-date data center building, centralizes governance over IT services and unearths new university-wide opportunities where IT can be a partner to administrators, students and lecturers. Standardizing IT equipment and encouraging virtualization will help the data center reach higher levels of power efficiency and utilization.

The consolidation plan also requires a sensitive balance of encouraging conformity without stripping individual operations of their capabilities and sense of control. For example, while Cambridge put in a shared network infrastructure for the campus, its use isn't mandatory. Instead, Tasker coaxes departments onto his centralized infrastructure by demonstrating the inefficiencies of duplicated network, storage and other infrastructure.

Cambridge's 2-2.5 megawatt central data center will start out with about 300 racks of servers, run by a small team of IT ops (monitoring, reporting, capacity, patching and data hall control) and maintenance and security personnel. Tasker explained the university's current and future data center consolidation and expansion plans, as well as the drivers and tools that enable this change.
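Those figures imply a rough per-rack power density. As a back-of-envelope check -- the article doesn't specify whether the 2-2.5 megawatts is total facility power or IT load, so this sketch shows both readings, using the PUE target of 1.2 that Tasker cites below:

    # Back-of-envelope rack density from the stated figures. Whether 2-2.5 MW
    # is facility power or IT-only load isn't specified, so both readings
    # are shown; the PUE target of 1.2 is cited later in the interview.
    racks = 300
    for capacity_mw in (2.0, 2.5):
        as_it_load = capacity_mw * 1000 / racks            # if the figure is IT load
        at_pue_1_2 = capacity_mw * 1000 / 1.2 / racks      # if it's facility power
        print(f"{capacity_mw} MW: {as_it_load:.1f} kW/rack as IT load, "
              f"{at_pue_1_2:.1f} kW/rack if facility power at PUE 1.2")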

What's the goal of a central data center?


Ian Tasker: The goal is to free up space, but primarily to create a centralized, shared service with efficiency savings for carbon output and power use. We're bringing in more accurate monitoring; the new data halls are designed with metering and monitoring in mind. We deployed energy and building management systems and [data center infrastructure management] DCIM.

DCIM is essential because there are a lot of tools with different levels of information. As the data center manager, I wanted to pull information from all the disparate systems and look at it in one pane of glass on the Emerson Avocent Trellis DCIM: power, cooling, IT equipment and so on. DCIM reporting lets us maximize usage over time. We started [with] what currently exists around the university. Metering and information were piecemeal, but from the best available analysis we can see that server rooms right now are at a 1.7 to 3, even 3.5 [power usage effectiveness] PUE. Cambridge's average IT room PUE is just over 2. With this project, we're aiming for a PUE of 1.2 or better.
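PUE is total facility power divided by the power delivered to IT equipment, so the figures above translate directly into overhead: a PUE of 3.5 means two and a half watts of cooling and distribution loss for every watt of computing. A minimal sketch of the calculation (the meter readings here are invented, not Cambridge's):

    # PUE (power usage effectiveness) = total facility power / IT equipment power.
    # The meter readings below are illustrative, not actual Cambridge figures.
    def pue(total_facility_kw: float, it_load_kw: float) -> float:
        return total_facility_kw / it_load_kw

    print(pue(170.0, 100.0))  # 1.7 -- best of the existing server rooms
    print(pue(350.0, 100.0))  # 3.5 -- the worst case cited above
    print(pue(120.0, 100.0))  # 1.2 -- the new facility's target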

There's no way to know for sure what cost savings to expect when PUE reaches that goal, because the university doesn't attribute specific energy bills to IT rooms, but we're targeting a 40% reduction in IT energy costs. By rough calculation, that's somewhere in the region of 1 million pounds per annum [about $1.57 million U.S.] ...
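The 40% target follows directly from the PUE figures: at a constant IT load, total energy scales linearly with PUE, so dropping from the campus average of just over 2 to the 1.2 target cuts total consumption by roughly 40%. A quick check (the IT load value is arbitrary; only the ratio matters):

    # At constant IT load, total energy = IT energy * PUE, so the saving
    # depends only on the PUE ratio. The 1,000 kW IT load is arbitrary.
    it_load_kw = 1000.0
    before = it_load_kw * 2.0  # campus average PUE, just over 2
    after = it_load_kw * 1.2   # target PUE
    print(f"reduction: {(before - after) / before:.0%}")  # 40%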

We can achieve higher efficiency by virtualizing platforms and refreshing equipment in the move, making the best use of the central data center space and implementing evaporative cooling systems -- making sure that we can operate the space successfully. DCIM pulls in all the information to guide this process.

The move also will offer more consolidated management, standardize service delivery and improve security and availability.

What about soft issues, like ownership and control, in a major data center consolidation?

Tasker: That's a tension that's inherent to university establishments [and is addressed in several ways].

We encourage [faculty] to buy in and not buy their own hardware. There is a cultural change that has to take place. Most users do care about energy efficiency -- they also care about enhanced security, which opens up avenues of new research and partnerships.

Starting small and scaling the offering up as we develop capabilities is essential to the consolidation plan. We're offering data from the IT equipment through Web browsers and portals so users don't feel like they've lost control over their equipment.
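As a sketch of what might sit behind such a portal, here is a hypothetical read-only status endpoint; the rack IDs, fields and data model are invented for illustration and are not Cambridge's actual system:

    # Hypothetical read-only telemetry endpoint of the kind a consolidation
    # portal might expose so departments can still see "their" equipment.
    # Rack IDs, fields and values are invented for illustration.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    RACKS = {  # stand-in for data aggregated from monitoring/DCIM exports
        "eng-01": {"power_kw": 6.2, "inlet_temp_c": 21.5, "owner": "Engineering"},
        "phy-03": {"power_kw": 4.8, "inlet_temp_c": 22.1, "owner": "Physics"},
    }

    class StatusHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # /racks/<id> returns one rack's telemetry as JSON.
            rack = RACKS.get(self.path.strip("/").split("/")[-1])
            self.send_response(200 if rack else 404)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps(rack or {"error": "unknown rack"}).encode())

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), StatusHandler).serve_forever()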

Is cloud computing part of this change?

Tasker: Like many others, we're having the debate about public cloud, but data sensitivity is an issue, both for personal student information and research data. Eventually, the data center(s) will probably end up with a private cloud infrastructure, though that might be in the distant future. On the other hand, we don't think of software as a service as "going to the cloud," and we look for places where we can utilize SaaS rather than build our own solution ...

What is your vendor strategy?

Tasker: In the supercomputing research area of the data center, we've standardized as much as possible on one or two vendors, mainly Nvidia graphics processing units and Dell devices.

Dell supplies everything from laptops to data center equipment at Cambridge, but we don't have exclusivity on vendors, [so you'll see] HP, IBM and nearly every other vendor out there in the data center. In the general IT area, we're offering a shared platform on a standard server, like a Dell product, but a lot of users will have their own flavors of equipment that they'd like to keep. We're not taking old equipment and moving it in -- we'll phase that out and refresh -- but departments don't have to change their preferred vendors. It will remain a heterogeneous server environment.

We manage it by taking all the information from the disparate brands, which are indeed quite similar, and interrogating and analyzing the output. Problems arise when you just look at a few parameters. Avocent/Emerson engaged with those various device suppliers so that the DCIM works with output from various APIs, SNMP and other protocols. The DCIM configures and manipulates data from those sources, translates it into 'Trellis speak,' and can then drill down and compile the data. Pair that comprehensive data with facility information from the DCIM, and we can make recommendations back to IT teams that have ownership [over a particular cluster or environment].
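To make that normalization step concrete, here is a rough sketch of per-vendor adapters mapping differently shaped telemetry into a single schema before analysis. The payload fields are invented, and this is not Trellis' actual internal format:

    # Sketch of normalizing telemetry from different vendors into one schema.
    # Payload shapes and field names are invented; Trellis' internal format
    # is not public.
    def from_vendor_a(raw: dict) -> dict:
        # This vendor reports watts and Fahrenheit.
        return {"device": raw["hostname"],
                "power_kw": raw["power_w"] / 1000.0,
                "inlet_temp_c": (raw["inlet_f"] - 32) * 5 / 9}

    def from_vendor_b(raw: dict) -> dict:
        # This vendor reports kilowatts and Celsius via SNMP polling.
        return {"device": raw["sysName"],
                "power_kw": raw["kw"],
                "inlet_temp_c": raw["temp_c"]}

    readings = [
        from_vendor_a({"hostname": "srv-eng-01", "power_w": 5200, "inlet_f": 71.6}),
        from_vendor_b({"sysName": "srv-phy-07", "kw": 4.1, "temp_c": 21.0}),
    ]
    # With everything in one schema, cross-vendor analysis is a simple pass.
    print(sum(r["power_kw"] for r in readings))  # total IT load across brands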

How are you measuring success?

Tasker: We design the facilities to accommodate change. It's not quite modular, but we're building in segments so that we can take the benefits of modern technologies as they come along.

The data halls share power supply and use new cooling and air handling technologies that don't disrupt the IT equipment layout. We're also putting in a fiber backbone so we can take advantage of faster speeds, 40 or 100 Gbps Ethernet, as they come along. The segmented design means we can extend capacity on the same site without disrupting existing IT operations -- we took a lesson from how colocation providers design data centers. We measure capacity planning success by PUE, the utilization of space and its carbon output.
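A minimal sketch of those capacity metrics in practice, pairing space utilization with PUE to decide when to fit out the next build segment; the threshold and readings are illustrative, not Cambridge's:

    # Illustrative capacity metrics: space utilization plus PUE.
    # The 85% threshold and the readings below are invented.
    def utilization(used_racks: int, total_racks: int) -> float:
        return used_racks / total_racks

    def pue(total_facility_kw: float, it_load_kw: float) -> float:
        return total_facility_kw / it_load_kw

    # A segment nearing its ceiling signals it's time to fit out the next one.
    if utilization(used_racks=260, total_racks=300) > 0.85:
        print("plan the next build segment")
    print(f"PUE: {pue(2000.0, 1667.0):.2f}")  # ~1.2, on target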

We're analyzing the capacity data as well to get a feel for the future requirements of the business. Can we deliver additional services provisioned centrally in the data center and offer them out to the 120 departments at Cambridge? We want to enable departments to do more computing without using their own space and generate further income for the university ...
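One simple way to get that feel from capacity data is a linear trend projection, sketched below. The monthly load samples are invented, and real DCIM forecasting is more sophisticated than a straight line:

    # Minimal demand projection from capacity history, assuming a linear
    # trend. The monthly IT-load samples are invented. Requires Python 3.10+.
    import statistics

    months = list(range(12))
    it_load_kw = [820, 835, 860, 870, 900, 915, 940, 955, 980, 1000, 1020, 1045]

    slope = statistics.linear_regression(months, it_load_kw).slope  # kW/month
    print(f"growth: {slope:.1f} kW/month; "
          f"in 3 years: {it_load_kw[-1] + slope * 36:.0f} kW")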

The data center team gets momentum from talking to people and sharing how they're using the space and DCIM. They're seeing the current limitations and asking about advantages. It's key to share with departments what will happen [during and after] consolidation, what information they'll get and the possibilities it enables ...

What advice do you have for others undertaking a data center consolidation?

Tasker: Understand what you want to deliver and your key drivers, and accept that it is not an overnight success. It's a slow process and you have to start small and grow. We made sure that we can scale up as we grow from the first site to multiple sites.

The original estimate was that this project would take 18 months to two years. Realistically, not everything will go into one data center, so we'll need to consolidate into a small number of centralized sites. The new centralized data center is owned and built on campus, and we may also reutilize some existing space. We're looking into building another data center on campus about 10 km away. The entire move will probably take about five years. … We expect to double [our] capacity in the future, potentially with three on-site data centers. We need to be able to scale and operate long-term, and manage those multiple sites from one tool, one program.

If you're creating a similar data center consolidation plan, choose your partners by how well they can work with you to make the project successful.
