It seems like server technology is changing faster than we can keep track of it. Before you know it, IT pros are behind the eight ball, scrambling to understand and deploy important new technologies they've overlooked. Sometimes, you just want to know how your data center stacks up against the industry overall.
Earlier this year, SearchDataCenter.com released its Data Center Decisions survey, designed to collect information about hardware and operating systems, virtualization and the cloud, facilities, and data center management. We’ve compiled the data, observed the trends and released the results in our State of the Data Center: 2011 special report.
In this podcast, Stephen Bigelow, senior technology editor with SearchDataCenter.com, sits down with Bill Kleyman, virtualization architect with MTM Technologies Inc., to talk about some of the results and what they mean for IT professionals.
Stephen Bigelow: We see everything from 1U, 2U and blade systems packing in more computing power than ever, but how are data center owners and operators selecting and purchasing new server technology?
Bill Kleyman: The key thing to remember here is that every environment is unique, with its own specifications. Before administrators can make a purchase, they need to evaluate their goals for the data center. Are they rapidly expanding? Are they about to take on a massive amount of new users? Are they looking to virtualize an environment? Or, are they pretty stable? Are they, for example, a stagnant manufacturing firm that is not looking to perform a lot of upgrades or push too many virtual machines (VMs) out into their environment?
Based on that, administrators and IT managers need to look at what server technology they want to go after. Spending money on blade systems is not a bad thing. Spending money on 1U or 2U rack-mount servers is not a bad thing either. However, it is important to understand where [this equipment] fits into the data center design. If an administrator is looking to take on a desktop virtualization project, a blade environment might be great for them. However, if their environment is more stable, is not expanding rapidly and really doesn't require an integrated blade chassis, deploying a 1U or 2U environment is not a bad thing at all. This approach supports a lot of users and offers a lot of density, as well as high-availability failover, with just a few 1U boxes. The key thing to remember is that administrators need to sit down and plan their environment before deciding what server technology they need and making the purchase.
Bigelow: Our research tells us that integrated infrastructures are gaining ground slowly, but how do you see adoption rates, and more importantly, how are adopters using the technology?
Kleyman: Integrated infrastructure is a little bit of a buzz term, and everybody has their own take on it. For example, Cisco Systems calls it the Unified Computing System (UCS). Let's say you're expanding your virtualization solution. Instead of buying a typical blade chassis, or 1U or 2U server, you're going to go with this integrated infrastructure approach. That means you have the chassis, and inside of it you have some sort of switching infrastructure. You also have a blade infrastructure and you have a management infrastructure. So the entire environment is under one roof. The management granularity inside an integrated infrastructure is phenomenal. Let's take the Cisco UCS Manager as an example. If you have 20 Cisco UCS blade chassis, the management of these chassis becomes an absolute nightmare. What a lot of these hardware manufacturers are doing is simplifying the process. Using one graphical user interface (GUI), you're able to go in and see the health, state and viability of every server. You're able to clone hardware profiles. The actual reporting is so granular that upon start-up, you’re able to see any error occurring on any server all the way down to the DIMM slot.
As far as adoption, you're going to see a lot of integrated infrastructures deployed toward virtualization and consolidation environments. It is particularly beneficial to take numerous old computers and V2V or P2V them, remove that physical footprint and put it under one well-structured roof. Hewlett-Packard (HP) Co., Cisco and a few other manufacturers are creating phenomenal solutions for IT administrators to leverage. It really does simplify data center management. That's the goal: to simplify and get the biggest bang for your buck.
There's also another element here. Blade chassis and integrated infrastructure are becoming affordable. There is no longer sticker shock. Through Cisco and HP, you can get a well-built integrated infrastructure that includes some kind of a switching element inside your chassis, along with two or three blades, for around $40,000. All of a sudden, a medium-sized enterprise, which could not previously look at this kind of solution, can take a step back and decide whether to buy one or two blade servers, or invest in the chassis while only using a couple of those blade slots, and still have room for growth. With this affordability, we are going to see more users adopt integrated infrastructure and move toward blades a little bit faster.
Bigelow: Virtualization use is constantly expanding, and a large portion of our survey respondents are using virtualization for disaster recovery (DR). How do you see users taking advantage of virtualization?
Kleyman: Surprisingly, there are still some people who are just slightly hesitant in adopting virtualization technology. The definition of virtualization is becoming a little bit broader than just taking a computer and making it virtual. We’re now talking about application streaming, desktop streaming, hosted desktops and virtual server infrastructures. There are a lot of new aspects that fall into virtualization. What's happening now is that end users see this as an advantage, because you're getting better performance at the end user point. For example, you could take a simple terminal and stream entire desktops to it, utilizing the hardware at the endpoint to maximize the streaming efficiency. Or you can host entire desktops at a centralized point and have users adopt a “bring your own computer” policy, where they can literally come in with their own laptop and are able to connect to anything they want on the network. It is making data center management easier, and that's what IT administrators are looking for. It is so much easier to deploy a single desktop image to 100 users, rather than take a CD or use some type of imaging software. Everything is managed from a single point. That is what virtualization gives you. It is always going to be easier to spin up a new VM than to start up a new fresh piece of hardware. It's always going to be easier to back up a specific piece of a VM rather than back up a physical box.
DR plays an even bigger role in all of this. With virtualized DR, you can go from a situation in which your entire primary infrastructure is down, and you need to launch new VMs at an entirely dormant site, to a point where all of a sudden all of your users are redirected to the new colocation facility and the business stays up. So, virtualization plays a big role in the ease of use and the viability of any environment, helping it to move forward and continue growing.
Bigelow: Our research tells us that the number of physical hosts and the number of VMs are both increasing. What does this mean for VM sprawl and overall data center management? Is this a problem in the making?
Kleyman: We've all heard of desktop sprawl and server sprawl, and now, with the ease of virtualization, we are going to hear more about VM sprawl. It is a problem. Young administrators can get "click-happy" and press Next five or six times. All of a sudden you have a new VM. If this keeps happening, then you could have 50 or 60 VMs sitting dormant, underutilized or improperly utilized on the server, taking up space and resources. The important thing is to manage and monitor your virtual infrastructure. Managing your virtual environment is just as important as, if not more important than, managing a physical environment.
You need to understand that these stray VMs play a role in your environment. Having them just sit there is literally wasting money, because these resources can be used somewhere else. Whether you're using VMware Inc.’s vSphere, Citrix Systems’ XenServer or Microsoft’s Hyper-V, there are different management tools in the GUI to see how many VMs are spinning up and how they are being utilized.
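As a rough illustration of the kind of check Kleyman describes, here is a hedged Python sketch that classifies VMs by average CPU utilization to flag dormant or underutilized machines. The VM names, thresholds and figures are all assumptions for illustration; in practice the utilization data would come from the management tools of vSphere, XenServer or Hyper-V, via their own APIs or report exports.

```python
# Illustrative sketch only: the thresholds and VM names below are made up.
# Real utilization numbers would be exported from a hypervisor management tool.

DORMANT_CPU_PCT = 2.0    # average CPU below this suggests the VM is idle
LOW_CPU_PCT = 10.0       # below this, the VM may be a consolidation candidate

def classify_vms(vm_stats):
    """Group VMs by 30-day average CPU utilization (percent)."""
    report = {"dormant": [], "underutilized": [], "active": []}
    for name, avg_cpu in vm_stats.items():
        if avg_cpu < DORMANT_CPU_PCT:
            report["dormant"].append(name)
        elif avg_cpu < LOW_CPU_PCT:
            report["underutilized"].append(name)
        else:
            report["active"].append(name)
    return report

# Example data (hypothetical)
stats = {"web01": 42.5, "test-old": 0.4, "build02": 6.1, "db01": 55.0}
print(classify_vms(stats))
```

A report like this gives the administrator a short list of VMs to ask about, rather than trying to eyeball eight or nine guests per host.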
There are also a lot of other tools you can use. Using tools such as Microsoft's Performance Monitor or Citrix's EdgeSight, you can see performance metrics and monitor what's being used at endpoints. By collecting this information and understanding how the environment interacts with end users, an administrator is able to make a better judgment as to what should and shouldn't be running. I often come into environments where there is a physical host and there are eight or nine VMs running. I will sit down with the administrator and ask, "Do you need these three VMs?" And they'll usually say, "I don't even know what those three do."
Those VMs are taking up storage on the storage area network, taking up networking resources and taking up valuable hardware resources that could be distributed somewhere else. Always know what your VMs are doing, whether they are desktops, VM applications or full virtual servers. Having them on the machine just because you think they need to be there isn't really a good excuse. Work is required to make sure there isn't too much VM sprawl going on in the environment.
Bigelow: And this brings us to our last question. We know that virtualization is putting a new emphasis on data center management, but where do you see data center management falling short? How should system managers improve their use of management tools?
Kleyman: The last part of your question, I think, is the most important element of this conversation. How should managers improve the use of their data center management tools? A lot of times these management tools are present; they are just not used well. Make sure you monitor your infrastructure and set up alerts. That is very important. Administrators may need to have some sort of workflow automation process that tells them when they need to spin up VMs. It is also important to set up utilization alerts. There are a lot of times when a physical host gets overburdened, and if no alerts are set up, the machine continues to be overburdened. That is an example of improper utilization of data center management tools.
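To make the utilization-alert idea concrete, here is a minimal Python sketch of a threshold check. The host names, sample values and the 85% threshold are assumptions for illustration; a real deployment would pull these samples from the hypervisor's monitoring tools and route the alerts through its notification system rather than printing them.

```python
# Minimal sketch of a host-utilization alert check. The threshold and
# sample data are illustrative assumptions, not values from any real tool.

CPU_ALERT_PCT = 85.0  # alert when a host's CPU utilization exceeds this

def check_hosts(host_samples, threshold=CPU_ALERT_PCT):
    """Return alert messages for any host whose CPU exceeds the threshold."""
    alerts = []
    for host, cpu_pct in host_samples.items():
        if cpu_pct > threshold:
            alerts.append(
                f"ALERT: {host} CPU at {cpu_pct:.0f}% "
                f"(threshold {threshold:.0f}%)"
            )
    return alerts

# Hypothetical samples from two hosts
samples = {"esx-host-01": 92.0, "esx-host-02": 40.0}
for msg in check_hosts(samples):
    print(msg)
```

The point is not the code itself but the habit: an overburdened host should page someone automatically instead of being discovered during an outage.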
It is important to take the time to learn what a hypervisor or virtualization platform offers. A lot of times, engineers or consultants will go in and do their best to explain the technology, but in-house administrators need to take the time to learn what they have on their hands. This is very powerful stuff. You're talking about enterprise-ready solutions. So obviously, they are going to have some sort of management tools available to prepare managers for VM sprawl, excessive machine utilization and error monitoring. Having these tools at your fingertips can really reduce issues or problems that occur within the data center.