Essential Guide

Taking charge of VM allocation, troubleshooting methods

A comprehensive collection of articles, videos and more, hand-picked by our editors

Improving server capacity planning

Allocating computing resources to changing workloads is an ongoing challenge, but proper server capacity planning can lead to more efficient use of a data center's resources.

Server technology is leaping forward with newer processors, better RAM capabilities and increased storage capacities.

For example, processor manufacturer Tilera Corp. recently released its TILE64 family of multicore processors, which packs 64 identical, full-featured cores onto a single chip. Each tile (core) includes its own L1 and L2 caches, so each tile can independently run a full operating system (OS). New physical server technology has also taken RAM capacity to the next level. For example, the HP ProLiant DL580 G7 has 64 DIMM slots and can handle up to 2 TB of DDR3 RAM.

So the question is: Why do some data centers still struggle to properly plan server capacity, server sizing and hardware resources?

The answer revolves around a misunderstanding of how server hardware resources are planned, deployed and managed.

Understanding server capacity planning
The most important part of any server rollout is the planning phase. Since there are several parts to a data center deployment plan, it’s especially important to analyze the server technology to be used. Many times, IT administrators simply throw money at a server and buy the beefiest, most modern system available. Although this approach will probably work initially, it’s certainly not a stable long-term solution.

Industry experts agree that planning is everything, and it is particularly important to start early in the development phase of your environment.

"When capacity planning is concerned, almost everything will depend on the data center infrastructure," said Timothy O'Brien, system consultant at MTM Technologies Inc. "Prior to deploying any server, we have to set a good baseline of expectations to what that specific machine will be doing."

When analyzing server capacity, there are two principal questions to consider:

  1. What is the scope of the server in the environment? Will the majority of its workloads be virtualized, or will it be a standalone physical machine?
  2. Where do you want to go with the environment? More users, more services and more applications all increase the computing resources the environment must accommodate, so have you planned for the future?

One attribute of server capacity planning that is often overlooked is end-user performance. As many IT administrators can attest, a hardware rollout can go south very quickly if the end users are unhappy. For example, a storage area network (SAN) is a powerful and necessary tool in an environment. However, just because you have a lot of storage does not mean you can get the performance you expect. If you over-utilize your SAN, users will experience poor performance in applications and services. It's critical to select tools that can benchmark and track behaviors relevant to the user experience on an ongoing basis. Any unexpected variations in this data can then correlate back to changes, faults or other issues in the data center.
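To make that concrete, here is a minimal, stdlib-only Python sketch of the baseline-and-deviation idea: it keeps a rolling window of response-time samples and flags any sample that falls well outside recent history, so the deviation can be correlated with changes in the data center. The window size, warm-up count and three-sigma threshold are arbitrary assumptions chosen for illustration, not settings from any particular product.

```python
from collections import deque
from statistics import mean, stdev

class ResponseTimeBaseline:
    """Rolling baseline of end-user response times, in milliseconds.

    Samples that deviate sharply from recent history are flagged so
    they can be correlated with changes or faults in the data center.
    """

    def __init__(self, window=100, threshold_sigmas=3.0):
        self.samples = deque(maxlen=window)  # recent samples only
        self.threshold = threshold_sigmas    # how far is "unexpected"

    def record(self, response_ms):
        """Record one sample; return True if it looks anomalous."""
        anomalous = False
        if len(self.samples) >= 30:  # wait for enough history first
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(response_ms - mu) > self.threshold * sigma:
                anomalous = True
        self.samples.append(response_ms)
        return anomalous

baseline = ResponseTimeBaseline()
for t in [120, 118, 125, 122, 119] * 10 + [480]:  # hypothetical samples
    if baseline.record(t):
        print(f"Unexpected response time: {t} ms -- check recent changes")
```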

Servers do not require massive amounts of money to be thrown at them to work properly. Having a solid deployment plan, as well as an understanding of what that machine will be doing, can save time and money.

Working with server sizing and resources
Before any server is deployed in an environment, the engineer must understand what that machine will be used for. By analyzing its workload, the IT administrator can properly allocate resources and appropriately size the machine. Server hardware can be used up very quickly, and any seasoned IT expert will tell you that resources can become extremely limited. Even machines with plentiful computing power can have their resources depleted by workloads or applications that were not properly spec’d.

“Server sizing and resource management are constant issues engineers are faced with in a data center. Understanding what the workload needs to run will set the precedent to what resources need to be used,” added O’Brien. “For example, a simple Web server that [displays] an internal webpage in no way requires multiple cores or a massive amount of memory. However, running a SQL server with numerous users connecting to it at a given time would require more computing power.”
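As a back-of-the-envelope illustration of that point, the sketch below maps a few workload profiles to starting resource estimates. Every profile and number here is a hypothetical assumption; real sizing should come from vendor guidance and load testing against the actual application.

```python
# Hypothetical starting-point profiles -- illustrative only.
PROFILES = {
    "static_web": {"cores": 1, "ram_gb": 2,  "per_user_mb": 1},
    "app_server": {"cores": 2, "ram_gb": 4,  "per_user_mb": 4},
    "database":   {"cores": 4, "ram_gb": 16, "per_user_mb": 10},
}

def size_server(profile, concurrent_users, headroom=1.5):
    """Rough core/RAM estimate for a workload, padded for growth."""
    p = PROFILES[profile]
    ram_gb = p["ram_gb"] + (concurrent_users * p["per_user_mb"]) / 1024
    return {"cores": p["cores"], "ram_gb": round(ram_gb * headroom, 1)}

print(size_server("static_web", 50))  # tiny footprint, as in the quote
print(size_server("database", 500))   # many users -> far more memory
```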

Planning ahead alleviates challenges with resource allocation. Understanding what will run on the server and planning for expansion creates an environment that is capable of handling the demands placed on it. Because almost no server infrastructure ever stays the same, an engineer must be ready for a fluctuating environment.

It’s important to note that, for the most part, servers are not “set and done.” Choosing the right resources for a machine is an ongoing battle, since the computing power available and the workloads placed on these servers are almost always variable. That is, planning for what a machine will run in the future is vital to putting the right amount of RAM, processing power or storage into the device now.

There are three key points to consider when working with resource allocation:

  • Nothing is ever set in stone. Modifying the resources of a server is very common, and some resources can even be allocated to a live workload in real time.
  • Always monitor your environment. Know which resources your servers are using at any given moment. Watching servers perform over time and seeing when demand peaks allows an engineer to distribute resources properly (a minimal sketch of this kind of monitoring follows this list).
  • Know your applications, OSes and platforms. Never assume that an application or OS will always run the same. With service packs, additional users and changes in the overall environment, applications can come to require more RAM, storage or even additional processors at any time.
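As a minimal sketch of the monitoring bullet above, the following loop samples CPU and memory use and reports each new peak. It assumes the third-party psutil library; a production setup would ship these samples to a dedicated monitoring system rather than print them.

```python
import time
import psutil  # third-party: pip install psutil

def watch_peaks(samples=60, interval=1.0):
    """Sample CPU and memory use, reporting each new demand peak."""
    peak_cpu = peak_mem = 0.0
    for _ in range(samples):
        cpu = psutil.cpu_percent(interval=interval)  # % over the interval
        mem = psutil.virtual_memory().percent        # % of RAM in use
        if cpu > peak_cpu or mem > peak_mem:
            peak_cpu, peak_mem = max(cpu, peak_cpu), max(mem, peak_mem)
            print(f"{time.strftime('%H:%M:%S')} new peak: "
                  f"cpu={cpu}% mem={mem}%")
    return peak_cpu, peak_mem

if __name__ == "__main__":
    print("peaks:", watch_peaks(samples=10))
```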

Virtualization
Using a server as a virtualization platform involves many, if not all, of the same planning steps as deploying a standalone physical server. An engineer still needs to evaluate which virtual machines will run on the machine and what those workloads will be doing. As mentioned earlier, planning for a physical deployment always begins with the same questions (a rough capacity check is sketched after the list):

  1. What will this server be running?
  2. What will this server be running in the future?
  3. Does the workload/OS/platform/database/application need to grow alongside the business? Have we planned for this growth, and can this physical machine handle it?
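To illustrate the growth question, here is a rough capacity check that sums planned virtual machine demands against a host's capacity. The host specs, VM list, 2:1 vCPU overcommit ratio and hypervisor RAM reserve are all hypothetical assumptions for the sake of the example.

```python
# Hypothetical host and planned VMs -- numbers are illustrative only.
HOST = {"cores": 16, "ram_gb": 128}

PLANNED_VMS = [
    {"name": "web01", "vcpus": 2, "ram_gb": 4},
    {"name": "sql01", "vcpus": 8, "ram_gb": 48},
    {"name": "app01", "vcpus": 4, "ram_gb": 16},
]

def host_can_fit(host, vms, cpu_overcommit=2.0, ram_reserve_gb=8):
    """Check planned VMs against host capacity.

    Assumes vCPUs can be modestly overcommitted but RAM cannot,
    and reserves some RAM for the hypervisor itself.
    """
    vcpus = sum(vm["vcpus"] for vm in vms)
    ram = sum(vm["ram_gb"] for vm in vms)
    cpu_ok = vcpus <= host["cores"] * cpu_overcommit
    ram_ok = ram <= host["ram_gb"] - ram_reserve_gb
    return cpu_ok and ram_ok

print("fits today" if host_can_fit(HOST, PLANNED_VMS) else "undersized")
# The growth question: does it still fit if the environment doubles?
print("fits after doubling" if host_can_fit(HOST, PLANNED_VMS * 2)
      else "plan for a bigger host or more hosts")
```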

By knowing what will be operating on the machine, an engineer can plan a server that will be more cost-effective. Whatever runs on the physical box is interchangeable in the planning phase: whether the machine hosts Citrix XenServer or VMware, or has SQL Server installed directly on top of a host OS, the idea is to understand the demands on that hardware prior to launch. That way, all future planning can be done accordingly.

Server capacity planning best practices
Planning and mapping out a server environment helps engineers create workloads that use the optimal amount of resources. Since resources are often consumed faster than they need to be, knowing what goes into a server before its launch saves time and headaches down the road.

There are also useful tools that can survey metrics and identify how well a server is operating. One such tool, up.time from uptime software Inc., helps administrators monitor servers, virtual machines, cloud instances, colocation facilities and more. Using up.time’s graphical server monitoring software, an administrator can graph and analyze all critical server resources in the data center, independent of the OS in use. In-depth, granular monitoring of resources such as CPU, memory, disk, processes, workload, network, users, service status and configuration data can help engineers understand what needs to go into their servers.
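up.time itself is a graphical product, so the sketch below is not its API. It is only a generic illustration, using the third-party psutil library, of the kind of point-in-time data such tools collect across the resource categories listed above.

```python
import psutil  # third-party: pip install psutil

def resource_snapshot():
    """Point-in-time snapshot of several key resource categories."""
    disk = psutil.disk_usage("/")
    net = psutil.net_io_counters()
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "mem_percent": psutil.virtual_memory().percent,
        "disk_percent": disk.percent,
        "process_count": len(psutil.pids()),
        "net_bytes_sent": net.bytes_sent,
        "net_bytes_recv": net.bytes_recv,
    }

for key, value in resource_snapshot().items():
    print(f"{key:>16}: {value}")
```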

When working with server capacity planning, remember the following:

  1. Any good deployment will have a build or test phase. This is where you can size your machines and know what you can allocate comfortably. Take the time to understand what your servers will be running and how their respective applications will affect the overall environment.
  2. Research what will go into the server. Understanding the resources an application, database or OS requires will prevent under- or over-allocation. Resource-intensive workloads include Exchange, SQL or any database back-end application. Resource-light workloads include license servers, single-service applications or simple Web application machines.
  3. Always be prepared for fluctuation within an environment. Being ready and knowing what resources are available for workload distribution improves your ability to respond quickly and accurately to change, creating a more balanced and stable server infrastructure.
  4. Never throw money at a server just to fix one or two issues. Properly calculate the cost of a given machine and analyze those numbers. Replacing or upgrading individual components through the life of the machine is completely normal. However, since the cost of server technology is always changing, sometimes it may be better to purchase a new machine than to replace a processor, hard drives or even RAM (a rough comparison is sketched below).
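As a rough illustration of point 4, the sketch below compares the annualized cost of upgrading an aging machine against replacing it outright. All prices and service-life figures are hypothetical assumptions; plug in real quotes before making the call.

```python
def cost_per_year(upfront, remaining_life_years):
    """Spread an upfront cost across the service life it buys."""
    return upfront / remaining_life_years

# Hypothetical figures -- replace with real quotes.
upgrade = cost_per_year(upfront=3500, remaining_life_years=2)  # RAM + CPU for old box
replace = cost_per_year(upfront=7000, remaining_life_years=5)  # new machine

print(f"upgrade: ${upgrade:.0f}/yr, replace: ${replace:.0f}/yr")
print("replacing wins" if replace < upgrade else "upgrading wins")
```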

As server technology keeps progressing, IT administrators will have to do more thorough planning when they decide on their server capacities and resources. Whether the environment consists of all physical servers containing virtual workloads or just a set of dedicated database machines, understanding their role within the data center will allow for better server performance now and in the future.

ABOUT THE AUTHOR: Bill Kleyman, MBA, MISM, is an avid technologist with experience in network infrastructure management. His engineering work includes large virtualization deployments as well as business network design and implementation. Currently, he is the Director of Technology at World Wide Fittings Inc., a global manufacturing firm with locations in China, Europe and the United States.

This was first published in May 2011
