Tip

Choosing the right server for mission-critical applications

Prior to the advent of virtualization and server hardware consolidation, companies were forced to dedicate individual physical hosts to their critical applications. With the introduction of VMware, Citrix's XenServer and Microsoft's Hyper-V, these same companies can potentially run a medium-sized environment from a single 42U server rack. Selecting the proper server was important in the past; it's an even more critical decision now. A single physical host can have anywhere from five to 10 virtual machines (VMs) running on it, so a hardware failure on one of these machines could be a complete disaster.

Mission-critical applications often require dedicated environments with set parameters for resources. By utilizing a hypervisor, engineers are able to give these applications the resources they need through virtualization. The higher the uptime an application requires, the more fault tolerant the environment must become. Buying an expensive server to accomplish this task may no longer be enough. To maintain high availability and uptime within a mission-critical environment, specific requirements must be met.

Plan ahead
Probably the most important task prior to even making a purchase is laying out the architecture for a given application designated as mission critical. A program classified as mission critical will usually have a recovery time objective (RTO), the time between disaster and recovery, of about two to three hours, or possibly even less, to avoid interruptions to the business. To truly gauge the impact of a down critical application, run an appropriate risk assessment. A business impact analysis (BIA) should be completed as part of establishing the RTO. Since data centers and their respective businesses are unique environments, the RTO will be completely dependent on the company.
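
As a rough illustration of the math a BIA puts behind an RTO, the Python sketch below annualizes downtime cost from an assumed outage frequency, RTO and hourly business impact. Every figure is a hypothetical placeholder; a real BIA would pull these numbers from the business itself.

    # Back-of-the-envelope downtime cost estimate.
    # All inputs are placeholder assumptions, not measured values.

    outages_per_year = 2       # assumed unplanned outages per year
    rto_hours = 3.0            # recovery time objective, in hours
    cost_per_hour = 25000.0    # assumed business impact per hour of downtime

    # Worst case: every outage runs the full length of the RTO.
    annual_downtime_hours = outages_per_year * rto_hours
    annual_downtime_cost = annual_downtime_hours * cost_per_hour

    print(f"Projected downtime: {annual_downtime_hours:.1f} hours/year")
    print(f"Projected cost: ${annual_downtime_cost:,.0f}/year")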

During the planning phase, a team will begin by identifying and characterizing the application and its workload. At this point, both the IT engineering and business management teams will take the time to understand exactly what constitutes a mission-critical application.

They will also need to answer a few questions:

  • What are the goals of the application?
  • Which resources are allocated to the application?
  • What resources does the application require now, and how much will it need six, 12 and 24 months from now? (A simple projection sketch follows this list.)
  • Can we allocate more resources to accommodate peak times and temporary surges in demand?
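
To put numbers behind the growth question above, a simple compound-growth projection like the Python sketch below can be run. The baseline figures and the 3% monthly rate are assumptions for illustration; real inputs should come from monitoring data and the application vendor.

    # Minimal resource-growth projection, assuming steady compound growth.
    # Baseline and growth rate are hypothetical; substitute measured values.

    baseline = {"cpu_cores": 4, "memory_gb": 16, "storage_gb": 500}
    monthly_growth = 0.03  # assumed 3% compound growth per month

    for months in (6, 12, 24):
        factor = (1 + monthly_growth) ** months
        projected = {k: v * factor for k, v in baseline.items()}
        print(f"In {months:2d} months: "
              f"{projected['cpu_cores']:.1f} cores, "
              f"{projected['memory_gb']:.0f} GB RAM, "
              f"{projected['storage_gb']:.0f} GB storage")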

It's important to note that not every application can be virtualized. Working with the program's vendor or developers will help answer these questions. When working with applications designated as critical, engineers must take assumptions and guesswork out of the equation.

Once the engineering and business teams have successfully outlined the mission-critical application, the next important step is to determine the means of delivery.

Utilizing the right hardware and software
Engineers will need to have a very clear understanding of the application to facilitate its delivery. There are several key technological elements that have to be evaluated and researched prior to launching a critical application, and they are discussed below:

Virtualization
Despite the remarkable growth of virtualization technology, many engineers are still hesitant to run applications that require high uptime on a virtual platform. Their fears range from VM security to I/O utilization. Although their concerns are valid, research will still need to be done, as every application is unique. Four things play in favor of rolling out an application in a virtualized environment:

  1. Over the past three years, virtualization technology has found its way into many IT environments. Whether in a live or a test environment, most engineers, new and veteran alike, have had the chance to toy around with some type of hypervisor. This should make a typical engineer much more comfortable deploying a program on a virtual platform.
  2. Oracle Corp., Microsoft and other software vendors have taken several steps toward creating virtualization-friendly applications. In fact, many databases are now optimized to run in a virtual environment. Working with a large Exchange or SQL infrastructure is no longer a concern, as these databases are capable of operating very well on a virtual platform. VM failover and redundancy have also boosted confidence in the technology. A critical application running on a dedicated VM can be mirrored to a hot site located many miles away. In the event of a failure, the application can seamlessly resume operation on a server at the recovery site.
  3. Physical server-class hardware now comes virtualization-ready. In fact, processor manufacturers take pride in publishing metrics on how well VMs perform in either a hosted or a bare-metal environment. Note: It's highly recommended to deploy mission-critical applications on a bare-metal hypervisor, as it eliminates the hosted operating system as a point of failure. I/O has become so streamlined that many VMs perform within a percentage point or so of the same workload installed directly on a physical server.
  4. For firms looking to launch a mission-critical application while reducing their hardware footprint, lowering data center costs and improving ROI, virtualization is a good road to take. A company can run multiple applications on one physical server as VMs instead of using individual hardware for each application. (A rough consolidation sketch follows this list.)
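
To get a feel for the consolidation math behind point 4, the Python sketch below estimates how many VMs a single host can carry once an allowance for hypervisor overhead is set aside. The host capacity, per-VM reservations and 10% overhead factor are illustrative assumptions, not vendor figures.

    # Rough VM-per-host consolidation estimate.
    # Capacity figures and overhead are illustrative assumptions.

    host_cores, host_ram_gb = 12, 96
    vm_cores, vm_ram_gb = 2, 8          # assumed per-VM reservation
    hypervisor_overhead = 0.10          # assume ~10% of the host is held back

    usable_cores = host_cores * (1 - hypervisor_overhead)
    usable_ram_gb = host_ram_gb * (1 - hypervisor_overhead)

    # The tighter of the two resources sets the consolidation ratio.
    vms_per_host = int(min(usable_cores / vm_cores, usable_ram_gb / vm_ram_gb))
    print(f"Estimated VMs per host: {vms_per_host}")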

It's important to verify with the vendor whether a critical application can perform well in a virtualized environment. Many times, a thorough testing phase will need to be conducted to see if a program can run in a virtual state or if it requires its own dedicated hardware.

Choosing the right hardware
Because each environment is unique, selecting the proper hardware will depend on the requirements of the application and the business. Some engineers prefer a blade environment while others swear by rackmount servers. If a business is considering virtualization for its critical applications, a blade environment can be a good investment; by using blades, an infrastructure could see improvements in deployment speed and performance. When making the decision, it's important to analyze the overall flexibility of the IT environment, storage utilization, migration and consolidation initiatives, and networking functionality.

Performance
Application performance will be at the top of any engineer's list when dealing with a mission-critical environment. An application given the necessary resources will deliver measurable performance gains and increased productivity. When selecting a server, look for expandability and reliability. This is where understanding the future of a critical application comes into play. By knowing, or at least forecasting, what a program will require, an engineer can make a sound decision on which server to purchase. Make sure that there is always room for growth. An early hardware refresh is not usually on IT management's agenda -- especially if the original hardware was expensive.

Over the past few years, companies such as Intel Corp. and AMD Inc. have taken huge strides in developing advanced semiconductor technologies. Current iterations of Intel and AMD processors contain eight and 12 cores, respectively. Many servers now incorporate these processors and have the ability to expand as well. By increasing the number of cores, companies can now benefit from an increase in performance and reduce the number of physical servers in their environment.

Memory
Whether an environment has virtualized its critical applications or is using a dedicated server, memory will always play a crucial part in a mission-critical program. New server hardware is equipped with more memory slots and support for more advanced DIMM technology. In many environments, a mission-critical application is also a very memory-intensive one. When selecting a server, verify its expansion capabilities and what type of RAM it supports. (A quick sizing check is sketched below.)
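
One way to sanity-check a candidate server's memory expandability is to compare projected VM memory demand, plus a hypervisor allowance and growth headroom, against the box's maximum supported RAM, as in the Python sketch below. Every number here is a hypothetical placeholder.

    # Hypothetical memory-headroom check for a candidate server.
    # All figures are placeholder assumptions.

    vm_memory_gb = [16, 16, 8, 8, 32]   # assumed per-VM allocations
    hypervisor_reserve_gb = 8           # assumed hypervisor/host overhead
    growth_factor = 1.5                 # headroom for roughly 24 months of growth

    required_gb = (sum(vm_memory_gb) + hypervisor_reserve_gb) * growth_factor
    server_max_gb = 192                 # candidate server's maximum supported RAM

    if required_gb <= server_max_gb:
        print(f"OK: {required_gb:.0f} GB needed, {server_max_gb} GB supported")
    else:
        print(f"Undersized: {required_gb:.0f} GB needed, "
              f"only {server_max_gb} GB supported")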

"Fault-tolerant memory is always top on my list, since it's such a frequent point-of-failure. When choosing a server and the required hardware components, memory mirroring and error correcting are a must," said Andre Robitaille, IS consultant with SynerComm Inc. and founder of MilSec.

Placement of the memory inside of the server is important as well. "When selecting a mission-critical server, I prefer rackmount hardware since that allows me to access common internal parts, like RAM, without sliding the whole server from the rack," Robitaille said.

Server redundancy
Having a fault-tolerant machine should also be at the top of a deployment list. Mission-critical applications are deployed with a "zero-downtime" mindset. So, what happens if one of the physical hosts fails? Any server operating as a physical host for VMs or that is simply dedicated to an application must have redundant components built in.

Prior to making a purchasing decision, verify that the following components are redundant and ready for failover (a quick Linux-side check for NIC redundancy follows the list):

  • Power supplies
  • Hard drives
  • RAM
  • Fans
  • Network interface cards (NICs)
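
For the NICs specifically, failover readiness can be spot-checked on Linux hosts that use interface bonding, as in the sketch below. It assumes a bond named bond0 and the standard /proc/net/bonding status file; other platforms and teaming methods expose this differently.

    # Spot-check NIC failover readiness on a Linux host using bonding.
    # Assumes a bond named "bond0"; adjust for the actual interface name.

    BOND_STATUS = "/proc/net/bonding/bond0"

    try:
        with open(BOND_STATUS) as f:
            status = f.read()
    except FileNotFoundError:
        raise SystemExit("No bonded interface found; NICs may not be redundant.")

    slaves = status.count("Slave Interface:")
    # Note: the bond reports its own aggregate "MII Status" line as well,
    # so this count includes the bond itself plus each member link.
    links_up = status.count("MII Status: up")

    print(f"Bonded NICs: {slaves}, 'up' status lines: {links_up}")
    if slaves < 2:
        print("Warning: fewer than two NICs in the bond; no failover path.")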

With critical applications, zero downtime is the goal of any environment. Using a solid storage area network environment with data replication and physical server mirroring will help critical applications stay alive. Virtualization technology takes this process a step further by mirroring an entire workload offsite. To supplement failover, these VMs can also have snapshots taken at set intervals for fast recovery should a hot site be unavailable. Remember, it takes much longer to rebuild a physical server environment than it does to spin up a VM on a different server.
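
To relate snapshot frequency to exposure, the short calculation below shows the worst-case data loss implied by a given snapshot interval -- in effect, a recovery point objective. The interval and change rate are assumed values for illustration only.

    # Worst-case data loss implied by an interval-based snapshot schedule.
    # Interval and change rate are illustrative assumptions.

    snapshot_interval_min = 30       # assumed minutes between snapshots
    change_rate_mb_per_min = 20.0    # assumed data written per minute

    # If the host dies just before the next snapshot fires, everything
    # written since the last snapshot is lost.
    worst_case_loss_mb = snapshot_interval_min * change_rate_mb_per_min

    print(f"Worst-case exposure: {snapshot_interval_min} minutes "
          f"(~{worst_case_loss_mb:.0f} MB of changes)")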

Best practices and tips
The more planning a team can accomplish, the better the mission-critical application will fare in the long run. Understanding the application's goal and potential for the future will help gauge what is needed both now and later.

When working with an environment that cannot go down, it's important to remember a few things:

  1. Try to design an environment built on open standards. An infrastructure that utilizes industry-wide standards supports maximum flexibility. Since business and IT are constantly changing, working with open standards gives IT managers a wider variety of vendors to choose from for server upgrades and compatibility.
  2. Make sure all critical servers are well maintained. This involves more than software and hardware maintenance: when deploying a machine with multiple redundant NICs and power supplies, cable management becomes very important. If an emergency arises and a component needs to be changed on a live server, a clean cable environment will make the process go much more smoothly.

    "If I'm the only person in the datacenter and need to do a quick RAM change, I may not always feel comfortable sliding a server entirely out of the rack without someone in back watching the cables of all the other 24/7 servers to make sure something doesn't snag, " Robitalle adds.
  3. Mission-critical applications will require higher-end server technology, which means it may be slightly more expensive. Work with your hardware vendor and run a proof of concept (POC) on a given server to ensure it meets the application's needs. Oftentimes, POCs can run for a couple of months prior to a company making a decision. Even smaller shops can test out a server for 30 days before either committing to it or returning the device. By placing a critical server in a test environment to see what it can handle, engineers can eliminate some variables prior to making the technology live.

About the author: Bill Kleyman, MBA, MISM, is an avid technologist with experience in network infrastructure management. His engineering work includes large virtualization deployments as well as business network design and implementation. Currently, he is the Director of Technology at World Wide Fittings Inc., a global manufacturing firm with locations in China, Europe and the United States.

This was first published in April 2011
