|Server hardware roadmaps|
Selecting the right hardware for a server refresh is critical to smooth IT operations, and it directly affects the health and success of the overall business. New hardware is never selected arbitrarily; it's a long process that maps the future technological needs of a business against new server vendor products. You must also weigh how the refresh will affect budgets, service and maintenance, facilities investments and even the disposal or reallocation of old equipment. Careful consideration of these details up front helps ensure the best server is selected.
This first segment focuses on server hardware issues. There is a wealth of information centered on new server form factors, particularly blade servers, which can concentrate a lot of computing into a relatively small area. Blade servers are an important form factor, but they also place serious demands on data center facilities. Server specifications and computing capabilities also profoundly affect which new server is chosen. With Intel and AMD battling to produce more powerful CPUs, having a perspective on their current offerings (and future directions) can boost your server buying confidence.
Sticking with a set server refresh cycle isn't always possible. Tough economic times often put a crimp in the capex budget or slow down new business projects. Funding issues might require stalling a server refresh and wringing more life from existing hardware. Disposing of displaced hardware can save money, and businesses can often reallocate older servers to secondary tasks, such as backup, disaster recovery, testing and development or other creative uses that let used equipment deliver more value once its production lifecycle is finished.
Blade server technology faces virtualization hit
Rackmount servers and server virtualization are just two factors that could contribute to a decline in blade server technology. Certain network, hardware and economic aspects can also make blade servers unattractive when designing future data center sites.
A look at CPUs for Unix
Although many processor technology advancements target Windows-based virtual machines (VMs), open source systems are also seeing new developments from Intel and AMD hardware. Users are calling for redundant, stable Unix-based physical and virtual machines, and virtualization and migration engineers are demanding improved performance from processor manufacturers.
Delaying a technology refresh cycle
Deciding when it’s necessary to replace data center hardware is becoming increasingly difficult as organizations face dwindling budgets. Although you can extend the usual three- to four-year hardware refresh cycle, organizations must know when to update their systems to ensure optimal performance and efficiency.
Repurposing older hardware
When consolidating a data center and taking advantage of new technologies such as cloud computing and virtualization, it’s essential to plan for the aftermath of excess hardware. Putting old data center hardware to use should be discussed early in the consolidation process.
|Server consolidation|
Server consolidation has quickly emerged as an important practice in modern data centers. Virtualization allows a single physical server to host multiple workloads simultaneously, vastly increasing the utilization of available computing resources while reducing the number of physical servers required to accomplish the same amount of work. Although virtualization and consolidation are now fairly straightforward, wizard-driven processes, server consolidation and its management can present a series of new challenges for IT administrators.
One immediate challenge is allocating the correct amount of computing resources when creating a VM. Assigning too many resources results in waste, while assigning too few can starve a VM and cause poor performance. Proper assessment of computing needs and careful ongoing management can ensure adequate performance and support the optimum number of VMs. Care is also needed to avoid excessive or uncontrolled VM deployment, which can deprive physical servers and important VMs of critical computing resources and push the administrative workload to unsustainable levels.
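The allocation check described above can be sketched as a simple admission test: before a new VM is provisioned, confirm that the host can cover the request after existing guests and a headroom reserve. This is a minimal illustration, not any vendor's tool; the resource names, figures and the 20% headroom policy are assumptions.

```python
# Hypothetical sketch: validate a requested VM against remaining host capacity,
# keeping a headroom reserve so one noisy VM cannot starve the others.
HEADROOM = 0.20  # reserve 20% of each resource for spikes (assumed policy)

def can_place(host, vms, request):
    """Return True if `request` fits on `host` after existing `vms` and headroom."""
    for res in ("cpu_cores", "ram_gb", "storage_gb"):
        used = sum(vm[res] for vm in vms)
        available = host[res] * (1 - HEADROOM) - used
        if request[res] > available:
            return False
    return True

host = {"cpu_cores": 16, "ram_gb": 96, "storage_gb": 2000}
running = [{"cpu_cores": 4, "ram_gb": 16, "storage_gb": 200},
           {"cpu_cores": 2, "ram_gb": 8, "storage_gb": 100}]
new_vm = {"cpu_cores": 4, "ram_gb": 32, "storage_gb": 300}
print(can_place(host, running, new_vm))  # True: the request fits with headroom
```

Rejecting a request here is a prompt to right-size the VM or pick another host, which addresses both over-allocation and over-consolidation in one gate.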
Further, the management tools you choose for the virtual environment can profoundly affect the accuracy and efficiency of your systems management. Tools that accompany specific servers can provide great depth, while third-party tools may sacrifice depth for heterogeneity. Finally, changing server vendors can present serious management and support problems, so administrators need to consider the implications of new hardware platforms beyond cost.
Best practices for VM workload management, resource allocation
VMs can now be created with as little effort as five or six mouse clicks. However, this ease of use can lead to over-allocating computing resources to a VM. As a result, administrators must be savvy about effective VM workload management processes and best practices for resource allocation.
Preventing server over-consolidation
Making the most of a server hardware budget often means packing more VMs onto each physical server to achieve the highest possible density. The problem is that this often leads to server over-consolidation, poor server performance and instability.
Server element managers vs. third-party server management tools
Today’s data centers often have so many servers that managing each one individually is too onerous. Evaluating server management tools and determining the best one is difficult; however, there are certain guidelines you can follow to find the best choice.
Changing server vendors
When switching server vendors, a lot more goes into the thought process than simply deciding on the one with the lowest-priced server hardware and software. Recognizing several red flags can help you decide whether your current vendor fits the bill.
|Capacity planning|
Server virtualization provides IT administrators with a dynamic computing environment where workloads can be created, destroyed, moved and optimized on the fly. This places significant new emphasis on capacity planning. After all, the computing resources in a physical server are utilized much more extensively in a virtual environment, which demands far more comprehensive monitoring and planning so workloads always have adequate CPU, RAM, I/O, storage and other resources, especially as workload demands change and workloads migrate between physical machines.
One often-overlooked attribute of capacity planning is licensing. As VMs proliferate, Windows Server licenses must be budgeted, deployed and tracked to ensure compliance. Because VMs are so easy to clone and change, it's easy to lose control of licensing and expose the company to significant legal penalties. Storage is also remarkably important in virtual environments, and administrators can achieve even more efficiency in these environments by employing several techniques to consolidate storage space.
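The license tracking described above can be as simple as reconciling the Windows VM count on each host against what its license entitles it to run. This is an illustrative sketch only: real Windows Server licensing depends on edition, version and agreement, and the entitlement figures below are assumptions for the example.

```python
# Illustrative only -- actual Windows Server license terms vary by edition
# and agreement; the per-host VM entitlements below are assumptions.
ENTITLED_VMS = {"standard": 1, "enterprise": 4, "datacenter": float("inf")}

def license_gap(hosts):
    """Return hosts whose running Windows VM count exceeds the assumed entitlement."""
    violations = {}
    for name, info in hosts.items():
        allowed = ENTITLED_VMS[info["edition"]]
        if info["windows_vms"] > allowed:
            violations[name] = info["windows_vms"] - allowed
    return violations

hosts = {
    "esx01": {"edition": "enterprise", "windows_vms": 6},
    "esx02": {"edition": "datacenter", "windows_vms": 30},
}
print(license_gap(hosts))  # {'esx01': 2} -- two VMs over the assumed entitlement
```

Run against the real VM inventory on a schedule, a reconciliation like this catches cloned VMs before they become a compliance finding.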
VMs require a more disciplined approach to overcome uncontrolled proliferation and unnecessary storage consumption. VM lifecycle management tools are necessary, and policies must be implemented to govern how a VM is created, how long it's needed and when it should be destroyed so its resources can be returned to the computing pool. Capacity planning also requires careful benchmarking and monitoring to keep track of computing needs, which in turn serve as the foundation for new server and upgrade purchases over time.
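A lifecycle policy of the kind described above can be reduced to one rule: every VM records an owner and an expiry date at creation, and anything past expiry is flagged for review and reclamation. The field names and the sample inventory below are assumptions for illustration, not a specific product's API.

```python
from datetime import date

# Sketch of an expiry-based lifecycle policy. Each VM carries an owner and
# an "expires" date set at creation; expired VMs are candidates for reclamation.
def expired_vms(inventory, today):
    """Return the names of VMs whose expiry date has passed."""
    return [vm["name"] for vm in inventory if today > vm["expires"]]

inventory = [
    {"name": "test-web-01", "owner": "qa",  "expires": date(2011, 1, 15)},
    {"name": "erp-prod",    "owner": "ops", "expires": date(2012, 6, 1)},
]
print(expired_vms(inventory, today=date(2011, 3, 1)))  # ['test-web-01']
```

The owner field matters as much as the date: flagged VMs go back to their owners for renewal or deletion rather than being destroyed automatically.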
Windows Server licensing tips
Microsoft Windows Server 2008 licensing has always been difficult to interpret, but with Client Access Licenses, per-processor guidelines and the popularity of virtualization, it has become even more complex.
Consolidating storage space
Although storage is currently very easy to buy and add to any network, poor integration into an infrastructure can create storage sprawl. Proper data storage management is essential for building and protecting information, and knowing the available management options can help protect your bottom line.
Virtual machine sprawl management to rein in stray VMs
New VMs are fast and easy to create, and the virtual hardware is essentially free. A new VM can be created and running in minutes, and before you know it, stray VMs have accumulated to the point where managing virtual machine sprawl becomes necessary.
How benchmarks should be used for capacity planning
Administrators should use benchmarks to help with virtualization capacity planning and better manage server resources, utilization and performance. In turn, organizations can effectively plan their infrastructures before resources become limited.
|Benchmarking and testing|
Simply buying server hardware won’t ensure proper application behavior, particularly when the server hosts multiple workloads in a virtual environment. By running a reasonable benchmark on the server, an admin can establish a baseline of performance for a given workload and track resource use and performance characteristics to gauge the server's behavior regularly. This can provide insights helpful in capacity planning and give early warning of encroaching server (and workload) problems.
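Baseline tracking of this kind can be sketched as a comparison between a stored benchmark baseline and the latest run, flagging any metric that regresses beyond a tolerance. The metric names and the 15% threshold below are illustrative assumptions, not output from a real benchmark suite.

```python
# Sketch: compare a fresh benchmark run against a stored baseline and flag
# metrics that regress beyond a tolerance. Metric names are illustrative.
TOLERANCE = 0.15  # flag regressions worse than 15% (assumed threshold)

def regressions(baseline, current, higher_is_better=("tps",)):
    """Return {metric: fractional regression} for metrics past TOLERANCE."""
    flagged = {}
    for metric, base in baseline.items():
        change = (current[metric] - base) / base
        if metric in higher_is_better:
            change = -change  # for throughput, a drop is the regression
        if change > TOLERANCE:
            flagged[metric] = round(change, 3)
    return flagged

baseline = {"tps": 1200.0, "latency_ms": 8.0, "cpu_util": 0.55}
current  = {"tps": 950.0,  "latency_ms": 9.0, "cpu_util": 0.60}
print(regressions(baseline, current))  # {'tps': 0.208} -- throughput fell ~21%
```

A flagged metric here is the early warning the paragraph describes: a prompt to investigate the workload or host before performance problems reach users.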
Choosing the correct benchmark for your servers depends on myriad factors, including the server itself and the characteristics you need to baseline. Once a benchmark is selected, it must be used properly to avoid inaccurate or unreliable results that could trigger premature upgrades or unnecessary investments in new hardware.
Benchmarks and monitoring tools also factor into workload balancing, ensuring that the most compatible VMs are assigned to servers with the most relevant available resources. Match the needs of each application to the servers with the best corresponding computing resources, since the applications themselves drive computing resource needs and performance. Organizations that architect their own applications should also reassess application design to take advantage of virtualization and advanced computing resources like multicore processors.
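The matching described above can be sketched as a simple placement heuristic: identify each VM's dominant resource and place it on the host with the most free capacity for that resource. The host sizes, the 16-core/64 GB reference used to normalize demand, and the figures below are all illustrative assumptions, not measurements from a real tool.

```python
# Sketch: place each VM on the host with the most free capacity for the
# VM's dominant resource. All capacity figures are illustrative assumptions.
def balance(vms, hosts):
    """Greedily assign VMs to hosts; mutates `hosts` to track remaining capacity."""
    placements = {}
    for vm in vms:
        # Dominant resource: the one this VM demands most, relative to an
        # assumed 16-core / 64 GB reference host.
        res = "cpu" if vm["cpu"] / 16 >= vm["ram"] / 64 else "ram"
        best = max(hosts, key=lambda name: hosts[name][res])
        placements[vm["name"]] = best
        hosts[best]["cpu"] -= vm["cpu"]
        hosts[best]["ram"] -= vm["ram"]
    return placements

hosts = {"hostA": {"cpu": 12, "ram": 48}, "hostB": {"cpu": 8, "ram": 56}}
vms = [{"name": "db",    "cpu": 2, "ram": 24},   # memory-heavy workload
       {"name": "batch", "cpu": 6, "ram": 8}]    # CPU-heavy workload
print(balance(vms, hosts))  # {'db': 'hostB', 'batch': 'hostA'}
```

In practice the demand figures would come from the benchmarks and monitoring data discussed above rather than static estimates.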
Selecting benchmark tools for specific needs
Although virtualization comes with undeniable benefits, it can also negatively affect computing resources in a data center. Benchmarking tools can help administrators monitor server resources, manage utilization and optimize performance.
Recognizing and avoiding common benchmark mistakes
When maintaining long-term system performance, using server benchmarking tools is essential. However, server benchmarking results are only legitimate if you adhere to recommended best practices and avoid common benchmarking mistakes.
How should benchmarks be used to optimize virtual workload balancing?
As the number of VMs in a virtual environment grows, they must be managed and monitored against an organization’s computing benchmarks. Using benchmarks and tools for virtual workload balancing will help you plan your next VM deployment.
Rethinking applications to take advantage of multicore
Today’s CPUs offer many advanced capabilities, including multithreading and parallel processing, but most traditional applications aren’t coded to support these advancements. Existing applications may need to be updated or redesigned to incorporate multithreading and other performance-enhancing coding techniques and improve application performance.
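The kind of redesign described above often amounts to restructuring a serial loop so independent units of work run on separate cores. A minimal sketch, with an arbitrary CPU-bound function standing in for real application work:

```python
from multiprocessing import Pool

# Stand-in for any CPU-bound, independent unit of application work.
def checksum(n):
    total = 0
    for i in range(n):
        total = (total + i * i) % 65521
    return total

def serial(sizes):
    """Original single-core structure: one task after another."""
    return [checksum(n) for n in sizes]

def parallel(sizes):
    """Restructured version: the same tasks spread across available cores."""
    with Pool() as pool:  # one worker per available core by default
        return pool.map(checksum, sizes)

if __name__ == "__main__":
    sizes = [200_000] * 8
    assert serial(sizes) == parallel(sizes)  # identical results, multicore execution
```

The restructuring only pays off when the tasks are truly independent and CPU-bound; splitting work that shares state or waits on I/O adds coordination cost without using the extra cores.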
Check out the rest of our Server Month resources.
This was first published in March 2011