While blade server technology has been a fixture in data centers for many years, several factors may drive some organizations to replace blade servers with rackmount servers. In fact, these factors may signal a shift in the entire data center design philosophy.
One factor that may contribute to the eventual demise of blade server technology is the rapid adoption of server virtualization. While most modern blade servers are capable of hosting virtual machines (VMs), there are some simple economic, hardware and network factors that make blade servers unappealing for use in today’s virtual data center.
Blade server technology economics
For many organizations, economics drives the adoption of server virtualization. Server hardware resources are often underused, and organizations can save money (and extend the working life of their hardware investments) by making better use of existing hardware.
When implementing server virtualization, companies primarily want to reduce server hardware costs and may look to commodity hardware. Virtualization only saves money if overall hardware costs actually go down. From a cost perspective, it makes no sense to consolidate 10 virtual servers onto one physical host server if that host costs 10 times as much as each of the physical servers being phased out.
At first, this scenario might not seem like an issue for organizations operating blade servers. Blades, after all, have a reputation for being inexpensive. The problem is that unlike a rack server or a tower server, blade servers are not self-sufficient.
Blade servers are small because they do not include power supplies or fans. These components (and others) are instead integrated into an expensive chassis, which acts as a backplane to the blade servers, providing them with power, cooling, network ports and other various interconnects.
Organizations must consider the cost of the chassis, along with any required add-on modules, when determining the overall cost of a blade server technology deployment. If an organization only requires a few servers, the per-server cost of blades will be much higher than that of comparable rack servers. On the other hand, if the chassis is completely filled with all the blade servers it can accommodate, the price per server comes way down because the cost of the chassis is spread across many servers.
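The amortization effect is easy to see with some arithmetic. The figures below are hypothetical, chosen only to illustrate the shape of the curve; real chassis, blade and rack server prices vary widely by vendor and configuration.

```python
# Hypothetical prices for illustration only -- not vendor list prices.
CHASSIS_COST = 5000      # chassis plus required support modules (assumed)
BLADE_COST = 1500        # per blade server (assumed)
RACK_SERVER_COST = 2000  # comparable standalone rack server (assumed)

def per_server_cost(num_blades: int) -> float:
    """Blade price plus the chassis cost amortized across the blades it hosts."""
    return BLADE_COST + CHASSIS_COST / num_blades

for n in (2, 4, 8, 16):
    print(f"{n:2d} blades: ${per_server_cost(n):,.2f} per server "
          f"(rack server: ${RACK_SERVER_COST:,.2f})")
```

With these assumed numbers, a two-blade deployment costs $4,000 per server, double the rack-server price, while a full 16-blade chassis drops below it. The break-even point moves with the real prices, but the pattern holds: blades only become competitive as the chassis fills up.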
If an organization is looking to save money by reducing the number of physical servers purchased, blade servers may not be the best choice. They are only cost-effective when purchased in quantity.
Blade servers present VM limitations
Another setback affecting the use of blade server technology within virtual data centers is its lack of sufficient hardware to host large numbers of VMs.
There are blade servers that can accommodate numerous VMs. The Dell PowerEdge M910, for example, is a four-socket server that can accommodate processors with up to eight cores each. With up to 512 GB of memory, the PowerEdge M910 can host a substantial number of virtual servers.
The problem is that even though high-capacity blades exist, organizations may be unable to take advantage of them because there is no standardization among blade servers. Once an organization purchases a blade chassis, it is locked into using blade servers manufactured by the same company that made the chassis. Not only do chassis design constraints prevent organizations from mixing and matching server vendors, but they also prevent the mixing and matching of a vendor's blade product lines. For example, Dell's M1000e chassis will only accommodate Dell M-Series blade servers.
Lack of standardization can be a major issue for organizations wishing to host virtual servers on blades -- once an organization has invested in a blade chassis, they may be unable to take advantage of future blade servers without buying a different chassis.
Organizations may also discover that the chassis they purchased quickly becomes obsolete. Manufacturers may stop producing blades that fit into the chassis and focus instead on their latest chassis model. The investment in blades carries a greater economic and technological risk than more traditional rack or tower servers.
Network interface restrictions
Another blade server technology limitation that tends to affect organizations moving toward virtual servers is a shortage of network interface cards (NICs). Most blade servers include several integrated NICs and typically offer a couple of mezzanine-type card slots. Often, one slot is occupied by a Fibre Channel card that connects the blade server to a SAN; the other is occupied by a NIC.
The actual number of Ethernet ports that you can fit into a blade server varies from manufacturer to manufacturer, but you can't just plug an Ethernet cable into the back of a blade server as you can with other types of servers. Instead, the individual NIC ports must be mapped to ports on Ethernet modules that are installed in the chassis.
Many different types of modules must be installed into a limited amount of space within a chassis. The physical size of a chassis (and requirements for other types of modules) limits the total number of Ethernet modules that it can accommodate. This can be problematic for organizations that want to host large numbers of virtual servers on their blade servers -- a network bottleneck will occur unless a sufficient number of NICs are available.
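A back-of-the-envelope check makes the bottleneck concrete. Every figure below is an assumption for illustration; actual per-VM traffic, port counts and port speeds vary by environment.

```python
# Assumed figures for illustration only.
vms_per_blade = 30       # virtual servers hosted on one blade (assumed)
avg_mbps_per_vm = 100    # average traffic per VM, in Mbps (assumed)
nic_ports = 4            # chassis Ethernet ports mapped to the blade (assumed)
port_speed_mbps = 1000   # 1 GbE per port (assumed)

demand = vms_per_blade * avg_mbps_per_vm   # aggregate VM traffic
capacity = nic_ports * port_speed_mbps     # total uplink bandwidth
utilization = demand / capacity

print(f"Uplink utilization: {utilization:.0%}")
if utilization > 0.8:
    print("Warning: likely network bottleneck")
```

With these numbers the blade's uplinks already sit at 75% utilization at average load; any traffic spike, or a denser VM packing, pushes the shared chassis modules into saturation.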
Numerous economic and technical factors have slowed the adoption of blade server technology in virtual environments and could potentially lead to the decline of blade servers within data centers. Still, blade servers are clearly important computing platforms that provide great benefits for many organizations, and they won’t go away completely any time soon.
ABOUT THE AUTHOR: Brien M. Posey has received Microsoft’s Most Valuable Professional award six times for his work with Windows Server, IIS, file systems/storage and Exchange Server. He has served as CIO for a nationwide chain of hospitals and healthcare facilities and was once a network administrator for Fort Knox.