
Blades in your data center?

There are many misconceptions about blades and some truisms that are misunderstood. Expert Carrie Higbie brings you the truth.

To thoroughly evaluate the benefit of blade servers for your environment, there are some principles that must be understood. Whether to implement blades, how many, and which options to choose will all depend on a successful ROI and TCO model for your organization. One practice organizations have fallen into in the past is taking ROI and ownership statements verbatim from manufacturers. These will not always apply: in some cases, the stated savings evaporate if the expenditure they supposedly offset never existed in the first place.

One thing is true, however: blades are becoming more tightly integrated with storage, either in a SAN environment or as NAS. New options also exist for management and virtualization of servers. The coexistence of these technologies can provide significant savings to companies through ease of management and a more integrated environment.

One of the largest misconceptions is that blades will increase your cooling and power consumption. While this may be true in some cases, it does not mean you will have to completely revamp your facilities; in some environments, no change may be needed at all. Many data centers suffer from abandoned-cable problems that create an air dam and can keep cooling units and chillers from operating at peak efficiency. This is the result of systems changing out over time, older point-to-point systems and non-structured cabling. These are all reasons why the new TIA-942 data center standard recommends running all cabling with growth in mind, so that these areas do not have to be revisited.

With that said, any time there is a large equipment change, it is prudent to reevaluate your facilities and infrastructure. Blade server chassis can be outfitted with a variety of networking options; connectivity will typically be either network only, or network and storage. Understanding your organization's needs will guide that choice. When planning your facilities, you should examine not only the new equipment but also what equipment will be leaving the data center. There are many options for providing power and cooling to the data center.

There is retrofit gear from Liebert, APC and others to assist. Which you choose will depend largely on preference, availability and legacy facility gear. This is not to say that you will always choose the same manufacturer, but certainly all options should be addressed: what was a great solution at one time may be inefficient today. The same would hold true if you were adding rack-mount servers, mainframe connections or midrange server connections, as well as any storage.

In some cases, it may be advantageous to select a product that will scale with your purchases. Cabinets such as APC's InfrastruXure line allow you to add cooling and power conditioning as needed, which can save money compared with "oversupplying" facilities and incurring the ongoing power costs for capacity that is not needed immediately. It is important to consider not only day-one expenses but also those that recur year after year. In each of these calculations, you will want to look at management capabilities and security within your enclosures as well.
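The day-one-versus-recurring tradeoff above can be sketched as a simple comparison. All of the figures below are hypothetical placeholders, not vendor quotes; substitute your own numbers for the modular and oversupplied options you are evaluating.

```python
# Hypothetical figures for illustration only -- substitute your own quotes.

def total_cost(day_one, annual_recurring, years):
    """Day-one capital expense plus recurring facility costs over a period."""
    return day_one + annual_recurring * years

# Oversupplied: buy full cooling/power capacity up front, pay to run it all.
oversupplied = total_cost(day_one=120_000, annual_recurring=30_000, years=5)

# Modular (scale-as-you-grow): smaller initial purchase, lower running cost
# until the extra capacity is actually added.
modular = total_cost(day_one=60_000, annual_recurring=18_000, years=5)

print(oversupplied, modular)  # prints: 270000 150000
```

The point is not the specific totals but that the recurring term dominates over time, which is why the article stresses year-after-year expenses alongside the initial purchase.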

While some of your cabinets may not be completely full of servers, if you factor processing power per square inch of data center space, you are likely still ahead. To determine the benefit, factor day-one equipment expenditures, the cost of floor space and facilities, then divide by the processing power delivered. This will help you arrive at your total equipment cost of ownership. For instance, if I can get the same processing power out of one blade chassis holding four blades in a 6RU (rack unit) footprint as from four 2RU servers, I am saving 2RU in my cabinet. Better still, as growth brings new applications and servers, they can be added to the existing chassis with no further loss of rack space.

Depending on the processors chosen, if I divide the total RUs by processing power, blades quickly become attractive to these budgets. In most cases, the servers can be upgraded without having to change out the chassis, which is another benefit.
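The RU arithmetic from the example above can be made explicit. The counts come from the article's scenario (one 6RU chassis with four blades versus four 2RU servers); using servers as a stand-in for processing power is a simplification, so plug in your own processor or core counts.

```python
# Rack-unit (RU) density comparison from the article's example scenario.
blade_chassis_ru = 6         # one chassis holding 4 blades
blade_count = 4
rack_server_ru = 2           # each standalone rack-mount server
rack_server_count = 4

blade_total_ru = blade_chassis_ru                    # 6 RU total
rack_total_ru = rack_server_ru * rack_server_count   # 8 RU total

print(rack_total_ru - blade_total_ru)  # prints: 2 (RU saved in the cabinet)

# Dividing RUs by processing power (servers as a stand-in here) gives a
# density figure for comparing configurations.
print(blade_total_ru / blade_count)        # prints: 1.5 RU per server
print(rack_total_ru / rack_server_count)   # prints: 2.0 RU per server
```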

While you are comparing the server and cabinet to other configurations, you must also deduct the additional port costs on your storage and network switches. A 2RU rack-mounted server typically has four connections: primary network, secondary network, in-band monitoring and out-of-band monitoring. With a blade server, the chassis can share these connections: each blade may still have its own primary and secondary connection, but monitoring is reduced to two ports per chassis rather than two per server. The switches that would otherwise be required to support those extra connections are a savings.

Bear in mind that the cabling savings itself is minor. The real savings is in the initial purchase of the added switch ports, monitoring ports and monitoring software if it is priced per port. Day-two savings come from the maintenance costs those extra ports would have incurred; these also become part of the comparison figures. In a rack-mount scenario, you may fill the remainder of the cabinet with switch ports, which also require power, cooling and maintenance. This can make a half-empty rack look attractive rather than a "waste" of floor space.
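The port-count savings described above can be tallied directly. The connection counts follow the article (four per rack server; two network links per blade plus a shared monitoring pair per chassis); the $500-per-port figure is a hypothetical stand-in for your combined switch-hardware, license and maintenance cost per port.

```python
# Connection-count comparison: four 2RU rack servers vs. one chassis of 4 blades.
servers = 4

# Rack-mount: primary + secondary network, in-band + out-of-band monitoring
# on every server.
rack_ports = servers * 4          # 16 ports

# Blades: primary + secondary per blade, but only two monitoring ports
# shared per chassis.
blade_ports = servers * 2 + 2     # 10 ports

saved = rack_ports - blade_ports  # 6 switch ports avoided

# Multiply by a per-port cost (hypothetical $500 here) covering switch
# hardware, monitoring licenses and maintenance to feed the TCO comparison.
print(saved * 500)  # prints: 3000
```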

The trend in data centers has been to fill every available rack unit with equipment or cabling and waste none, given the cost of data center floor space. However, as data becomes more and more important to companies, the measure of efficiency is shifting from the standard fully loaded rack to the most efficient use of space, based on processing power, cooling, heating and benefit. Once the physical comparisons are complete, the next step is to address people and resource savings, above and beyond what we have considered thus far.

People and resource savings include boot-from-SAN technology, where desktop administration may be lessened through a single user interface; administration savings realized through a single chassis as opposed to many separate chassis; patch management; remote monitoring and NOC resources; and so on. These calculations will be unique to every company and every situation. It is no secret that most application vendors would rather have their own server than share it with other vendors' applications, to help lessen troubleshooting time.

Adding your equipment equations to your people-resource equations will provide you with a useful ROI tool. Adding day-two expenses, with all things considered, will provide you with a meaningful total cost of ownership. Each vendor will have specific heating and cooling requirements; for an excellent tutorial, go to IBM's website.
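Putting the pieces together, a minimal TCO model combines day-one equipment, recurring facility costs and people time, as the article suggests. Every number below is a hypothetical placeholder; the model is a sketch of the comparison method, not a definitive calculator.

```python
# A minimal TCO sketch combining equipment and people/resource figures.
# All values are hypothetical -- substitute your own data.

def tco(equipment_day_one, facilities_per_year,
        admin_hours_per_year, hourly_rate, years):
    """Day-one equipment cost plus recurring facility and admin costs."""
    people = admin_hours_per_year * hourly_rate * years
    return equipment_day_one + facilities_per_year * years + people

# Rack-mount scenario: cheaper hardware, more ports/boxes to administer.
rack_mount = tco(equipment_day_one=40_000, facilities_per_year=6_000,
                 admin_hours_per_year=400, hourly_rate=75, years=3)

# Blade scenario: higher day-one cost, single chassis to manage.
blades = tco(equipment_day_one=55_000, facilities_per_year=4_000,
             admin_hours_per_year=250, hourly_rate=75, years=3)

print(rack_mount, blades)  # prints: 148000 123250
```

With these made-up inputs the blades win on three-year ownership cost despite the higher initial purchase; with your real figures the answer may differ, which is exactly why the article insists on running the model for your own organization.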

Blade servers can also provide savings in these areas. The chassis becomes a single point of management, not only for networking gear but also for people resources, which translates to direct time savings in daily operations and troubleshooting. The BladeSystems Alliance is a vendor-neutral site for end users, analysts and others to learn more about blade servers and the technology that surrounds them. From there, you can find links to server manufacturers, connectivity options, cabinet and cooling manufacturers, storage providers and software companies. Or, as always, feel free to send me your questions here at TechTarget.
