
Vendor lock-in still an issue in new data center servers

With converged infrastructure, data center servers are stepping back to the mainframe ideals of self-contained computing -- with a few differences.

Vendor lock-in is a pervasive issue, especially for data center servers.

Self-contained servers package the central processing unit with its supporting chip sets, memory, storage and network interface cards (NICs), which let them talk to the rest of the data center and the outside world. The mainframe is a prime example -- highly proprietary, with specialized connectors for expansion such as extra storage. With such a custom unit, the only standardization needed is at the NIC.

Servers talk to each other via standardized cables and plugs. But the problem with self-contained servers is that the architecture works against availability: If any part of the monolithic server fails, the whole unit must be repaired or replaced. Alternatively, data centers spend huge amounts on high availability for these systems.

The first step in deconstructing the data center server is to move storage into its own environment. Storage area networks (SANs) create a shared pool of highly available storage, so servers can fail without affecting the data. Virtualization then minimizes application downtime by spinning up new servers to reconnect to that data.
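To make the idea concrete, the sketch below shows -- in rough, illustrative Python -- the kind of health-check-and-failover loop a virtualization layer automates. Because the application's disks sit on shared SAN storage rather than inside the failed server, a standby host can pick up the workload. The host names, port and failover routine here are hypothetical placeholders, not any vendor's API.

```python
# Conceptual sketch: keep an application available by restarting it on a
# standby host when the primary dies, relying on shared SAN storage so no
# data has to move. Hosts, port and the failover routine are placeholders.
import socket
import time

PRIMARY_HOST = "app-server-01.example.com"   # assumed primary compute node
STANDBY_HOST = "app-server-02.example.com"   # assumed standby with access to the same SAN LUNs
APP_PORT = 8080                              # assumed application port
CHECK_INTERVAL = 10                          # seconds between health checks


def host_is_up(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def fail_over_to_standby() -> None:
    """Placeholder for restarting the workload on the standby host.

    In production this step would be a call to the hypervisor (vSphere HA,
    libvirt and so on) to boot a VM whose disks live on the shared SAN.
    """
    print(f"Primary down; restarting workload on {STANDBY_HOST} against shared SAN storage")


if __name__ == "__main__":
    while True:
        if host_is_up(PRIMARY_HOST, APP_PORT):
            time.sleep(CHECK_INTERVAL)
        else:
            fail_over_to_standby()
            break
```

Real platforms bundle this logic into their high-availability features; the point is simply that compute becomes disposable once the data lives on shared storage.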

With this improvement comes a hefty price tag, along with a new proprietary conundrum: Few SANs are compatible with other vendors' kit, and heterogeneous storage management software tends to promise more than it delivers.

Blade computing also addresses the proprietary issue of self-contained data center servers. Data centers can buy separate server, storage and network blades and build them up into highly flexible server units. The downside? You have to know what you're doing. Configuration errors and poorly planned chassis layouts produce major hot spots and cooling failures, so blade computing can easily create more problems than it solves.

Blade computing is also another proprietary leap out of the frying pan and into the fire: Each chassis is a design proprietary to its vendor. Even when that vendor's blades don't quite meet requirements, maintaining the existing commercial relationship is often cheaper than bringing in another vendor's servers.

New approaches to an old problem

Data center servers follow two main architectural approaches: scale-out commodity clouds of pooled resources and engineered converged systems.

Commodity equipment, with as many resources as possible pooled through a cloud platform, removes single points of failure. This works well for service providers and for IT shops with low-intensity workloads, where little hardware tuning is required to achieve adequate performance.

With converged infrastructure servers, an engineered system of components is pre-configured to provide a high-performance platform. Examples from well-known data center vendors include Cisco UCS, VCE Vblock, IBM PureFlex, Dell Active Systems and HP Converged Systems. With as much standardization at the periphery as possible, IT teams can ignore the system's internals.

As a consumer, you are buying into a hardware platform and an approach where the vendor optimizes the system for your workloads. This is very difficult to do with the pure commodity approach. Therefore, highly proprietary connection buses (such as IBM's CAPI), highly specific storage systems and other components go inside to improve performance. The vendor pre-configures everything to work, with fans and wiring in safe, reliable setups, redundant power supplies and NICs where needed, and so on. We are back to the age of the mainframe -- almost.

Modern IT shops need systems that expand easily, which demands standardization at the edges of the converged infrastructure. If extra storage is required, data centers might want to go to a third party, such as EMC or NetApp, or to newer startups such as Pure Storage, Violin or Nimble, for systems that attach easily to the platform. They should also be able to build fabric networks with other vendors' network switches -- Cisco, Juniper or Brocade, for example -- rather than being locked in to the engineered system vendor.
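As a rough illustration of what standardization at the edge buys you, the sketch below attaches block storage from an external array over plain iSCSI using the standard open-iscsi tools. The portal address and target IQN are placeholders; the point is that any array speaking the standard protocol can be bolted on, whoever built the server underneath.

```python
# Illustrative sketch: attach third-party block storage over standard iSCSI
# with the open-iscsi command-line tools. The portal and target IQN below
# are placeholders for your own storage array.
import subprocess

PORTAL = "192.0.2.10:3260"                        # placeholder array portal (IP:port)
TARGET = "iqn.2001-05.com.example:array01.lun0"   # placeholder target IQN


def run(cmd: list[str]) -> None:
    """Echo and run a command, raising if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    # Discover the targets the external array exposes.
    run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])
    # Log in; the LUN then appears as an ordinary block device on the host,
    # regardless of which vendor supplied the server or the array.
    run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])
```

The same logic applies to networking: standard Ethernet and Fibre Channel at the edge mean switches from other vendors can sit alongside the engineered system without a forklift change.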

Don't get too hung up on the internals of these tailored systems -- it's how a system interfaces with the rest of the world that matters. As long as a proprietary server system can attach to external storage and networking systems, can integrate and interoperate with the rest of your IT platform, and can support the workloads you require, engineered is the way to go.

