IT architects ponder converged infrastructure 'pods'

The temptation to deploy turnkey bundles of servers, storage and networking is strong, but what's gained in simplicity and deployment time may be lost in flexibility and vendor lock-in.

No one likes to reinvent the wheel, especially not infrastructure architects whose "wheel" is a complex mix of data center infrastructure, including servers, storage, networks and virtualization. Some forward-looking types want to minimize the amount of up-front work it takes to stand up new systems by designing self-contained compute "pods" that contain everything necessary to run a system.

On the other side of the aisle, vendors are responding to this desire with proprietary converged infrastructure bundles that include all the necessary hardware, plus management software. The most common examples are the Vblock from the Virtual Compute Environment (VCE) Coalition, a.k.a. VMware, Cisco and EMC; and HP's BladeSystem Matrix.

The vendors claim these pre-configured systems speed time to deployment while simultaneously providing ease of management and functionality that can't be had by rolling your own system.

But the downside of one-size-fits-all designs can be inflexibility and lock-in, critics say. Their contention is that the widespread move away from mainframes to distributed systems happened for a reason, and it is folly to go backward.

In praise of pods

Infrastructure architect Bill Welty designed several compute pods for his employer, a large online digital image processing firm. A pod, Welty says, "is a way of putting all your infrastructure in one place," with virtualization as both a key driver and enabler.


Welty designs his pods so that they're easy to replicate. "I can stand one of these up and hand off the list [of components] to someone else -- it's much more straightforward."

He also designs pods differently based on their function. Currently, the firm relies on a virtualization pod, complete with Dell servers running VMware ESX, storage and I/O virtualization from Xsigo, as well as a five-rack high-performance computing pod for image processing.

Xsigo's I/O virtualization technology, based on InfiniBand, provides a server with redundant 10 Gbps of bandwidth using just two I/O slots, reducing cabling and allowing smaller form-factor servers. More to the point, the management software allows bandwidth to be easily and dynamically reassigned between servers.
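That kind of reassignment can be pictured with a toy model: a fixed pool of fabric bandwidth that an administrator redistributes among servers without touching a cable. The class and method names below are purely illustrative; this is not Xsigo's actual management API.

```python
# Toy model of dynamic I/O bandwidth reassignment in a virtualized fabric.
# All names and figures are illustrative, not a vendor API.

class Fabric:
    def __init__(self, total_gbps):
        self.total_gbps = total_gbps
        self.alloc = {}  # server name -> allocated Gbps

    def assign(self, server, gbps):
        """Assign (or reassign) bandwidth to a server, enforcing the pool limit."""
        used = sum(g for s, g in self.alloc.items() if s != server)
        if used + gbps > self.total_gbps:
            raise ValueError(f"only {self.total_gbps - used} Gbps free")
        self.alloc[server] = gbps

fabric = Fabric(total_gbps=20)  # e.g., a pair of redundant 10 Gbps links
fabric.assign("esx01", 8)
fabric.assign("esx02", 8)
fabric.assign("esx01", 4)       # shrink esx01 without recabling...
fabric.assign("esx02", 12)      # ...and grow esx02 on the fly
print(fabric.alloc)             # {'esx01': 4, 'esx02': 12}
```

The point of the sketch is the operational model: capacity moves between servers as a software operation against a shared pool, not a physical change.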

Organizations that have a lot of growth, plus predictable workloads and storage needs, are good candidates for pods, said Joe Skorupa, research vice president at Gartner Inc. For instance, Skorupa knew of one organization going the pod route -- a division of a Fortune 500 company that provides online transaction services to the travel industry the world over.

The decision to design generic pods came after the IT architect spent several months on-site at a customer in the U.K., where it dawned on him that his company was wasting time and money on custom, one-off designs that required on-site personnel to install and maintain, Skorupa said.

To reduce costs, the IT architect instead designed a system that would limit the colocation provider's role to providing floor space, power and cooling. The resulting pod -- two racks of Sun Fire X4600 servers running VMware, NFS or iSCSI storage, and a pair of Xsigo I/O Directors -- works well for the organization, Skorupa said.

At the same time, its business model is relatively unique, and pods might not make sense in a more traditional data center, Skorupa said.

Logical candidates for pods are small and medium-sized businesses and service providers, Skorupa said. "They should be able to predict the ratio of storage to compute to network that they're going to need," he said. Pods probably don't make sense in large data centers where you can't predict your workloads, however.

As a general rule, pods tend to appeal to younger, more inexperienced IT architects who "didn't go through the pain the last time these systems were built," Skorupa said. Remembering their mainframe days, many experienced IT pros will say, "There's no way in the world we're doing pods; we already have the scars and the t-shirt."

Tom Becchetti, a systems engineer at a large manufacturing firm, is one of those experienced IT professionals, and conceded that highly integrated systems do scare him a bit. "I'm old and I went through the mainframe stuff, where CA would hold your software hostage," he said. With vendor-designed pods in particular, "you do kind of get locked in."

But there's a case to be made for greater modularity and standardizing on fewer systems -- and if you have the resources, for rolling your own best-of-breed pod design. Becchetti's firm has a pod design for remote non-raised floor environments. Back in the data center, the team takes a cookie-cutter approach and uses one of three basic server designs -- small, medium and large.

"We don't have time to build these servers one-off. Instead, we ask 'Does it fit in the small, medium or large bucket?' That way, when it comes time to reuse it, it goes back in to that small, medium or large bucket."

Would you like storage in that pod?

Among IT architects who see the logic of pods, one outstanding question is whether to include storage.

Stu Radnidge, IT architect for a multi-national financial services corporation, uses pods extensively -- which he calls "cells" -- but without the storage component.

"[We're] big fans of the approach," Radnidge wrote in an email. "We don't really see a need for massive optimization of resources outside of a few bespoke solutions that are generally trading related, so the pod approach has worked very well for us." However, he added, "we don't include storage in our pods -- we centralize that as much as possible."

Becchetti, the system engineer, said he didn't worry much about storage performance in a pod, since a limited number of servers can only generate so many IOPS. A bigger problem might become "storage subsystem sprawl."
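Becchetti's point about bounded I/O can be put in rough numbers. The figures below are illustrative assumptions, not measurements from any pod:

```python
# Back-of-the-envelope pod IOPS ceiling; every figure here is an assumption.
servers_per_pod = 16
vms_per_server = 20
iops_per_vm = 50  # a light virtualized workload

peak_iops = servers_per_pod * vms_per_server * iops_per_vm
print(f"worst-case pod demand: {peak_iops} IOPS")

# Assuming ~180 IOPS per 15K RPM disk, a modest array covers that demand.
disks_needed = -(-peak_iops // 180)  # ceiling division
print(f"disks to satisfy it: {disks_needed}")
```

Under these assumptions a pod tops out in the low tens of thousands of IOPS, which a single mid-range array can absorb; the harder problem, as Becchetti notes, is managing many such arrays.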

"Do you want a dozen eggs spread all over the place, or a single golden egg that you keep wrapped up in your sock drawer?" he asked. If you have a lot of storage subsystems, it becomes harder to keep an eye on them and properly maintain them . But if you're running a, say, big EMC Symmetrix frame, "[EMC] is going to let you know that you need to update the firmware, and they're going to come do it."

Flexibility is another potential downside of including storage in a pod, said Gartner's Skorupa.

"Rather than a big pool of servers and storage with a network between them, you get Balkanized systems, Skorupa said, "which are extremely inflexible when you have to go beyond the pod."

"There's a reason we've developed 15 years of shared, pooled storage [in the form of SANs]," Skorupa said.

Bristling over the Vblock

Meanwhile, vendors' attempts at pods are generating lukewarm interest from IT architects.

At the Tech Field Day Boston event this spring, IT professionals voiced their frustrations with the VCE Coalition's Vblock, for instance. During a presentation by EMC employees, attendees griped about the inflexibility of the Vblock design and said it didn't seem to make the best use of resources.

"It doesn't max out the hardware," said Gabrie Van Zanten, a VMware consultant with Open Line in the Netherlands, who blogged about the session. "What happens if you put in more RAM? What performance upgrade would I get, and on what are those limits based?"

Furthermore, while Vblock's hardware recipe is quite closed, Van Zanten said the virtualization side of the bundle still required administrators to do a fair amount of setup work.

The VCE Coalition's position is that to guarantee certain performance levels, customers must stay within predetermined minimum and maximum configurations to keep their support agreement. Customers are free to tweak the infrastructure to their specifications, but if they do, "it just won't be a Vblock," said Scott Lowe, VMware-Cisco Solutions Principal at EMC.

These concerns about Vblock raise the question of whether highly integrated systems are a good fit in large enterprises with a plethora of tech-savvy employees.

"This isn't a Toyota Corolla, it's a home theater in a box," quipped one attendee. "The problem with the home theater in a box is that you always get a component you don't want -- a great speaker, but a lousy receiver, or some overpriced cables."

In other words, VCE's attempt to please everyone actually pleases few.

Others like the idea of a Vblock, but not its current incarnation.

Mark Vaughn, enterprise architect at a national financial services firm, works in an environment with over 5,000 servers and has been looking at the Cisco UCS (which combines servers and converged networking) as well as the VCE Coalition's Vblock. Today, the firm buys small increments of servers regularly, but would consider shifting that to "fewer purchases of larger quantity in the right scenario," he said.

But as it stands, the Vblock "increments are still too large. I need to look at the new Vblock 0 to see its sizing," he said. [VCE released Vblock 0 this month, which maxes out at 16 Cisco UCS blades. That's in comparison to the Vblock 1 and Vblock 2, with up to 32 and 64 blades, respectively.]

Catering to CIOs

These kinds of complaints are typical among infrastructure architects who make their living designing and maintaining complex IT systems, said Manjula Talreja, Cisco vice president for the VCE coalition. "The value proposition of the Vblock always has been and always will be for the CIO," she said.

In fact, the higher up toward the CIO you go, the more appealing the Vblock's prescriptive design becomes, said Dennis Hoffman, EMC senior vice president of the VCE coalition. In negotiations with large enterprises, the Vblock's rigid design is often deemed a selling point. "The CIO says, 'Whatever you do, don't change anything. It's a slippery slope, and that's what leads you to accidental architecture.'"

Furthermore, EMC, Cisco and VMware are all more than happy to sell the individual components that make up a Vblock. At that point, all the customer loses is the guaranteed performance and one throat to choke.

That unified support may be overrated anyway, said one VCE partner who requested anonymity. "The promise of a single throat to choke has not been delivered," he said, although it's still early days.

In the end, the decision of whether to go with a vendor's pod is a basic cost/benefit analysis.

"Building and designing your systems takes time and effort. If you don't have that resource on staff, [pods] make sense," said the systems engineer Becchetti. "It's almost like outsourcing part of your infrastructure, but what you're outsourcing is the design."

Let us know what you think about the story; email Alex Barrett, News Director, at abarrett@techtarget.com, or follow @aebarrett on Twitter.

Next Steps

IT shops want more throats to choke

Cisco, VMware, EMC coalition leaves users cold
