Modern Infrastructure

Can HP, IBM and Dell survive the cloud?



Can traditional server vendors survive the cloud push?

In the era of virtualization and cloud computing, can the traditional server vendors stave off extinction?

Ten years from now, will we look back on the traditional server vendors like IBM, HP and Dell as dinosaurs that could not withstand the giant asteroid that is public cloud? Or will we marvel at their resourceful ability to adapt -- and survive -- in the face of rapidly changing conditions?

Can HP, IBM and Dell survive the cloud?

Part 1: Can traditional server vendors survive the cloud push?

Part 2: Server vendors must innovate to stem bleeding

That's the $54.9 billion question -- the amount organizations spent on servers in 2012, according to IDC, with IBM, Hewlett-Packard Co. and Dell Inc. garnering a combined 74.3% of that market.

Public clouds come on the heels of server virtualization, which has already weighed heavily on server unit shipments and revenue. In 2002, the ratio of physical to logical servers was almost 1:1, at 4.4 million and 4.5 million units shipped worldwide, according to IDC, and revenue was just shy of $50 billion. That heyday would soon be over: A decade later, revenue had increased a modest 11.5% to $54.9 billion, while the number of physical servers shipped had increased by 84% to 8.1 million units, and logical servers had nearly quintupled, to 22.4 million.
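As a quick sanity check, the growth rates follow directly from the IDC unit figures quoted above (this is just the arithmetic, not additional IDC data):

```python
# 2002 vs. 2012 worldwide server shipments, per the IDC figures in the text.
phys_2002, phys_2012 = 4.4, 8.1    # million physical servers shipped
log_2002, log_2012 = 4.5, 22.4     # million logical servers (physical + virtual)

phys_growth = (phys_2012 / phys_2002 - 1) * 100   # ~84% over the decade
log_growth = (log_2012 / log_2002 - 1) * 100      # ~398%, i.e. nearly 5x
vms_per_box = log_2012 / phys_2012                # ~2.8 logical servers per physical box

print(f"physical: +{phys_growth:.0f}%, logical: +{log_growth:.0f}%, "
      f"ratio: {vms_per_box:.1f}:1")
```

The last number is the telling one: by 2012 every physical server shipped carried nearly three logical servers, which is exactly the consolidation that caps hardware revenue growth.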

Shifting workloads from on-premises virtualized servers to the public cloud stands to be even worse for server vendors, since a workload running in the public cloud requires no infrastructure purchase. As VMware CEO Pat Gelsinger said, speaking at the company's Partner Exchange in February: "If a workload goes to Amazon, you lose, and we have lost forever."

But traditional server vendors certainly haven't given up on selling infrastructure to the enterprise. "They're going to fight," said Kuba Stolarski, research manager for enterprise servers at IDC. "But a lot can happen in five years," he added, and come 2017, server offerings from the traditional vendors might look very different than they do today.

The cloud-first imperative

One small business has shifted the brunt of its processing from on-premises to the public cloud.

Mosaik Solutions in Memphis, Tenn., collects mobile network coverage information that it delivers as geospatial, analytical, creative and Web data. The company relied on a combination of on-premises and colocation systems from its founding in 1989 until 2008, when it began experimenting with Amazon Web Services (AWS). In 2009, it increased its AWS spend by 1,000%. In 2010, it discontinued its use of colocation and increased its AWS spend another 500%, and another 250% the year after.
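Those year-over-year jumps compound quickly. A rough sketch of the cumulative multiplier (this reads "increased by N%" as multiplying spend by 1 + N/100; Mosaik's actual dollar baseline isn't given, so only the multiplier can be computed):

```python
# Year-over-year AWS spend increases quoted above: 2009 +1,000%, 2010 +500%, 2011 +250%.
increases_pct = [1000, 500, 250]

multiplier = 1.0
for pct in increases_pct:
    multiplier *= 1 + pct / 100   # +1,000% -> x11, +500% -> x6, +250% -> x3.5

print(f"2011 AWS spend is {multiplier:.0f}x the 2008 baseline")  # 11 * 6 * 3.5 = 231x
```

However the percentages are read, the direction is unambiguous: within three years the cloud bill dwarfed the original experiment, and the colocation line item disappeared entirely.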

A rising tide lifts all boats

While public cloud is cannibalizing some workloads, you could also make the case that it is simply meeting demand that didn't previously exist.

"It's like the [former IBM CEO Thomas J. Watson] quote: 'I think there is a world market for maybe five computers,'" said Jonathan Eunice, principal IT adviser at Illuminata Inc. Back in 1943, when Watson reportedly uttered those words, our collective imagination could not conceive of the need for more compute power. It's only in retrospect that we understand how silly that statement is.

The same dynamic is in play today.

"Our compute needs and desires are always growing faster than Moore's Law. We always want more," Eunice said.

In recent years, new compute resources have gone in large part to tackling our new interest in data analysis, Eunice said.

"We've been on a multi-decade march from transactions being the most important to analytics being the most important," he said. These days, "analyzing and figuring things out has become more important than just keeping your books. There just aren't that many books."

Back at Mosaik, the firm today runs the lion's share of its processing on Amazon's Elastic Compute Cloud (EC2) -- an average of 335 EC2 compute units and 668 GB of memory on AWS, compared with just 64 cores and 384 GB of memory in-house, mainly for its "customer-facing" applications, said Daniel Bozeman, solutions architect for the firm. It has not purchased a new server since 2011.

That's great for Amazon, but it's decidedly bad news for Cisco Systems and Dell, from which Mosaik last purchased servers. Nor is the firm particularly loyal to any one server shop.

"We go wherever the price is right," Bozeman said.

The firm is, however, looking to reduce its EC2 spend, Bozeman said, which it will do by modernizing and consolidating the services running on EC2, increasing its use of Amazon EC2 Reserved Instances, and by offloading a subset of its operations -- test, development and staging -- back in-house to a Dell server running Eucalyptus, an AWS-compatible private cloud stack.

"At least, that's the idea," Bozeman added. "We'll see how it goes."

Custom-made is in demand

Meanwhile, service providers, with their razor-thin margins and copious in-house technical talent, are increasingly backing off their relationships with tier-one server vendors -- if they ever had a relationship at all. Google, for instance, builds its own servers, and is reportedly the fifth-biggest customer of Intel server chips, according to a September 2012 issue of Wired.

The value of the support that tier-one server vendors bring to enterprises doesn't always carry over to service providers, said John Considine, CTO at Verizon Terremark, the cloud and managed services provider that has a mix of tier-one, commodity and custom gear.

"For enterprises, the real value is in service and support over the lifetime of a server," Considine said. "We see a lot less value in that support model."

Service providers also cite the customization that comes from building their own servers as the main reason for the decrease in traditional server purchases. For example, SoftLayer Technologies, the hosting-cum-cloud provider, has a long-standing relationship with contract manufacturer Supermicro, which has supplied the 100,000 servers SoftLayer has under management.

Designing servers with Supermicro "allows us to fine-tune that server to meet the application stack," by manipulating elements such as CPU, network, hard drives and memory, said Marc Jones, SoftLayer vice president of product innovation.

About the author: Alex Barrett is the editor in chief of Modern Infrastructure.

Article 3 of 12


Join the conversation



I wonder where some of these articles come from. The "cloud" is hosted on what? Servers!!!!
What the other commenter fails to see is that servers for cloud storage face more strenuous demands than typical in-house servers and have to be built to more exacting standards. Companies like Savage I/O have popped up that specialize in hardware technology specifically for the cloud.
How secure is cloud hosting with the NSA snooping around?
