In IT, the old eventually becomes new again, and that axiom is now playing out in data center servers.
"We are again seeing growing interest among enterprises [to] scale up systems, an approach that had been on the decline," said Christian Perry, senior analyst, data centers, at Technology Business Research (TBR) Inc.
Since the emergence of data centers, IT managers have struggled with how best to configure their processing resources. The choice has essentially come down to buying bigger boxes or buying more boxes.
Scale-out servers: Benefits and negatives
Scaling out means adding more nodes to a system -- in other words, buying more boxes. For instance, a company could grow from one Web server to three. In extreme cases, hundreds of small computers are configured into clusters that aggregate enough computing power to support large data centers.
Scaling out is typically simple: a business can buy a new system and have it up and running quickly. The approach is also flexible. Companies can keep adding systems almost indefinitely, although doing so eventually requires more data center space. In fact, scaling out is popular among cloud vendors because it lets them build massive data centers capable of supporting thousands of customers and petabytes of storage.
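The scale-out idea can be sketched in a few lines of code. This is an illustrative example only, not anything from the article: the `Node` and `RoundRobinCluster` names, the round-robin routing policy and the capacity figures are all hypothetical, but they show how adding boxes raises a cluster's aggregate capacity.

```python
class Node:
    """One generic box in a scale-out cluster (hypothetical model)."""
    def __init__(self, name, capacity_rps):
        self.name = name
        self.capacity_rps = capacity_rps  # requests/sec this node can serve

class RoundRobinCluster:
    """A cluster that grows by adding nodes and spreads work across them."""
    def __init__(self):
        self.nodes = []
        self._next = 0

    def scale_out(self, node):
        # Scaling out: capacity grows by buying another box, not a bigger one.
        self.nodes.append(node)

    def total_capacity(self):
        return sum(n.capacity_rps for n in self.nodes)

    def route(self):
        # Simple round-robin dispatch of the next request.
        node = self.nodes[self._next % len(self.nodes)]
        self._next += 1
        return node

cluster = RoundRobinCluster()
cluster.scale_out(Node("web1", 500))
print(cluster.total_capacity())   # 500

# Growing from one Web server to three triples aggregate capacity.
cluster.scale_out(Node("web2", 500))
cluster.scale_out(Node("web3", 500))
print(cluster.total_capacity())   # 1500
```

The same sketch hints at the downside discussed below: once requests are routed among many nodes, every performance question involves more than one machine.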
Facebook Inc.'s Open Compute Project has further encouraged the scale-out approach. Rather than rely on special-purpose systems from traditional server vendors such as Hewlett-Packard Co., IBM and Oracle Corp., companies tie generic boxes together and build gigantic data centers. Web leaders including Apple, Facebook and Google have all taken this approach to support their businesses.
There are downsides to scaling out, the first being complexity. Managing many connected computers is more complicated, and applications must be written to a more complex distributed programming model. Throughput and latency issues arise between nodes and are difficult to troubleshoot because the problem could lie in any of the many individual systems.
Scale-up systems: Pros and cons
Scaling up avoids sprawl problems. Scale-up systems use specially developed processors originally designed for scientific computing or high-end database management system applications. Rather than adding servers, companies make a single server more powerful by adding computing resources to it: more powerful processors, additional memory or more internal network resources. They get a bigger box.
Scaling up had been on the wane because of the advent of low-cost x86 servers. Rather than pay a premium for processing power, enterprises bought less-expensive hardware. As a result, the scientific computing market shrank dramatically, and the Unix marketplace has been losing its luster. In fact, TBR found that Unix server revenue dropped by 10.8% year-over-year in the second quarter of 2013.
A few recent shifts seem to be breathing new life into that market segment. Scaling up works well with virtualization: it enables corporations to grow their workloads on one system by carving it up via software rather than adding new physical systems. Firms can now have hundreds of applications running on one central system, and deploying a new virtual server on a hypervisor is often less expensive than buying and installing a physical one.
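The consolidation arithmetic behind that point can be sketched briefly. This is a hedged illustration, not data from the article: the host dimensions and per-VM sizings are assumed figures, chosen only to show how one scaled-up box is carved into many virtual servers.

```python
# Assumed specs for a single large scale-up host (illustrative numbers).
HOST_CORES = 64
HOST_RAM_GB = 1024

# Thirty hypothetical virtual servers, each a modest slice of the host.
vms = [{"name": f"app{i}", "cores": 2, "ram_gb": 16} for i in range(30)]

used_cores = sum(vm["cores"] for vm in vms)
used_ram = sum(vm["ram_gb"] for vm in vms)

# All thirty workloads fit on the one box with headroom to spare, so a
# new workload means a software deploy, not a hardware purchase.
assert used_cores <= HOST_CORES and used_ram <= HOST_RAM_GB
print(f"{len(vms)} VMs using {used_cores}/{HOST_CORES} cores, "
      f"{used_ram}/{HOST_RAM_GB} GB RAM")
```

Under these assumptions the host is only partly committed, which is exactly the appeal: growth happens inside the box until the box itself runs out, a limit the article returns to below.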
Established vendors, such as Hewlett-Packard and IBM, are delivering packaged solutions. Rather than just providing bare-bones hardware, suppliers are configuring their systems so they include all of the components an application needs. "Scale up meshes with the emergence of applications, like big data, that a company wants to deploy quickly," said TBR's Perry.
These turnkey solutions align with another trend: greater business unit influence over IT purchases. Gartner Inc. found that chief marketing officers now have purchasing power equal to that of CIOs. Consequently, business managers are in a position to opt for one of these turnkey systems with little or no input from the IT staff, with less delay than traditional system purchases entail.
The challenge for IT staff then becomes supporting these scale-up systems. Departments that purchase their own systems will still need IT support to keep them running.
Scaling up is also only a temporary fix. The technique increases density in the data center and often requires special cooling and power provisions. Each time a company invests in more powerful server hardware, it buys itself time, but eventually no single box will fit the application load.
With the reemergence of scale-up infrastructure, IT shops are finding the choice between scaling up and scaling out that much more difficult. The only certainty is that computing demand is expanding and businesses are struggling to keep pace with the growth.
About the author:
Paul Korzeniowski is a freelance writer who specializes in data center issues. He has been writing about technology for two decades, is based in Sudbury, Mass., and can be reached at firstname.lastname@example.org.
This was first published in January 2014