
Scale-up or scale-out: What fits best in your data center?

Every data center uses servers to supply the computing resources — processing cycles, memory space, network and disk I/O — that workloads need to function. As workloads proliferate and computing demands increase, server resources must grow or “scale” to meet those demands. We’ll answer some common questions about server scaling and consider the implications for the enterprise.

We hear the terms “scale-up” and “scale-out” used frequently to describe servers, but what do they mean and how do they differ?
Stephen Bigelow: There are two basic ways to scale computing (server) resources in a data center. The first is to add more servers, or “scale out.” Say a business has a virtualized server running five business applications and using 80% of the server’s physical computing capacity. If the business needs to deploy more workloads, the current server may not have enough resources available, so the business could purchase and deploy an additional server to support the new applications.
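To make that capacity check concrete, here is a minimal Python sketch. The figures are hypothetical, invented purely for illustration; in practice you would pull real utilization numbers from your monitoring tools.

    # Hypothetical capacity check: can the existing host absorb a new
    # workload, or should the business scale out with another server?
    host_capacity = 1.0          # normalized physical capacity of the server
    current_utilization = 0.80   # 80% of capacity already in use
    new_workload_cost = 0.30     # estimated share of capacity the new app needs

    headroom = host_capacity - current_utilization
    if new_workload_cost <= headroom:
        print("Deploy the workload on the existing server")
    else:
        print("Scale out: provision an additional server")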

Scale-out architecture also includes clustered or distributed computing approaches where multiple small servers share the computing load of a single application. For example, a mission-critical workload may run on two or more servers, and the processing can be shared across those servers in an active-active configuration. If one server fails, the other(s) can take over and preserve the application’s availability. If more redundancy is needed, the cluster can be scaled out with additional server nodes.
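The failover behavior behind that active-active setup can be sketched in a few lines. This toy Python dispatcher is only an illustration; the node names and health flags are hypothetical, and a real cluster would rely on heartbeats and proper cluster software.

    import random

    # Hypothetical active-active cluster: requests go to any healthy node,
    # so losing one node reduces capacity but preserves availability.
    nodes = {"node-a": True, "node-b": True}  # node name -> healthy?

    def dispatch(request):
        healthy = [name for name, ok in nodes.items() if ok]
        if not healthy:
            raise RuntimeError("all cluster nodes are down")
        target = random.choice(healthy)  # naive load sharing
        return f"routed {request} to {target}"

    nodes["node-a"] = False          # simulate a node failure
    print(dispatch("GET /orders"))   # still served by node-b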

But advances in computing power have vastly increased server resources with each new design. Today, it is possible to replace an aging server with a model that touts far more processing, memory and I/O capability than its predecessors, yet occupies the same physical footprint — such as a 1U or 2U rack chassis — and often consumes less energy. This approach is called “scale-up” because each physical box can handle more or larger workloads.

Consider the first example, where one virtualized server ran short of resources. In the next technology refresh cycle, it is possible to deploy a new server with far more computing resources, migrate all of the workloads from the old server to the new one, and then take the old server out of service or allocate it to other tasks. The business is left with significantly more available resources to tackle additional production workloads, without a meaningful increase in data center space or energy requirements. It’s like slowly easing the older server into retirement.
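To put rough numbers on that refresh scenario, here is an illustrative sketch; the memory figures are invented for the example and stand in for whatever resource is the bottleneck.

    # Hypothetical technology refresh: migrate every workload from an aging
    # host to a newer scale-up server that fits the same rack footprint.
    old_server_ram_gb = 32
    new_server_ram_gb = 192      # newer model, same 2U of rack space
    migrated_workloads_gb = 26   # everything that ran on the old host

    headroom_gb = new_server_ram_gb - migrated_workloads_gb
    print(f"Headroom left for new production workloads: {headroom_gb} GB")
    # The old server can now be retired or reassigned to other tasks.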

When is it best to use a scale-up server in the data center, and when should an organization opt for a scale-out server?
Bigelow: There is no single best answer. Both scale-up and scale-out approaches are valid means of adding computing resources to a data center environment, and they are not mutually exclusive. A scale-out approach could be the right answer when a large number of smaller nodes is needed, perhaps for a web server farm or a server cluster where physically redundant hosts are required. Conversely, a scale-up server approach might be right for a major virtual server consolidation initiative where more workloads must reside on fewer physical servers.

How does virtualization play into the scale-up versus scale-out discussion?
Bigelow: You saw a bit of this in the previous questions. An organization that deploys server virtualization can take advantage of server consolidation by moving a greater number of workloads onto fewer and more capable servers. This reduces the total number of servers that an organization has to buy and puts far more emphasis on the scale-up approach.

The bigger issue is resource allocation. Poor or careless allocation can adversely affect scale-up plans. Virtualization allows you to provision a virtual machine for each workload and allocate computing resources to each virtual machine. If you provide excess resources to a virtual machine — 2 GB of memory when only 1 GB is needed — resources are wasted and the server may host fewer virtual machines than expected. Conversely, if an administrator doesn’t assign enough resources to a virtual machine, that workload may perform poorly or even cause the entire server to crash.
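The consolidation math is easy to see in a sketch. The host size and allocations below are hypothetical; the point is that VM density is set by what each VM is allocated, not by what each workload actually needs.

    # Hypothetical 64 GB host: how many VMs fit depends on allocation.
    host_ram_gb = 64
    actual_need_gb = 1      # what each workload really uses
    overallocated_gb = 2    # what a careless administrator assigns

    print(host_ram_gb // overallocated_gb, "VMs at 2 GB each")  # 32 VMs
    print(host_ram_gb // actual_need_gb, "VMs at 1 GB each")    # 64 VMs
    # Right-sizing the allocation doubles the density of the same hardware.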

A business will get the most value from consolidating to a scale-up server if resources are properly allocated to meet each workload’s needs.

Don’t scale-up servers pose a greater risk of disruption for a data center?
Bigelow: The potential for scale-up server failures and work disruptions is certainly real. When a powerful server runs a single application such as a database, there is little potential for extra disruption since an application crash or server failure only means that a single workload needs to be recovered. As long as the server is running or other suitable server hardware is available, it doesn’t take long for skilled IT staff to recover the application thanks to the server’s greater computing power.

However, the matter is a bit different if the scale-up server is virtualized and consolidated with numerous workloads. If a server like that fails, there could be many more workloads to recover, and the process could take considerable time. Remember that as each workload is restored, it will start using network and other computing resources on that box, effectively slowing the recovery of subsequent workloads.
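That compounding slowdown can be modeled crudely. In this toy simulation, every workload that comes back online consumes part of a fixed I/O budget, so each subsequent restore runs slower; all of the numbers are hypothetical.

    # Crude recovery model: restores share the host's I/O with workloads
    # that are already back online, so each restore is slower than the last.
    io_budget = 100.0        # total I/O throughput available (arbitrary units)
    restore_cost = 50.0      # I/O work needed to restore one workload
    steady_state_use = 10.0  # I/O each recovered workload keeps consuming

    elapsed = 0.0
    for n in range(5):                                # restore 5 workloads in turn
        available = io_budget - n * steady_state_use  # earlier VMs eat the budget
        elapsed += restore_cost / available           # time = work / throughput
        print(f"workload {n + 1} restored after {elapsed:.2f} time units")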

Still, it’s important to put such disruptions into the proper perspective. Mission-critical workloads should be protected with some kind of resiliency strategy, such as physical server clustering or virtual workload redundancy using tools like everRun from Marathon Technologies. When critical workloads are protected, they will continue to be available and will eventually resynchronize with the original machine once it is recovered. Only non-essential or non-critical workloads would bear the brunt of extended downtime.

How does the reliability of scale-up and scale-out servers compare?
Bigelow: Scale-up and scale-out servers are typically quite comparable in reliability. The interesting thing is that many enterprise-grade servers now incorporate technologies designed to enhance reliability and avoid downtime. Techniques that were once the domain of the most powerful and expensive systems are quickly filtering down to entry-level models.

Even entry-level 1U servers include redundant power supplies, so the server continues to run when one supply fails. Similarly, the presence of several multi-core processors means that only some workloads may be disrupted if a core fails, and the afflicted workloads can be restarted on another system or even on other available processor cores in the same system. The same holds when several network I/O ports are present: workload traffic can fail over from a faulty port to a working one, or the affected workloads can be migrated to another server with minimal performance degradation. In short, a measure of CPU and network port redundancy can be realized even on entry-level enterprise servers.
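As a minimal illustration of that port-level failover logic (the interface names and health flags are hypothetical, and real servers handle this in firmware, drivers or NIC-teaming software):

    # Hypothetical NIC failover: traffic follows the first healthy port,
    # so a single port failure doesn't cut off the workload's network.
    ports = ["eth0", "eth1"]                  # redundant ports, in priority order
    port_up = {"eth0": False, "eth1": True}   # eth0 has just failed

    def active_port():
        for port in ports:
            if port_up[port]:
                return port
        raise RuntimeError("no working network port left")

    print("traffic now flows over", active_port())  # eth1 takes over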

Memory is one of the last frontiers in server reliability because virtual machines reside as images in server memory. Entry-level enterprise servers like the Dell PowerEdge R510 support error-correcting code (ECC) memory, which can correct single-bit errors and detect more serious corruption, but ECC generally doesn’t protect against the outright failure of a memory module.

More sophisticated servers, such as the Hewlett-Packard ProLiant family, seek to mitigate downtime with fault-tolerant memory techniques such as memory mirroring (think RAID 1 for disk storage) and online spare memory modules that can automatically take over for failed modules (similar to hot-spare disks).

ABOUT THE AUTHOR: Stephen J. Bigelow, Senior Technology Editor in the Data Center and Virtualization Media Group at TechTarget Inc., has more than 20 years of technical writing experience in the PC/technology industry. He holds a bachelor of science in electrical engineering, along with CompTIA A+, Network+, Security+ and Server+ certifications and has written hundreds of articles and more than 15 feature books on computer troubleshooting, including Bigelow’s PC Hardware Desk Reference and Bigelow’s PC Hardware Annoyances. Contact him at sbigelow@techtarget.com.

This was first published in March 2012
