Should Linux workloads reside on the mainframe or on distributed servers?
Ever since IBM made the Linux mainframe a reality with Integrated Facility for Linux (IFL) engines, Big Blue and its competitors have made conflicting claims about the specialty processor's value. There's no one right answer.
Without arguing for or against migrating workloads to z/Linux, here are the factors to consider.
Consolidate either way
Many of IBM's case studies for successful z/Linux migrations involve consolidating underutilized server workloads onto fewer, faster mainframe IFLs. The studies' numbers look impressive given System z's computational density and ability to share resources.
However, hypervisors such as VMware vSphere and Citrix XenServer are making inroads into server farms, consolidating workloads and driving higher hardware utilization. This changes the economics of a proposed Linux mainframe conversion, making a migration to IFLs less attractive to enterprises with highly virtualized distributed environments.
A kernel of truth
Linux pricing presents several interesting issues. Red Hat, SUSE and other Linux distribution providers each have a different price structure for the mainframe. Support cost quotes are based on the number of cores or CPUs, and the mainframe typically costs more because its faster engines do more work per core. Providers may give special deals on the OS for distributed test and development systems that won't apply to the mainframe. It ultimately comes down to negotiation.
Not all flavors of Linux work the same on mainframe processors. For example, one distribution exploits machine instructions and hardware assists built into the latest CPU model, while another's special extensions cooperate closely with the mainframe's hypervisor, z/VM. These tailored efficiencies could tip the scale in favor of z/Linux.
Established Linux shops have a mature framework for managing and administering the environment. To use Linux on the mainframe, administrators will spend a lot of time and effort extending those tools. At first glance this may not seem difficult, but there are many unknowns that may complicate the effort.
Ultimately, despite all of these factors, shops that run Linux in the distributed world tend to continue with the provider they're already using.
Hardware: Apples and oranges
Hardware may be the hardest comparison between Linux mainframe and server infrastructures -- for several reasons. There's no straightforward formula for comparing capacity, so loading up a workload on z/Linux and watching it run is the best way to decide which platform is best for the task. Careful measurements of processor utilization, throughput and response time will best predict how much Linux on System z will do for your shop.
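The measurement advice above can be made concrete. Here is a minimal sketch, in Python, of deriving a CPU-busy percentage from two Linux /proc/stat "cpu" snapshots; the snapshot values below are illustrative, not measured, and the same arithmetic applies whether the guest runs on an IFL under z/VM or on an x86 hypervisor.

```python
# Sketch: compute CPU utilization between two /proc/stat "cpu" lines.
# Field order (user, nice, system, idle, iowait, irq, softirq, steal)
# follows the standard Linux proc(5) layout; counters are cumulative jiffies.

def cpu_utilization(sample_before: str, sample_after: str) -> float:
    """Return busy-time percentage between two /proc/stat 'cpu' snapshots."""
    def idle_and_total(line: str):
        fields = [int(f) for f in line.split()[1:]]
        idle = fields[3] + fields[4]   # idle + iowait count as not-busy
        return idle, sum(fields)

    idle0, total0 = idle_and_total(sample_before)
    idle1, total1 = idle_and_total(sample_after)
    delta_total = total1 - total0
    delta_idle = idle1 - idle0
    return 100.0 * (delta_total - delta_idle) / delta_total

# Illustrative snapshots taken one interval apart:
before = "cpu  1000 0 500 8000 200 0 0 0"
after  = "cpu  1400 0 700 8400 300 0 0 0"
print(f"{cpu_utilization(before, after):.1f}% busy")  # prints "54.5% busy"
```

Collecting such samples over a full business cycle, alongside throughput and response-time figures, gives the comparison real footing.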
There's an obvious price difference between commodity hardware and mainframe processors. When comparing processor-for-processor with distributed servers, big iron isn't even in the ballpark. But that's only part of the story.
The mainframe is very computationally dense considering its size and energy footprint. It also takes comparatively fewer people to maintain. Logically, enterprises can stack x86 processors like cordwood, but there's a point where the energy, space, cabling and maintenance costs are prohibitive. Include floor space, energy consumption for both running and cooling equipment, and support personnel in your analysis of relative hardware costs.
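The cost components listed above can be rolled into a back-of-the-envelope annual model. The sketch below, in Python, is purely illustrative: every rate and quantity is a hypothetical placeholder, not a real quote, and should be replaced with your own vendor pricing, utility rates and staffing numbers.

```python
# Sketch: rough annual platform cost covering the factors the article names:
# amortized hardware, power (with a cooling overhead factor), floor space
# and support staff. All constants are hypothetical placeholders.

def annual_platform_cost(hardware_amortized, kw_draw, floor_tiles, admins,
                         power_rate_kwh=0.12, cooling_factor=0.5,
                         tile_cost=3000, admin_cost=120000):
    """Annual cost = hardware + power * (1 + cooling) + space + staff."""
    power = kw_draw * 24 * 365 * power_rate_kwh * (1 + cooling_factor)
    return hardware_amortized + power + floor_tiles * tile_cost + admins * admin_cost

# Hypothetical footprints: one mainframe vs. a sprawling x86 farm.
mainframe = annual_platform_cost(hardware_amortized=400_000, kw_draw=10,
                                 floor_tiles=4, admins=2)
x86_farm = annual_platform_cost(hardware_amortized=150_000, kw_draw=40,
                                floor_tiles=20, admins=5)
print(f"mainframe ~ ${mainframe:,.0f}/yr, x86 farm ~ ${x86_farm:,.0f}/yr")
```

With these made-up inputs the denser platform wins despite its higher sticker price; with different inputs it loses. The point is the model, not the numbers.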
The other disparity is in computational capacity between x86 CPUs and IFLs. The ultimate price of a Linux mainframe comes down to how much work the processors do. IBM System z may have an edge in raw clock speed, but clock speed isn't a true measure of capacity. IBM optimizes mainframe CPUs for sharing and context switching, concerns that simpler distributed platforms don't have to deal with. The mainframe's I/O subsystem also differs greatly from a distributed system's.
Enterprise IT shops can become deeply divided between mainframe and server farm proponents; any discussion about moving workloads between platforms can erupt into a war of righteous indignation on both sides. The best strategy is to execute an impartial study on a workload that includes both groups so they can keep each other honest. Managers without a vested interest in "winning" should supervise.
About the author:
Robert Crawford spent 29 years as a systems programmer, covering CICS technical support, Virtual Storage Access Method, IBM DB2, IBM IMS and other mainframe products. He programmed in Assembler, Rexx, C, C++, PL/1 and COBOL. Crawford is currently an operations architect based in south Texas, establishing mainframe strategy for a large insurance company.