
Should you move apps on or off the mainframe to cut costs?

In this era of stringent cost cutting, what should IT do: move more apps to the mainframe or move all apps off it? This tip weighs the total cost of ownership for mainframe and non-mainframe applications.

As IT seeks to cut costs in the face of declining budgets by every means possible, the mainframe now appears to be both a likely source of cost savings and a likely target for elimination. The idea that the mainframe is costly is long-standing, and with enterprise apps easier to migrate than ever, deep-sixing a mainframe or two has never been simpler. Yet over the last three years, many users have found that, in many situations, migrating applications to the mainframe yields total cost of ownership (TCO) savings of up to 50%. So in this era of stringent cost cutting, which strategy should IT employ: move more apps onto the mainframe, or move all apps off it? The answer is, of course, that it depends.

Specifically, it depends on how many migratable apps exist on all the enterprise's non-mainframe platforms. As it turns out, if there are 10 or fewer total migratable apps, you can put them on PC or Unix servers for less than the three-year TCO of putting them on the mainframe (server acquisition costs included). If the total number of migratable apps is 20 or more, then even if you have to buy a whole new mainframe, your put-them-on-the-mainframe TCO is less than PC-platform or Unix-platform TCO, and the advantages of the mainframe increase as the number of migratable apps increases.

The boundary line between "better on the mainframe" and "better on another platform" varies according to many factors, but one appears to be the type of alternative platform: with high-end Unix servers, the crossover point is closer to 20 apps, while with PC servers it is closer to five to 10 apps.
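As a rule of thumb, the pattern can be sketched in a few lines of code. The thresholds below are this article's approximations, not measured break-even points, and the function is purely illustrative:

# Rough three-year-TCO rule of thumb from the crossover ranges above.
# Thresholds are approximations quoted in this article, not exact
# break-even points; treat the whole function as a sketch.
def cheaper_platform(migratable_apps, alternative="unix"):
    crossover = 20 if alternative == "unix" else 8  # PC crossover is ~5-10 apps
    if migratable_apps >= crossover:
        return "mainframe"
    return alternative

print(cheaper_platform(30))        # -> "mainframe"
print(cheaper_platform(6, "pc"))   # -> "pc"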

What's going on here?

Breaking down the mainframe TCO
To understand why this kind of TCO pattern is being seen in today's data centers, and to use that understanding to fuel a platform decision, we need to break down TCO into its basic components: people (mostly administrative, but some developer and services); software (that is, license costs for everything above the operating system); hardware (typically the only cost considered in calculating up-front costs); and, last but not least, power.
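As a minimal sketch, assuming hardware is a one-time acquisition cost and the other three components recur yearly (a simplification; real license and services schedules vary), per-platform TCO can be modeled like this:

# Minimal per-platform TCO model using the four components named above.
# Treats hardware as a one-time acquisition cost and the rest as yearly
# recurring costs -- a simplifying assumption for illustration only.
def three_year_tco(hardware_upfront, annual_people, annual_software,
                   annual_power, years=3):
    return hardware_upfront + years * (annual_people + annual_software + annual_power)

# Hypothetical yearly figures, in dollars, purely for illustration:
print(three_year_tco(hardware_upfront=250_000, annual_people=40_000,
                     annual_software=30_000, annual_power=10_000))  # 490000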

As everyone knows, mainframe up-front hardware acquisition costs are well above those of PCs, Unix/Linux servers and blades. However, over the last 20 years, people costs have dramatically increased as a proportion of overall costs, and in many cases (such as in data centers located in cities), power and space costs have gone from an insignificant fraction to more than 10% of overall TCO. While per-application administration, software-license and power costs of mainframes and other platforms are reasonably comparable, putting multiple applications on a machine via virtualization produces sharply different effects depending on the platform.

To put it bluntly, non-mainframe platforms typically support fewer VMs at full theoretical capacity, top out at 20% or less of theoretical maximum load (versus 90% or better for reported mainframe installations), and are less effective at load balancing the VMs that they do support -- further decreasing the number of apps they actually run per machine. Running more apps on a single machine decreases per-application administrative costs (there is no inter-machine communication to handle; virtualization software does it all automatically), frequently decreases per-application software-license costs (because those are often per-machine) and decreases per-application power costs (fewer machines, smaller footprint, fewer redundant components to shed heat). And the greater the number of apps on the mainframe, the greater the mainframe's per-application cost advantage.
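The consolidation effect shows up even in a toy model. The per-machine dollar figures below are invented placeholders; only the shape of the comparison matters:

# Toy model: each machine carries roughly fixed yearly costs (administration,
# per-machine licenses, power), so the per-application share falls as more
# apps run on one box. All dollar figures are invented for illustration.
def per_app_cost(apps_on_machine, fixed_machine_cost_per_year):
    return fixed_machine_cost_per_year / apps_on_machine

print(per_app_cost(5, 15_000))     # lightly loaded PC server: 3,000 per app
print(per_app_cost(350, 700_000))  # heavily consolidated mainframe: 2,000 per app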

Here is an example (extrapolated from an IBM study presented on March 23, 2008) of how it can work: Suppose that you moved 700 apps from PC servers without virtualization (or 350 apps from PC servers with virtualization) to 25 mainframe specialty processors on one mainframe. Suppose, further, that you bought a new mainframe and paid for it up front, as opposed to buying more PC servers. The mainframe's up-front cost will be about 20-25% of its three-year TCO, so as you install it, you are about 20% in the hole compared to a PC-server solution. However, overall software and hardware maintenance costs are about one-quarter as much for the mainframe, so by the end of year one the PC solution's TCO is at least twice the mainframe's, and by the end of three years it is at least 3.9 times as much.
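To make that trajectory concrete, here is a sketch in normalized units: the mainframe's three-year TCO is set to 100 and its up-front share to 22, and the PC farm's yearly run rate is back-solved from the 3.9x figure. The absolute numbers are assumptions; only the ratios come from the example above:

# Normalized cumulative-TCO comparison for the example above.
MF_THREE_YEAR = 100.0                 # mainframe 3-year TCO, normalized
MF_UPFRONT = 22.0                     # up-front hardware: ~20-25% of that TCO
MF_ANNUAL = (MF_THREE_YEAR - MF_UPFRONT) / 3
PC_ANNUAL = 3.9 * MF_THREE_YEAR / 3   # back-solved from "3.9x at three years"

for year in range(4):
    mf = MF_UPFRONT + MF_ANNUAL * year
    pc = PC_ANNUAL * year             # existing farm: no new up-front spend
    print(f"year {year}: mainframe={mf:5.1f}  pc={pc:5.1f}")
# Year 0 shows the mainframe ~20 points "in the hole"; by year 1 the PC
# solution costs over twice as much, and by year 3 about 3.9x as much.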

Of course, the PC is an easy target, sitting at one extreme in VM scaling and processors per machine. Today's Unix boxes are more comparable to mainframes, but a recent IBM study highlights a telecom company's experience suggesting that, for a set of high-end Unix machines, the up-front hardware cost differential versus the mainframe is smaller than for a PC server farm, yet the five-year TCO is still about 30% greater than the mainframe's, with a TCO crossover point of about two years.

These mainframe cost advantages, and the number of applications at which they begin to apply, are likely to continue in the next two to three years. Unix and PC boxes will improve the number of VMs that they can support and the percentage of theoretical capacity at which they top out. However, hardware costs will continue to decrease and power and administration costs will continue to increase as a percentage of overall TCO. So while the cost-difference line will flatten, the point of intersection -- the number of apps at which the mainframe becomes less costly -- will remain approximately the same.

Decision Tree: How many apps can be migrated?

While understanding when the cost advantages of the mainframe kick in is straightforward, deciding what to do to cut costs right now is not so simple, because the typical decision is not how to scale by spending more money to meet user needs, but how to preserve functionality at less cost.

The first thing to do is determine which applications can be relocated to another platform -- and specifically, which can be moved from Unix/Linux/PC platforms to the mainframe and vice versa. The next step is to determine just how much spare capacity each platform has -- how many more apps it can handle before you need more hardware.

Let us suppose that 30 apps can be migrated from a PC server farm to the mainframe (and the PCs disposed of), while five apps can be migrated from the mainframe to the PC server farm. The PC server farm presently has 400 apps, with a capacity of 420; the mainframe presently has 1,000 apps, with a capacity of 1,025. In that case, there is no sense migrating anything to the PC, but it does make sense to migrate 25 PC apps to the mainframe. The remaining five should be left where they are: buying a new mainframe just to hold them would cost more than it saves.

In the general case, the decision tree runs like this: Is the number of existing mainframe apps plus the number of other-platform migratable apps greater than mainframe capacity? If no, migrate all the other-platform apps to the mainframe and enjoy the cost savings. If yes, migrate enough other-platform apps to fill the mainframe to capacity, then ask whether the remaining number of migratable other-platform apps is 20 or more. If no, keep them on the other platform; if yes, buy a new mainframe (!) and move those apps to it. In the case where you have to buy a new mainframe, the payback period will typically be six months to two years.
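Here is that decision tree as a minimal sketch, reusing the worked example above; the 20-app crossover threshold is this article's approximation, and the function name is invented for illustration:

# Sketch of the migration decision tree described above.
CROSSOVER_APPS = 20   # approximate app count at which a new mainframe pays off

def plan_migration(mainframe_apps, mainframe_capacity, migratable_apps):
    """Return (apps to move to the existing mainframe,
               apps that justify buying a new mainframe,
               apps to leave on the other platform)."""
    spare = mainframe_capacity - mainframe_apps
    move_now = min(migratable_apps, spare)
    remaining = migratable_apps - move_now
    if remaining >= CROSSOVER_APPS:
        return move_now, remaining, 0   # buy a new mainframe for the rest
    return move_now, 0, remaining       # leave the rest where they are

# The earlier example: a 1,025-app-capacity mainframe running 1,000 apps,
# with 30 migratable PC apps.
print(plan_migration(1000, 1025, 30))   # -> (25, 0, 5)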

Of course, this analysis only examines TCO. There are several other happy side benefits frequently seen when moving migratable apps to a mainframe, including greater application robustness, less data center crowding and a smaller carbon footprint from lower per-application energy consumption. But when push comes to shove and cost is all that matters, the mainframe is still your best bet when more than 20 apps are involved.

ABOUT THE AUTHOR: Wayne Kernochan is president of Infostructure Associates, an affiliate of Valley View Ventures. Infostructure Associates aims to provide thought leadership and sound advice to vendors and users of information technology. This document is the result of Infostructure Associates-sponsored research. Infostructure Associates believes that its findings are objective and represent the best analysis available at the time of publication.

What did you think of this feature? Write to SearchDataCenter.com's Matt Stansberry about your data center concerns at mstansberry@techtarget.com.
