Have mainframe costs reached unreasonable levels? In a recent Z Systems Journal article, "Mainframe Software Costs Too High? Think Again!," Mike Moser of BMC Software Inc. concludes that mainframe software costs are more than fair and, instead of shedding purchased packages, customers should buy more. After reading the article, I disagree with Moser's opinion and maintain that the industry needs better software pricing models.
The premise: Mainframe software productivity saves on labor costs
Most of Moser's article explains how mainframes achieve lower total cost of ownership (TCO) through economies of scale. To argue his point, Moser analyzes computing power per data center footprint, higher utilization rates and centralized management. These are issues that we mainframe "bigots" understand and use to argue for our platform.
After establishing mainframes' lower TCO, Moser moves on to a survey that compares budget outlay percentages for different platforms. The survey says that mainframe shops spend 40% of their budgets on software and 27% on labor. With distributed platform budgets, on the other hand, 27% is allocated for software and 43% for labor. From this mismatch, Moser infers that mainframe shops spend less on labor because of productivity gains from fabulous software tools. Because of this, he concludes that mainframe software costs are indeed reasonable, and, bolstered by the labor savings, IT should invest even more in mainframes.
The reality: Mainframe hardware upgrades drive up software costs
Moser's point is well taken. Software must provide value or companies would not buy it. The value of software can be measured in several ways. First, a shop may decide a purchased program is valuable because it's cheaper than writing its own. Second, a corporation may have special business requirements, such as security or availability, and is therefore willing to spend money to meet those needs. Last but not least are the productivity gains Moser says allow one programmer to do the work of many.
But we must turn the question around and ask when software begins to lose value. I can think of several examples:
- When the number of users becomes relatively small;
- When the cost outstrips productivity gains; and
- When a software tool's functionality is underutilized.
Perhaps the biggest reason for software devaluation is capacity-based licensing. Capacity-based licensing is founded on the idea that faster CPUs make more use of software, and therefore vendors are entitled to more money. This means that when a shop upgrades a mainframe to a larger or faster machine, software costs go up. Shops incur this additional expense without the benefit of software improvements. If you think of value as a ratio that divides functionality by cost, you can see that a tool's value approaches zero as a business adds processing capacity and costs increase.
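The value-as-a-ratio idea can be sketched numerically. The snippet below is illustrative only: the MIPS figures, license fee, and linear capacity-based pricing are all hypothetical assumptions, chosen to show the trend, not any vendor's actual price schedule.

```python
# Illustrative sketch: under capacity-based licensing, a tool's "value"
# (functionality divided by cost) shrinks as a shop adds capacity, even
# though the software itself never changes. All numbers are hypothetical.

def license_cost(base_cost, base_mips, current_mips):
    """Capacity-based fee: cost scales linearly with installed MIPS (assumed model)."""
    return base_cost * (current_mips / base_mips)

def value_ratio(functionality, cost):
    """Value expressed as functionality per dollar."""
    return functionality / cost

FUNCTIONALITY = 100.0   # arbitrary constant: the tool gains no new features
BASE_COST = 50_000.0    # hypothetical annual license fee at 1,000 MIPS
BASE_MIPS = 1_000

for mips in (1_000, 2_000, 4_000):
    cost = license_cost(BASE_COST, BASE_MIPS, mips)
    print(f"{mips:>5} MIPS: cost ${cost:>9,.0f}, value {value_ratio(FUNCTIONALITY, cost):.4f}")
```

Doubling capacity doubles the fee and halves the value ratio, which is the devaluation the column describes: the numerator never moves while the denominator climbs with every upgrade.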
I must also point out that many of Moser's assumptions for comparing platform TCO may no longer apply. Virtualization and server consolidation in the distributed world decrease data center footprints and drive higher utilization percentages. Blade servers manage to fit many powerful processors into a compact area. Distributed software tools are also dynamic and may reach the point of productivity where the percentage of distributed budgets devoted to labor comes more in line with mainframe budgets.
This leads to the question of whether it is easier to add many x86 servers or one mainframe central processor (CP). While it may be difficult to wedge additional x86 servers into an overcrowded data center, provisioning and software purchasing are simple. For mainframes, it's the reverse: turning on a CP is relatively easy, but the rest is not.
One may argue that adding a powerful mainframe CP creates a bigger processing boost and provides a longer time between upgrades. But remember, although adding the CP may increase your processing capacity by 25%, you may have needed only a 10% bump to get to the end of the year. In terms of software costs, you will pay for the additional 15% capacity from the time of the upgrade until you actually need it. In today's economic environment, I think most of us would rather have capacity closer to our needs than overallocated.
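The cost of that overshoot is easy to quantify. The figures below are hypothetical (the column gives only the 25% and 10% percentages; the per-point software cost and the nine-month lead time are my own illustrative assumptions):

```python
# Hypothetical worked example: a shop needs a 10% capacity bump, but the
# smallest upgrade (one CP) adds 25%, and capacity-based license fees are
# charged on the full new capacity from day one. Numbers are illustrative.

needed_growth = 0.10               # capacity actually required this year
cp_increment = 0.25                # capacity added by turning on one CP
sw_cost_per_point_month = 1_000.0  # assumed $ per capacity point per month
months_until_needed = 9            # assumed months before the extra capacity is used

unused_points = (cp_increment - needed_growth) * 100  # 15 points of idle capacity
wasted = unused_points * sw_cost_per_point_month * months_until_needed
print(f"Paying for {unused_points:.0f} points of idle capacity "
      f"costs ${wasted:,.0f} over {months_until_needed} months")
```

Under these assumed rates, the shop pays $135,000 in software charges for capacity it cannot yet use, which is why capacity closer to actual need beats overallocation.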
Finally, I take issue with the assertion that software is responsible for the mainframe's reduced labor costs, as there are many reasons why it takes fewer people to run the machines. First, it's much easier to manage three system images than 3,000. Second, mainframe system management is centralized, disciplined, uniform, well understood and years in the making. IBM has also done a good job of simplifying and streamlining system definition and operation. Some built-in automation and intelligent software, such as Workload Manager, make fewer people necessary to watch the system. In short, while software can make programmers mighty, it is only part of one reason that mainframes are cheaper to maintain.
I've mentioned software cost several times in my columns because I think it has a direct impact on how well mainframes will do in the next few years. In my shop, arguing lower TCO is met with deep skepticism that is difficult to overcome, especially when executives see millions of dollars going out the door to pay for software. Capacity-based pricing is responsible for hours of configuration juggling, which leads to asymmetric Sysplexes and impacts availability. Sometimes clichés are true, and in this case I think mainframe software vendors may end up killing the goose that lays the golden eggs.
ABOUT THE AUTHOR: For 24 years, Robert Crawford has worked off and on as a CICS systems programmer. He is experienced in debugging and tuning applications and has written in COBOL, Assembler and C++ using VSAM, DLI and DB2.
What did you think of this feature? Write to SearchDataCenter.com's Matt Stansberry about your data center concerns at email@example.com.