It appears that we are entering another economic downturn, a period when IT is traditionally expected to focus on cost-cutting much more intensely than in eras of economic expansion. We have seen these times before, most recently in 2001-2003 and in 1990-1993, and each time the prescription for cost savings was a little different. In the early 1990s, the answer was to buy cheaper hardware; software and administration costs were much less important. In the early 2000s, software and administrative people costs loomed much larger, so the answer was consolidation: running multiple workloads on the same machine, thus saving per-server software/hardware license costs and (to some extent) administrative costs.
In 2008, the game has changed yet again. Software and administrative costs dominate hardware costs; energy costs have suddenly become important as data centers max out their power and cooling capacity; and there are more choices for software in areas such as system and database administration, which provide more flexibility to drive down software costs even if systems are already fully virtualized and consolidated. Above all, it is easier than ever before to move off the mainframe -- or onto it.
What do current TCO studies suggest about driving mainframe costs down? First, that although it is easier now to move off the mainframe, in most cases it makes more sense to virtualize on the mainframe -- both existing mainframe workloads and Unix/Linux ones. That is, the dominance of software and administrative costs means that virtualizing 20-500 workloads on the mainframe is cheaper, especially where IBM's attractively priced specialty processors are involved, because one software license per system and no network administration outweigh the mainframe's up-front hardware and license costs. And in some cases fewer than 20 workloads per mainframe will still be cheaper, because the cost of moving software off the mainframe plus the cost of running it on distributed or Unix/Linux boxes (including a reasonable charge for lower availability) is greater than the cost of staying put.
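The stay-versus-migrate trade-off above is, at bottom, break-even arithmetic: one shared software license amortizes across many virtualized workloads, while migration costs scale per workload. The sketch below illustrates only that shape of the comparison; the function names and every dollar figure are hypothetical placeholders, not vendor pricing or figures from the TCO studies.

```python
# Hypothetical break-even sketch: consolidate on the mainframe vs. migrate off.
# All cost figures are illustrative placeholders, not real pricing.

def tco_stay(workloads, sw_license_per_system=200_000, admin_per_workload=5_000):
    """One consolidated system: a single software license, plus per-workload admin."""
    return sw_license_per_system + workloads * admin_per_workload

def tco_migrate(workloads, migration_per_workload=35_000,
                distributed_run_cost=12_000, availability_penalty=3_000):
    """Per-workload migration and run costs, plus a charge for lower availability."""
    return workloads * (migration_per_workload + distributed_run_cost
                        + availability_penalty)

for n in (5, 20, 100):
    stay, move = tco_stay(n), tco_migrate(n)
    print(f"{n} workloads: stay {stay:,} vs migrate {move:,} -> "
          + ("stay" if stay < move else "migrate"))
```

With these placeholder numbers, the fixed license cost is amortized quickly enough that staying wins even at 5 workloads, echoing the studies' point that the break-even can fall below 20 workloads per mainframe.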
Once virtualized, what else can we do? A frequently underestimated source of costs is database administration. Over the last 10 years, not only Microsoft SQL Server and other commercial "enterprise" databases but also open-source databases like MySQL have sharply cut into the scalability and robustness advantages of Oracle Database. These alternatives typically offer lower administrative costs and, in many cases, lower license costs; IBM case studies make clear that DB2 is often priced to deliver significant savings as well. Thus, even for PeopleSoft or SAP enterprise applications, using an Oracle alternative can have a major impact on TCO. And, say the studies, this holds true even if the incumbent environment is "all Oracle" or "Oracle and one or two other databases": the savings from the alternative's lower administrative costs typically dwarf the benefit of using the same database administration tools for all workloads.

The next area to look at is system management. In the early part of this decade, there were frequent user complaints about high software-license costs from third-party system-management vendors. Then IBM came out with basic system-management tools of its own, prices went down, and the complaints became less vociferous. Since then, IBM has continued to elaborate its tools, so that they can now handle systems-management tasks that the initial basic versions were not suited for, such as Web service management. Thus, one promising area for reducing costs is to reconsider the suitability of IBM system-management software.
A third area where additional improvements can be made is in power management. Although virtualization should yield major improvements in data-center energy consumption, in many cases data centers will still run up against physical constraints in the near future, unless care is taken to optimize power consumption of the mainframe-dominated data center via a "global" approach that considers cooling mechanisms, data-center design, and layout. Already, power monitoring software is making a difference here.
A more speculative, but interesting, area of cost savings is Software as a Service (SaaS). It is hard to conceive of an enterprise handing over its "crown jewel" applications to a remote third party; but SaaS multi-tenancy is beginning to assuage user concerns about security, as well as allowing consolidation well beyond the capacity of any one enterprise, however large. SaaS is not for all tastes; but for achieving additional cost savings, it deserves more attention than it has been getting.
Finally, in this downturn, unlike past ones, it is unlikely that the spigot for new-application development will be turned off completely; new applications are more critical to corporate success than ever before. That means that savings in development costs, while historically hard to achieve, will be a welcome addition to overall cost savings in most shops. Since the early 2000s, agile, collaborative, and "hybrid" in-house-plus-open-source development approaches have shown some promise of delivering greater cost savings (and better time-to-value) than previous panaceas. These approaches are not yet a large part of the mainframe IT mindset, but the innovative CTO will investigate and perhaps adopt one or more of them.
New hard times, new opportunities. In areas such as database and system administration, power management, software as a service, and development methodologies, mainframe users should be able to cut costs where no cost has been cut before.