TCO studies over the years have waxed and waned in popularity, because at various times it has been easy for vendors to "cook the books" to demonstrate their product's superiority. However, these studies remain a valuable tool in IT's assessment arsenal because they ensure that users factor in long-term operational costs -- which can contrast sharply with up-front price -- and because repeated TCO studies capture trends in costs over time, such as the increasing importance of administrative personnel costs. ROI studies, for their part, were long undervalued because it was difficult to quantify the business benefit of a piece of software. With the increasing importance of computing solutions in the typical enterprise, ROI has come into vogue. It is now possible to gain valuable insight into the effect of adding new software technologies to a given platform by identifying the right combination of TCO and ROI analysis and the right way to characterize the solutions being studied.
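To make the difference between the two metrics concrete, here is a back-of-the-envelope sketch; all dollar figures are invented for illustration and are not drawn from any actual study.

```python
# Hypothetical TCO/ROI back-of-the-envelope comparison.
# All figures are illustrative only.

def tco(license_cost, annual_admin_cost, years):
    """Total cost of ownership: up-front price plus operational costs over time."""
    return license_cost + annual_admin_cost * years

def roi(total_benefit, total_cost):
    """Return on investment, expressed as a fraction of total cost."""
    return (total_benefit - total_cost) / total_cost

# A product with a low sticker price but heavy administrative needs...
cheap_up_front = tco(license_cost=10_000, annual_admin_cost=50_000, years=5)
# ...can cost more over five years than a pricier, low-admin alternative.
pricey_up_front = tco(license_cost=100_000, annual_admin_cost=10_000, years=5)

print(cheap_up_front)   # 260000
print(pricey_up_front)  # 150000
# ROI of the low-admin product, assuming a $400,000 business benefit.
print(round(roi(total_benefit=400_000, total_cost=pricey_up_front), 2))  # 1.67
```

The point of the sketch: the product that looks cheaper on up-front price loses on five-year TCO, and only the ROI calculation captures the business benefit side at all.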
In the case of mainframes, I have found that an effective way to characterize alternative solutions and find the right TCO/ROI combination is to distinguish between the mainframe as an operational platform and the mainframe as a development platform. Thus, where you may typically think of TCO as the most important metric (because you typically think of the mainframe as an operational platform), you need to consider ROI as well, because new technologies such as Web services and composite applications sometimes treat the mainframe as a development platform. For example, a British bank discovered that it could rapidly create variations on its customer-facing apps by using Web services, with significant effects on its bottom line -- an effort that could never have been justified by TCO alone.
The second key to effective analysis of new mainframe software is to treat robustness as a less-important part of the analysis. Although downplaying robustness may seem counterintuitive (availability is a major concern of data centers), treating it as a small factor is actually a reasonable way to assess most new software technologies. For example, distributed applications with multiple copies on various platforms, when ported to multiple partitions on the mainframe, are actually quite robust despite the network and failover problems of the original distributed design, because mainframe virtualization can usually isolate faults to a single partition and fail over on the same system quite rapidly.
What the new mainframe analysis means
Given these two points, IBM TCO studies and my TCO/ROI studies on other platforms have some interesting things to say about the true value of new mainframe software technologies. For example, my studies of application servers show that they are surprisingly "people-heavy" in many environments, requiring as much administration as a database. However, initial reports suggest that the Enterprise Service Bus (ESB) can carry out many of the same functions with much less administrative overhead, reducing TCO costs. At the same time, the ESB simplifies development and upgrade of Web services, encouraging standardized Web-service development and allowing the developed application to operate flexibly across partitions. Thus, the ESB's minor additional TCO as an operational platform is more than counterbalanced over the long term by increased ROI from application upgrades and new-app development.
Likewise, application modernization via Web-servicization (creating Web-service producer code in front of a mainframe business-critical application) adds only a minor increase in up-front TCO. However, it is not a trivial task: users should know what standards to embody in the Web-service specification in the repository, and should have a deep knowledge of the mainframe app in order to be confident of what will happen when the producer code is called. At the same time, it can produce major ROI, either by extending the application's pool of end users or through a new ability to improve business process flows via cross-application composite apps. In the long run, TCO will decrease as the maintenance costs of modernized COBOL code grow well below their historical slow rate of increase.
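The "producer code in front of a legacy application" pattern can be sketched in miniature. This is a hypothetical illustration only: the function and handler names are invented, and a real mainframe wrapper would invoke the existing COBOL/CICS program through a connector rather than a local Python function.

```python
# Minimal sketch of Web-servicization: producer code that exposes an
# existing business-critical routine as a service. All names are
# hypothetical; the "legacy" function stands in for the mainframe app.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def legacy_account_lookup(account_id: str) -> dict:
    # Stand-in for the untouched, business-critical mainframe logic.
    return {"account": account_id, "balance": 1250.00}

class ProducerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The producer code's only job: translate an incoming request
        # into a call to the legacy app, and return the result in a
        # standard wire format (here, JSON over HTTP).
        account_id = self.path.rstrip("/").split("/")[-1]
        body = json.dumps(legacy_account_lookup(account_id)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port: int = 8080):
    # e.g. GET http://localhost:8080/accounts/A123 returns the legacy
    # result as JSON; call serve() to start the facade.
    HTTPServer(("localhost", port), ProducerHandler).serve_forever()
```

Even in this toy form, the two risks named above are visible: the wrapper silently commits you to an interface contract (the JSON shape), and its correctness depends entirely on knowing how the legacy routine behaves when called.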
Development and upgrade via open source, collaborative, agile and high-level programming are amazingly effective at improving ROI. Surprisingly, "hybrid" open source programming using a community -- the typical way to add open source to existing mainframe development -- does not yield a major decrease in TCO compared to in-house development, because the savings in programmer salaries are partially offset by the additional coordination necessary. But by speeding up development (speed to value) using a high-level toolset, good development infrastructure software can improve new-application or composite-application ROI by 50%.
What about open source or low-license-cost databases? The software-cost effect on TCO and ROI from using these in new applications is marginal at best, and for ultimate scalability nothing beats an enterprise database like Oracle or DB2. However, for everything below that, user testimony says that Microsoft SQL Server, Progress OpenEdge, DB2 Express 9 and MySQL are now surprisingly robust and perform well. Their ease of administration also has a major impact on TCO, allowing per-app TCO as low as one-tenth that of a fully loaded Oracle installation. This, in turn, has a multiplier effect on ROI for apps in which the chosen database is the core of both the development and the operational platform: ROI in large-scale apps can almost double.
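The multiplier effect follows directly from the arithmetic. A rough sketch, with wholly hypothetical figures, shows how cutting the database's share of per-app TCO to one-tenth can nearly double the app's ROI:

```python
# Hypothetical illustration of the database-cost multiplier effect on ROI.
# All dollar figures are invented for illustration only.

def roi(benefit, cost):
    """Return on investment as a fraction of total cost."""
    return (benefit - cost) / cost

benefit = 1_000_000          # business benefit of the app (held fixed)
other_costs = 400_000        # non-database development and operations costs
enterprise_db_tco = 150_000  # fully loaded enterprise-database TCO
low_cost_db_tco = enterprise_db_tco / 10  # per-app TCO at one-tenth

roi_enterprise = roi(benefit, other_costs + enterprise_db_tco)
roi_low_cost = roi(benefit, other_costs + low_cost_db_tco)

print(round(roi_enterprise, 2))  # 0.82
print(round(roi_low_cost, 2))    # 1.41
```

Because the benefit side is unchanged, every dollar shaved off database TCO falls straight through to the ROI numerator and denominator at once, which is why the effect is disproportionate to the database's share of total cost.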
A counter-argument from some IT shops is that standardization on one enterprise database yields better cost savings than using multiple databases in the same data center, because it simplifies administration. My TCO studies suggest otherwise: The typical administrative-cost advantage of a lower-cost database reduces TCO more than standardization does.
The effects from data integration and master data management are more subtle. Extending queries to cross database boundaries makes things like IT mergers and acquisitions cost much less, but in normal development and operations the main effects are in extending applications' end users and the data afforded to them. Thus, the effects on TCO (more cost to implement data integration) and ROI (more revenues from a more useful application) are typically minor. However, while difficult to measure, the leverage effects on the business of having more actionable information are clearly profound.
What are the take-aways from this TCO/ROI analysis? First, consider ROI as well as TCO, and development-platform effects as well as operational-platform ones. For most of the new technologies, a superficial TCO analysis will undervalue the new mainframe software technology. Second, consider speed-to-value and maintenance-cost effects. In other words, just getting new capabilities to production faster, over and over, in an era of shrinking competitive advantage has a major impact on the bottom line of the business and IT department. Third, attention to database administrative costs and openness to database alternatives can be the most effective cost-cutting tool in the IT governance arsenal. Fourth, a careful TCO/ROI analysis will often lead to the conclusion that the worst thing to do is to stand pat and simply virtualize what you have. Despite the up-front additional costs, the long-run TCO and ROI benefits of new mainframe software technology make it the more desirable choice in many cases.
ABOUT THE AUTHOR: Wayne Kernochan is president of Infostructure Associates. This was first published in June 2008