In the past, deferring software and hardware upgrades during a recession to save money has been the norm rather than the exception. After all, recessions mean lower demand, which often translates to a focus on cost savings and a slowing in the rate of product innovation and upgrades at both the business and IT levels. Also, vendors following Moore's Law (very loosely translated as performance doubling every two years) have often sought to sell upgrades as boosts to scalability -- scalability that is not needed during a recession, because with lower demand, business-transaction volumes may be down.
The stringency and probable longevity of this recession make it different. On one hand, IT faces a longer-than-usual period of declining-to-flat budgets and slowed growth in transactions handled, very possibly followed by a permanent 3-7% reduction in global demand (but not in growth rate) compared with its level over the last 30 years. On the other hand, IT must still carry out improvements to maintain competitiveness; the company cannot afford to wait two years until prospects brighten. In other words, an IT shop considering upgrades must face the likelihood that, in the immediate future, its cost constraints will not ease, but its competitive disadvantage from failing to upgrade will only increase.
At the same time, Moore's Law is to some extent breaking down: It is no longer so easy for vendors to improve price-performance simply by increasing performance. As a result, vendors in this recession are much more likely to provide upgrades that also cut costs at IT shops' existing scale.
This holds true for the mainframe. For example, one path to short- and long-term cost cutting is to move Unix/Linux apps to virtual machines on the mainframe. In the past, this has been limited by substantial differences in infrastructure software (application servers versus CICS; VSAM versus Microsoft SQL Server) that must be painstakingly weighed to determine migration feasibility. Over the past year in particular, IBM and third-party vendors have provided upgrades to existing software like DB2, as well as new infrastructure software on the mainframe, such as additional Tivoli capabilities. Mainframes now typically have the latest and greatest IBM infrastructure software, just like other platforms.
Choosing the right upgrades
Possible criteria for choosing which upgrades not to defer include the following:
- Upgrade payback period is less than a year. If yearly budgets are flat or have decreased, there is typically little flexibility to spend more than planned. So the upgrade must pay for itself by the end of the budget cycle through reductions in personnel, software or hardware costs that match the outlay on the upgrade.
- The upgrade produces long-term decreases in the cost structure. For example, replacing several high-end Unix boxes with a mainframe may cost more initially, because up-front hardware costs are greater; but studies tend to show that within one or two years, mainframe total cost of ownership (TCO) is lower for data centers with more than 20 applications.
- Upgrading decreases power consumption. It is likely that the pressure on enterprises to cut carbon footprints will continue to increase, recession or no recession. Because IT now consumes 2% of the world's energy (and that percentage is growing fast), it's an obvious target for such pressures.
- Apps on the platform are key to future competitive advantage. For example, a gaming company that has tended to grow by acquisition may find that such acquisitions are rare during this recession; but when gamblers begin to spend more again, companies with up-to-date data integration software will be in a much better position to cost effectively carry out system and data mergers.
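The payback-period criterion above amounts to a simple calculation. A minimal sketch, using purely hypothetical figures (the outlay and monthly savings are illustrative assumptions, not figures from any vendor study):

```python
# Hypothetical payback-period check for an upgrade.
# All figures are illustrative assumptions, not real vendor or study numbers.
upgrade_cost = 120_000      # one-time outlay, USD
monthly_savings = 15_000    # combined personnel/software/hardware savings, USD per month

payback_months = upgrade_cost / monthly_savings
print(f"Payback period: {payback_months:.1f} months")

# Under the "pays for itself within the budget cycle" rule, defer only if
# the payback period exceeds 12 months.
defer = payback_months > 12
print("Defer this upgrade?", "yes" if defer else "no")
```

In this hypothetical case the upgrade pays for itself in eight months, so by the criterion above it would not be deferred.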
On the mainframe, the following are promising candidates for immediate upgrade cost effectiveness:
- Data compression software (for example, in DB2) and storage de-duplication. Storage capacity needs will continue to grow, albeit at a slower rate. Compression can more than counteract that growth rate, allowing fewer disks.
- Power management software and power-reducing upgrades to system and storage hardware. Even if the data center is not redesigned, better monitoring can deliver substantial reductions in power costs.
- Specialty processor and z/OS (virtualization) upgrades. Adding specialty processors at no cost can provide more processing capacity for migrating Unix/Linux apps, as can VM administration upgrades.
- Software upgrades that give more pricing flexibility, e.g., capacity on demand or per-machine/per-processor rather than per-VM pricing.
The types of upgrades that can be deferred relate more to scalability. For example, new, "Web-servicized" versions of existing software are certainly a good idea in the long run, as they simplify the handling of new Web users and building of new composite apps. But aggressive competitive-advantage strategies that aim to increase sales and transactions via Web-servicized business-critical mainframe software can wait until consumers on the Web are able to spend more.
Don't forget to think long term
Even in this recession, some room for maneuver and additional spending may exist. One tactic when allocating extra funds is to focus on upgrades and new spending for the one element of the data center that is likely to see continued growth: storage. Whether because of new government mandates, better uses of archiving or an increased need to tap into new data sources on the Web, storage capacity is likely to increase at double-digit rates into the foreseeable future. In turn, the processing capacity to use the data held in that storage is also likely to increase, and software to use the data more effectively is also likely to yield benefits. First movers in this area may well find that competing via global and proprietary information use is the key to long-term company success. Moreover, there is a lot of room for improvement: Recent studies suggest that less than one-third of all data amassed by a company is effectively used by the appropriate employees -- and the situation is getting worse.
Initial steps need not require enormous expenditures. The most promising is to upgrade mainframe databases with more global data quality and master data management solutions. Another is to upgrade business intelligence with "analysis for the masses" user interfaces. Technologies that affect query scalability, such as integrated SAN/database storage allocation and columnar processing, are beginning to arrive via major vendor upgrades and innovative new database vendors.
So whether cost constraints are rigid or slightly flexible, the choice to defer is no longer automatic. Those who decide not to defer mainframe software and hardware upgrades are likely to find, to their surprise, significant cost savings in some cases. And when every penny counts, that's important.
ABOUT THE AUTHOR: Wayne Kernochan is president of Infostructure Associates, an affiliate of Valley View Ventures. Infostructure Associates aims to provide thought leadership and sound advice to vendors and users of information technology. This document is the result of Infostructure Associates-sponsored research. Infostructure Associates believes that its findings are objective and represent the best analysis available at the time of publication.
What did you think of this feature? Write to SearchDataCenter.com's Matt Stansberry about your data center concerns at firstname.lastname@example.org.