The debate over commercial off-the-shelf (COTS) servers versus legacy hardware is sometimes acrimonious and almost always blurred by myths, half-truths and good old-fashioned FUD.
The debate, which began with mainframe versus Unix servers, is three decades old. Fundamentally, the issue weighs the remaining useful life of existing legacy hardware and software against the opportunity presented by low-cost COTS servers and storage based on Intel, AMD or ARM processors.
Like all partisan contests, the issues are a mixture of the technical, the economic and the political.
On the legacy side, consider how difficult the transition would be and the risk that it fails. On the COTS side, weigh the promise of agility and performance growth, along with the fact that legacy hardware and software are often fully depreciated by the time a refresh comes due. Job security is entangled in the question as well.
Do legacy systems get the job done?
This is a trick question. It hides the presupposition that the job in a data center is static. Deeper still is the belief that the app defines the business processes, the business processes define the app, and both have reached an optimum.
That rarely holds up to scrutiny. Today's business environment is evolutionary, and sometimes even revolutionary. In any line of business, the agile provider gets ahead. Legacy systems get the job done, but is that the right question?
Can we adapt legacy systems fast enough to get a lead on competitors?
Legacy systems often evolve slowly, held back by old code, weak application programming interfaces and an inability to communicate with the rest of the universe. That outside world, in case your data center hasn't noticed yet, is going through a mobile revolution, and the business has to keep up.
Bolting .NET onto the legacy code base provides partial fixes, leading to some cobbled-up solutions. Legacy apps are the equivalent of steam-powered engines in a turbo-diesel world.
Where servers come into play
Compare the engineering investment in COTS systems with the money going into proprietary designs. The payoff on CPU chips, on hardware generally and on software, is huge.
Off-the-shelf server CPUs follow Moore's law, roughly doubling in performance every 18 months. Proprietary legacy CPUs sit on a much lower overall curve: each product cycle delivers a bigger one-time jump, but the cycles are much longer, so the gains compound more slowly. The CPU performance gap is large, yet at the system level the difference has been much less noticeable, because slow growth in storage performance throttled both.
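To see why a shorter cycle wins even against bigger per-cycle jumps, here is a minimal sketch of the two growth curves. The specific numbers -- an 18-month doubling for COTS, a 2.5x jump every four-year cycle for a proprietary line -- are illustrative assumptions, not measured data.

```python
# Hypothetical comparison of CPU performance growth curves.
# COTS: doubles every 18 months (Moore's law pace).
# Proprietary: a bigger 2.5x jump, but only once per 4-year cycle.
# All figures are illustrative assumptions, not vendor benchmarks.

def cots_perf(years, doubling_months=18):
    """Relative COTS performance after `years`, normalized to 1.0."""
    return 2 ** (years * 12 / doubling_months)

def proprietary_perf(years, cycle_years=4, gain_per_cycle=2.5):
    """Relative proprietary performance: a step gain at each refresh."""
    completed_cycles = int(years // cycle_years)
    return gain_per_cycle ** completed_cycles

for y in (4, 8, 12):
    print(f"after {y:2d} years: COTS {cots_perf(y):7.1f}x, "
          f"proprietary {proprietary_perf(y):5.1f}x")
```

Under these assumptions the COTS curve pulls ahead within a single proprietary product cycle and the gap widens every year afterward, which is the compounding effect the article describes.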
Today, flash and solid-state drives let fast CPUs reach their potential, placing the COTS approach well ahead of proprietary systems on any cost or performance curve. That translates into real business wins: beating the competition with new capabilities and greater scale.
The threat of change raises a barrier to COTS servers, and job protection can block evolution. Large data center teams are no longer needed to tend off-the-shelf servers: administration is lighter, and in-house coding gives way to off-the-shelf applications and even software-as-a-service apps running in the cloud, none of which demand much coding expertise within the enterprise. Moreover, most legacy code is COBOL (an estimated 400 billion lines of COBOL are still running worldwide), while new apps are written in languages such as C++ and Java.
But legacy hardware and software is free -- right? Wrong. A mainframe draws far more power than COTS gear, and the maintenance and support contract for a fleet of legacy systems is probably large enough to pay for all-new COTS equipment. The jobs question dovetails with the financial one, too: fewer staff members are needed to write code and administer the systems, reducing salary expenses.
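A back-of-envelope model makes the point concrete. Every figure below -- power draw, contract cost, headcount, salaries, electricity rate -- is a hypothetical placeholder chosen for illustration; plug in your own data center's numbers.

```python
# Back-of-envelope annual cost comparison: legacy fleet vs. COTS
# replacement. All inputs are hypothetical illustrative figures.

legacy = {
    "power_kwh": 350_000,    # assumed annual consumption
    "maintenance": 400_000,  # assumed support-contract cost, $
    "staff": 8 * 120_000,    # assumed admins x loaded salary, $
}
cots = {
    "power_kwh": 90_000,
    "maintenance": 60_000,
    "staff": 3 * 120_000,
}

KWH_RATE = 0.12  # assumed electricity price, $/kWh

def annual_cost(system):
    """Sum power, support-contract and staffing costs for one year."""
    return (system["power_kwh"] * KWH_RATE
            + system["maintenance"] + system["staff"])

savings = annual_cost(legacy) - annual_cost(cots)
print(f"annual savings with COTS: ${savings:,.0f}")
```

With these made-up inputs, staffing dominates the total, which matches the article's point that the jobs question and the financial question are the same question.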
App to the future
Another major hurdle to COTS adoption is fear of change. Writing a new app to emulate an existing package is not easy: the pent-up need for business process change leads to rapid functional churn while the new code is still being written.
Here's where the process breaks down, with undisciplined change and instability. One option is to recompile the legacy code on a COTS platform -- there are good COBOL tools for this -- but that fixes only the hardware half of the legacy issue.
The best decision is usually to buy a new app. App development then becomes a selection process, coupled with a rent-or-buy decision. Very few companies are islands; most share 90% or more of what they do, and how they do it, with similar companies. Off-the-shelf code complements off-the-shelf hardware.
Moving to a new COTS app involves business process reengineering. This isn't free, but it is likely long overdue. Getting on board with ISO 9000 quality assurance standards and the associated process documentation approach is a critical first step. The cloud further opens up IT's options: its economics, scalability and flexibility all add pressure to move off legacy systems. Fundamentally, IT's function becomes service delivery, and the cloud -- public or hybrid -- offers the best value for it. Even government organizations and the U.S. Navy are jumping on the cloud, shedding their reputations for stodgy indifference to change.
It takes senior staff commitment and a strong IT leader to bring the issues to the decision point and begin the transition from legacy to COTS.
About the author:
Jim O'Reilly is a consultant focused on storage and cloud computing. He previously held top positions at Germane Systems -- creating ruggedized servers and storage for the U.S. submarine fleet -- as well as SGI/Rackable and Verari, startups Scalant and CDS, and PC Brand, Metalithic, Memorex-Telex and NCR.