The mainframe is undoubtedly relevant for the foreseeable future, but big iron's role in the data center could be significantly eroded for some enterprise workloads now that cloud is becoming mainstream.
Mainframes are secure, reliable and trusted, and in very large enterprises they still dominate mission-critical systems. While the mainframe's share of enterprise computing continues to decline, overall mainframe usage is growing.
So is cloudification of applications a credible threat to the mainframe in the coming years? Many people have likened cloud computing to mainframe time-sharing, and there are some valid parallels. The answer is not straightforward and will depend on user- and application-specific scenarios, but a significant amount of mainframe-based workloads will likely migrate to private and public cloud deployments.
On a pure infrastructure and software cost per workload basis, Linux-based open systems prevail over mainframes. Today's multicore, multisocket processor architectures paired with commodity components create staggering power. And the cost of an Intel-based system can be a fraction of the equivalent processing capacity of a mainframe.
Workload processing is only part of the story. When you include factors like system reliability and security, matching the mainframe on open systems requires significant incremental investment. Today's cloud-oriented world is moving toward software-layer resiliency and security, putting the onus on application developers and architects to continually reinvent and recreate the native capabilities of a mainframe.
Few enterprise development and architecture teams are equipped to build mission-critical applications that run on open systems as reliably and securely as on a mainframe. Even with top-notch technical teams and huge investment budgets, it has taken years for large-scale Web companies like Salesforce.com, Netflix Inc. and Facebook Inc. to learn how to build, monitor, maintain and operate their crown-jewel mission-critical solutions in a cloud environment.
Even government agencies find working without mainframes challenging. In 2012, I attended a presentation in which an IT leader from a U.S. government agency described the agency's problem with its large, critical, public-facing Web-based system, initially built with Java on enterprise-grade Linux hardware. The resulting architecture, with hundreds of servers and distributed components, was unmanageable and frequently failed to handle the highly spiky loads it generated. It's impossible to tell whether the issues resulted from poor application design and implementation or from the complexity of managing distributed systems. In any case, after the system moved onto a mainframe, the problems disappeared and the system met the agency's reliability, performance and security needs.
In a cloud environment, control over every resource is driven by application programming interfaces (APIs). Coupled with new cloud-native application architectures, this enables very high levels of reliability in ever-more-complex systems. For those with the technical skill and experience, mainframe-level reliability is achievable in a distributed system.
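The pattern behind that API-driven reliability is a control loop: software continuously compares the fleet against a desired state and uses the provider's API to replace anything unhealthy. Here is a minimal, hypothetical sketch; the `CloudAPI` and `Instance` names are illustrative stand-ins, not a real provider SDK.

```python
# Hypothetical sketch: API-driven self-healing, the mechanism that lets a
# distributed system approach mainframe-level reliability. CloudAPI and
# Instance are invented stand-ins for a real provider's control plane.
from dataclasses import dataclass
import itertools

@dataclass
class Instance:
    instance_id: str
    healthy: bool = True

class CloudAPI:
    """Stand-in for a cloud provider's control-plane API."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.instances = {}

    def launch(self) -> Instance:
        inst = Instance(f"i-{next(self._ids):04d}")
        self.instances[inst.instance_id] = inst
        return inst

    def terminate(self, instance_id: str) -> None:
        self.instances.pop(instance_id, None)

def reconcile(api: CloudAPI, desired: int) -> None:
    """One control-loop pass: cull unhealthy instances, top up to `desired`."""
    for inst in list(api.instances.values()):
        if not inst.healthy:
            api.terminate(inst.instance_id)  # everything happens via the API
    while len(api.instances) < desired:
        api.launch()

api = CloudAPI()
reconcile(api, desired=3)                           # initial fleet of three
next(iter(api.instances.values())).healthy = False  # simulate a failure
reconcile(api, desired=3)                           # the loop heals the fleet
print(len(api.instances))                           # → 3
```

The key design choice is that no operator logs into a box to fix it; failed resources are disposable, and the control loop restores the desired state automatically.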
The vast array of moving parts in cloud systems means that IT teams must diligently control security and manageability. If servers are brought up and down throughout the day at a rapid-fire pace, fully automated and auditable security must be baked into the systems so that an insecure resource can never exist. This is not to suggest that Linux-based systems are as secure as mainframes, but the risks of losses from breaches can be significantly reduced.
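One way to make insecure resources "nonexistent" is to route all provisioning through a gate that enforces a security baseline and records an audit entry for every attempt. The sketch below is hypothetical; the baseline checks and the `provision` function are invented for illustration, not a real toolchain.

```python
# Hypothetical sketch: provisioning that bakes in a security baseline and an
# audit trail, so a resource cannot be created in an insecure state.
from datetime import datetime, timezone

BASELINE = {"disk_encrypted": True, "public_ssh": False, "agent_installed": True}
audit_log = []

def provision(name: str, config: dict) -> dict:
    """Refuse to create any resource that fails the baseline; log every attempt."""
    violations = [k for k, v in BASELINE.items() if config.get(k) != v]
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "resource": name,
        "outcome": "denied" if violations else "created",
        "violations": violations,
    })  # auditable by construction
    if violations:
        raise ValueError(f"{name}: baseline violations {violations}")
    return {"name": name, **config}

provision("web-01", {"disk_encrypted": True, "public_ssh": False,
                     "agent_installed": True})
try:  # a non-compliant request never becomes a running resource
    provision("web-02", {"disk_encrypted": False, "public_ssh": True,
                         "agent_installed": True})
except ValueError:
    pass
print([e["outcome"] for e in audit_log])  # → ['created', 'denied']
```

Because creation and auditing happen in the same code path, there is no window in which a server exists but its security posture is unknown.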
A pro-cloud, pro-mainframe strategy
Mainframes are typically used in a 'round-the-clock mode where core transactional systems consume resources during the day and massive batch jobs rule the night. Batch jobs, which are often run against very large data sets, are good candidates for migration to the cloud. When the base data is ingested in advance via a bulk data transfer, and new data is synchronized to the cloud in near-real-time, these batch jobs cost far less to run on cloud resources. There are many ifs here and factors to weigh, but the possibilities for cloudification do exist.
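The migration pattern just described has two data movements: a one-time bulk load of the base data, then incremental sync of new records past a watermark, after which the batch job runs against the cloud-side copy. A minimal sketch, with invented record shapes standing in for real mainframe data:

```python
# Hypothetical sketch: bulk ingest + near-real-time sync so a nightly batch
# job can run on cloud resources instead of the mainframe. Data is invented.
mainframe_records = [
    {"id": 1, "ts": 100, "amount": 40.0},
    {"id": 2, "ts": 105, "amount": 60.0},
]
cloud_store = {}
watermark = 0  # highest timestamp already synchronized to the cloud

def bulk_ingest():
    """One-time bulk transfer of the full base data set."""
    global watermark
    for rec in mainframe_records:
        cloud_store[rec["id"]] = rec
        watermark = max(watermark, rec["ts"])

def incremental_sync():
    """Near-real-time follow-up: ship only records newer than the watermark."""
    global watermark
    for rec in mainframe_records:
        if rec["ts"] > watermark:
            cloud_store[rec["id"]] = rec
    watermark = max((r["ts"] for r in mainframe_records), default=watermark)

def nightly_batch() -> float:
    """The batch job now runs off the mainframe, against the cloud copy."""
    return sum(r["amount"] for r in cloud_store.values())

bulk_ingest()
mainframe_records.append({"id": 3, "ts": 110, "amount": 25.0})  # new day's data
incremental_sync()
print(nightly_batch())  # → 125.0
```

The watermark keeps the ongoing sync cheap: only the delta crosses the wire, not the full data set each night.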
Continuous batch processing -- where batch results are calculated and updated as the data arrives in the cloud from transactional systems on the mainframe -- could enable faster reactions and decision-making. Because this work is done off of the mainframe, it can run in parallel with the core transactional environment. Big-data analysis, reporting and data enrichment are all candidates for migration off the mainframe onto cloud servers.
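Continuous batch processing replaces the overnight window with an aggregate that is updated as each record lands in the cloud. A minimal sketch, assuming an invented transaction stream and per-account totals:

```python
# Hypothetical sketch: a running aggregate updated on arrival, in parallel
# with the mainframe's transactional work. The stream here is invented.
from collections import defaultdict

running_totals = defaultdict(float)

def on_arrival(txn: dict) -> None:
    """Update the 'batch' result the moment the record lands in the cloud."""
    running_totals[txn["account"]] += txn["amount"]

# Records trickle in throughout the day instead of one overnight job.
for txn in [{"account": "A", "amount": 100.0},
            {"account": "B", "amount": 50.0},
            {"account": "A", "amount": -30.0}]:
    on_arrival(txn)

print(dict(running_totals))  # → {'A': 70.0, 'B': 50.0}
```

Because the aggregation runs off the mainframe, it consumes no MIPS and its results are current all day rather than once per night.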
Even if a company is not ready to move vital workloads off the trusted mainframe, most systems comprise many applications that are tightly interrelated but can operate independently. Cloudifying some of these contextual applications could reduce MIPS consumption and postpone expensive upgrades.
Legacy modernization is a growing field. COBOL-to-Java conversion tools and practices can make this transformation economically and technically feasible for some applications. Rather than move from a mainframe to Intel-based hardware in a traditional data center, think about how to skip a generation and move some workloads straight to the cloud.
John Treadway is a senior vice president at Cloud Technology Partners and is based in Boston.