The evolution of hardware development and operating system support has allowed mainframes to endure in the modern data center. Today, open source operating systems like Linux have found a home on mainframe platforms such as the IBM z114. This has spurred important improvements in both the operating system and the mainframe hardware. In this Q&A, James Vincent, a senior z/VM systems programmer and director of conference operations for SHARE, offers his expert insights on the future of Linux and mainframes.
Q. Generally speaking, how do you see Linux developing or evolving for mainframe platforms into 2012?
James Vincent: I believe that Linux on the mainframe will continue to take advantage of the advancements in the OS and enhance it with mainframe-specific features and tools. There have been some beneficial enhancements made in Linux that mimic features other mainframe OSes already provide, such as dynamic memory upgrades and adding capacity on the fly. These features allow non-disruptive changes to Linux servers as capacity needs arise. The mainframe and z/VM (the OS that hosts these Linux servers on the mainframe) already support non-disruptive capacity upgrades for CPU and memory, so these changes in Linux make the viability of production Linux on the mainframe hard to beat.
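As a concrete illustration of the kind of on-the-fly capacity change described above, Linux exposes CPU and memory hotplug through standard interfaces; a rough sketch, assuming root on a guest whose z/VM directory entry already defines standby CPUs and memory (the CPU and memory-block numbers below are hypothetical):

```shell
# List logical CPUs and their online/offline state (util-linux)
lscpu --extended

# Bring a standby CPU online without rebooting the guest
# (CPU number 2 is hypothetical; pick one lscpu shows as offline)
chcpu -e 2

# Memory is hotplugged in fixed-size blocks; check the block size
cat /sys/devices/system/memory/block_size_bytes

# Bring a standby memory block online (block 8 is hypothetical)
echo online > /sys/devices/system/memory/memory8/state
```

These commands only take effect when the hypervisor has standby resources defined for the guest; on z/VM that reservation is part of the guest's configuration, which is what makes the change non-disruptive from the Linux side.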
I think we will see more of Linux on the mainframe being the “glue” in hybrid computing. There is a drive to run enterprise applications where they fit best and where capacity is available based on the demand of the application. For example, IBM released z/VM 6.2 in December 2011, which introduced Single System Image (SSI) and Live Guest Migration (LGM) support. SSI allows you to cluster up to four z/VM systems to work logically as one system, so servers can run on any of them based on load and capacity needs. LGM gives you the ability to move running Linux servers from one system in the cluster to another for load balancing or z/VM system maintenance.
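On a configured SSI cluster, a live guest migration is driven by the CP VMRELOCATE command. A minimal sketch, assuming a Linux guest named LINUX01 and a target cluster member named MEMBER2 (both names hypothetical), issued here through the `vmcp` interface for illustration; in practice the command is typically entered from a suitably privileged z/VM user ID:

```shell
# Dry run: verify LINUX01 is eligible to relocate to MEMBER2
vmcp VMRELOCATE TEST LINUX01 TO MEMBER2

# Perform the live relocation; the guest keeps running throughout
vmcp VMRELOCATE MOVE LINUX01 TO MEMBER2

# Monitor any relocations currently in progress
vmcp VMRELOCATE STATUS
```

The TEST form matters in practice: it reports eligibility problems (device or configuration mismatches between members) before you commit to the move.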
Linux on the mainframe will likely grow and evolve more for cloud computing services. Building servers, adding resources and managing Linux servers on z/VM is already quite robust. There will be more push for “server management” tools that handle these tasks behind the scenes. As needs arise, these tools will look for a place to build the servers, load applications or move applications to other servers. Tools will also be involved with hybrid computing models involving Linux on the mainframe and attached blades with the IBM zEnterprise BladeCenter Extension (zBX).
I also think that more companies will consider using Linux on the mainframe now that the smaller-footprint z114 systems are available. They will allow smaller enterprises to take advantage of all the benefits of running Linux on the mainframe by using the best class of machine for their needs.
Q. Getting more specific, what new Linux features or capabilities do you see being critical for mainframe environments, and why?
James Vincent: I see any of the Linux tools that take advantage of the power behind z/VM and the mainframe overall as being critical. The more dynamic Linux can be in working with the hosting OS, the better. Being more dynamic allows Linux servers to adjust resources on the fly with no interruption to services. The right tools and functionality also make administration of Linux on the mainframe less cumbersome for administrators who may not have a mainframe background. Any tool or feature that allows the server to run uninterrupted is critical to a successful production Linux implementation.
Linux distributors like Red Hat and SUSE also need to embrace the mainframe hosting environment. Without a complete understanding of the mainframe, and without patches, updates and features/tools made available in a very timely manner for Linux on the mainframe, the OS will be difficult to manage and justify in that environment. For the most part, distributors of Linux have done a good job getting it to run in the mainframe environment, but they need to understand that enterprises need them to continue to do so – and maybe even “kick it up a notch” to show that the mainframe is as important as any other platform that runs Linux. That could mean rolling out new releases or patches for Linux on the mainframe at the same time as all other platforms.
Q. How do you see mainframe hardware evolving, and how does that continued hardware evolution impact Linux support, performance and associated workloads?
James Vincent: I think that mainframe hardware is going to continue to evolve with incremental steps in faster engines and more memory. Memory usage in Linux on the mainframe seems to grow faster than CPU needs, depending on the workload being run. Increasing the amount of memory the mainframe can manage overall in the logical partition (LPAR) and in the hosting OS (z/VM) will be important to installations that have large memory needs.
The mainframe engines (IFLs) will likely get additional throughput, which should also boost capacity for Linux applications while keeping the cost-per-IFL the same. The ability to run more work through the same number of engines is always a great incentive for companies to upgrade their hardware.
We should see more from the IBM zEnterprise BladeCenter Extension (zBX) offering that was first announced in 2010. As more customers begin to install and set up these environments, we will likely see adjustments and enhancements that tie the systems together with fewer “moving parts.” Having diverse OS environments (Linux, z/OS, z/VM, Windows) all working in harmony to run applications in the best place at any moment is the ideal goal.
Q. How do you see Linux and mainframes better supporting virtualization technology and virtualized workloads?
James Vincent: Virtualization technology has been around for quite a while now, and both Linux and mainframe technologies have been making good progress in making virtualization more robust over the years. Virtualization on the mainframe is already very strong and quite solid. I think the advancements to note will be in virtualizing workloads from the standpoint of hybrid computing. Having the tools and functions to move workload seamlessly from one place to another will be one of the keys to making the hybrid environments viable and cost-effective solutions for enterprises.
Specifically in the mainframe space, I think there will be additional enhancements based on the z/VM 6.2 release with Single System Image (SSI) and Live Guest Migration (LGM). As people begin to use and exploit these offerings, they will likely find requirements for additional features and functions that we will see in future releases of z/VM for hosting Linux servers.
Q. Finally, we know that the next SHARE conference is coming to Atlanta on March 11-16, 2012. How is SHARE influencing the development of Linux for mainframe environments?
James Vincent: SHARE’s members and attendees are mainly the people out in the trenches trying to make all this technology work best for their companies. If these users find deficiencies or requirements beyond what is currently available, SHARE provides numerous ways for them to express their needs. It could be something as simple as dialogue in a session to ask about future support or technical shortfalls, or discussions in the halls between customers and vendors. In addition, round-table discussions, submitting SHARE requirements and formal meetings with developers and their management all make it possible for SHARE attendees to influence the direction of Linux on the mainframe.
SHARE is a great place to meet and talk to the subject matter experts, developers and product decision-makers from mainframe technology vendors. Having these people on-site during the conference is critical to the full SHARE experience. Networking opportunities not only solve short-term needs but oftentimes build relationships that drive future offerings from the vendors. I have heard of and experienced many situations where an attendee or group of attendees expressed frustration over a problem, or a strong desire for an enhancement that would benefit everyone. The vendors will often take an interest, listen to the attendees and act on it. Countless new features and enhancements in the mainframe environment have been a direct result of SHARE’s attendees.
A direct SHARE influence on the future offerings in Linux on the mainframe is through a Linux and VM Technical Steering Committee. This is a select group of SHARE members and vendors that collaborate throughout the year on technology decisions that affect the future offerings in the Linux and z/VM environments. Formal meetings are held at every conference to discuss, review and address technical decisions, offer ideas for future enhancements and collaborate on ideas to improve the overall environment.
SHARE is also hosting an ExecuForum in Atlanta at the March 11-16, 2012 conference. This forum is designed for enterprise IT executives to discuss and address issues that keep them up at night. SHARE is providing access to vendor executives and subject matter experts that will allow opportunity for the IT executives to influence direction based on their needs. These open, engaging and inclusive discussions will certainly be important for the product decision makers to hear and address.