Consolidating Linux workloads onto the mainframe has become increasingly attractive for many shops, with as few as 25 Linux instances needed to make it worthwhile. But before jumping in, IT managers should weigh whether their Linux applications are a good fit for the mainframe.
When it comes to Linux on the mainframe, Andrew Hillier, chief technology officer and co-founder of server consolidation analysis software company Cirba, told SearchDataCenter.com that it's crucial to examine the kind of workload you're considering for consolidation and the kinds of workloads the target platform handles best.
Can you explain what kinds of workloads work best on a mainframe?
Andrew Hillier: Batch workloads, things that require a lot of data synchronization. If you have an application of a certain flavor, it may run better or worse than you might think on a mainframe depending on what it's doing. There's a rigorous theory underneath this in terms of synchronization activity versus data activity. It's like the opposite of a grid, where you can break a problem up into a bunch of little pieces. The mainframe excels at things that need to be done in one big lump.
What's an example of the kind of workload that runs well on the mainframe?
Hillier: A hotel reservation system. Some app where there's a lot of different touch points doing a lot of activity on a common set of data, and you're multi-synchronized. If you spread that across hundreds of small boxes, it would be cumbersome to make it work properly. The opposite would be a Web search where you can divide it across a lot of servers and they have access to a common database, but they're not stepping on each other.
Is workload analysis more important when consolidating on a mainframe than it is on other platforms?
Hillier: It's more important in that the results may not be quite what's expected. If you're doing x86-to-x86 virtualization and the server you bought is twice as fast as the server you're coming off, that gives you a reasonable estimate of how the workloads will look. If you're taking something running on Intel and putting it on a mainframe, it's not obvious at all how it's going to perform just by looking at it. You need some analysis to understand that.
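The x86-to-x86 rule of thumb Hillier describes can be sketched as a simple scaling calculation. This is a hypothetical illustration, not a real sizing tool; the function name and the linear benchmark ratio are assumptions, and, as Hillier notes, this kind of first cut does not hold when the target is a mainframe.

```python
# Hypothetical sketch: first-cut utilization estimate for an x86-to-x86 move,
# assuming performance scales linearly with a benchmark ratio.
def estimate_target_utilization(source_util_pct, source_perf, target_perf):
    """Scale CPU utilization by the source/target performance ratio.

    A rough estimate only; cross-platform moves (e.g. Intel to mainframe)
    need real workload-pattern analysis instead.
    """
    return source_util_pct * (source_perf / target_perf)

# A workload at 60% utilization, moved to a server twice as fast,
# should land at roughly half the utilization.
print(estimate_target_utilization(60.0, 1.0, 2.0))  # -> 30.0
```

The point of the example is that the linear assumption is the whole model here; it is exactly the assumption that breaks down when the target architecture behaves differently from the source.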
How complex is this analysis? Is it something people could do on their own?
Hillier: In the manual world, if you look at average utilization you can get a big picture. But certain workloads are peaky in nature, and some kinds chug along constantly, so you can't just look at average utilization. You have to look at the patterns, which can be difficult to do manually. Then you have to factor in benchmarks, which is extremely difficult to do manually. If you're trying to figure out what the workload looks like now and what it's going to look like, you very quickly go beyond what can be easily done with simple tools or spreadsheets.
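Why average utilization misleads can be shown with a toy comparison. This is a hypothetical sketch with made-up sample data: two workloads share the same average, but the peaky one still has to fit its bursts onto the target.

```python
# Hypothetical sketch: same average utilization, very different peaks.
steady = [40, 42, 38, 41, 39, 40]   # chugs along constantly
peaky = [5, 5, 90, 5, 90, 45]       # bursty; peaks must still fit

def average(samples):
    return sum(samples) / len(samples)

def peak(samples):
    return max(samples)

# Both average 40%, but the peaky workload needs headroom for 90% bursts,
# which is why the utilization pattern, not the average, drives sizing.
print(average(steady), peak(steady))
print(average(peaky), peak(peaky))
```

A spreadsheet comparing averages would rate these two workloads as identical, which is the trap Hillier is pointing at.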
Do you see a difference in consolidation ratios for the z10 compared with the z9 mainframe?
Hillier: The z10 is faster at raw computation. It's not twice as fast, but it's considerably faster, so that opens it up to running more compute-intensive applications. With the z9, the performance gap between mixed or batch workloads and compute-intensive ones is larger. With the z10 that narrows a bit because it's better at raw computation as well, so that can affect the results slightly.