We’ve already explored the technical similarities and differences between traditional mainframe and x86 virtualization architectures in a two-part article on SearchDataCenter.com. This series of podcasts focuses on these technologies and how they fare in the ongoing evolution of cloud computing, as well as predictions for the future of the market.
In this episode, we’ll thrash out whether traditional mainframe architectures have a place as cloud computing technology evolves, or if they’ll largely be left behind in favor of x86 scale-out systems.
Beth Pariseau, senior news writer, discusses these topics with two experts: Wayne Kernochan, president of Infostructure Associates LLC, an affiliate of Valley View Ventures Inc., and David Floyer, chief technology officer for Wikibon.
You can also listen to a podcast of this Q&A.
Pariseau: Which model do you think will dominate the enterprise private and hybrid clouds over the next decade: a mainframe core surrounded by midrange and distributed systems, as IBM envisions, or scale-out x86 clusters? Why or why not?
Floyer: The cost of conversion off of the mainframe for systems that are working well on it just does not make sense. The real question is, which is going to be the dominant platform? Where’s the innovation going to come from?
x86 and scale-out [computing] are the dominant chip architectures. Looking forward, other architectures will be challenged to keep up, and all of the innovation is going into the architectures and ecosystem around x86. So systems designers are going to choose x86 for any new innovation, from both a cost and a power point of view.
So, what types of innovation are going to be required [for x86]? Well, there are a number of points in the architecture that, in my opinion, are going to change radically over the next few years. At the moment, you have systems, processors, memory, disk and, to some extent, solid-state disks. [For example] you need disk because it's persistent, and you need it to secure the data, but the time [it takes to access a] disk is just colossal.
Another architecture that I think is going to take over is putting NAND storage very close to the processor and the main storage. This is akin to, back in the mainframe days, having expanded storage on the mainframe. [This type of architecture] is going to be a revolution. It’s going to pull data much closer to the processor, [and] racks full of processors and NAND storage are going to make for incredibly powerful high-performance computing and mainframes [that] the traditional mainframe will not be able to compete with, either on power or performance.
The key point here is that [innovation is] not going to happen on IBM mainframes or any other mainframes. There's a huge investment going into these types of technologies, and it's all going into x86. VMware hopes to be a part of that, but the innovation is not in the mainframe area.
Kernochan: I am, of course, going to disagree. To me, the fundamental question for the storage you describe is how soon it will make an impact. I don't think it's really understood just how far [storage] has to go in terms of scalability before it really impacts the core of the [mainframe] market. The fact of the matter is, information storage has been going up 50% to 60% [forever]. It just passed a zettabyte [1 billion terabytes], and should approach somewhere around 35 zettabytes by 2020. This is a ten-year time frame we're talking about: you're basically talking about petabyte-sized storage arrays in the next five years in order to really make a dent in scale-up architectures.
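As a rough sanity check on the growth figures Kernochan cites, compound annual growth from about one zettabyte can be worked out directly. This is a sketch using assumed round numbers (a 1 ZB starting point and the quoted rates), not data from the discussion itself:

```python
# Sanity check on compound storage growth, assuming a ~1 ZB starting point.

def projected_storage(start_zb: float, annual_growth: float, years: int) -> float:
    """Project total storage in zettabytes under compound annual growth."""
    return start_zb * (1 + annual_growth) ** years

# At 50% annual growth, 1 ZB compounds to roughly 57.7 ZB over ten years.
print(round(projected_storage(1.0, 0.50, 10), 1))  # → 57.7

# Conversely, the growth rate implied by reaching ~35 ZB from 1 ZB in ten years:
implied_rate = 35 ** (1 / 10) - 1
print(round(implied_rate * 100, 1))  # → 42.7 (percent per year)
```

So the ~35 ZB figure implies a compound rate of roughly 43% a year, in the same ballpark as the 50% to 60% growth he describes.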
Then what you’re faced with is whether you’re going to have an effectively lumpy cloud. The mainframe has been, in my mind, increasingly dominating the future capabilities of scale-up, simply because traditional vendors are focusing more and more on existing x86 distributed systems. As a result, the virtualization stuff that you’re going to be faced with will be lumpy in the sense that you’re not going to put as many virtual machines on a particular [physical] machine. So now we’re back to scale-out vs. scale-up.
Pariseau: Can you tell me what you mean by “lumpy cloud?” I think we’re defining a new term here.
Kernochan: It just means that it’s not all four-CPU PCs. Instead, you have lumps that are basically 100 processors and up, whether it’s the mainframe or something else, and 100-virtual-machine-and-up machines. I just don’t think that x86 is there yet.
Floyer: I agree with you that [the cloud] will be lumpy. It’s just that it’ll be x86-based. If you look at where the innovation is going – social media, the Web and service organizations – all of those are building on x86 architectures. Look at Facebook – none of that is going on to any sort of mainframe, and in my view, never will. We’ll find that the new mainframes will be x86. I actually agree that [lumps] will probably be not much over 100 processors, and they’ll be the biggest and best of x86 for the transactional processing on these systems.
On the question of storage, NAND will clearly be at the core of that infrastructure, and will link to huge amounts of storage with its own processors, because that’s the only way that Hadoop and other technologies will be able to get at that data. Those large data farms will be distributed across the network, and they will be a very important tier.
So it will be lumpy in the way that you describe, because it will have high-performance processors and the mainframe equivalent in the middle, and then you'll have these much more distributed, data-rich environments spread out as a tier, if you like, underneath that. I strongly disagree that mainframes will have any part to play in large-scale systems after the next five years; x86 mainframe equivalents will be dominant.
There are many ways of skinning a cat as far as virtualization is concerned, and VMware's got a great start. But [VMware] is not necessarily going to be the dominant player unless it gets its act together. My main premise, though, is that x86 is still going to dominate over the next 10 years.
Kernochan: I think in some ways, David and I are saying the same thing. Aside from the question of just how far out you can predict, the real differences between our views lie in the way that virtualization is going to handle the data. What especially concerns me about cloud architecture moving forward is that despite the [cloud allowing you to] have your application code in one place and your data in another, you have to make darn sure that you jam your data [location] up against the location [where that data is processed].
If you look at it from the point of view of how analytics applications are going to evolve to adapt to the cloud and all the social media, [the] lumpiness is going to favor not changing around the systems that already have data on them. It would be better, assuming IBM gets its act together with regard to Windows, to have a scale-up system, and preferably one that’s already set up for the existing business’ critical data. And then you start incorporating the social media stuff. You fit that to the existing scale-up, often mainframe, architecture.
Whether or not the mainframe will be superseded 10 years from now, I can't predict, but I firmly believe that in a case like analytics, it's not going to make business sense to abandon the foundation of existing data warehousing.
This was first published in April 2011