
BMC: Get granular with mainframe backups

There doesn't have to be a tornado or flood for a disaster recovery plan to kick in. With most application failures tracking back to human error, wouldn't it be easier if you could recover just the databases you need, rather than the entire DB2 environment? BMC Software's Rick Weaver, product manager for the company's mainframe database recovery business, talks about that, as well as graying mainframers, creating easier-to-use interfaces, and offloading work to the zIIP specialty engine.

What is going to be the main issue facing the mainframe this year, and what is BMC doing about it?
If you look around at this conference, I figure at least 90 percent of the people here have been getting mail from AARP. So the big challenge going forward is going to be managing the mainframe environment with the next generation of people, who don't have the breadth of skills of those who have been doing this for 30 years. BMC feels a big part of the mainframe's future is in automating the processes that support the system.

BMC came out with a Web-based console last year for mainframe backups. Is that the sort of thing you'll be doing more of?
We have several products on the mainframe that run under a GUI, and we're not looking for that interface to be the interface for all products. We'll use it where it makes sense, and we'll build other interfaces where they make sense. The interface you're referring to is specific to IMS data management products, and from that single interface, a user can look at things that affect an IMS database, whether performance, reorganization needs, or backup and recovery needs. The interface helps identify the problems and generate the solution, so that's the kind of area, advisor and automation, that we want to build out.

CA just came out with some tools that help users offload work onto the zIIP engine. What is BMC doing with zIIP?
We've been doing some research and experiments on it, and we've seen some bits of gains here and there. It's only valuable for certain kinds of workloads. There are a lot of constraints about what you can push to the zIIP, and there is still some research to be done about what the benefit is.

We're not going to invest in pushing stuff down to the zIIP engine unless we see a tangible benefit. I know customers are excited about the idea of having zIIPs, because they can latch onto several zIIPs and cap their overall general-processing MIPS, and that keeps their cost of ownership flat. But on the other hand, if it costs more work to push stuff down to the zIIPs and bring back the results, the net benefit might not be there.

Outside of the zIIP, what are you doing to help customers keep their software licensing costs in check?
That's always a big concern. Like CA and IBM, we try to pull together the right products for the customer, and there are some bundling and packaging things we can do to relieve the price issue or add value that makes the price worth the customer's while. There are also some things we're working on that we're not ready to divulge: we're building a program right now that will make it easier for customers to do business with us and get the right product mix at the right price point.

What is BMC working on in the way of mainframe disaster backup and recovery?
There have been studies done by industry analysts that say that most of the time when a business application has a failure, it's not a site-wide disaster. It's a localized event: a hardware failure, an operator error or a programming error that affects the availability of a local database. There are different techniques for doing recovery that dramatically reduce recovery time in the event of a local recovery.

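That point, that most outages are local rather than site-wide, is what makes granular recovery attractive. As a rough sketch of the idea (the application-to-object mapping and every name here are invented for illustration, not BMC's product), a recovery tool would restore only the objects the failed application touches:

```python
# Hedged sketch of granular recovery scope: given which application
# failed, recover only its table spaces instead of the whole DB2
# subsystem. The mapping and names below are hypothetical.

SUBSYSTEM = {
    "payroll":   ["PAYDB.EMPTS", "PAYDB.DEPTTS"],
    "billing":   ["BILLDB.INVTS", "BILLDB.CUSTTS"],
    "inventory": ["INVDB.STOCKTS"],
}

def recovery_scope(failed_apps):
    """Return only the table spaces the failed applications touch."""
    scope = []
    for app in failed_apps:
        scope.extend(SUBSYSTEM[app])
    return scope

# An operator error hit payroll; nothing else needs to be restored.
print(recovery_scope(["payroll"]))  # ['PAYDB.EMPTS', 'PAYDB.DEPTTS']
```

The other three applications in the subsystem stay available while the two payroll table spaces are recovered.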
The issue with some of the storage replication technologies is that they tend to take a really broad view of the environment, so when they do things, they tend to do it for an entire DB2 subsystem. Most DB2 subsystems are shared by several applications, so when you have a recovery event, it's typically more granular than that. You're not going to do a restore operation on the whole subsystem; you're going to do it on individual databases, or even smaller units of recovery than that. So that's where you need to be pretty flexible and allow your user to come in and say, "This is the thing that I want to get done," and then go figure out how to do that, exploiting your best technology.

Why is it good to be able to back up a couple of databases, for example, rather than the whole DB2 subsystem?
Different applications have different availability requirements and different recovery requirements, and so you adjust your backup strategy accordingly. Where we'd like to get from an automation standpoint is for a customer to come in and define a service-level objective or agreement number with us and say, "For this application, I want my recovery time to be two hours for a local recovery event," and we go and figure out what the backup strategy should be to support that.

You mean what databases to back up?
Yes, and how frequently. It results in a backup strategy that reduces recovery time for a local recovery from over six hours using the native recovery utility to under one hour using the BMC process.
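The two-hour objective above can be back-solved into a backup frequency. A minimal sketch, assuming a simple model in which local recovery time is the image-copy restore time plus the time to re-apply the log written since the last copy (all rates and the formula are illustrative assumptions, not BMC's sizing method):

```python
# Hedged sketch: derive a maximum image-copy interval from a
# recovery-time objective (RTO). All rates are illustrative.

def max_copy_interval_hours(rto_hours: float,
                            restore_hours: float,
                            log_gb_per_hour: float,
                            log_apply_gb_per_hour: float) -> float:
    """Longest gap between image copies that still meets the RTO.

    Modeled local recovery time:
        restore time of the last image copy
      + time to re-apply the log written since that copy.
    """
    apply_budget = rto_hours - restore_hours  # hours left for log apply
    if apply_budget <= 0:
        raise ValueError("restore alone already exceeds the RTO")
    # Log written over the interval must be re-applied within the budget:
    #   interval * log_gb_per_hour / log_apply_gb_per_hour <= apply_budget
    return apply_budget * log_apply_gb_per_hour / log_gb_per_hour

# Example: 2-hour RTO, 30-minute restore, 5 GB/h of log written,
# re-applied at 20 GB/h during recovery.
interval = max_copy_interval_hours(2.0, 0.5, 5.0, 20.0)
print(f"take an image copy at least every {interval:.1f} hours")  # 6.0 hours
```

A busier database (more log per hour) or a tighter objective shrinks the interval, which is exactly the "how frequently" decision the automation would make per application.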

Let us know what you think about the story; email Mark Fontecchio, News Writer.
