
Expert offers roadmap for the ITIL data center

ORLANDO -- Almost 20 years ago, the U.K. government approached IBM for its data center best practices, and the rest is ITIL history. In this Q&A, Alasdair Meldrum offers a roadmap for the ITIL data center.

Why has ITIL been so widely adopted in the U.K., whereas a lot of U.S. companies are still struggling?


There needs to be an original sponsor, and in this case it's been the U.K. government. It rolled out these practices to its own government departments and encouraged other companies to take them up free of charge. People could go to the Web site, see these good practices, and they would want to implement them because they're all common sense. They're all based on good practices that came out of the IBM data centers, and nobody could argue with them. There wasn't anything else competing with this set of best practices. I'm not aware of any other sponsoring body, like the U.S., French or German government, trying to push that library of standards, and that may be why it hasn't spread beyond the U.K.

What are the common mistakes with an ITIL implementation?


When my people were working with customers back in the 1980s to roll this out, the big mistake was trying to do all eight practices at once. What was very important was to pick one, two or three -- because there were little groupings that were self-supporting. Typically you would go in and do problem management, change management and incident management, because all of those things were interconnected. If you were successful in doing those, got them established and people could see results, then it was easier for the customer to apply those good practices to automating operations, trying to get better availability on the network, or looking at security.

Has the shift from mainframe centralized computing to distributed computing influenced change management processes?


When you brought down the mainframe, you lost the company for a period of time. Therefore, backup sites were created with hot standby or warm standby and you either invested in it yourself or you got a contract with a provider to get the level of availability you wanted. A lot of what we did with the early ITIL stuff was automating those operations.

When you got relatively stable hardware systems with one app to one server, things changed entirely. It was much cheaper to have an extra server sitting there. But now there is so much more complexity and interconnectivity -- organizations want to connect the applications on these independent servers. If any one of those servers is out, we can't do our data mining or end-of-month processing, because one element of it has fallen over. I think it's the interconnectivity of the applications that is forcing people to go back to the original disciplines that were relevant in the mainframe days. We've got a very complex world now, and for business reasons we need to look across multiple servers and get an enterprise view of that data. If we have a problem with one of those servers, how do we recover from it? It's starting to get very serious, because companies have encouraged individual application stacks to be built on individual servers, and now the business people want the ability to take the information from all of the different stacks and pool it.

A lot of systems management vendors are touting configuration management database tools as a baseline for ITIL implementation. What is your take on the concept of the CMDB?


It sounds like a very sensible approach to me. But there are questions I would ask. How long does it take to complete the process? How frequently do you need to repeat the process to keep in step with the people purchasing equipment? The other question I always ask is the "so what" question: so what if I've got all this information in one database? Do I have any interrogation software or capability once I'm armed with that collection of facts?

I think purchasing would be interested in that sort of information, so it could be business-relevant outside the IT center. If I were working in purchasing and there was a database that had 80% of my equipment on it, I'd like to use that in my negotiations with my suppliers. If I have good "what if" tools, I could mandate a single supplier over a two-year period and get 30% off, for example.

The key would be how easy it was to gather the information and how confident you could be that you had captured 70%, 80% or 90% of the data. There is always a percentage that escapes. My concern is that while the first chunk of information would be relatively easy to get, there is a law of diminishing returns. Good, you've got 70%, but what is it going to cost you to get the next 10%, and the next 10% after that? I think it will get desperately expensive to get into the 90s, but it's better to be asking these questions than not.
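To make the "interrogation" and "what if" ideas concrete, here is a minimal sketch in Python, assuming a hypothetical CMDB extract of configuration-item records with supplier and annual-spend fields. The record layout, field names, the 30% discount and the helper functions are illustrative assumptions, not part of ITIL or of any particular vendor's CMDB tool.

    # Minimal sketch: interrogating a hypothetical CMDB extract for a purchasing "what if".
    # All field names, figures and the discount rate are illustrative assumptions.
    from collections import defaultdict

    # Hypothetical CMDB extract: one record per configuration item.
    cmdb_items = [
        {"ci": "srv-001", "supplier": "VendorA", "annual_spend": 12000},
        {"ci": "srv-002", "supplier": "VendorB", "annual_spend": 9500},
        {"ci": "net-001", "supplier": "VendorA", "annual_spend": 4000},
        {"ci": "sto-001", "supplier": "VendorC", "annual_spend": 22000},
    ]

    def spend_by_supplier(items):
        """Group annual spend by supplier -- the basic interrogation step."""
        totals = defaultdict(float)
        for item in items:
            totals[item["supplier"]] += item["annual_spend"]
        return dict(totals)

    def single_supplier_saving(items, discount=0.30, years=2):
        """What-if: consolidate all spend on one supplier for an assumed discount."""
        total = sum(item["annual_spend"] for item in items)
        return total * years * discount

    if __name__ == "__main__":
        print("Spend by supplier:", spend_by_supplier(cmdb_items))
        print("Estimated two-year saving at 30% off:", single_supplier_saving(cmdb_items))

As Meldrum points out, the value of any such query still hinges on what fraction of the estate the database actually captures.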

Alasdair Meldrum, European Program Director for the Uptime Institute and an independent consultant, literally wrote the book on the IT Infrastructure Library (ITIL). Meldrum was manager of the U.K.-based team at IBM Global Services that wrote the ITIL framework for data center best practices.

Let us know what you think about the article; e-mail: Matt Stansberry, Site Editor.
