As IT pros try private cloud computing on for size, some organizations report stiff resistance to IT-as-a-Service concepts like self-service provisioning. Others, however, are forging ahead.
With self-service provisioning, business users request and receive new compute resources without having to go through IT. They log onto a Web-based portal, where they request new servers from a shared pool of available resources, and configure them with the appropriate software, storage and network connectivity. With self-service provisioning, IT is relieved of performing manual, error-prone tasks, and business owners receive their new resources faster.
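The workflow described above can be sketched in a few lines of code. This is an illustrative toy only, assuming nothing about any vendor's actual API: all class and method names here are hypothetical, and a real portal would add authentication, approval policies and network configuration.

```python
# Toy model of self-service provisioning: business users draw servers
# from a shared pool without filing an IT ticket, and IT can reclaim
# idle capacity later. All names are hypothetical, not any vendor's API.
from dataclasses import dataclass, field

@dataclass
class ServerRequest:
    owner: str        # business user making the request
    cpus: int         # requested vCPUs
    software: list    # software to configure on the new server

@dataclass
class ResourcePool:
    free_cpus: int                              # shared pool of capacity
    allocations: dict = field(default_factory=dict)

    def provision(self, req: ServerRequest) -> bool:
        """Grant the request immediately if the pool has capacity."""
        if req.cpus > self.free_cpus:
            return False                        # pool exhausted; denied
        self.free_cpus -= req.cpus
        self.allocations.setdefault(req.owner, []).append(req)
        return True

    def reclaim(self, owner: str) -> None:
        """Return an owner's unused servers to the shared pool."""
        for req in self.allocations.pop(owner, []):
            self.free_cpus += req.cpus

pool = ResourcePool(free_cpus=16)
ok = pool.provision(ServerRequest("alice", cpus=4, software=["nginx"]))
```

The `reclaim` step is the same resource-reclamation idea the article returns to below: because every allocation is recorded in one place, IT can see who is using what and pull unused capacity back into the pool.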
Neil Smith, virtualization architect at a U.K. hedge fund, is glad he implemented self-service. About two years ago, the firm implemented virtual desktop infrastructure (VDI) for its team of 300 developers, and equipped them with self-service provisioning capabilities via DynamicOps’ Cloud Automation Manager.
“Even though they work from standard images, building out their desktops required a significant effort,” Smith recalled. “Self-service was one of the bigger boxes we wanted to tick.”
Since then, the firm has slowly put the remainder of its 2,000 virtual servers under the control of DynamicOps, citing faster provisioning times, of course, but also better visibility, reporting and resource reclamation.
“Once you start to get your existing estate into the [DynamicOps] toolset, you get a better sense of who’s using what,” Smith said. Sprawl, for example, is a classic virtualization “boo boo,” Smith said. “Now we can see if a server is being used or not, and can pull back those resources and use them somewhere else.”
And with the release of DynamicOps 4.0, which can provision physical as well as virtual machines, the firm views its entire IT estate from a single pane of glass. “There’s one place to go for physical or virtual machines,” Smith said. “DynamicOps will be the de facto place for people to go to build out new data centers.”
For others, self-service isn’t so much about visibility and control as about scaling out the data center with maximum efficiency while maintaining high availability.
Fetch Technologies writes specialized enterprise search and data management software. The software consists of several tiers, including Web servers, application servers, “extraction” servers and a database, said Rick Parker, Fetch director of IT. To minimize the time it takes Fetch employees to configure and provision the application, the company implemented cloud management software from Platform Computing, which in the past year has repurposed its grid and cluster management software for private clouds.
At the same time, Fetch is beginning to offer its application under a hosted Software as a Service (SaaS) model. To keep down data center buildout costs and maximize uptime, Fetch plans to leverage several colocation facilities near its El Segundo, Calif., headquarters, and “stripe” the virtual machines that make up a customer’s application instance across them.
“I like to think of it as RAID for the data center,” Parker said -- a “Redundant Array of Independent Data Centers,” as it were. By selecting colocations that are on separate power and network grids but which are all interconnected, Fetch can minimize downtime to any one data center, he said, and can grow into new colos as demand increases.
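Parker's "RAID for the data center" idea can be illustrated with a simple round-robin placement sketch. This is an assumption-laden toy, not Fetch's actual tooling: the site and VM names are invented, and it shows only the striping pattern, so that losing any one colo takes out only a fraction of the VMs backing a customer's instance.

```python
# Round-robin "striping" of an application's VMs across colocation
# sites: a Redundant Array of Independent Data Centers, as Parker
# puts it. Site and VM names below are hypothetical examples.
from itertools import cycle

def stripe_vms(vms, sites):
    """Assign each VM to a site in round-robin order."""
    placement = {site: [] for site in sites}
    for vm, site in zip(vms, cycle(sites)):
        placement[site].append(vm)
    return placement

sites = ["colo-a", "colo-b", "colo-c"]
vms = ["web-1", "web-2", "app-1", "app-2", "db-1", "extract-1"]
placement = stripe_vms(vms, sites)
# Each of the three sites ends up holding two of the six VMs.
```

A real deployment would also have to keep shared state (the database tier, for instance) replicated across sites, which is where the interconnected, separately powered colos Parker describes come in.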
“My goal is to be able to stamp out a new data center very quickly,” Parker said. “I want to get out of the server management business and into the data center management business.”
Further out, the goal is to push the platform’s self-service component to customers, so that they can increase their Fetch resources themselves: on-site, at the Fetch colo, or even on a public cloud like Amazon. “This might be the first hybrid RAID cloud ever built,” Parker said.
Self-service or bust
If IT managers think all this talk of private cloud and self-service sounds a little far-out for the average enterprise, they should think again, said Bernd Harzog, analyst at The Virtualization Practice, because cloud computing represents “an existential threat to your existence.”
“The first thing you do is go ask your business owners if they use any kind of public cloud service,” Harzog said. The great likelihood is that they do, and that can only mean one thing: “You’re already perceived as slow and unresponsive compared to the Amazons of the world. You ought to do something about that,” he said.
Fetch’s Parker agrees. “IT is competing against Amazon. If we can’t do IT cheaper than Amazon, there won’t be IT anymore.”
On the bright side, IT professionals have a lot of things going for them. For one, Parker pointed out, “we don’t have to make a profit.”
For another, IT professionals have history on their side, Harzog said. “IT has proven that it can secure the data, and it’s accustomed to doing performance management -- cloud providers don’t have a clue about that.” Last but not least, internally hosted applications aren’t accessed over the public Internet -- “a place no one takes responsibility for.” If something goes wrong, “who are you going to call to yell at?”
The question, therefore, becomes not whether IT pros should explore self-service, but how quickly and how deeply. Making matters easier, Harzog said, there are plenty of providers to choose from -- the aforementioned Platform Computing and DynamicOps, but also other vendors like NewScale, ManageIQ and Embotics, as well as bigger virtualization players like VMware with vCloud Director and Citrix with new features in XenServer 5.6.
“If all you want to do is get basic self-service up and running, it’s really not that tough,” Harzog said.