Many clients ask us to "cut storage costs." Whether it's reducing the unit cost of storage or the overall total cost of ownership, the request is usually urgent and the expected payoff immediate.
But what happens after the project is completed? A typical cost-cutting initiative is a short-term success, driven by the need to produce immediate, tangible results. Surprisingly, however, these efforts are often long-term failures. The cutting tends to address only the symptoms, while overlooking--or even exacerbating--systemic issues that are less readily apparent.
Companies that aren't willing or ready to invest in sustainable solutions will find themselves going through the same cost-cutting exercise a year later. Here's how to move from short-term cost cutting to sustainable cost consciousness in your organization.
For most IT managers, the pressure to reduce costs is constant. They're often put on the spot, parrying questions about storage. If the cost-per-gigabyte of disk storage is dropping, why do costs keep going up? Why is data volume growing at an unchecked pace, driving purchase after purchase of more disk? What are all those storage people doing--does your team really need to be that big?
Over the last 15 to 20 years, some application design has come to rely more on what's perceived as cheap and ubiquitous storage. Rather than using disciplined, sophisticated engineering to produce applications that perform better, many application engineers specify high-performance storage arrays and ample disk volume to meet performance requirements. Worse, they often neglect to design applications that purge data in a manner that complies with business and regulatory requirements. As a result, many companies end up keeping everything forever.
Compounding the problem
This problem can be exacerbated when firms overuse data protection and availability products. Replication and point-in-time copies multiply storage demand by a factor of two or three, driving more volumes of storage into the data center.
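The multiplication effect described above is easy to underestimate. As a back-of-the-envelope illustration, here's a minimal sketch of how replicas and point-in-time copies inflate raw capacity demand; the multipliers and the 20% changed-block assumption are illustrative, not vendor figures.

```python
# Hypothetical capacity model: the multipliers below are assumptions
# chosen for illustration, not measurements from any real environment.

def raw_capacity_needed(primary_tb, replicas=1, pit_copies=2, pit_fraction=0.2):
    """Estimate raw storage consumed by one dataset.

    replicas:     full replica copies (each sized 1:1 with primary)
    pit_copies:   point-in-time snapshots retained
    pit_fraction: assumed changed-block fraction captured per snapshot
    """
    replica_tb = primary_tb * replicas
    snapshot_tb = primary_tb * pit_copies * pit_fraction
    return primary_tb + replica_tb + snapshot_tb

# A 10 TB application with one replica and two 20%-delta snapshots:
print(raw_capacity_needed(10))  # 24.0 TB (10 primary + 10 replica + 4 snapshots)
```

Even with conservative assumptions, the business buys well over twice the capacity the application itself needs, which is how "cheap" disk drives total cost upward.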
The cost issue may be amplified by the way in which storage was incorporated into the infrastructure. Companies typically allocated storage on a project-by-project basis. The result was a mishmash of underutilized infrastructure supporting a bloated application base.
One company's immediate response to this problem was to reduce the amount of data going to tape media. It's a simple proposition: Reduce the volume of data being backed up and save money on tape. The company was able to reduce the data volume of its weekly backup cycle by approximately 30%, which effectively trimmed media usage and extended the life of the backup environment by reducing its load and making capacity available for future requirements. But was this the right solution?
Although apparently successful, this effort didn't address the policy and behavior modifications that have lasting benefits. The waste of protecting application system files, temp files, MP3s and the like was eliminated. But sometime in the future, a new application will be installed, new clients will come online and a new crop of non-critical file types will appear. There's a good chance that these files won't be among the file types identified in the original effort, so wasteful backups will occur. It's also possible that the exclusions originally identified will lose their validity as the application, infrastructure or business environment changes.
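The kind of exclusion described above can be sketched as a simple filter. This is a minimal, hypothetical illustration--the extension list is an assumption for the example, and in practice the list would live in a reviewed policy document rather than being hard-coded.

```python
# Minimal sketch of a file-type exclusion filter for a backup job.
# EXCLUDED_EXTENSIONS is an assumed, illustrative policy, not a standard.

import os

EXCLUDED_EXTENSIONS = {".mp3", ".tmp", ".swp", ".iso"}  # assumed policy list

def files_to_back_up(root):
    """Yield paths under root that the exclusion policy allows."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() not in EXCLUDED_EXTENSIONS:
                yield os.path.join(dirpath, name)
```

The point of the article holds here: the code is trivial, but the hard-coded set is exactly the kind of one-time decision that silently goes stale as new file types appear.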
Savings that last

The best way to realize ongoing cost reductions is to shift the focus from technology to process. By thinking in terms of people and process, you'll have a better chance of transforming one-time events into permanent processes that save money. In any organization, there will likely be widely varying views on what cost savings are, how they're measured and to whom the saved costs will be allocated. Spend some time discussing your cost-cutting framework with finance, IT management and business users to gain a consensus. In some cases, sponsorship from the highest levels of the organization will be needed to align the various groups around what's best for your company. This alignment phase will not only help validate your results, but will serve to gain participation from data owners.
You should next look at ways to install processes that make cost-saving opportunities repeatable. For example, have you created a policy for not backing up specific file types? Incorporate that policy into the overall backup process. As noted earlier, shifting business requirements may invalidate the policy, so include a schedule of steps for the storage team to maintain an ongoing dialog with data owners. You should also consider automating some of the newly minted processes to ease the storage team's workload. Perhaps most importantly, use your documented processes to enforce accountability.
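One way to make the exclusion policy a living process rather than a one-time decision is to keep it in an external, dated file and flag it when a review with data owners is overdue. The sketch below assumes a hypothetical JSON policy format and a quarterly review interval--both are illustrative choices, not a prescribed standard.

```python
# Hedged sketch: keep the backup-exclusion policy in a reviewable file.
# The JSON schema and 90-day interval are assumptions for illustration.

import json
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # assumed quarterly review cadence

def load_policy(path):
    """Load the exclusion policy and flag it if a review is overdue."""
    with open(path) as f:
        # expected shape: {"last_reviewed": "2006-01-15", "exclude": [".mp3"]}
        policy = json.load(f)
    last = date.fromisoformat(policy["last_reviewed"])
    policy["review_due"] = (date.today() - last) > REVIEW_INTERVAL
    return policy
```

A backup job that checks `review_due` before each run turns the scheduled dialog with data owners into an enforced step instead of a good intention.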
What about execution? To avoid a technical mishap, whether by error or omission, document all storage processes. By creating repeatable procedures and paying particular attention to change and configuration management, you can mitigate reliance on any specific individual--the most common single point of failure in storage. Reduce errors and lessen the burden on your star performers by capturing knowledge and inserting the right checks and balances to keep operations running smoothly.
Instead of a one-time fix, cost cutting should be viewed as a valuable precursor to long-term cost-reduction initiatives. By investing in the repeatability of processes, connectivity with the business and alignment across different parts of the organization, you can create a cost consciousness that takes your bright ideas and makes them part of the operation. That's a win-win situation: You get more time to focus on storage innovation, and the business gets more for less.
Karl Langdon (firstname.lastname@example.org) is an engagement partner at GlassHouse Technologies.
This was first published in May 2006