
Five new IT metrics fill the gaps in data center tracking

Traditional data center metrics don't follow the "no man left behind" rule. Gaps in IT metrics let some costs fall by the wayside.

Most CIOs who run large enterprise IT shops have some traditional data-centric metrics in place, which have historically served them well. However, gaps are appearing in IT metrics as systems evolve to handle mobility and big data.

These five new IT metrics are worth calling attention to and monitoring because they capture both IT's service to the business and its cost-effectiveness. Some sound counterintuitive, but they prove appropriate for analyzing important aspects of data center performance, especially with new initiatives.

1. Number of database instances per administrator

I originally identified this IT metric 15 years ago in total cost of ownership studies on the differences between databases used in small and medium-sized businesses. It is one IT metric many data center managers have at their fingertips: so many instances per administrator for the Oracle database, so many for the Hadoop data management system.

In recent years, John Shedletsky, vice president of System z Competitive Technology at IBM, analyzed data center spending and showed that database costs were steadily climbing as a proportion of overall expenses, with database administration escalating toward more than 50% of the overall database budget. In other words, a typical large enterprise may see up to 20% of its costs per application sunk into database administration.

The key controllable variable is the vendor database, though in many cases switching between providers is impossible. Moving from Oracle to IBM, for example, is still sometimes iffy -- existing workloads are likely to demand the same vendor database for the next few years. However, big data processing and other initiatives offer the opportunity to choose a database that is just as effective and scalable as the corporate standard, yet supports far more instances per administrator.

Many data center managers and CIOs are surprised at how many opportunities for future improvement this metric identifies. With costs constrained, IT organizations can no longer say, "Nobody ever got fired for choosing database X." The number of database instances per administrator is a proven method for ferreting out ways to cut these key costs, so pay attention to it.
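
The metric itself is easy to compute once you have an inventory of instances and administrator headcount per database product. The minimal sketch below uses made-up instance counts, headcounts and an assumed fully loaded administrator cost rather than real figures:

# Hypothetical comparison of database instances per administrator across products.
# All numbers below are illustrative assumptions, not survey data.
DBA_COST = 150_000  # assumed fully loaded annual cost per administrator

inventory = {
    "Corporate standard DBMS": {"instances": 120, "dbas": 10},
    "Candidate big data DBMS": {"instances": 150, "dbas": 5},
}

for product, figures in inventory.items():
    per_admin = figures["instances"] / figures["dbas"]
    admin_cost_per_instance = DBA_COST * figures["dbas"] / figures["instances"]
    print(f"{product}: {per_admin:.0f} instances per administrator, "
          f"${admin_cost_per_instance:,.0f} in administration cost per instance per year")

Comparing the administration cost per instance across products is what turns the raw ratio into a budget argument.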

2. Number of significant changes in the middle of a development or bug-fix project

The advent of DevOps makes it clear that the data center is an important part of software development. DevOps and agile IT need data center metrics that fit this new approach.

A fair number of agile experts advise against adopting new IT metrics that constrain development agility and encourage the wrong behavior. One such example is development-cost metrics, which assume the design spec won't change.

An effective metric for measuring agility in offline agile development and in coordinated online bug fixes is the number of significant changes made in the middle of each project. I used this metric in a survey and found that, over time, effective agile development increases the number of significant changes per project.

Other influences, such as the size or complexity of the project or deviations in the way "significant change" is measured, average out over time. Compared to pre-agile IT processes, you should see a major increase in the number of midstream changes.

Too many "agile IT metrics" treat change as a negative; this one treats change as a positive. The metric is not fine-grained enough to capture problems with a particular project, but it shows how well the process is working on average, over several projects per year. IT shops that apply it astutely will improve responsiveness within agile operations.
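
Tracking the metric requires nothing more than a count of significant mid-project changes in each project log. A minimal sketch, assuming a simple log format and illustrative numbers rather than real survey data:

# Hypothetical sketch: average significant mid-project changes per project, by year.
from statistics import mean

# Each record: (year, significant_changes) -- illustrative data only.
project_log = [
    (2012, 1), (2012, 2), (2012, 1),
    (2013, 3), (2013, 4), (2013, 2),
    (2014, 5), (2014, 4), (2014, 6),
]

changes_by_year = {}
for year, changes in project_log:
    changes_by_year.setdefault(year, []).append(changes)

for year in sorted(changes_by_year):
    avg = mean(changes_by_year[year])
    print(f"{year}: {avg:.1f} significant changes per project")

A rising average across years is the signal that agile processes are absorbing midstream change rather than resisting it.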

3. Performance slowdowns that don't involve outages

The IT organization focuses so much on preventing company-threatening outages that it sometimes fails to notice ongoing slow performance or progressive degradation. These performance slowdowns are almost as critical as outages. The performance slowdown metric tells you how big the problem is, but it's your job to fix it.

Slow performance is a harbinger of especially hard-to-fix outages. Performance degradation that flies under IT's radar typically involves many types and layers of software, making the root cause or causes much harder to identify than an unplugged server or network mix-up.

Performance slowdowns are tantamount to outages when it comes to customer satisfaction. As more and more corporations depend on customers interacting with software, users are less likely to put up with performance issues.

Performance slowdowns often signal that cost constraints are beginning to cut into the bone of rapid scalability, which is vital to the success of big data projects. Outsourcing or cloud hosting can delay such an eventuality, but the costs of transitioning outside the data center can add up as well.
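
Catching these silent slowdowns is straightforward if monitoring already records response times and outage flags per service. The sketch below assumes that data is available; the baseline, threshold and sample values are all illustrative:

# Hypothetical sketch: count slowdown days that never registered as outages.
# A "slowdown" here is a day whose median response time exceeds the baseline
# by more than 50% even though the service stayed up all day.
from statistics import median

BASELINE_MS = 200        # assumed normal median response time
SLOWDOWN_FACTOR = 1.5    # threshold for calling a day "slow"

# Illustrative daily records: (response_time_samples_ms, had_outage)
days = [
    ([180, 210, 190, 220], False),
    ([350, 400, 320, 380], False),   # slow but never down -- the gap this metric closes
    ([500, 900, 700, 650], True),    # outage day, already tracked by existing metrics
]

silent_slowdowns = sum(
    1 for samples, had_outage in days
    if not had_outage and median(samples) > BASELINE_MS * SLOWDOWN_FACTOR
)
print("Slowdown days without an outage:", silent_slowdowns)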

4. Percent of information lost at each stage of the data handling process

Corporate partners don't think IT satisfactorily supplies the information they need, according to studies from Infostructure Associates, MIT Sloan School of Management and elsewhere.

Data center information systems, which grow by ad hoc increments, are increasingly unable to deliver the data required to feed business initiatives, such as determining customer buying patterns and other forms of big data analysis.

The answer lies with a metric that helps determine how IT is falling short. Raw data converts to useful information in a step-by-step process, and at each step some of that usefulness is lost.

The chief problem at the data input stage is incorrect entry, which throws away about 20% of the potentially useful information received, according to my survey. This loss is often abetted by IT's failure to examine bad input for flaws in the entry step.

Data aggregation links newly input information to other related data already in the system. Here, inconsistent data is not checked against existing records to reveal discrepancies that should be resolved, leaving potentially another 15% of the data wrong or otherwise unusable.

The third step is data combination, where input is made available beyond the system that received it -- an online transaction processing system, for example. One key function of a data warehouse is this kind of aggregation, but over time less of the needed information is put into, or pulled out of, the warehouse. Not all of it surfaces: about 20% of the data may sit happily in the data center or in the cloud yet remain unavailable for actual analysis by the business.

At data delivery, the most frequent complaint is timeliness. It is an art to determine what decision-makers must see quickly and what information need only be presented weekly or monthly. Of the information-loss problems, this is the most visible. Estimates of information lost at this stage range from 15% to 25%.

The final step is data analysis, and here, again, the tools that let a decision-maker focus on information in an overall data presentation are deficient. Another 15% of needed information can get lost at this stage.

IT organizations report that about two-thirds of all potentially useful information falls by the wayside during data handling -- a moderate amount at each stage. A metric that samples data at each step illuminates where the losses occurred and where IT can implement easy fixes. This monitoring also has a huge impact on corporate perceptions of IT and enterprise effectiveness.
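
To see how per-stage losses of this size add up to roughly two-thirds, multiply the fractions that survive each stage. The quick check below uses the percentages cited above, with 20% assumed as the midpoint of the 15% to 25% delivery range:

# Compounding check on the per-stage loss rates discussed above.
# The 0.20 for delivery is an assumed midpoint of the 15%-25% range.
stage_loss = {
    "input": 0.20,
    "aggregation": 0.15,
    "combination": 0.20,
    "delivery": 0.20,
    "analysis": 0.15,
}

retained = 1.0
for stage, loss in stage_loss.items():
    retained *= 1 - loss

print(f"Information surviving all stages: {retained:.0%}")    # about 37%
print(f"Information lost along the way: {1 - retained:.0%}")  # about 63%, roughly two-thirds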

5. Customer satisfaction

The advent of pervasive computing, and its effect on all customer and internal user interactions with the enterprise, means that IT-run software accounts for a growing proportion of customer and user satisfaction.

Even today, customer satisfaction surveys and user polls are typically inflexible and broad-brush -- they miss key details about what users found unacceptable or extraordinary. However, even with a blunt instrument, it is possible to gain hints about what is going well. Moreover, this metric reminds IT and corporate stakeholders that what matters most is the end user's perception, not short-term corporate or IT opinions.

Join the conversation

What IT metric should all organizations actively track?

Personally, I think it comes down to performance tracking, especially when it comes to time-sensitive projects. If it takes a while to fix an issue, the customer won't be satisfied and might migrate to another organization.

Yes -- time to complete projects, number of projects successfully completed: something that tracks whether things actually got done is important. We know users are taking more and more control of their own IT, so keeping the internal 'customer' happy is vital.
