Is energy-efficient software the next step to reduce operating costs?

When the limits of hardware innovation have been reached, energy-efficient software could be the way to cut electricity costs.

The next data center innovation must come by way of energy-efficient software.

Hardware manufacturers, data center designers, energy-efficiency-standards groups and people in many other industries are all working to develop products with higher energy efficiency. Those designs may take shape in the data center sooner than we expect -- a day when nearly every facility is cooled entirely with outside air, in virtually any climate. Eliminating mechanical refrigeration leads to major energy savings.

How do we further reduce consumption and improve energy efficiency, when we've gone as far as we can with computing hardware and facility infrastructure? The only answer -- outside of some unforeseen breakthrough in digital electronics -- is slimming down software for better efficiency.

The thinning of software unfortunately won't happen unless software developers recognize its necessity and importance. Further, it's a wake-up call to the IT community, which has to start thinking differently about how it evaluates and buys software if it's going to influence developers to head in the right direction.

Look to the past for software innovation

We all know -- as do the program developers -- that software has become bloated. In the early days of computers, memory was very expensive and processors were slow. Code had to be written tightly to run in any reasonable length of time on mainframes with only 64 KB of memory.

Why so little? Many younger IT people today have never seen magnetic core memory and don't realize why even big mainframes had such limited amounts. Core memory used tiny magnetic donuts, about the diameter of pencil lead, strung on tiny wires in three directions -- one magnet per data bit. So 64 KB of memory required 512,000 tiny magnets, all strung by hand onto wires inside a cube.
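For those who want to check the math: one core per bit, eight bits per byte, and the decimal kilobyte commonly quoted in that era. A quick sketch:

```python
# One magnetic core per bit of storage.
bytes_total = 64 * 1000   # 64 KB, using the decimal kilobyte of the era
bits_per_byte = 8
cores = bytes_total * bits_per_byte
print(f"{cores:,} hand-strung magnetic cores")  # 512,000
```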

I once saw one of the only 1 MB core memories in existence: a 10-foot cube surrounded by a chain-link fence that served 14 big mainframes. Back in 1985, it cost $1 million -- just think how that would translate to the multi-GB memory sticks we carry around in our pockets today.

In the 1980s, both operating systems and applications had to be written with a minimum number of instructions to be viable. In short, code back then was highly efficient, and both programs and machines were benchmarked for speed before purchase.

Programmers used to count the cycles each instruction consumed to make sure a program could run in a realistic period. One infamous government team failed to do this, and its three-year effort produced a program, meant to run daily, that took 28 hours to complete a single run. Out of lessons like that, the industry developed programs that looked ahead and transferred data from tape or disc exactly when it was needed. Such techniques were necessary to get maximum use out of expensive memory resources. That has changed.
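That kind of cycle counting is easy to reconstruct. A minimal sketch -- the clock speed, instruction mix and cycle costs below are all illustrative assumptions, not figures from any real machine:

```python
# Back-of-the-envelope runtime estimate from cycle counts -- the kind of
# arithmetic programmers once did by hand. All numbers are illustrative.

CLOCK_HZ = 1_000_000  # assume a 1 MHz mainframe CPU (hypothetical)

# Assumed cycle cost per instruction type on this hypothetical machine
CYCLES = {"load": 4, "store": 4, "add": 2, "branch": 3}

# Assumed instruction mix for processing one record of a 100,000-record file
mix = {"load": 12, "store": 6, "add": 8, "branch": 4}
records = 100_000

cycles_per_record = sum(CYCLES[op] * count for op, count in mix.items())
total_seconds = cycles_per_record * records / CLOCK_HZ

print(f"{cycles_per_record} cycles per record")
print(f"Estimated run time: {total_seconds:.1f} seconds")
```

Against a daily batch window, an estimate like this is what separates an overnight job from a 28-hour one.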

The development of cheap memory and high processor clock speeds took the handcuffs off software developers. The result has been an explosion of capabilities that weren't even dreamed of 40 years ago, and that's not necessarily a bad thing.

However, it's also created many trillions of lines of code, written by large teams of programmers and then patched again and again to fix flaws or plug security holes. The software bloat is no surprise, but it's as much our fault -- we demand ever more capabilities and ever faster rollouts -- as it is the fault of the program writers who, for years, gave little consideration to complexity and run times. Hardware developers have enabled it all by proving that Moore's Law still holds true.

Regardless of what hardware we're running, every instruction ends up inside the computer as binary numbers that use machine cycles -- and therefore energy -- to process.
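The cycles-to-energy relationship can be put in rough numbers. A back-of-the-envelope sketch, where the power draw, clock rate and workload are all assumed values for illustration, not measurements of any real system:

```python
# Rough energy cost of executing instructions. Power draw, clock rate and
# workload are illustrative assumptions, not measured values for any CPU.

cpu_power_watts = 100      # assumed package power under load
clock_hz = 3_000_000_000   # assumed 3 GHz clock

joules_per_cycle = cpu_power_watts / clock_hz

# Suppose a bloated code path burns an extra billion cycles per request
extra_cycles = 1_000_000_000
extra_joules = extra_cycles * joules_per_cycle

requests_per_day = 10_000_000
kwh_per_day = extra_joules * requests_per_day / 3_600_000  # 1 kWh = 3.6e6 J

print(f"{joules_per_cycle:.2e} J per cycle")
print(f"{extra_joules:.1f} J of waste per request")
print(f"{kwh_per_day:.0f} kWh per day across {requests_per_day:,} requests")
```

Scaled across a room full of servers, a wasteful code path stops being an abstraction and becomes a line item on the power bill.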

Bloated software has to go

So why is software "the final frontier" in gaining energy efficiency? When we reach the limit of what hardware and infrastructure can do, the only thing likely to achieve further improvement is streamlined software.

Enormous progress continues to be made in reducing hardware energy consumption and increasing power and cooling efficiency, but it can only go so far. A new standard for minimum data center energy efficiency is in development, and published hardware guidelines now describe equipment that can run continuously in virtually any climate with nothing but outside air -- no mechanical refrigeration -- for cooling. Once those benefits are tapped, we have to look to energy-efficient software.

Tiny signs of change are already here. For example, newer versions of a few well-known programs have touted reduced storage requirements and faster processing along with the usual list of version enhancements, but that's only a small start.

There was a time when an important consideration in selecting application software was how few keystrokes or mouse clicks an operation required. Now, every program has to have more features and be "all things to all people" -- even though the vast majority of users utilize only a very small fraction of the capabilities. Most programs like this are written in modules.

If only we could easily turn off or remove the modules we have no use for. But even configuring basic features in most software today requires drilling down through multiple layers of menus. One company even made the energy-saving features on its servers a special activation because they were otherwise so hard to find.
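What that could look like in practice is simple enough. Here is a hedged sketch of a feature registry in which disabled modules are never even constructed -- the feature names and structure are hypothetical, not any particular product's design:

```python
# Sketch: a feature registry where disabled modules are never instantiated,
# so they cost no memory and no cycles. All names here are hypothetical.

class SpellCheck:
    def __init__(self):
        print("spellcheck: loading dictionaries...")  # work we pay for

class CloudSync:
    def __init__(self):
        print("cloud_sync: opening connections...")   # work we avoid

# Constructors are registered, not instances -- nothing runs until enabled
REGISTRY = {"spellcheck": SpellCheck, "cloud_sync": CloudSync}

# Assumed user configuration: only what this user actually needs
enabled = {"spellcheck"}

features = {name: ctor() for name, ctor in REGISTRY.items() if name in enabled}
# CloudSync.__init__ never runs: a switched-off module draws no power at all
```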

As we've seen with legacy COBOL programs, you can't replace millions of lines of code overnight -- or even, sometimes, over decades. So, just as the hardware side of the business has worked diligently to improve energy efficiency, the software side must recognize the effect its products have on data center energy usage and mount a more aggressive attack on bloated, inefficient code. It will take a long time for such an effort to have a significant effect, but if new programs treat efficiency as important, the results will follow in time.

Some software needs to be robust, but it would be surprising if anything written today -- even robust programs -- could not be streamlined while retaining speed and functionality. Like it or not, every machine cycle takes energy, even on highly efficient processors. Memory draws power as well, and so does storage -- especially spinning disc. So if we truly want to maximize the energy efficiency of our data centers -- and reduce consumption by our millions of personal computers as well -- the industry will need to address how it produces software and return to the mindset of the days when both memory and processing speed were precious commodities, not to be wasted.
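As a closing toy illustration of what streamlining means -- the same output from a radically different cycle budget -- consider computing running totals two ways (the data and sizes are arbitrary):

```python
# Toy illustration: identical results, very different cycle budgets.
# Addition counts stand in for machine cycles -- and therefore energy.

data = list(range(5_000))

# Bloated: recompute each prefix sum from scratch -- roughly n*(n+1)/2 adds
def prefix_sums_bloated(values):
    return [sum(values[: i + 1]) for i in range(len(values))]

# Streamlined: carry the running total forward -- n adds, same output
def prefix_sums_lean(values):
    totals, running = [], 0
    for v in values:
        running += v
        totals.append(running)
    return totals

assert prefix_sums_bloated(data) == prefix_sums_lean(data)

n = len(data)
print(f"bloated: ~{n * (n + 1) // 2:,} additions")
print(f"lean:    ~{n:,} additions")
```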

About the author:
Robert McFarlane is a principal in charge of data center design for the international consulting firm Shen Milsom and Wilke LLC. McFarlane has spent more than 35 years in communications consulting, has experience in every segment of the data center industry and was a pioneer in developing the field of building cable design. McFarlane also teaches the data center facilities course in the Marist College Institute for Data Center Professionals program, is a data center power and cooling expert, is widely published, speaks at many industry seminars and is a corresponding member of ASHRAE TC9.9, which publishes a wide range of industry guidelines.

This was first published in April 2013
