Utility bills are no small data center expense. As part of the push to address IT spending, data center managers and organizations continue to look for ways to drive down power charges and increase overall energy efficiency.
Because there are so many infrastructure components that use electricity, managers can address CPU, storage, cloud infrastructure and operating temperature settings to reduce data center power consumption.
Admins should spell out any risks and get executive support before pursuing any of the following tactics to reduce data center power consumption, because a high-density, highly efficient infrastructure can go thermal in seconds.
1. Reduce CPU power demands
Virtualization is not a cure-all for reducing data center power consumption. Of course, there's a clear advantage to high-density computing -- admins can cram many VMs into a single server -- but CPU demands for power and cooling do grow with each VM.
In many cases, these costs shift from the distribution of power across lots of small servers to concentrated power that must cool red-hot VM-hosting systems.
Switch to variable-speed fans
Recent research has found that reducing CPU fan speed can cut power consumption by 20%. As such, organizations should use variable-speed fans to cool data center equipment. These fans consume power only when they run, and run only at the speeds that thermostatic controls dictate.
Because these fans slow down during periods of low CPU utilization, power savings accrue whenever the blades aren't spinning at full speed. And don't stop with servers; check the cooling features of uninterruptible power supply devices and the power supplies of various appliances on the same power grid, plus any other hot spots where a fan may be spinning.
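The thermostatic behavior described above can be sketched as a simple fan curve that maps a temperature reading to a duty cycle. The breakpoints and names below are illustrative assumptions, not vendor values:

```python
# Hypothetical thermostatic fan curve: maps a CPU temperature reading (deg C)
# to a fan duty cycle (0-100%). Breakpoints are illustrative, not vendor values.
def fan_duty(temp_c: float,
             idle_temp: float = 35.0,
             max_temp: float = 75.0,
             min_duty: float = 20.0) -> float:
    """Linearly ramp fan speed between an idle floor and full speed."""
    if temp_c <= idle_temp:
        return min_duty        # spin slowly at low load
    if temp_c >= max_temp:
        return 100.0           # full speed under heavy load
    # Linear interpolation between the two breakpoints
    frac = (temp_c - idle_temp) / (max_temp - idle_temp)
    return min_duty + frac * (100.0 - min_duty)
```

Because fan power scales roughly with the cube of rotational speed, even modest duty-cycle reductions at low load translate into outsized power savings.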
Use liquid cooling
Another way to reduce power consumption is to adopt liquid cooling for CPUs. Instead of fans that blow air across a heat sink, liquid cooling works similarly to a car's radiator and uses liquid to dissipate heat.
Liquid cooling is widely regarded as being more effective than air-based cooling methods, and depending on the application, may have the additional benefit of reduced noise.
2. Address cooling-related costs
Cooling costs aren't just about CPUs cranking out British thermal units, which raises the next issue. The power required for cooling, lighting and battery backup usually accounts for 35% or more of a data center's total energy consumption, no matter how efficiently the building is constructed. Servers gobble watts, and addressing server power needs is a major overhead.
Raise the air temperature
According to data center infrastructure suppliers, modern servers can perform well up to 77 degrees Fahrenheit; some data centers operate servers closer to 65 degrees Fahrenheit.
If admins raise the ambient temperature a few degrees, there can be an immediate drop in power usage from the cooling system without any effects on server performance. There's no overhead or investment needed, although close temperature and server monitoring -- as well as a pilot program -- are advisable to avoid unpleasant surprises.
Admins should not haphazardly raise data center temperatures. Guidelines from ASHRAE provide recommended operating standards for energy consumption, temperature and humidity control.
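As a sanity check before adjusting a setpoint, admins can compare inlet readings against ASHRAE's recommended envelope of roughly 18 to 27 degrees Celsius (64.4 to 80.6 degrees Fahrenheit) for most equipment classes. A minimal sketch, assuming those thresholds (confirm them against the current ASHRAE guidelines for your equipment class):

```python
# ASHRAE's recommended inlet envelope for most equipment classes,
# in degrees Celsius (verify against current ASHRAE thermal guidelines).
ASHRAE_RECOMMENDED_C = (18.0, 27.0)

def inlet_status(temp_f: float) -> str:
    """Classify a server inlet temperature reading taken in Fahrenheit."""
    temp_c = (temp_f - 32.0) * 5.0 / 9.0
    low, high = ASHRAE_RECOMMENDED_C
    if temp_c < low:
        return "overcooled"   # likely wasting cooling energy
    if temp_c > high:
        return "too warm"     # investigate before raising the setpoint further
    return "within recommended range"
```

A data center running at 65 degrees Fahrenheit sits near the bottom of this envelope, which is why raising the ambient temperature a few degrees is usually safe headroom rather than a risk.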
3. Consider the effects of storage
Storage is a major driver of data center power consumption. The actual power consumption varies based on the hard disk's make and model, but it is not uncommon for hard disks to consume approximately 6 watts of power each.
When admins consider the number of hard disks in their data center, it is easy to see how they can collectively consume large amounts of power. Additionally, each hard disk gives off heat, which increases power consumption related to data center cooling.
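The aggregate cost is easy to estimate from the ~6-watt per-drive figure above. The cooling overhead multiplier below is an assumption for illustration:

```python
# Back-of-the-envelope annual energy estimate for a disk fleet, using the
# ~6 W per-drive figure. The 50% cooling overhead is an assumed multiplier.
def annual_disk_kwh(drive_count: int,
                    watts_per_drive: float = 6.0,
                    cooling_overhead: float = 0.5) -> float:
    """Return kWh per year, including a rough cooling multiplier."""
    total_watts = drive_count * watts_per_drive * (1.0 + cooling_overhead)
    return total_watts * 24 * 365 / 1000.0

# 1,000 drives at 6 W plus 50% cooling overhead:
# 1,000 * 6 * 1.5 = 9,000 W, or 9 kW * 8,760 h = 78,840 kWh/year
```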
Use bigger, slower drives
Using bigger, slower drives can help, but this should not be done for high-demand transactional workloads, such as financial databases or critical 24-hour systems. If admins relegate the percentage of files that are mostly unused to a lower storage tier, they can replace faster units with low-energy drives. In turn, fewer fast drives burn less energy and create less heat. This can be an expensive undertaking, but because most organizations build out more storage every quarter, it can be a worthwhile investment.
Organizations should also use the power management profiles of the OS to put hard drives into standby mode when they are not actively in use. This reduces power consumption and prolongs the hard drive's lifespan.
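On Linux, one common way to apply such a profile is `hdparm -S`, whose values from 1 to 240 encode the standby timeout in 5-second units. A hedged sketch that builds (rather than runs) the command, so the encoding stays testable; running it requires root privileges and a drive that honors the setting:

```python
# Build an `hdparm -S` command to set a drive's standby (spin-down) timeout.
# For -S, values 1-240 count 5-second units (5 s to 20 min); a value of 0
# disables the timeout. The helper name is illustrative.
def hdparm_standby_cmd(device: str, timeout_seconds: int) -> list:
    units = timeout_seconds // 5   # hdparm -S counts in 5-second units
    if not 1 <= units <= 240:
        raise ValueError("timeout must be between 5 s and 20 min for this encoding")
    return ["hdparm", "-S", str(units), device]

# Example: spin down /dev/sdb after 10 idle minutes (run as root)
# subprocess.run(hdparm_standby_cmd("/dev/sdb", 600), check=True)
```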
Switch to SSDs
Organizations should also consider replacing hard disks with SSDs where it is practical. SSDs generally consume far less power than hard disks and deliver a greater number of IOPS.
For example, Samsung's enterprise SSDs consume only 1.25 watts of power in active mode and 0.3 watts when idle. This is roughly one-fourth of the power consumed by a 15,000 rpm SAS HDD. Plus, SSDs do not have any moving parts, which means they produce significantly less heat than hard disks.
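Using the figures above, the direct savings across a fleet are straightforward to quantify. The ~5 W HDD figure is implied by the "roughly one-fourth" comparison; cooling savings would come on top:

```python
# Per-drive power comparison based on the figures cited above.
SSD_ACTIVE_W = 1.25
HDD_ACTIVE_W = 5.0   # implied by the "roughly one-fourth" comparison

def fleet_savings_watts(drive_count: int) -> float:
    """Direct power saved by swapping HDDs for SSDs (cooling not included)."""
    return drive_count * (HDD_ACTIVE_W - SSD_ACTIVE_W)

# 500 drives: 500 * 3.75 = 1,875 W saved before any cooling savings
```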
4. Use cloud-based services
Though moving IT workloads to a cloud or colocation provider externalizes the carbon footprint to the host site, many organizations concede that big vendors are experts at squeezing the most out of a kilowatt. Hosted services providers often focus on delivering the best value per watt at a lower cost for their customers.