Top data center technologies to watch in 2012: Advisory Board Q&A

IT professionals should watch for a wealth of emerging hardware and facilities technologies in 2012.

There is something exciting about the future. Maybe it’s the sense of anticipation. For IT professionals, this often manifests in the optimism of problems solved, the discovery of new business potential, or simply unlocking the unrealized possibilities in a new technology or tool. As 2011 draws to a close and our thoughts turn to the holiday bustle, we asked our Advisory Board members to discuss the technologies that they are looking at for 2012 and talk about why those technologies are important to them and to the data center.

Bill Bradford, senior systems administrator, SUNHELP.org
In 2012, I'm looking forward to improved 64-bit ARM processors able to handle 4 GB of RAM and more, along with HP's ARM server efforts. I think ARM is an idea whose time has come. More and more desktop and server systems are based on reduced instruction set computing (RISC) architectures, while software is becoming increasingly architecture-independent (just recompile from source, or get a fat binary). I expect Apple to release an ARM-based MacBook Air using its A6 or a future processor in 2012 or 2013. Personally, everything I do on a computer (other than virtualizing other x86 systems) can be done equally well on ARM or x86.

ARM offers lower power, lower cost and less heat – especially on the desktop. Why have fan noise when you don't need it? In the data center, this leads to lower costs for power and cooling. ARM systems also tend to take up less space due to lighter internal cooling requirements, meaning you can pack more of them into a rack for greater space efficiency. Still, it will be a while before ARM is a suitable alternative to x86. Don't just go with the hype – make sure that the hardware you're buying is a good fit for what you need servers and systems to do.
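
To put the power and density argument in rough numbers, here is a minimal back-of-envelope sketch. Every figure in it, the wattages, node counts per rack, electricity price and PUE overhead, is an illustrative assumption rather than a vendor specification.

# Back-of-envelope comparison of annual power cost for a rack of low-power ARM
# nodes versus a rack of conventional x86 servers. All figures are illustrative
# assumptions, not vendor specifications.

ARM_NODE_WATTS = 15        # assumed draw of one ARM server node
X86_NODE_WATTS = 250       # assumed draw of one 1U x86 server
ARM_NODES_PER_RACK = 288   # assumed high-density ARM cartridge packaging
X86_NODES_PER_RACK = 42    # assumed one 1U server per rack unit
PUE = 1.8                  # assumed facility overhead (power usage effectiveness)
COST_PER_KWH = 0.10        # assumed electricity price, dollars per kWh

def annual_rack_cost(node_watts, nodes_per_rack):
    """Yearly electricity cost for one full rack, including cooling overhead."""
    it_kw = node_watts * nodes_per_rack / 1000.0
    facility_kw = it_kw * PUE
    return facility_kw * 24 * 365 * COST_PER_KWH

print("ARM rack: $%.0f per year" % annual_rack_cost(ARM_NODE_WATTS, ARM_NODES_PER_RACK))
print("x86 rack: $%.0f per year" % annual_rack_cost(X86_NODE_WATTS, X86_NODES_PER_RACK))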

Robert Crawford, lead systems programmer and mainframe columnist
I’ll talk a bit about mainframes. Next year should be an interesting one for mainframe hardware as rumors circulate about the possibility of a new IBM processor family in mid-2012. In addition, IBM's end of life announcement for the z196 earlier this year means that mainframe bargain hunters will be able to find a lot of those processors on the used equipment market next year.

The next generation of mainframe processor will be faster and cheaper. Along with the greater computing power, there will be a lot of attention paid to new instructions and other assists that IBM might add to boost the performance of non-traditional mainframe workloads, such as Java and XML parsing. It will also be interesting to see how IBM extends the z/Enterprise concept to bring even more of the distributed platform under the mainframe management umbrella.

Since the processors will almost certainly pack more power into a smaller package, data centers will be able to consolidate more operating system images (LPARs) onto fewer boxes and save some floor space. There should be power savings with the new hardware as well.

IBM has a track record of smooth transitions between hardware releases, and this one should be no exception. The tricky part will be deciding whether it's time to invest in new hardware in a stagnant economy. Many performance, financial and capacity analysts will spend a lot of time trying to decide whether to spring for the new technology or to wring one more year out of what they have.

Bill Kleyman, virtualization architect, MTM Technologies Inc.
One of the biggest trends that I'm seeing for 2012 is the integration of private and public clouds. Many organizations see the benefit of delivering entire workloads over the wide area network to end users, who can then use their own personal devices. This technology demands a strong central data center with a good core set of servers.

A major technological advancement for 2012 will be the increase in server density. New processors, better RAM and more efficient virtualization techniques will help create a more robust data center. Some of these improved server densities will come in the form of better blade chassis systems. I'm also looking for big advancements with products such as the Cisco UCS line. Onboard hardware profile virtualization allows for quick provisioning of entire blades and this technology will create better cloud-ready infrastructures.
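
The appeal of hardware-profile virtualization is easier to see with a concrete sketch. The following is a hypothetical illustration of the concept only, not the Cisco UCS API or any vendor's interface: the server's identity (MAC addresses, WWPNs, boot order) lives in a profile, and any spare blade in the chassis can assume it.

# Hypothetical sketch of hardware-profile provisioning. The profile fields and
# the provision() helper are invented for illustration; this is not a real
# vendor API.

service_profile = {
    "name": "web-tier-01",
    "mac_addresses": ["00:25:b5:00:00:1a"],
    "wwpns": ["20:00:00:25:b5:00:00:1a"],
    "boot_order": ["san", "lan"],
    "vlan_ids": [110, 120],
}

def provision(chassis_slot, profile):
    """Bind a profile to a physical blade slot so the blade boots with that identity."""
    # A real system would push the MACs and WWPNs to the blade's adapters,
    # apply the boot policy and power the blade on.
    print("Associating profile %s with chassis slot %d" % (profile["name"], chassis_slot))
    return {"slot": chassis_slot, "profile": profile["name"], "state": "associated"}

provision(3, service_profile)
# If the blade in slot 3 fails, the same identity moves to a spare blade:
provision(7, service_profile)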

Why the interest for 2012? Budgets continue to be tight, so administrators are looking for more efficient server technologies. Enhancements with server density will allow engineers to place more users per server and still maintain a positive end user experience. More organizations see the benefits of the cloud – but they also need to understand that they need the underlying hardware there to support it. Better blade chassis designs allow for cost-effective hardware capable of doing more with less.

The impact can be profound. With more effective core server technologies, administrators will be able to do much more to help their organizations grow. Better disaster recovery capabilities, backup methodologies and more cloud deployments will begin to occur with the adoption of more efficient data center servers. Many organizations will be able to retire the traditional company-owned endpoint and replace it with a “bring your own device” initiative. With better server technologies, IT administrators will be more confident in centralizing their data, replicating it to hot/cold sites and delivering the workload down to the end user. The key here is that the workload will be delivered seamlessly without much latency or performance degradation.

As with any new technology, proper planning will be a key factor here. When upgrading server technologies, remember that other components may need to be upgraded as well. Just because an environment has some of the fastest core servers available doesn’t mean the network or storage infrastructure is capable of handling the throughput. With server upgrades comes the need to evaluate storage area network environments and the networking capabilities of the data center. Core server and data center upgrades should be made alongside upgrades to other affected technologies. The return on investment for a new set of blades can be severely impacted if the networking environment is not capable of passing the necessary traffic throughout the infrastructure.
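
One way to catch that mismatch early is a quick sanity check of uplink bandwidth against the traffic a denser host will generate. This is a minimal sketch; the consolidation ratio, per-VM traffic, burst factor and uplink sizing are all assumptions chosen for illustration.

# Rough check of whether blade uplinks can carry the traffic of a denser,
# more consolidated server footprint. All inputs are illustrative assumptions.

VMS_PER_HOST = 40          # assumed consolidation ratio on the new blades
AVG_MBPS_PER_VM = 30       # assumed steady-state traffic per virtual machine
PEAK_FACTOR = 3            # assumed burst multiplier at peak load
UPLINK_GBPS = 2 * 10       # assumed two 10 Gigabit Ethernet uplinks per blade

peak_demand_gbps = VMS_PER_HOST * AVG_MBPS_PER_VM * PEAK_FACTOR / 1000.0

print("Peak demand: %.1f Gbps against %d Gbps of uplink" % (peak_demand_gbps, UPLINK_GBPS))
if peak_demand_gbps > 0.8 * UPLINK_GBPS:  # keep headroom; avoid running uplinks near line rate
    print("The network is the likely bottleneck; budget for uplink or fabric upgrades.")
else:
    print("The uplinks have headroom for this consolidation ratio.")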

When looking at new servers capable of doing much more than current systems, make sure you plan out the full use of these servers. Over-allocating resources can be the same as throwing money away. It's important to understand what new technologies are available, how they will impact your data center, and how to best integrate them into your existing environment.

Matt Stansberry, director of content and publications, Uptime Institute
As a facilities pro, I see modular and prefabricated data centers as some of the most important technologies of 2012. The landscape of data center design has changed, and as systemized approaches continue to evolve and improve, custom one-off data center designs will no longer be the de facto approach to data center deployment. Data center managers will need to choose between systemized, modular data center designs and one-off custom engineering efforts. To make those decisions effectively, executives need to understand what the modular designs do or don't offer, and how those decisions impact cost and availability of data center capacity. Data center pros need to educate themselves on the products and systems components from vendors, the economics of modular data center deployment versus traditional construction, and the design considerations for deploying a modular data center campus.

Robert McFarlane, data center design expert, Shen Milsom Wilke Inc.
For 2012, I am mainly considering two technologies: improved data center monitoring systems and advances in liquid cooling, but I’m also watching to see if diskless storage will make any significant inroads in the months ahead.

The continued push for increased energy efficiency and reduced consumption requires good performance information from both the facility infrastructure and the computing equipment. An enormous number of data points are provided by air conditioners, uninterruptible power supply (UPS) systems, cabinet power strips, mechanical plants, generators, and modern server and storage hardware. But separate monitoring is often needed for each system, and the user is faced with mountains of "data" instead of integrated and meaningful "information."

Monitoring systems available in 2011 are a significant improvement over what was available before, but the best and most publicized solutions have still been highly customized. These are expensive and require dedicated projects to develop – so they are impractical for most enterprises. What is needed is essentially a "plug and play" monitoring system that enables fully networked attachment of anything and everything, matched with simple user configurability to clearly show key performance parameters, alarms and histories, and to let users drill down to more detailed data when it is needed.
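
A minimal sketch of what such integration buys: raw electrical readings from a handful of devices rolled up into one meaningful figure, power usage effectiveness (PUE), which is total facility power divided by the power delivered to IT equipment. The device names and readings below are made up for illustration.

# Minimal sketch: roll raw facility data points up into one meaningful metric,
# power usage effectiveness (PUE). Device names and readings are illustrative.

readings_kw = {
    "ups_output":     {"ups_a": 180.0, "ups_b": 175.0},          # power delivered to IT gear
    "facility_feeds": {"utility_a": 260.0, "utility_b": 255.0},  # total power entering the site
}

def pue(readings):
    """Total facility power divided by IT load; 1.0 would mean zero overhead."""
    it_load_kw = sum(readings["ups_output"].values())
    total_load_kw = sum(readings["facility_feeds"].values())
    return total_load_kw / it_load_kw

print("Current PUE: %.2f" % pue(readings_kw))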

Cooling is another critical issue. High-density computers have reached a point where cooling with air alone is not only becoming impractical, but risky as well. Liquid cooling is appearing more often – it does a better job of removing heat, is far more energy efficient than air and it can easily be kept circulating to maintain cooling while generators start. But it will be necessary for manufacturers of computing hardware and cabinets to develop more sophisticated ways to handle the liquid, both to overcome operator fears and to make liquid connectivity as easy as power.

Although I do not actually select or specify storage solutions, I have to plan the infrastructure to support them. Storage is growing at a rapid pace, and the power draw from thousands of high-speed spinning disks is significant. A move to diskless storage would likely reduce power and cooling needs and consequently improve energy efficiency, so it will be important to see whether the industry will find diskless storage sufficiently reliable to start a trend in that direction.
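
A rough sense of the scale involved, using assumed per-drive wattages and an assumed drive count rather than measured figures:

# Back-of-envelope estimate of the electrical load removed by replacing spinning
# disks with solid-state storage; cooling load falls roughly in step with it.
# Drive count and wattages are assumptions for illustration.

DRIVE_COUNT = 10000
HDD_WATTS = 8.0   # assumed average draw of one enterprise 15K spinning disk
SSD_WATTS = 3.0   # assumed average draw of one enterprise solid-state drive

hdd_kw = DRIVE_COUNT * HDD_WATTS / 1000.0
ssd_kw = DRIVE_COUNT * SSD_WATTS / 1000.0
print("Spinning disks: %.0f kW, solid state: %.0f kW, load removed: %.0f kW"
      % (hdd_kw, ssd_kw, hdd_kw - ssd_kw))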

Improved monitoring will make it easier to manage the efficiency of the data center, which is becoming as much of a data center manager function as it is a facilities responsibility. Advances in liquid cooling should make it easier for data center managers to accept this cooling approach and achieve better, more reliable cooling of very high performance computing equipment. Diskless storage (if it proves as reliable as disk and becomes widely accepted) will reduce power consumption, cooling challenges and operating costs.

There are a few issues to watch for. For example, weigh the amount of custom programming needed to make monitoring systems do what you expect, and ask particularly about the ease of integrating additional monitoring points that you may not yet have available. Things like chilled water flow rate, pressure and temperature may be more difficult to integrate than standardized UPS and computer room air conditioner monitoring. And be realistic about liquid cooling. Examine manufacturers’ approaches to protecting against leaks and talk with people who are using it. You may be surprised at how reliable liquid cooling can be, but it must be installed and maintained according to manufacturers’ recommendations and using best industry practices.

Robert Rosen, CIO, mainframe user group leader
In a general sense, the technologies for 2012 won't be radically different – especially given today’s very tight budgets. However, there are some existing technologies that will continue to be relevant.

For example, I expect to see a continued focus on virtual servers. Since most servers are operating at less than 10% utilization, making more use of a server means less need to buy more hardware, less energy used and less real estate (physical space) required. I also see the importance of real-time security monitoring because the time available for log analysis is decreasing to zero. Security threats are multiplying and there is less time to react. Finally, I see value in continuous data protection. Backup windows are decreasing and people simply can't afford to lose data they created between backups. They also need less disruptive data protection schemes that can provide their data without a disaster being declared, sites switched, and so on – they want to work continuously.
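
The consolidation arithmetic behind that utilization point is worth making explicit. A minimal sketch, with the server count and target host utilization chosen purely for illustration:

# Worked example of the consolidation math: servers averaging under 10% busy
# can be re-packed onto far fewer virtualization hosts. Inputs are assumptions.

import math

physical_servers = 100
avg_utilization = 0.10      # from the text: most servers run at less than 10%
target_utilization = 0.60   # assumed comfortable ceiling for a virtualization host

total_load = physical_servers * avg_utilization           # work in "fully busy server" units
hosts_needed = math.ceil(total_load / target_utilization)

print("Hosts needed after consolidation: %d (down from %d physical servers)"
      % (hosts_needed, physical_servers))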

The new emphasis for 2012 will be a fresh focus on managing consumer-level mobile devices (e.g., Apple iPads and RIM BlackBerry devices) to ensure security, and on ways to use those devices to remotely manage the data center. Storage is always an issue, but I don't see any significant breakthroughs that will help us address the continuing problems of more storage without more physical space and weight (floor loading).

An evolution, not a revolution
The sentiment for 2012 is optimistic but conservative. No new technologies are promising to revolutionize the data center next year, but existing technologies continue to advance – adding value, improving efficiency and helping IT professionals make the most of a data center investment.

This was first published in December 2011
