Applications dictate new server technologies

Enterprise IT now gets next-gen servers shaped by the workload and customized down to the silicon. Has the future of servers arrived?

SANTA CLARA, Calif. -- Next-generation servers break the commodity mold in favor of tailored silicon and accessories that meet specific application demands.

"How do server designers choose a chip? By the workload," said Karl Freund of processor company Advanced Micro Devices (AMD), speaking on a server technologies panel at the Open Server Summit 2014 here this week.

Traditional Windows and Linux servers use an x86 general-purpose processor; cloud-scale computing workloads run more efficiently on an ARM chip design with input/output and networking innovations; hosted desktop environments take advantage of the performance, cost and power features of an accelerated processing unit (APU) that combines central processing unit (CPU) and graphics processing cores. APUs' heterogeneous system architectures cut out the step of data copying from the CPU to the GPU.
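To make that zero-copy point concrete, here is a minimal OpenCL sketch in C. It is an illustrative example only, not AMD's HSA runtime: the device query and buffer flags are standard OpenCL, and the contrast it shows is between a buffer the runtime copies into device memory (as a discrete GPU requires) and one created with CL_MEM_USE_HOST_PTR, which a shared-memory APU can hand to its GPU cores in place.

```c
/* Illustrative only: contrasts a copied buffer with a zero-copy buffer.
 * On shared-memory APUs, CL_MEM_USE_HOST_PTR lets the GPU work on the
 * host allocation directly; discrete GPUs need the explicit copy path. */
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
        fprintf(stderr, "No OpenCL GPU device found\n");
        return 1;
    }
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);

    size_t size = 64 * 1024 * 1024;          /* 64 MB working set */
    float *host_data = malloc(size);

    /* Discrete-GPU style: the runtime stages a copy of host_data in device memory. */
    cl_mem copied = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   size, host_data, &err);

    /* APU/shared-memory style: the GPU can use the host allocation in place,
     * skipping the CPU-to-GPU copy the panel described. */
    cl_mem shared = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
                                   size, host_data, &err);

    printf("buffers created: copied=%p shared=%p\n", (void *)copied, (void *)shared);

    clReleaseMemObject(copied);
    clReleaseMemObject(shared);
    clReleaseContext(ctx);
    free(host_data);
    return 0;
}
```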

To support high-volume, narrow-focus companies in the digital services industry, such as Uber, Airbnb and Alibaba, Intel has shifted its focus from general-purpose x86 servers to customizing processor cores, frequency, thermals and other specifics for each workload, said Raejeanne Skillern, Intel's cloud provider business manager.

ARM chips, the silicon of choice for mobile devices, are infiltrating enterprise IT now that data center scale-out is increasing, said Jeff Underhill, ARM's director of server programs. The architecture's trademark is low power consumption, and ARM v8 scales up to 64+ cores.

The upcoming AMD Skybridge design framework offers one socket that accepts x86 or ARM chips, so the same basic server build can be tailored to specific tasks.

The time might finally have come for microservers, presenters at the Summit agreed. The new Intel Xeon D 64-bit architecture promises data-center-class features and complementary operation with the established Atom family. Beyond cloud service hosting, microservers deliver lower costs with enough performance for cold storage, networking functions and other use cases, Skillern said.

The HP ProLiant m400, based on ARM chips, synthesizes "fat I/O, lots of memory, and good compute" with low power use, said Gaurav Singh of Applied Micro Circuits Corp., which deploys server technologies based on the ARM v8 64-bit architecture.

Embedded capabilities for any server

Building native support onto the silicon for network, security, power management and other functions is another server trend on the upswing.

While embedded security isn't new, its role has grown in dynamic software-defined data centers, where security must be tracked in real time and port utilization shifts frequently.

"Expect native support for RoCE [remote direct memory access over Ethernet],” Singh said. AppliedMicro's X-Gene 2, currently sampling, gains efficiency from RoCE integration.

"Web-scale IT is about eliminating latency," he said, and it's happening on 10 GbE, to push adoption of RoCE."

As for interoperability, Microsoft made major processor upgrades in its Open CloudServer version 2, packaged in the standard 19" chassis.

"Five vendors can interoperate in this one chassis," said Mark Shaw, Microsoft's director of hardware deployment.

The design uses 28-core Intel Xeon E5 v3 processors, with the option to integrate GPUs and field-programmable gate arrays (FPGAs) for specialization. Microsoft also developed an FPGA accelerator to offload compute for specialized parts of applications. Management, power and cooling fans are shared across the 24 servers in the chassis. The design decouples compute from networking and storage systems; with separate compute and JBOD disk storage layers, the server can be tailored to the application's needs.

Everything beyond the cores

IBM opened up its POWER chip architecture to the OpenPOWER consortium to reduce total cost of ownership (TCO) for data center servers, said Norman James, a senior technical staff member at IBM. TCO improvements can't fall solely on the silicon in a server; they also rely on the quality-of-service management, automated throttling, attached accelerators and other devices that consortium members develop to work with the architecture.

"DDR4 memory will appear on next-generation servers," said Marty Foltyn, president of BitSprings Systems, a consulting firm in Del Mar, Calif., and representative for the Storage Networking Industry Association. Intel's "Grantley" Xeon E5 v3 integrates DDR4 for lower power consumption with fast memory.

However, DDRx attached memory won't suffice forever, James said, and alternatives are needed to take servers into the future of computing.

Microsoft touted its M.2-interface NVM Express solid-state drive, which fits 8 TB of storage into a form factor about a finger's length. The technology tweaks flash memory from the mobile/consumer space, trading some data retention for greater endurance, to bring it into the world of cloud-scale servers.

Next-gen server chassis and network interface cards (NICs) may claim enhanced importance as a place to offload the compute tasks associated with software-defined networking. The server CPU has enough to handle with higher utilization packing virtual machines densely onto the system, said Ron DiGiuseppe of Synopsys, a semiconductor design IP provider. Hardware assist in the NIC natively handles network function virtualization for lower latency operations.

Server power is also about to become simpler.

"Expect new servers to run on 12 V DC power -- taking out all that power-related complexity from the IT equipment," said John Meinecke, president and CEO of Edison DC Systems Inc., a Milwaukee-area provider of DC data center power technologies. "Why do you want the server doing all this?"

Of course, no server is an island, so new technologies must integrate with the OS, network and data center as a whole. Increasingly, IT workloads run on servers distributed around the world, so data center conditions and server maintenance will vary.

Once you reach large-scale, distributed computing, application availability has to come from the software, not from failure-proof hardware, according to Kushagra Vaid of Microsoft. When a server in a fabric fails, the software spins up the workloads elsewhere from replicated data and rebalances the load.

"Administrators aren't rushing in to hot-swap a hard disk drive or other component to protect the application," he said. "They can pull the whole server out, perhaps days later, and fix the hardware."
