SANTA CLARA, Calif. -- The emergence of digital services, real-time data analytics and pervasive computing has altered the face of traditional IT, and data centers need to catch up.
Traditional data centers with manual provisioning, diverse IT infrastructure and layers of disparate management tools can't keep up with business computing, according to presenters and attendees of the Open Server Summit here last week. The new model of cloud-scale computing is trickling into enterprise IT.
Here are some of the biggest takeaways, from the lack of true open standard adopters to the evolution of servers.
"Open standards are akin to the historic trade routes such as the Silk Road: Highways that opened up the ability for people to trade their specializations (in our contemporary case, data sets) so that we can innovate and generate new opportunities," -- Benjamin Woo, managing director of Neuralytix Inc., an IT market research and consulting firm.
Woo took issue with keynote speaker Alex Henthorn-Iwane, VP of marketing for automation tool provider QualiSystems, referring to the hyperscale data centers as "unicorns." In reality, Woo said, Google, Facebook, LinkedIn and others built their infrastructure at a time when no traditional infrastructure could support what they were doing, but they didn't burn the map once they got there.
"They essentially created islands of data with huge potential," Woo added.
Henthorn-Iwane suggested the way to bridge hyperscale and enterprise computing was operational relevancy. Woo dismissed this: "Operationally, there is nothing stopping any enterprise from making their traditional operations relevant to the new methods or vice versa," he said.
The challenge, Woo said, is less about automation choices than about political and philosophical relevancy. All the technology necessary to bridge these two worlds and to evolve from the traditional to the contemporary is already in place. Politically and, in many cases, economically, the drivers to do so are not yet evident. IT leaders can disrupt the natural evolution from the traditional to the contemporary with their own interpretation of how the infrastructure should look, which is a way of ensuring job security, Woo said.
"Servers in data centers are largely the same -- everyone's on 1P or 2P or 4P servers. Dell, HP, etc. are different designs but the core is the same ... That's starting to change. [System-on-a-chip] SoC is a trend in the mobile and consumer industry that's coming to servers" -- Leendert Van Doorn, fellow and VP at semiconductor company Advanced Micro Devices Inc.
"SoC will lead to a paradigm shift in server design," -- Brian Zahnstecher, principal of PowerRox, a consulting firm based in San Jose, Calif.
Data center managers and facilities operators don't spend much time looking at their servers' chips. But many modern data center concerns -- lower power usage effectiveness (PUE), reduced latency, workload-specific processor performance -- stem from what's happening on the board in the server.
SoC takes the system functionality out of different components on a server motherboard -- processors, glue logic, memory, accelerators -- and integrates it all into a single integrated circuit. The benefit to data center operators is fast dispatch without crossing into the operating system. The server requires less power to operate, effectively decreasing the waste heat generated at the server level. Fewer layers to interconnect disparate components on the server boards free up routing space for high-speed data.
Fixed-function accelerators range from memory compression to data plane integration for networking, storage compression for distributed databases and crypto accelerators for data protection. Programmable accelerators include graphics processing units (GPUs) and field-programmable gate arrays (FPGAs). Specific verticals, such as the oil and gas industry, advance productivity with GPUs' vector-based processing. FPGAs suit IT workloads that change frequently, such as deep packet inspection. Application-specific integrated circuits are faster, but expensive and not as flexible.
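To make the vector-based processing distinction concrete, here is an illustrative sketch -- not anything from the summit presentations -- using NumPy on a CPU as a stand-in for the execution model GPUs exploit: the same arithmetic applied across a whole array in one operation instead of element by element.

```python
import numpy as np

def scale_and_sum_loop(samples, gain):
    # Scalar path: process one sample at a time, as a
    # non-vectorized general-purpose CPU loop would.
    total = 0.0
    for s in samples:
        total += s * gain
    return total

def scale_and_sum_vectorized(samples, gain):
    # Vector path: one operation over the entire array -- the
    # data-parallel pattern that maps naturally onto GPU hardware.
    return float(np.sum(np.asarray(samples, dtype=float) * gain))

# Both paths compute the same result; the vectorized form is what
# accelerator-friendly workloads (e.g., seismic processing) look like.
samples = np.linspace(0.0, 1.0, 10_000)
assert abs(scale_and_sum_loop(samples, 2.0) -
           scale_and_sum_vectorized(samples, 2.0)) < 1e-6
```

The function names and the seismic framing are hypothetical; the point is only that workloads dominated by uniform array math are the ones that benefit from GPU-style accelerators, while logic that changes frequently favors reprogrammable FPGAs.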
Software is the challenge with this server transformation. To make a heterogeneous chip work in a server deployed in a real data center requires industry standards, plug-and-go architectures, runtime system specialization and open source options. With the chip component interoperability problem solved, server companies can focus on integrating the functionality with their software, firmware and differentiating features, Zahnstecher said.
"400 smartphones = 1 new data center server. 100 medical wearables = 1 new data center server. 20 smart signs = 1 new data center server ... We can't do it today with how we build data centers," -- Raejeanne Skillern, general manager of the cloud service provider business within Intel's Data Center Group.
New data center infrastructure will need customized processors optimized per workload, but with the same economics and volumes as general-purpose silicon. Rather than manually assigning applications to a fixed set of resources, the application should consistently tell an orchestration layer what it needs and have the resources automatically scale, Skillern said. Expect disaggregated resource pools in servers as early as 2015, as a way to flexibly adapt to application needs, she added.
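The declarative model Skillern describes -- applications stating their needs and an orchestration layer allocating from disaggregated pools -- can be sketched in a few lines. This is a toy illustration under assumed names, not any real orchestration API:

```python
# Hypothetical sketch of a declarative orchestration layer: the app
# declares what it needs; the orchestrator carves it from a shared,
# disaggregated pool instead of a manual, fixed assignment.

POOL = {"cpu_cores": 64, "memory_gb": 512}  # disaggregated resource pool

def schedule(app_name, needs, pool=POOL):
    """Grant an application the resources it declares, if available.

    `needs` is the application's own statement of requirements,
    e.g. {"cpu_cores": 4, "memory_gb": 16}.
    """
    if all(pool.get(resource, 0) >= amount
           for resource, amount in needs.items()):
        for resource, amount in needs.items():
            pool[resource] -= amount      # allocate out of the shared pool
        return {"app": app_name, "granted": dict(needs)}
    return None                           # insufficient capacity; retry later

grant = schedule("analytics", {"cpu_cores": 4, "memory_gb": 16})
```

A real orchestration layer would add placement, scaling and reclamation, but the core inversion is the same: the application describes requirements, and the infrastructure adapts, rather than the reverse.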
This was a common theme of Open Server Summit, with Microsoft's general manager of server hardware engineering Kushagra Vaid pointing out that cloud applications don't require the same resources as traditional workloads. That changes server design: redundancy lives in the data, not the hardware, and a broken server is cause for a maintenance note rather than an alarm. And the traditional data center network isn't broken, but it isn't fast enough for a real-time business infrastructure, said Steve Garrison of open switch creator Pica8 Inc. It also scales an order of magnitude worse than an open cloud data center architecture, he said.