PCI Express' speed to usher in a new server era

PCIe speeds of 20 Gbps, even 100 Gbps, promise the plug-and-play server flexibility and scalability that data center managers only dream of, and at a bargain price.

The technology that will revolutionize servers has been available for a long, long time.

We've been hearing the term disaggregation a lot lately in IT. Cisco introduced a line of UCS servers, the M-Series, that disaggregates servers into their component parts. Intel has showcased its silicon photonics technology at various events, including the Intel Developer Forum.

It's an interesting step forward for data center computing, because with more bandwidth we're redefining what a server is.

To understand what's going on, we need to look at the underlying technology here: PCI Express. PCIe showed up in 2004 as a collection of sub-protocols, with standards defining a physical layer, a data link layer and a transaction layer. Sounds like part of the OSI networking model, doesn't it? And just like networking, you can change the physical layer.

Early on, PCI Express was only found inside computers, but in 2011 an interface called Thunderbolt took PCI Express outside the computer case.

Thunderbolt is fast, at 20 Gbps, but really it's just a different physical connection for PCIe. In fact, you can get external Thunderbolt enclosures that have regular PCIe slots in them. Thunderbolt was supposed to be optical, based on Intel's silicon photonics work, but copper cables were cheaper. Since Thunderbolt was aimed at non-IT consumers, the cheaper option won.

In the meantime, Intel has been patiently developing Light Peak -- the optical version of Thunderbolt -- toward carrying PCI Express at 100 Gbps. The only other 100 Gbps interfaces out there are Ethernet, and those are tremendously expensive. Intel's silicon photonics product looks as if it'll be far more economical, since it is actually aimed at consumer-level interconnects. Furthermore, this design carries the whole PCIe bus, whereas Ethernet carries, well, Ethernet.
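
To put rough numbers behind figures like 20 Gbps and 100 Gbps, here's a back-of-the-envelope Python sketch (mine, not Intel's) that multiplies each PCIe generation's published line rate by its encoding efficiency and lane count. The point is simply that a wide Gen 3 link already lands in the 100 Gbps neighborhood.

```python
# Back-of-the-envelope PCIe bandwidth math (illustrative only).
# Usable bandwidth = line rate per lane * encoding efficiency * lane count.
# Gen 1 and 2 use 8b/10b encoding; Gen 3 uses 128b/130b.
GENERATIONS = {
    1: (2.5, 8 / 10),     # GT/s per lane, encoding efficiency
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),
}

def link_bandwidth_gbps(gen, lanes):
    """Effective one-direction bandwidth of a PCIe link, in Gbps."""
    rate, efficiency = GENERATIONS[gen]
    return rate * efficiency * lanes

for gen, lanes in [(2, 4), (3, 4), (3, 16)]:
    print(f"PCIe {gen}.0 x{lanes}: {link_bandwidth_gbps(gen, lanes):.1f} Gbps")
# PCIe 2.0 x4: 16.0 Gbps   (roughly the PCIe budget behind 20 Gbps Thunderbolt 2)
# PCIe 3.0 x4: 31.5 Gbps
# PCIe 3.0 x16: 126.0 Gbps (the neighborhood where 100 Gbps external links live)
```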

Imagine if your network switch, your storage array and perhaps other servers in a cluster just appeared as additional devices on your PCIe bus. Not only are you communicating with them at speeds of 100 Gbps, but it's across two cables for redundancy, not six, eight or more.
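
You can already watch that kind of enumeration happen on a Linux host, where every PCIe device, local or externally attached, shows up under sysfs. A minimal sketch, assuming Linux and the standard /sys/bus/pci attributes (not every device exposes a link speed):

```python
# Minimal sketch: list what is on the PCIe bus right now on a Linux host,
# along with each device's negotiated link speed and width from sysfs.
from pathlib import Path

def read_attr(dev, name):
    try:
        return (dev / name).read_text().strip()
    except OSError:
        return "n/a"   # attribute not exposed (e.g. legacy PCI devices)

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = read_attr(dev, "vendor")                # e.g. 0x8086 for Intel
    device = read_attr(dev, "device")
    speed = read_attr(dev, "current_link_speed")     # e.g. "8.0 GT/s"
    width = read_attr(dev, "current_link_width")     # e.g. "4"
    print(f"{dev.name}  {vendor}:{device}  link: {speed} x{width}")
```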

The system management implications of this kind of PCIe speed are significant. What if your storage array were a plug-and-play device? Chuckle if you want, but it's going to happen. Plug in a server, let Windows Update grab drivers for it, assign it to a storage group, and you're done. Or did you like the tedious, error-prone way storage works now? The same goes for networking. Who needs IP when you can write directly into your neighbor's RAM with remote direct memory access? Automation is a Band-Aid; this disaggregated approach attacks the root of the IT complexity problem itself.
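
As a rough illustration of what "plug it in and it just shows up" looks like at the operating system level, the sketch below polls the Linux sysfs PCI device list and reports anything that appears or disappears. It's a toy under stated assumptions: real tooling would listen for udev hotplug events, and the driver and storage-group steps above are out of scope.

```python
# Toy "plug-and-play" watcher: poll the Linux sysfs PCI device list and
# report devices that appear or disappear. Production code would subscribe
# to udev hotplug events rather than polling.
import time
from pathlib import Path

PCI_DEVICES = Path("/sys/bus/pci/devices")

def snapshot():
    return {dev.name for dev in PCI_DEVICES.iterdir()}

known = snapshot()
while True:
    time.sleep(2)
    current = snapshot()
    for addr in sorted(current - known):
        print(f"new PCIe device appeared: {addr}")
    for addr in sorted(known - current):
        print(f"PCIe device removed: {addr}")
    known = current
```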

Most interesting are the server-side possibilities. Servers use non-uniform memory access (NUMA) to essentially turn each CPU in a server into a small island of compute resources. Your four-socket server is really just four one-socket servers that talk to each other over a dedicated point-to-point link (Intel's QPI on current Xeons), the same idea PCIe applies between devices. What if all "servers" were just single-CPU cards with memory, and you added however many the organization needed? How much does a 32-way IBM System p cost? Lots. How much does a new Cray CS-Storm cluster cost? Cray's secret sauce is the interconnections. Same for Cisco, with its M-Series. It draws on the UCS fabric interconnects to do exactly what I'm describing here, assembling larger "servers" from 1 CPU, 32 GB RAM discrete compute nodes that then get all their other resources (storage, networking) across those same interconnects.
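
You can see those islands today on any multi-socket Linux box, where each NUMA node advertises its own CPUs and local memory. A minimal sketch, assuming Linux and the usual /sys/devices/system/node layout:

```python
# Minimal sketch: show each NUMA node's CPUs and local memory on a Linux host,
# i.e. the "small islands of compute" a multi-socket server already contains.
from pathlib import Path

for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpus = (node / "cpulist").read_text().strip()      # e.g. "0-7,16-23"
    mem = "unknown"
    for line in (node / "meminfo").read_text().splitlines():
        if "MemTotal" in line:                          # "Node 0 MemTotal: ... kB"
            mem = " ".join(line.split()[-2:])           # e.g. "16304356 kB"
            break
    print(f"{node.name}: CPUs {cpus}, local memory {mem}")
```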

This is a space that Intel is going to commoditize, and the upside is enormous on many fronts: standards, cost, flexibility, Opex, maybe even Capex. This will be total IT convergence, not into a bunch of 2U chassis kludged together with Ethernet and IP, but onto a single distributed backplane, thanks to an Intel-led innovation that first took shape more than a decade ago.
