How silicon photonics is helping to reshape servers and racks

Optical interconnects turn server designs inside out. Silicon photonics lets optical signals travel long distances with minimal loss.

Traditional copper cabling limits the way IT equipment is designed and deployed. Such limitations have had profound effects on the evolution of data center servers. As optical signal interconnections become more reliable and less expensive, silicon photonics technology is changing the way systems exchange data and is transforming the shape of racks and rack equipment.

How does silicon photonics change systems and rack cabling?

Silicon (Si) photonics uses semiconductor devices to direct light signals between devices over thin optical cables. Optical signals can travel faster and farther, with less loss, than electrical signals.
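To get a feel for the "less loss" claim, the sketch below compares how much signal power survives a cable run at illustrative attenuation rates. The figures for fiber and copper attenuation are assumptions for the sake of the comparison, not vendor specifications.

```python
# Rough comparison of signal attenuation over distance: optical fiber vs.
# copper twinax. Attenuation figures are illustrative assumptions.

FIBER_DB_PER_M = 0.0004   # ~0.4 dB/km, typical of single-mode fiber (assumed)
COPPER_DB_PER_M = 2.0     # high-frequency loss in thin copper cable (assumed)

def remaining_power_pct(db_per_m: float, meters: float) -> float:
    """Percentage of signal power left after a cable run of given length."""
    loss_db = db_per_m * meters
    return 100 * 10 ** (-loss_db / 10)

for meters in (1, 5, 10):
    print(f"{meters:>3} m: copper {remaining_power_pct(COPPER_DB_PER_M, meters):6.2f}% "
          f"| fiber {remaining_power_pct(FIBER_DB_PER_M, meters):7.3f}%")
```

Even at a few meters, the copper link in this model has lost most of its power budget while the fiber link is essentially unaffected, which is why copper runs must be kept short at high data rates.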

Si photonics interfaces remain more expensive, however, and are poorly supported in many local IT applications. Discrete interfaces must convert electrical signals into light signals and back again. Because these hybrid electronic/photonic devices handle the conversion at each end of the link, silicon photonic connections can be integrated into systems with only minor design changes.

Optically connected devices deliver performance levels unmatched by conventional copper cables. For example, disaggregated rack prototypes from vendors such as Intel Corp. report data transfer rates of up to 100 gigabits per second (Gbps) between modules, even when the modules are separated by several feet.
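Some back-of-the-envelope arithmetic shows what a 100 Gbps inter-module link means in practice. The payload size below is an illustrative assumption, and protocol overhead is ignored for simplicity.

```python
# Transfer times at the ~100 Gbps rate reported for optically linked rack
# modules, versus a 10 Gbps Ethernet link. Line rates only; no overhead.

def transfer_seconds(gigabytes: float, gbps: float) -> float:
    """Seconds to move a payload at a given line rate (overhead ignored)."""
    bits = gigabytes * 8e9
    return bits / (gbps * 1e9)

payload_gb = 64  # e.g., a large in-memory dataset (illustrative size)
print(f"{payload_gb} GB at 100 Gbps: {transfer_seconds(payload_gb, 100):.1f} s")
print(f"{payload_gb} GB at  10 Gbps: {transfer_seconds(payload_gb, 10):.1f} s")
```

At these rates, moving a 64 GB payload between trays takes seconds rather than the better part of a minute, which is what makes placing memory, storage and compute in separate trays plausible.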

Is there an advantage to separating compute features?

Si photonics-enabled disaggregated racks are populated with far more flexibility than traditional data center racks, because electromagnetic interference (EMI) and slow copper interconnect speeds are not an issue.

In a traditional server rack, high-frequency, data-rich signals must be kept short and shielded, and signals transferred over longer distances must be slow to mitigate EMI effects. This is why servers have the high-performance processor, memory and I/O components on the same motherboard in close proximity. Error-forgiving protocols help data move along slow network cabling to other devices.

In a disaggregated rack design, individual functional modules can reside almost anywhere and still provide full-speed performance for constituent workloads. Imagine a rack where all of the power supplies are located in a single tray at the bottom, followed by a storage subsystem above, and then a series of modular servers using various processors, all interconnected to distributed Ethernet switch modules with optical cables.

Disaggregating the rack reduces waste during server refreshes, where entire servers are replaced even when the goal is as narrow as gaining processor efficiency. Relocating modules within the disaggregated rack can also help IT planners deal with thermal management issues, which often plague dense racks and other modular server architectures, like blade chassis. In addition, replacing compute components on a modular basis is often quicker and less disruptive than replacing complete server chassis.
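The waste argument can be sketched with hypothetical numbers: compare the cost of refreshing every server in a rack against replacing only the processor trays of a disaggregated rack. All prices and quantities below are invented for illustration.

```python
# Illustrative refresh-cost comparison: replacing whole servers versus only
# the processor trays in a disaggregated rack. All figures are hypothetical.

SERVERS_PER_RACK = 20
FULL_SERVER_COST = 8000   # complete server, including parts not at fault (assumed)
CPU_TRAY_COST = 3000      # processor-only module (assumed)
CPU_TRAYS_NEEDED = 8      # trays providing equivalent compute (assumed)

traditional = SERVERS_PER_RACK * FULL_SERVER_COST
disaggregated = CPU_TRAYS_NEEDED * CPU_TRAY_COST
print(f"Traditional refresh:   ${traditional:,}")
print(f"Tray-level refresh:    ${disaggregated:,}")
print(f"Hardware not replaced: ${traditional - disaggregated:,}")
```

Under these assumed figures, the tray-level refresh avoids replacing the memory, storage, power and chassis hardware bundled into each traditional server, which is exactly the waste the disaggregated design targets.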

What design initiatives should I be looking at when considering new rack approaches?

Rack disaggregation with silicon photonic interconnects is largely in the prototype phase, but there are several initiatives and projects that are worth observing.

The evolution of disaggregated racks is coupled with Facebook's Open Compute Project (OCP). The original goal of OCP was to create a modular, easily replaceable server that contained core computing components and relied on external DC power.

Early permutations of this approach appeared in Intel's Scorpio rack, where fans and power supplies were separated from server units. Although this was a departure from standardized rack systems, the idea was not entirely new -- blade systems already separate the power supply from the server, storage and network blade modules.

The next Open Compute Project manifestation is a more modular approach intended to decouple functional areas that previously had to share the motherboard. For example, Intel's disaggregated rack developed with Quanta Computer places Xeon CPUs in one modular tray, Atom CPUs in another tray, storage in a third tray and so on. Each tray is optically interconnected with what Intel calls a New Photonic Connector. This enables tray-level upgrades and replacements without replacing all of the functional subsystems, as you would with a traditional server refresh; aging processor trays are simply swapped for newer ones.

Disaggregated rack vs. converged infrastructure

Blade systems, such as those used in converged infrastructures (CI), represent a consolidated approach to system design. Blade systems remove and centralize core elements like the fans, power supply and external (intra-system) connections, while reducing core computing and networking functions to easily replaceable modules (blades). This approach is similar to early Open Compute Project racks.

Continued OCP development aims for discrete functional modules of server, storage, etc., that are interconnected through a high-performance optical network rather than common copper wiring or backplanes.

Although CI blades and disaggregated racks use different standards and approaches, they share similar goals: to simplify and consolidate data center computing for more efficiency, and to provide hardware platforms that can be upgraded more easily and cost-effectively than traditional full-featured servers.

Driven by OCP's initiative, the move toward disaggregated rack systems allows data centers to shift focus from the number of servers installed to the amount of computing, networking and storage resources available. Long-term goals of system consolidation and private cloud computing depend on fast, flexible and scalable computing resources. The fully modular subsystem architecture enables upgrades and expansions as new subsystems become available, at lower cost and with less waste than traditional upgrades.

This was first published in February 2014
