What’s old is new again. IT managers are giving well-worn compute fabric technologies a second look, this time for enterprise workloads running on virtualized infrastructure.
Fabrics are nothing new. At a high level, the idea behind fabrics is to treat individual resources such as processors, memory and I/O as discrete reusable components, making it possible to dynamically build out and tear down compute systems.
Back in the day, fabrics were sometimes referred to as grid and cluster interconnects or processor area networks, and were largely used for high-performance computing (HPC), helping scale-out nodes share large data sets among themselves. Fabrics still cater to applications with those sorts of large “east-west” traffic patterns, but increasingly, those tend to be virtualized server environments.
Virtualized environments are the target for the latest vendor to throw its hat in the fabric ring: Xsigo. Its new Server Fabric technology allows users of the Xsigo I/O Director platform to directly connect servers and virtual machines to one another without having to configure switches, switch ports, VLANs or routing.
Xsigo’s Server Fabric is a game-changer, according to beta tester Aaron Branham, director of IT at Bluelock, a cloud hosting provider in Indianapolis, Ind.
“One of the limitations of Xsigo is that all the Ethernet traffic must go through the line card,” Branham said. With Server Fabric, as long as two servers are on the same VLAN, they can communicate directly, he said. That will be especially useful for east-west traffic such as VMware VMotion migrations and large file transfers.
At the same time, performing east-west data transfers across the Xsigo fabric frees up Bluelock’s northbound ports, lessening the need to invest in core networking infrastructure, Branham said.
A tight weave
But Xsigo is hardly the only vendor touting its fabric wares. Networking vendors like Cisco, Juniper and Brocade all have fabric technologies in varying stages of availability, while server vendors offer fabrics of their own: HP’s VirtualConnect FlexFabric and Dell’s PAN System, originally from Egenera.
And they all report similar increases in east-west traffic coming from virtual environments that could push IT shops toward their wares.
“There’s still a lot of traditional north-south traffic, but anecdotally, east-west traffic is trending up,” said Omar Sultan, Cisco senior manager for data center switching.
At the same time, networking technology and design have evolved to make fabrics a more viable concept, he added.
“The desire’s always been there, but the technology hasn’t,” Sultan said.
Specifically, fabric technology now benefits from fatter pipes, lower latency and improved management models that help IT managers think of resources as abstracted infrastructure, Sultan said. In Cisco’s case, that’s exemplified by its UCS Manager software, which allows IT administrators to define service templates that they can apply to resources in the fabric. Without that, reconfiguring servers is still a manual operation.
Baby steps starting with cabling
For some IT architects, implementing a fabric has more to do with cabling simplicity than some huge uptick in east-west traffic.
Woodforest National Bank installed HP VirtualConnect and FlexFabric on five HP BladeSystem chassis as part of an infrastructure refresh late last year. “We decided to go with VirtualConnect to reduce our cables to the switches, and to put less ports on our Fibre [Channel] switches,” said Stephen Jones, solutions architect at the retail bank in The Woodlands, Texas.
Before VirtualConnect and FlexFabric, Woodforest had 64 cables per chassis; that number dropped to just 16. “It really does reduce complexity,” he said, while still providing ample bandwidth. At the same time, VirtualConnect provides flexibility in the event of a hardware failure by making it easy to relocate a server profile to a new piece of hardware.
Woodforest’s environment is about 97% virtualized on VMware, and as a result, a lot of network traffic is moving between VMs that are all in the same chassis, and often on the same host. Jones conceded that keeping network traffic within the chassis lessens the load on its core network – but only to a point. “Cross-network communication still needs to travel through the core network to be routed – and that’s not going to change.”