
What's the best data center network topology?

Get to know the most common data center network topologies, and check out a host of alternative designs waiting in the wings.


There's no one best data center network topology for every company. Once you understand the major topology options, it's easier to see which will work best for your network traffic, or to get ideas for troubleshooting problems in your existing network.

What are the important data center network topologies to know?

Today's data center networks are primarily three-layer topologies. This design comprises a core of data center switches that connect to each other and to the external network provider(s); a user or access layer; and, between these two, an aggregation layer that moves information primarily north and south, into and out of the data center.

Leaf-spine is a data center network topology that's catching on in data centers that experience more east-west network traffic. In this two-layer design, every leaf (access) switch connects to every spine switch, adding switching capacity to handle traffic that stays within the data center, such as storage area network traffic.
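As a minimal sketch of that wiring pattern (the switch counts here are hypothetical), every leaf switch gets one uplink to every spine switch:

```python
# Hypothetical leaf-spine fabric: every leaf (access) switch uplinks to every
# spine switch, so any two leaves are exactly two hops apart (leaf-spine-leaf).
def leaf_spine_links(num_leaves, num_spines):
    """Enumerate the full mesh of leaf-to-spine uplinks."""
    return [(f"leaf{l}", f"spine{s}")
            for l in range(num_leaves)
            for s in range(num_spines)]

links = leaf_spine_links(num_leaves=4, num_spines=2)
# 4 leaves x 2 spines = 8 uplinks in total
```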

Alternate and emerging network topologies

These designs address specific issues for specific applications. Alternatively, newer designs rethink network design theory completely, moving network intelligence into the hosts, and using those hosts as forwarding nodes in addition to traditional switches. Mainstream networks might not need that sort of capability today, but emerging trends often trickle down to the mainstream. While they might not be what's now, they could be what's next.

There are a few other generally accepted data center network topologies beyond the traditional three-layer and leaf-spine options. While they are less common in real-world deployments, they are relevant and well understood.

Multi-tier leaf-spine. One approach to scaling a leaf-spine network horizontally while maintaining an acceptable oversubscription ratio is to add a second vertical leaf layer.
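To make the oversubscription ratio concrete, here is a hedged sketch with hypothetical port counts and link speeds:

```python
# Oversubscription ratio of a leaf switch: bandwidth offered to hosts on its
# downlinks divided by bandwidth available on its uplinks toward the spine.
def oversubscription(host_ports, host_gbps, uplinks, uplink_gbps):
    return (host_ports * host_gbps) / (uplinks * uplink_gbps)

# Hypothetical leaf: 48 x 10 GbE host ports, 4 x 40 GbE uplinks -> 3:1
ratio = oversubscription(host_ports=48, host_gbps=10, uplinks=4, uplink_gbps=40)
```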

Hypercube. A simple 3D hypercube network is really just a cube: a six-sided box with a switch at each corner. A 4D hypercube (aka a tesseract) is a cube within a cube, with each corner switch of the inner cube linked to the corresponding corner of the outer cube. Hosts connect to the switches on the outer cube. An organization needs to understand its application traffic flows in detail to know whether a hypercube topology is worth considering.
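The structure is easy to state in code: label the switches 0 through 2^d - 1 and link two switches whenever their labels differ in exactly one bit (a generic sketch, not tied to any product):

```python
# d-dimensional hypercube: switch labels are d-bit numbers; flipping any single
# bit of a label yields a directly connected neighbor.
def hypercube_neighbors(node, d):
    return [node ^ (1 << bit) for bit in range(d)]

def hypercube_edges(d):
    edges = set()
    for node in range(2 ** d):
        for nbr in hypercube_neighbors(node, d):
            edges.add((min(node, nbr), max(node, nbr)))
    return edges

cube = hypercube_edges(3)       # the plain cube: 8 corner switches, 12 links
tesseract = hypercube_edges(4)  # the 4D case: 16 switches, 32 links
```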

Toroidal. This term refers to any ring-based topology. A 3D torus is a highly structured internetwork of rings. Toroids are popular in high-performance computing environments, and may use switches to interconnect compute nodes.
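A hedged sketch of the 3D case: nodes sit on a k x k x k grid, each linked to its six nearest neighbors, with the links wrapping around at the grid edges to close the rings:

```python
# 3D torus: each node at (x, y, z) connects to six neighbors, one step away
# along each axis; the modulo wraps the grid edges around into rings.
def torus_neighbors(x, y, z, k):
    return [((x + 1) % k, y, z), ((x - 1) % k, y, z),
            (x, (y + 1) % k, z), (x, (y - 1) % k, z),
            (x, y, (z + 1) % k), (x, y, (z - 1) % k)]

corner = torus_neighbors(0, 0, 0, k=4)
# the wraparound means (3, 0, 0) is a direct neighbor of (0, 0, 0)
```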

Jellyfish. The Jellyfish topology is largely random: switches are interconnected at the network designer's discretion. In research studies, Jellyfish designs delivered roughly 25% higher capacity than traditional network topologies.
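A hedged sketch of that random wiring (real Jellyfish builds a random regular graph; this simple retry loop only approximates the idea): each switch reserves r inter-switch ports, and free ports are paired off at random until none remain:

```python
import random

# Jellyfish-style random wiring: pair off free switch ports at random until
# every switch has used all r of its inter-switch ports; restart on dead ends.
def jellyfish(num_switches, r, seed=0):
    rng = random.Random(seed)
    for _ in range(1000):                     # retry whole wirings if stuck
        free = {s: r for s in range(num_switches)}
        edges = set()
        stalls = 0
        while any(free.values()) and stalls < 100:
            open_ports = [s for s, p in free.items() if p > 0]
            if len(open_ports) < 2:
                break                         # one switch left over: dead end
            a, b = rng.sample(open_ports, 2)
            link = (min(a, b), max(a, b))
            if link in edges:                 # already wired; try another pair
                stalls += 1
                continue
            edges.add(link)
            free[a] -= 1
            free[b] -= 1
            stalls = 0
        if not any(free.values()):
            return edges
    raise RuntimeError("could not complete a random wiring")

fabric = jellyfish(num_switches=10, r=3)      # 10 switches, 3 ports each
```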

Scafida. Scale-free, or Scafida, network topologies are somewhat like Jellyfish in their randomness, but paradoxically, structure emerges from that randomness: certain switches end up as densely connected hubs, much the way an airline routes flights through hub airports.
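The hub effect can be sketched with simple preferential attachment (a generic scale-free growth model, not the exact Scafida algorithm): each new switch attaches to an existing switch with probability proportional to that switch's current degree:

```python
import random

# Preferential attachment: the endpoints list holds one entry per unit of
# degree, so a uniform pick from it is a degree-weighted choice -- switches
# that already have many links keep attracting more, becoming hubs.
def scale_free(num_switches, seed=0):
    rng = random.Random(seed)
    edges = [(0, 1)]            # seed the network with a single link
    endpoints = [0, 1]
    for new in range(2, num_switches):
        hub = rng.choice(endpoints)
        edges.append((hub, new))
        endpoints += [hub, new]
    return edges

net = scale_free(50)            # 50 switches joined by 49 links
```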

DCell. Many servers ship with multiple network interface cards (NICs). In a DCell, some of these NICs connect servers directly to one another within a cell, while others interconnect cells through a switch. DCell assumes a server has four or more NICs.

FiConn. Similar to DCell, FiConn uses a hierarchy of server-to-server interconnects and cells, but only assumes two NICs.

BCube. Like DCell and FiConn, BCube uses extra server ports for direct communication, but it is optimized for modular data centers deployed as shipping containers. Microsoft, the power behind BCube, built the BCube Source Routing protocol to manage forwarding across this data center network topology.

CamCube. This topology is effectively a 3D torus running Microsoft's CamCubeOS on top. The purpose is to optimize traffic flow across the torus while it is being used to interconnect clusters of hosts. CamCubeOS assumes that traditional network forwarding paradigms are ineffective in this application and replaces them.

Butterfly. Google's flattened butterfly is a network construct akin to a chessboard: in this grid of switches, traffic can move directly to any other switch in a given dimension. The goal is to reduce power consumption, a great concern of Google's.
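A hedged sketch of the 2D case: switches form a k x k grid, and each links directly to every other switch in its own row and column, so any switch is at most two hops from any other:

```python
# 2D flattened butterfly: full row and column connectivity on a k x k grid of
# switches; any destination is reached in one "row" hop plus one "column" hop.
def butterfly_neighbors(x, y, k):
    row = [(i, y) for i in range(k) if i != x]
    col = [(x, j) for j in range(k) if j != y]
    return row + col

fb = butterfly_neighbors(0, 0, k=8)
# each switch carries 2 * (k - 1) = 14 inter-switch links
```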

Next Steps

Read part one: The case for leaf-spine data center topology

This was last published in September 2014
