
Meet the application-minded IT infrastructure architecture

A core-and-pod architecture segments the data center's resources by application. Could your IT organization benefit from the scalability and automation that pods promise?

IT organizations tinker with data center infrastructure architectures to maximize system performance: centralized versus decentralized, hub-and-spoke versus mesh, client/server versus monolithic system. Now, add core-and-pod.

Core-and-pod designs can boost productivity in enterprise data centers. Changing traffic patterns, which are shifting to east-west flows, are spurring interest in this IT infrastructure architecture. The entire organization will need training and new tools to manage infrastructure that is non-uniform and segmented by application.

More than half of corporate applications are virtualized, according to Boston-based research firm Aberdeen Group. Traditionally, processing chores flowed from the user to the central server and back. With a virtualized system, processing tasks often move among different system components in the data center, increasing east-west traffic. Therefore, 70% to 80% of traffic now flows east-west inside the enterprise data center.

Another trend increasing data center traffic is converged infrastructure, which combines server, storage and network functions in one box. Converged infrastructure adoption will push spending on these systems from $2 billion in 2011 to $17.8 billion in 2016, according to International Data Corp. (IDC) in Framingham, Mass.

Learning from hyperscale data centers

The core-and-pod network design trickled down from massively scalable data center architectures at the world's largest Internet companies. It contrasts with the traditional three-tier enterprise network, in which core systems reside at the data center and devices at the fringe (routers, switches, storage systems) consolidate information from edge and end devices.

N-tier data design
Figure 1: N-tier data design

The core-and-pod design places important items in the heart of the data center. These might include a big data warehouse, a large server farm or a centralized storage system. Other pieces of the IT infrastructure -- servers, network connections, storage systems -- move out to autonomous or semi-autonomous pods. Each pod is a pre-established set of data center resources, creating a modular unit of network, compute, storage, power and space resources. A new pod spins up for each new application the company deploys.
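The pod described above can be sketched as a simple data structure: a fixed bundle of network, compute, storage, power and space resources dedicated to one application. This is a minimal Python illustration; the field names and figures are hypothetical, not drawn from any vendor's specification.

```python
from dataclasses import dataclass

@dataclass
class Pod:
    """A pre-established, modular unit of data center resources."""
    name: str
    application: str    # each pod serves one application
    compute_cores: int
    storage_tb: int
    network_gbps: int
    power_kw: float
    rack_units: int     # physical space allotted to the pod

# A new pod spins up for each new application the company deploys.
crm_pod = Pod("pod-a", "crm", compute_cores=256, storage_tb=40,
              network_gbps=40, power_kw=12.0, rack_units=42)
analytics_pod = Pod("pod-b", "analytics", compute_cores=128, storage_tb=200,
                    network_gbps=10, power_kw=10.0, rack_units=42)
```

Because each pod is self-contained, scaling out means stamping out another instance of this unit rather than upgrading a shared central system.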

One possible benefit to this change in IT infrastructure architecture is better scalability. To increase processing power, add more integrated compute stacks. This frees the IT organization from the constraints of any one central system, whether it be servers, network solutions or storage systems.

Core-and-pod designs could increase availability. With fewer central points of failure, the infrastructure is better able to keep compute, network and storage systems up. Each pod implements its own high-availability functions to meet the application's specific needs, rather than being tied to the central system.

Speed of deployment could also improve. New compute modules drop into a network rapidly. IT administrators move more quickly with automated setup functions.
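The automated-setup idea can be sketched as cloning a pod template and assigning it to a new application. This is a hypothetical illustration of the workflow, not any particular orchestration tool's API.

```python
def deploy_pod(app_name, template):
    """Sketch of automated pod setup: clone a resource template for a new app."""
    pod = dict(template)                # start from the pre-established resource set
    pod["application"] = app_name
    pod["status"] = "provisioning"
    # In practice, orchestration tooling would configure the pod's
    # network, compute and storage here before bringing it online.
    pod["status"] = "online"
    return pod

# Hypothetical standard pod template.
TEMPLATE = {"compute_cores": 128, "storage_tb": 50, "network_gbps": 10}

new_pod = deploy_pod("hr-portal", TEMPLATE)
```

Because the resource set is pre-established, the per-application work shrinks to filling in the template, which is what makes automated, rapid deployment plausible.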

Core-and-pod design
Figure 2: Core-and-pod design

Companies establish more precise service levels when applications rely on separate resource modules. One application can set higher service-level agreement (SLA) requirements based on a business model or organizational hierarchy. The distribution of resources offers more system granularity. For example, pod A has higher compute and network bandwidth capabilities than pod B, while pod B has a higher storage capacity. The objective is to ensure applications within each pod receive the resources needed to meet their subscribed SLAs.
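The pod A/pod B example above can be expressed as a simple SLA check: each pod's provisioned resources are compared against the minimums its application subscribed to. The pod names, resource figures and SLA thresholds below are illustrative assumptions.

```python
# Provisioned resources per pod: pod-a favors compute and bandwidth,
# pod-b favors storage capacity (mirroring the example in the text).
PODS = {
    "pod-a": {"compute_cores": 256, "network_gbps": 40, "storage_tb": 40},
    "pod-b": {"compute_cores": 128, "network_gbps": 10, "storage_tb": 200},
}

# Hypothetical subscribed SLA minimums per pod.
SLAS = {
    "pod-a": {"compute_cores": 200, "network_gbps": 25, "storage_tb": 30},
    "pod-b": {"compute_cores": 100, "network_gbps": 10, "storage_tb": 150},
}

def meets_sla(pod_name):
    """True if every provisioned resource meets the pod's subscribed minimum."""
    pod, sla = PODS[pod_name], SLAS[pod_name]
    return all(pod[resource] >= minimum for resource, minimum in sla.items())
```

Keeping resources and SLAs per pod is what gives the design its granularity: each check is local to one application's module rather than to a shared central system.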

A similar scenario exists for security: Businesses can develop different types of security checks for each pod, putting more stringent measures in place for sensitive information.
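Per-pod security policies can be sketched the same way: a policy table keyed by pod, with stricter controls where the data is sensitive. The control names and pod assignments below are hypothetical.

```python
# Hypothetical per-pod security policies: stricter controls for sensitive data.
POD_SECURITY = {
    "pod-a": {"encryption_at_rest": True, "mfa_required": True,  "audit_logging": True},   # e.g., customer records
    "pod-b": {"encryption_at_rest": True, "mfa_required": False, "audit_logging": False},  # e.g., public web content
}

def required_controls(pod_name):
    """Return the names of the security controls enabled for a pod."""
    return sorted(c for c, enabled in POD_SECURITY[pod_name].items() if enabled)
```
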

The core-and-pod architecture has potential downsides, such as increased complexity. Data center staff members manage a series of resource configurations rather than one central design. This can complicate troubleshooting, especially when a pod accesses data or resources in the core. Management tools need to recognize the particulars of this setup to help technicians pinpoint bottlenecks.

Training is also an issue. IT professionals used to working with traditional data center architectures need instruction on how to set up, run and manage resources with the core-and-pod design. The IT budget, and staff time, might not allow for such comprehensive training.

About the author:
Paul Korzeniowski is a freelance writer who specializes in data center issues. He has been covering IT issues for more than two decades, is based in Sudbury, MA and can be reached at

Next Steps

New IT demands call for new infrastructure architecture

Another kind of data center topology

The Open Compute Project's better architecture planning strategies



What IT infrastructure architecture is currently deployed in your data center?
FlexPod is the architecture currently deployed in my data center, which is based in central New Mexico. I have found that this platform integrates critical business applications through cost-effective and non-disruptive operations. As my business and data needs have expanded, I have discovered how versatile FlexPod is in terms of scaling. Expanding my virtual environments through this converged infrastructure has paid for the management platform many times over. I would recommend FlexPod for any size of data center, but research other available options to make a well-informed purchasing decision.