IT organizations tinker with data center infrastructure architectures to maximize system performance: centralized versus decentralized, hub-and-spoke versus mesh, client/server versus monolithic system. Now, add core-and-pod.
Core-and-pod designs boost productivity in enterprise data centers. Changing traffic patterns, shifting east-west, are spurring interest in this IT infrastructure architecture. The entire organization will need training and new tools to manage an infrastructure that is non-uniform and segmented by application.
More than half of corporate applications are virtualized, according to Boston-based research firm Aberdeen Group. Traditionally, processing chores flowed from the user to a central server and back. In a virtualized system, processing tasks often move among different components within the data center, increasing east-west traffic. As a result, 70% to 80% of traffic now flows east-west inside the enterprise data center.
Another trend increasing data center traffic is converged systems, which combine server, storage and network functions in one box. Converged infrastructure adoption will push spending on these systems from $2 billion in 2011 to $17.8 billion in 2016, according to research firm International Data Corp. in Framingham, Mass.
Learning from hyperscale data centers
The core-and-pod network design trickled down from massively scalable data center architectures at the world's largest Internet companies. It contrasts with the traditional three-tier enterprise network, in which core systems reside at the center of the data center and devices at the fringe -- routers, switches and storage systems -- consolidate traffic from edge and end devices.
The core-and-pod design places important items in the heart of the data center. These might include a big data warehouse, a large server farm or a centralized storage system. Other pieces of the IT infrastructure -- servers, network connections, storage systems -- move out to autonomous or semi-autonomous pods. Each pod is a pre-established set of data center resources, creating a modular unit of network, compute, storage, power and space resources. A new pod spins up for each new application the company deploys.
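The pod concept described above -- a pre-established, modular bundle of network, compute, storage, power and space resources spun up per application -- can be illustrated with a minimal sketch. All names (`Pod`, `provision_pod`, the resource fields and sizes) are hypothetical, not drawn from any specific vendor's design:

```python
from dataclasses import dataclass

@dataclass
class Pod:
    """A modular unit of data center resources dedicated to one application."""
    name: str
    compute_nodes: int   # servers in the pod
    network_gbps: int    # aggregate network bandwidth
    storage_tb: int      # storage capacity
    power_kw: float      # power budget
    rack_units: int      # physical space

def provision_pod(app_name: str, **resources) -> Pod:
    """Spin up a pre-established resource bundle for a new application."""
    return Pod(name=f"pod-{app_name}", **resources)

# A new pod is provisioned for each new application the company deploys.
crm_pod = provision_pod("crm", compute_nodes=16, network_gbps=40,
                        storage_tb=100, power_kw=12.5, rack_units=42)
print(crm_pod.name)  # pod-crm
```

The point of the sketch is that each pod is defined once as a complete, self-contained unit, rather than carved ad hoc out of shared central systems.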
One possible benefit to this change in IT infrastructure architecture is better scalability. To increase processing power, add more integrated compute stacks. This frees the IT organization from the constraints of any one central system, whether it be servers, network solutions or storage systems.
Core-and-pod designs could increase availability. With fewer central points of failure, the infrastructure is better able to keep compute, network and storage systems up. Each pod implements its own high-availability functions to meet the application's specific needs, rather than being tied to a central system.
Speed of deployment could also improve. New compute modules drop into a network rapidly. IT administrators move more quickly with automated setup functions.
Companies establish more precise service levels when applications rely on separate resource modules. One application can set higher service-level agreement (SLA) requirements based on a business model or organizational hierarchy. The distribution of resources offers more system granularity. For example, pod A has higher compute and network bandwidth capabilities than pod B, while pod B has a higher storage capacity. The objective is to ensure applications within each pod receive the resources needed to meet their subscribed SLAs.
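The pod A versus pod B scenario above can be sketched as a simple SLA check: an application lands in a pod only if that pod meets or exceeds every resource its SLA requires. The pod profiles and SLA fields here are illustrative assumptions, not real capacity figures:

```python
# Hypothetical pod profiles: pod A favors compute and network bandwidth,
# pod B favors storage capacity -- matching the example in the article.
pods = {
    "A": {"compute_cores": 256, "network_gbps": 100, "storage_tb": 50},
    "B": {"compute_cores": 64, "network_gbps": 25, "storage_tb": 500},
}

def meets_sla(pod: dict, sla: dict) -> bool:
    """A pod satisfies an SLA if it meets or exceeds every required resource."""
    return all(pod.get(resource, 0) >= needed for resource, needed in sla.items())

analytics_sla = {"compute_cores": 128, "network_gbps": 50}  # compute-heavy app
archive_sla = {"storage_tb": 200}                           # storage-heavy app

print([name for name, p in pods.items() if meets_sla(p, analytics_sla)])  # ['A']
print([name for name, p in pods.items() if meets_sla(p, archive_sla)])    # ['B']
```

Distributing resources this way gives each application a pod sized to its subscribed SLA instead of contending for one central pool.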
A similar scenario exists for security: Businesses can develop different types of security checks for each pod, putting more stringent measures in place for sensitive information.
The core-and-pod architecture has potential downsides, such as increased complexity. Data center staff members manage a series of resource configurations rather than one central design. This complexity can make troubleshooting connections difficult, especially when a pod accesses data or resources in the core. Management tools need to recognize the particulars of this setup to help technicians pinpoint bottlenecks.
Training is also an issue. IT professionals used to working with traditional data center architectures need instruction on how to set up, run and manage resources with the core-and-pod design. The IT budget, and staff time, might not allow for such comprehensive training.
About the author:
Paul Korzeniowski is a freelance writer who specializes in data center issues. He has been covering IT issues for more than two decades, is based in Sudbury, MA and can be reached at email@example.com.