As with any local data center environment, administrators must continuously patrol the infrastructure to prevent issues and optimize performance. There are several considerations to keep in mind when managing remote data centers.
Planning for a distributed environment deployment
Whether you opt for cloud or physical colocation facilities, remote data centers require careful planning. Even if a remote data center only hosts a testing environment, project managers must include the appropriate teams to help build and drive the project. Active Directory authentication, storage area network (SAN) allocation and other vital resources may be spread across various teams. Without proper planning, an environment may launch successfully but end up mismanaged.
In cases where planning excludes specific team members, administrators may be unaware of potential inefficiencies. Excluding the security team just because the environment is deemed "non-critical" is one example. Since access to corporate workloads is delivered over the WAN, a high level of preparedness must be in place. Even in a low-priority environment, all necessary considerations and planning steps must be taken.
If an organization plans to use a service provider with a pay-as-you-go data center model, it should take the time to develop an appropriate service-level agreement (SLA). This is a crucial part of the planning process that is often overlooked. Understand uptime requirements, service metrics and how overages will be charged. The key is to make sure the environment makes sense and works in favor of the organization, not the service provider.
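To put uptime figures in concrete terms, the back-of-the-envelope sketch below converts an SLA uptime percentage into a monthly downtime budget. The percentages shown are hypothetical examples, not recommended targets.

```python
# Sketch: convert an SLA uptime percentage into allowable downtime.
# The uptime figures below are hypothetical examples only.

def allowed_downtime_minutes(uptime_pct: float, period_hours: float = 30 * 24) -> float:
    """Return the downtime budget, in minutes, over one period (default: a 30-day month)."""
    return (1 - uptime_pct / 100) * period_hours * 60

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {allowed_downtime_minutes(pct):.1f} minutes of downtime per month")
```

A "three nines" SLA still permits over 40 minutes of monthly downtime, which is why the article stresses understanding uptime requirements before signing.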
WAN management for a distributed environment
Every organization’s distributed data center plan must consider WAN connectivity and its impact on the environment. Since each organization’s goals are unique, bandwidth requirements will depend on the needs of the infrastructure. When working with a WAN provider, administrators must understand the limitations and caveats of their WAN SLA to ensure optimal performance.
Prior to engaging a WAN provider, it’s important to understand the demands of the distributed environment by testing existing workloads and their bandwidth requirements. Consider SQL clusters, SAN-to-SAN replication and application networking when determining the right amount of bandwidth between locations. While no two environments are the same, there are some best practices to follow for respective site types.
Major data center: This is a central computing environment with major infrastructure components. Hundreds, even thousands, of users connect to this environment. Very high bandwidth and very low latency are needed. WAN links at this type of site demand Multiprotocol Label Switching (MPLS), optical circuits or carrier Ethernet services.
Distributed branch or colo data center: This is usually a smaller, but still sizeable, environment used to house vital secondary systems. This type of data center requires moderate bandwidth, with a possible need for low latency, using MPLS or a carrier Ethernet service.
Small data center for disaster recovery or testing: This is a small branch data center with a few components, typically used for testing and development or for smaller disaster recovery (DR) purposes. This environment calls for low bandwidth but may still need low latency and the option for mobility. The recommendation here is MPLS over T1/DSL, broadband wireless options or Internet VPNs.
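As a rough illustration of the bandwidth-sizing exercise described above, this hypothetical sketch estimates the minimum WAN link speed needed to complete SAN-to-SAN replication within a nightly window. The change rate, replication window and overhead factor are assumptions for the example, not measured values.

```python
# Sketch: estimate the minimum WAN bandwidth needed for SAN-to-SAN replication.
# All figures (daily change rate, window, protocol overhead) are hypothetical.

def required_mbps(daily_change_gb: float, window_hours: float,
                  protocol_overhead: float = 1.2) -> float:
    """Megabits per second needed to replicate daily_change_gb within window_hours."""
    megabits = daily_change_gb * 8 * 1000      # GB -> megabits (decimal units)
    seconds = window_hours * 3600
    return megabits * protocol_overhead / seconds

# e.g. 200 GB of changed data replicated overnight in an 8-hour window
print(f"Minimum link: {required_mbps(200, 8):.1f} Mbps")
```

Running the same calculation against measured change rates for SQL clusters and application traffic gives a defensible starting point for provider negotiations.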
Selecting distributed environment management tools
Resource management has become easier with better tools and the superior visibility they provide into a local environment. These tools should be tuned to help administrators head off issues before they become meltdowns.
There are two ways to look at remote data center resource management: controlled and service-driven. In a controlled situation, the organization takes responsibility for managing the environment. Engineers use management tools to observe and act based on the needs of the remote data center infrastructure. In these situations, administrators are able to use tools available from their native hypervisor platform.
For example, Citrix XenServer ships with XenCenter, which provides excellent visibility into either a local or remote data center. Similarly, VMware vCenter can examine multiple data centers from within its console or from the vCenter Web Client. Administrators can also use third-party tools, such as those from up.time, or tools provided by the remote data center hosting facility.
In a service-driven situation, the organization outsources its data center to third-party vendors, such as Savvis/CenturyLink, Ubistor and Equinix, and the client manages its own resources. Often, in a hosted environment, organizations can leverage existing resource monitoring tools as allowed by their contract. Other times, administrators are simply renting the server hardware; in these cases, everything from the hypervisor through the workload is managed by the customer. Depending on the type of contract and solution, the options will vary as to which tools to leverage. In this scenario, it’s important to monitor existing workloads because service provider contracts will have stipulations on overages in RAM, CPU and WAN usage. Proper workload and VM management will prevent these additional costs.
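To illustrate the kind of overage watch described above, here is a minimal, hypothetical sketch. The contract limits, warning margin and sample figures are invented for the example; a real script would pull usage data from the provider's or hypervisor's monitoring API rather than a static dictionary.

```python
# Sketch: flag resources whose sampled usage approaches contracted limits,
# so overage charges can be avoided. Limits and samples are hypothetical.

CONTRACT_LIMITS = {"ram_gb": 64, "cpu_pct": 80}  # hypothetical contract terms
WARN_FRACTION = 0.9                               # warn at 90% of the limit

def check_overages(samples: dict) -> list:
    """Return (metric, value, limit) tuples at or above the warning level."""
    warnings = []
    for metric, value in samples.items():
        limit = CONTRACT_LIMITS.get(metric)
        if limit is not None and value >= limit * WARN_FRACTION:
            warnings.append((metric, value, limit))
    return warnings

# Example: a nightly usage report for one hosted workload
usage = {"ram_gb": 60, "cpu_pct": 55}
for metric, value, limit in check_overages(usage):
    print(f"WARNING: {metric} at {value} is near the contracted limit of {limit}")
```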
Managing the distributed environment with native virtualization tools
We’ve considered the importance of planning a remote environment and managing its resources. Next, let’s look at how this can be accomplished. Modern data center design has grown to rely heavily on virtualization, and hypervisor technology has matured; native hypervisor tools now provide powerful features and granular visibility into an environment.
Let’s assume a business has a remote data center running on the XenServer 6.0 platform. This organization uses the remote environment as a testing and development platform and owns the infrastructure located in the cloud. Administrators monitor performance and resource utilization. At some point, an administrator may come across a report like the one in Figure 1.
Figure 1: XenServer 6 demo environment located offsite. Performance monitoring displays CPU, memory and networking utilization. Administrators can customize these graphs to display other statistics.
Figure 1 shows that, over a span of about 30 minutes, one of the physical hosts reached maximum RAM capacity. Unchecked, this could have been a serious issue, but the administrators had a plan with proper alerts set up. Alerts can be set per host or per VM, depending on the needs of the environment, and should also monitor storage, CPU and networking.
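XenCenter configures these alerts through its GUI; as a generic, hypothetical illustration of the same per-host or per-VM threshold logic, consider the sketch below. The threshold values and sample metrics are assumptions, and a real deployment would read live metrics from the hypervisor's API.

```python
# Generic sketch of per-host / per-VM alert evaluation.
# Threshold values and the sample data are hypothetical.

THRESHOLDS = {"memory_pct": 90, "cpu_pct": 85, "storage_pct": 80, "network_mbps": 900}

def evaluate_alerts(entity: str, metrics: dict) -> list:
    """Return alert strings for any metric at or above its threshold."""
    return [
        f"ALERT [{entity}] {name}={value} (threshold {THRESHOLDS[name]})"
        for name, value in metrics.items()
        if name in THRESHOLDS and value >= THRESHOLDS[name]
    ]

# One host at maximum RAM, as in the Figure 1 scenario
print(evaluate_alerts("host-01", {"memory_pct": 100, "cpu_pct": 40}))
```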
In the next example, a company uses VMware’s vSphere 5 infrastructure to manage its remote data center. Much like XenServer, vSphere has a powerful tool set capable of monitoring and controlling distributed data center resources. To make remote management easier, vSphere’s Web Client tool, shown in Figure 2, offers granular control.
Figure 2: vCenter 5 Web Client connected to a remote data center. VMs can be managed directly from the Web Client, creating a versatile, remotely managed environment.
Administrators using tools like vCenter 5 Web Client can see all the VMs running in the pool and data center. From there, administrators can perform numerous management tasks, including VM maintenance using live migration, as shown in Figure 3. The integrated tools of the vSphere Web Client allow administrators to perform functions as if they are sitting at the console.
Figure 3: vCenter 5 Web Client has the ability to fully manage a virtual environment. In this scenario, an administrator is able to migrate VMs between resource pools to most effectively utilize environment resources.
As you see in Figure 4, the alerts in vCenter 5 are detailed and can notify the appropriate administrator quickly. These alerts help staff respond to issues before they become major problems.
Figure 4: VMware vSphere 5 vCenter alert management platform is able to set up different types of alerts to help administrators monitor for specific events.
Extending enterprise monitoring software to the distributed environment
Another remote data center management option is to leverage existing in-house enterprise monitoring software, much of which can be configured to span multiple sites.
One such tool is up.time 5, which takes a “single pane of glass” approach by unifying management tools and providing a global view of an entire infrastructure. Administrators can drill down to see all of the necessary components to make sure the environment remains healthy. Figure 5 illustrates an example of an up.time 5 global scan over 24 hours.
Figure 5: up.time 5 granularly examines remote data centers and displays how resources are being used. This screenshot displays one remote location and its current resource usage, as well as statistics from the last 24 hours.
The up.time 5 tool set also allows administrators to monitor multiple sites as shown in Figure 6.
Figure 6: With up.time 5 Enterprise Monitoring Software, administrators are able to see the service status of remote data centers from a single pane of glass. This type of visibility may be necessary for multi-national data center environments.
Third-party tools can expand the capabilities of a distributed environment. Highly dispersed data center environments require this type of granular visibility where native hypervisor and monitoring tools fall short. Since every environment is unique, IT managers must decide which approach is best for their distributed data center infrastructure, basing the choice of tools on the goals of the environment. For example, if an organization must manage WAN quality of service (QoS), a hypervisor GUI may not be enough. In these cases, look to other tools that provide the required visibility into the WAN infrastructure. Certain types of traffic may need to be prioritized for optimal efficiency, so make sure the selected tools cover these scenarios.
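As a simplified, hypothetical illustration of that traffic prioritization, the sketch below maps traffic classes to DSCP-style priority values and orders flows accordingly. The class names and markings are illustrative, not a recommended policy; an actual QoS tool would enforce this at the network layer.

```python
# Sketch: a simple traffic-class priority map of the kind a WAN QoS tool
# would enforce. Class names and DSCP-style values are illustrative only.

PRIORITY_MAP = {
    "san_replication": 46,    # latency-sensitive, EF-style marking
    "vm_live_migration": 34,
    "application": 26,
    "bulk_backup": 10,        # lowest priority, can tolerate delay
}

def classify(flows: list) -> list:
    """Order flows so higher-priority traffic classes come first."""
    return sorted(flows, key=lambda f: PRIORITY_MAP.get(f, 0), reverse=True)

print(classify(["bulk_backup", "san_replication", "application"]))
```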
Insight is the key to successful remote data center management
Regardless of the type of distributed infrastructure, administrators must always have a handle on data center resources. As environments evolve, IT managers need to learn which tools will work best to keep the organization efficient and properly tuned.
ABOUT THE AUTHOR: Bill Kleyman, MBA, MISM, is an avid technologist with experience in network infrastructure management. His engineering work includes large virtualization deployments as well as business network design and implementation. Currently, he is the Virtualization Architect at MTM Technologies Inc. He previously worked as Director of Technology at World Wide Fittings Inc.
This was first published in February 2012