The moment a legacy data center was ready for a full rip and replace coincided with the CIO's vision of IT buoying the enterprise's business prospects, not just "keeping the lights on."
The data center at MTS Systems Corp., headquartered in Minnesota, had endured about a decade of additions and changes designed to cut IT costs. The flat roof leaked onto server racks when snowfall was heavy; the single network carrier line ran above ground, exposed to the elements. Cables had gone unmanaged, and servers were five to eight years old.
Two years ago, the company, which builds testing equipment such as wind tunnels for the aerospace industry, realized that it needed to bring the data center and network up to speed to support business better. SearchDataCenter spoke with IT services manager Greg Tupper and senior network engineering lead Chris Anderson about their ambitious and savvy rip and replace plans.
What changed when you decided to revamp the data center?
Greg Tupper: We reorganized the data center to be reliable, agile and more efficient. The first planned upgrade was a big voice over IP [VoIP] system. But the existing network lost one out of every four packets and couldn't handle oversubscription. There was network sprawl and the uplinks were degraded.
Chris Anderson: We wanted to bring together the storage network and regular network, flatten the network and add Internet lines.
Tupper: We added a CenturyLink Self-Healing Alternate Route Protection -- it's called SHARP -- fiber line underground to create a solid ring. We tore up the lawn putting that in, which didn't make the IT team any friends in the facilities department, but it was necessary for the network.
Anderson: We installed 10-Gbit/sec. fiber uplinks, new edge components, a VPN and a firewall. With the 10 Gbit/sec. backbone, we have multiple paths through the data center. We also added a Cisco Nexus 7000 switch to an existing Nexus 5000 in the data center. We put full redundancy into each chassis, increasing reliability without having to buy two 7000s.
We also made changes to the WAN [wide-area network], which was a T1 frame relay WAN that used the public Internet to save costs. Performance and stability were poor, particularly in Asia. We moved to an MPLS-based WAN, with all sites connected to a private global network. Comparing before and after, latency improved by up to 40% on the same bandwidth, and we shaved a second or more off each transaction.
Tupper: There were about 20 legacy racks in the data center filled with five- to eight-year-old Cisco UCS [unified computing system] servers and NetApp storage. Virtualization covered only a small portion of the systems, on an older version of VMware. We wanted to limit the legacy server footprint but still leverage UCS, so we virtualized most of the Windows servers with VMware vSphere and updated the IBM AIX OS to support our SAP application. The servers are now about 85% virtualized, and we can provision new ones quickly. MTS now runs about 40 servers.
In future upgrades, we need to bring more automation into the data center and build a private cloud. By adding Cisco UCS Director, we'll have a FlexPod converged infrastructure.
Did you also make facilities changes? Power and cooling?
Tupper: The facilities team wants to cut energy costs and bring overall spending down. Replacing the older machines already reduced energy use.
The server room is too big for the current infrastructure, so the cooling equipment wastes energy. We have two AC units for the server room and probably don't need that much cooling power. We contained the hot aisle to move air upward. We're also looking into in-rack cooling with refrigerant; I plan to run a three-rack trial with Opticool's product. If it works as we expect, we could shut off one of the AC units. If we expand in-rack cooling to all the critical racks, we could turn off the AC units completely.
Then we have the question, can we use external air? This is Minnesota, and it's cold for much of the year. We're talking with the building's facility managers to see what they recommend in terms of the free cooling method that will work with the building and in this environment.
What about culture changes?
Tupper: The business side of MTS had lost faith in the IT group, so we had to build relationships. It started with the CIO's vision of changing everything for a global company.
We also knew that it was going to get bumpy during the rip and replace and various facility upgrades, and we had to prepare the business side to work through it. The IT team made complex plans to mitigate risk, but we still had to let engineers and managers know what was happening and why.
You know, the new leadership that came into MTS said, "We can't compete in the market with manual, broken-down IT." The company launched a lot of business-side initiatives, such as onboarding Salesforce.com's cloud-based Software as a Service and automation apps. A lot of customization goes into MTS's products, and updating our customer-facing apps made order fulfillment faster and more accurate. This is an ongoing initiative.
Part of this is using plain English to talk with our employees. They're very smart -- engineers and R&D people who have been in this industry for decades -- but we need to speak the same language. 'This is your email. This is your backup,' etc. We can't just talk technical mumbo jumbo and expect them to correlate it to something that makes sense. We're also looking into implementing a showback IT model.
The whole time, we need to communicate with the business. If we just roll things out like a bulldozer, they will hit pain points. They have to see the end goal, and [we] have to show the business the value of the data center: flexibility, reliability and speedy response times. Maybe we can outsource some tasks, like help desk calls for password resets. Outsourcing can add cost but gives us flexibility, and our staff can spend their time on better support. We also have a huge cell-phone fleet, so staff could focus on MDM [mobile device management].