C7 Data Centers, a colocation provider, recently constructed a 95,000-square-foot data center and office complex in Utah, three miles from the National Security Agency's new data center. C7 builds its data centers in Utah, including the new Granite Point II Data Center, because the desert crescent geography is safe from natural disasters, Utah has strong connectivity and public infrastructure, land and labor are relatively inexpensive in the region, and power costs are low. The cold desert's low humidity enables C7 to use ambient data center cooling nine months out of the year.
Here, Wes Swenson, CEO of C7, discusses the details of a modern data center build out, how colocation compares to cloud services, industry trends, and some considerations for choosing a colocation provider.
Why would colocation consumers put their data center equipment in Utah rather than at proximity sites 30 to 50 miles away from the business?
Swenson: Latency used to be an issue -- in the earlier days of data centers -- for locating equipment more than 30 to 50 miles away. Today, dark fiber and cloud computing eliminate the latency concern in most cases. The connectivity and software are available to manage data centers remotely much better than before. The mindset has changed regarding colocation in the past 15 years.
Is power consumption less of a concern for your new data centers since electricity is less expensive in Utah than in other parts of the country?
Swenson: We do consider power usage, including the cold air cooling we can achieve from the desert environment. We built a high-density data center with cold-row containment cooling based on the server load. We can cool a rack that uses 50 kW to 60 kW of power.
Thanks to the cold air and Utah's abundant power, we run at 30% to 40% lower operating costs than a typical colocation data center.
How are IT spending trends changing?
Swenson: Growth in compute use is much higher than the growth in revenue, generally. Companies are rethinking what they spend on servers, routers and other hardware.
Also, companies that didn't think they needed IT three years ago see that they do need it today. Big data analytics can give companies an edge, for instance.
Smaller companies go to the cloud when they realize that they need IT. There are no capital expenditures, such as $25,000 to $30,000 for an Exchange Server.
If companies reach a benchmark of about $100,000/month in cloud costs though, they move to a colocation data center. 'We're spending this much money and we don't control the hardware, the systems?' That's the tipping point usually.
With cloud providers, such as Amazon, you're paying for the convenience and sometimes for features you don't need. Outsourcing to public clouds is not necessarily cheaper than owning the hardware, even though you don't have to pay Capex ...
Many customers don't have the IT skillset they need to support business initiatives. They rely on managed services, not just hardware support. That's why C7 bought a managed services provider last year.
Where are the biggest areas of IT spending?
Swenson: A good portion of colo customers used to own and maintain their own data centers. But if it was built 10 years ago, today that data center is a relic. For example, a 12-inch raised floor was more than adequate when that data center was constructed, but we build with 36-inch raised floors today.
Colos are more prepared for future trends in computing because we can gather intelligence from a wider range of industries and business sizes. We see trends from 400 customers, not just one.
The required footprint of an in-house data center is also hard to anticipate. A data center planned in 2008 will be vastly too large for today's needs, thanks to virtualization's effect on server consolidation. On the other hand, if you plan and build for 5,000 square feet, but end up needing 6,000 square feet, the costs will skyrocket.
Colos also buy their networking, cooling and power in bulk, which saves money over internal IT. Data centers work better at the scale of a colo. Up until 2008, you built a data center to be 10,000 square feet. Today, you can manage a bigger data center with the same resources, thanks to monitoring software and predictive analysis that identify growth areas and problems.
Is there a common mistake that a lot of IT teams make in the data center?
Swenson: Many companies overestimate their power needs based on the nameplate ratings they find on their servers.
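The gap Swenson describes can be sketched with a quick calculation. A nameplate rating reflects the power supply's maximum, while measured draw under real workloads is usually a fraction of that; the 0.55 utilization factor, 800 W nameplate and 40-server rack below are illustrative assumptions, not C7 figures.

```python
# Sketch: why sizing power from nameplate ratings overstates real needs.
# All figures here are hypothetical examples, not vendor or C7 data.

def estimated_rack_draw_kw(servers, nameplate_watts, utilization=0.55):
    """Rough rack power estimate applying a measured-utilization factor
    to the nameplate (maximum) rating of each server."""
    return servers * nameplate_watts * utilization / 1000

NAMEPLATE_W = 800  # watts per server (assumed)
SERVERS = 40       # servers per rack (assumed)

nameplate_kw = SERVERS * NAMEPLATE_W / 1000         # sized off the labels
actual_kw = estimated_rack_draw_kw(SERVERS, NAMEPLATE_W)
print(f"nameplate: {nameplate_kw:.1f} kW, estimated actual: {actual_kw:.1f} kW")
# → nameplate: 32.0 kW, estimated actual: 17.6 kW
```

Provisioning 32 kW of power and cooling for a rack that realistically draws under 18 kW is the kind of overestimate the nameplate figures invite.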
We see colo customers making good decisions on storage, database equipment and similar purchases. But they should distinguish critical data from less important data, and treat the two groups differently. Certainly, bifurcate mission-critical data, but maybe you don't need to for tier-II and tier-III information. Some businesses shouldn't pay a premium for storage of non-critical, two-year-old data; it can tolerate uptime below "five 9s" (99.999%). That cuts data storage costs: You can cut costs in half by eliminating one or two nines -- bringing uptime for old data to 99.99% or 99.9%.
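The "nines" trade-off above translates directly into allowed downtime per year, which is where the cost difference comes from. A minimal sketch of that arithmetic:

```python
# Annual downtime permitted at each availability level ("nines").
# Relaxing old, non-critical data from five nines to three nines
# loosens the redundancy requirements by two orders of magnitude.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct):
    """Minutes of allowed downtime per year at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("five 9s", 99.999), ("four 9s", 99.99), ("three 9s", 99.9)]:
    print(f"{label}: {downtime_minutes_per_year(pct):,.1f} min/yr")
# → five 9s: 5.3 min/yr
# → four 9s: 52.6 min/yr
# → three 9s: 525.6 min/yr
```

Five nines allows roughly five minutes of downtime a year; three nines allows almost nine hours, which a two-year-old archive can usually absorb.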
Do you see any long-term trends developing?
Swenson: With application-layer failover and high data redundancy, I think we're on the path to self-healing clouds. When that happens, software will displace power redundancy as the guarantee of workload availability. The high cost of power redundancy -- batteries, backup power sources -- will dissipate.
Overall understanding of compute and the capabilities of the data center has increased greatly in the past five years. Businesses have learned what they don't want, and need help getting in place the infrastructure to support what they want to achieve. Customers usually come to colo with an experience of, 'We ran out of XYZ when we needed it,' or a similar story. They want experts, service levels and accountability to support operations.