Containerized data centers: Is a box a good fit?

Data centers in shipping containers caught on when Microsoft deployed a data center in the equivalent of a parking garage. But should you consider this option in your future data center design? Learn what two experts think about data center containers, and how and where they are a good fit for a data center facility.

Traditional data center facilities come with their own array of complexities, customizations and considerations. So it may be hard to imagine taking a shipping-container-like box with server racks and cabling inside, plugging it into a power source and being happy with the result. But that's just what Microsoft did in its 700,000-square-foot Chicago-area data center. Another tech titan, Google, also has containerized data centers. Is this technology the wave of the future, or just a niche solution suited for specific use cases? Some companies are already using the boxes to deal with specific data infrastructure demands.

In this face-off, two data center infrastructure experts explain the merits and drawbacks of containerized or modular data centers.


Christian Belady: Modular data centers: Experience and outlook


Chuck Goolsbee: Stick to stick-built data centers: (in most cases)



Modular data centers: Experience and outlook
By Christian Belady, Principal Infrastructure Architect, Global Foundation Services, Microsoft

When would containerized data centers be better than building or leasing space in stick-built facilities?
One of the struggles the industry is dealing with today is that as the performance of devices rapidly increases and the ubiquity of the Web unfolds, our strategies for building the IT infrastructures that support them need to evolve rapidly too. Today, I call this evolving approach modularization. "Containerization" is too narrow a term and elicits an image of shipping containers.

If you look at our Chicago data center, it did use shipping containers, but that was only our first iteration and was the best solution at the time from a time-to-market perspective. Long term, we are looking at essentially everything but the concrete pad being pre-manufactured and then assembled on site: the IT, mechanical and electrical components are all part of pre-assembled units that we call an "ITPAC."

It makes sense to ensure we are providing the lowest total cost of ownership (TCO) while delivering the infrastructure in the right time to market. Frankly, we are always looking at what drives the lowest TCO, which may or may not be modularization as software and hardware technology continues to evolve. At this point in time, however, we see modularization as a way to commoditize the data center. Instead of custom facilities, we move to more of a supply chain problem. That is the transformation we are seeing: it's really about commoditization as opposed to containerization. The example I give is that it's the equivalent of moving from building a 1912 Rolls-Royce to the mass, cost-effective manufacturing of the Model T. The cost, time and carbon savings are huge.

What advantages do containerized data centers have over stick-built?
Again, let's evolve the conversation to modularization. There are many benefits, and most of them stem from the fact that modules are an integrated system. One could consider a module a hyper-optimized micro data center, or even a single server. The key is that traditional data center designs are somewhat fragmented: the various pieces of equipment (servers, HVAC, power distribution) are all designed assuming worst-case scenarios, so there is significant cost in the design margins. With modularized designs, however, you can build an optimized ecosystem with the right amount of power, cooling and capacity because of the balanced system design. There is a whole slew of other advantages:

  1. Can be deployed at scale: A module can be delivered with 400 to 2,000 servers pre-wired, tested and ready to go in a couple of hours, since all of the testing and networking happened at the factory.
  2. Energy and material conservation: There are no boxes or other packing materials for each server. Servers are delivered installed and ready to go.
  3. Plug and play: Installing a module requires nothing more than a power connection, a water connection (for cooling) and a data connection.
  4. Removes religious wars (such as DC vs. AC power distribution, or liquid vs. air cooling): Vendors are constantly trying to sell their technology, claiming it is more efficient or lower cost. Now the ITPAC module is a data center in itself, and when module vendors compete against each other, they will come up with the best possible solution, and prices will decrease as supply and standardization increase. From Microsoft's perspective, we don't really care whether AC or DC was used, because all that matters to us is that the module connects to standard power, water and data connections. All we care about is who can provide us the most compute most efficiently at the lowest cost. With the three connections, it's easy to measure.
  5. Moves cost from upfront investment to server deployment: One of the big advantages of modular data centers is that you pay as you go, since the power, cooling and IT scale together. This not only delays capital costs but also eliminates the unused capacity that sits idle while a traditional data center fills up, which yields huge cost savings.
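The pay-as-you-go point above can be sketched with some simple arithmetic. This is an illustrative model only; the capacity, module size and dollar figures are invented for the example and are not Microsoft's numbers:

```python
import math

# Hypothetical comparison of up-front vs. modular ("pay as you go")
# capital spending while demand ramps. All figures are invented.

def upfront_capex_timeline(total_capacity_mw, cost_per_mw, years):
    """Entire facility is built in year 0; all capital is paid immediately."""
    return [total_capacity_mw * cost_per_mw] + [0.0] * (years - 1)

def modular_capex_timeline(demand_mw_by_year, module_mw, cost_per_module):
    """Buy just enough modules each year to cover that year's demand."""
    spend, deployed = [], 0
    for demand in demand_mw_by_year:
        needed = math.ceil(demand / module_mw)  # modules required so far
        spend.append((needed - deployed) * cost_per_module)
        deployed = needed
    return spend

# Demand ramps from 2 MW to 10 MW over five years.
demand = [2, 4, 6, 8, 10]
upfront = upfront_capex_timeline(10, cost_per_mw=10e6, years=5)
modular = modular_capex_timeline(demand, module_mw=2, cost_per_module=20e6)

print(upfront)  # [100000000.0, 0.0, 0.0, 0.0, 0.0]
print(modular)  # [20000000.0, 20000000.0, 20000000.0, 20000000.0, 20000000.0]
```

Both timelines spend the same total, but the modular schedule defers most of the capital and never builds capacity far ahead of demand.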

The concept of containerized data centers arose from a need. What need do they fill, and why are they the best solution?
In the case of Chicago, the container approach was driven primarily by the idea that it was the most efficient way to deploy a higher density of computing capacity (servers) quickly, while minimizing carbon waste and increasing ROI. In the case of our Gen 4 modular concept, it's all about time to market, sustainability, efficiency and meeting customer demands. However, these benefits are only there if you build at scale. This is why we always say that modularization is not necessarily the right strategy for everyone, but it is the right strategy for us at this point in time.

Christian Belady is Microsoft's Lead Infrastructure Architect for Global Foundation Services where his role is to improve both efficiency and cost in their online services infrastructure. His responsibilities include driving initiatives for sustainability in the data center and infrastructure space, and he is one of the key architects for the Generation 4 modular data centers. He is a member of the advisory board.



Stick to stick-built data centers (in most cases)
By Chuck Goolsbee, Data Center Manager, digital.forest

Why is building or leasing space in stick-built facilities better than investing in a containerized data center?
The answer can be found in an age-old real estate term: "Build-to-suit."

In my years in the colocation industry I have yet to see any two companies with exactly the same needs from a datacenter project. Literally every installation is different from the one before it and the one after. Sure, there is a subset of fixed variables, but how they mix is new for every installation.

Containerization is a fine solution for some specific data center needs, but there is no real one-size-fits-all solution for data centers. Containers literally come in one size. Maybe two or three if you shop around. Their capacity is rigidly fixed by their external shape. They make sense for a small stand-alone "datacenter in the parking lot", or for huge companies deploying servers like Army divisions invading a continent. Those scenarios are at either end of a very large bell curve.

The rest of us have datacenter needs that lie somewhere else on the continuum. For example, our facilities are used for providing colocation, and our clients' space and power requirements vary dramatically. One could be a large company that needs space for fifty-two cabinets plus adjacent expansion space for a dozen more. Another may be a small start-up that begins with just one cabinet and rapidly grows to twenty-five within two years. In a traditional data center facility these dramatically varying needs can be satisfied simply: cage walls are easy to erect and reconfigure to accommodate growth and change. That growth and change can be measured in far smaller discrete units than what is available in a containerized model, where a datacenter must grow or change in very specific units of measure.
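The granularity argument can be made concrete with a small sketch. The unit sizes here are hypothetical; assume for illustration that a container holds the equivalent of 20 cabinets:

```python
import math

def stranded_capacity(demand_units, increment):
    """Capacity purchased in fixed increments minus what is actually used."""
    purchased = math.ceil(demand_units / increment) * increment
    return purchased - demand_units

# A client needs 26 cabinets' worth of capacity.
demand = 26
print(stranded_capacity(demand, increment=1))    # cabinet-sized growth: 0 idle
print(stranded_capacity(demand, increment=20))   # container-sized growth: 14 idle
```

Growing cabinet by cabinet leaves no idle capacity, while container-sized increments strand whatever fraction of the last container goes unused.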

If your needs lie somewhere between one parking space and the Googleplex, it is likely wiser to spend your capital on traditional data center space. Leasing is the most cost-effective way of acquiring data center space that fits your exact needs today, and can accommodate whatever change the future may bring to your particular installation.

What advantages do stick-built data centers have over containerized modules?
Beyond meeting the "build-to-suit" level of customization, they require far less actual real estate to function. The lowest common spatial denominator of the traditional data center is the server cabinet, which can be installed or repositioned easily by one person. Containers, by contrast, need a lot of space to move around and deploy; a container requires specialized, large-scale heavy machinery and plenty of empty space in order to move at all.

Traditional facilities can be sub-divided far more effectively and efficiently than containers: There are endless variations on how you can design and build a traditional datacenter: Raised floor or slab. Water-based chillers or free cooling. Overhead infrastructure, under-floor, or a mixture of both. Traditional datacenters can be made to fit any need, in almost any space available.

Traditional datacenters can be made to be beautiful as well as functional: Some seriously sexy facilities have been built where appearance was a design priority. No matter how you look at them, containers have the same sex appeal as a grimy rail yard or a rough waterfront. Sure, containers can be painted, but no matter how much lipstick you smear on the pig, nobody but the farmer is going to want to kiss it. The concept of containers is sexy, but the reality of them is anything but.

What do you see as the major weakness of the containerized data center concept, and why don't you think the problem containers conceivably solve (the need for fast, modular, flexible data centers) is worth the effort?
Containerized datacenters solve the very narrow and very specific needs of the edge cases on the bell curve. The vast majority of businesses today don't have the problems that containers solve. However, there is certainly room in the growing datacenter market for a wide array of design solutions, so I do not believe the container model is "not worth the effort." On the contrary, I believe containers are a great idea for solving a very small and specific set of needs. The technology industry should drop the fallacious thinking of one product or design "killing" another. As markets grow and mature, they should evolve toward wider, not narrower, solutions. Rather than view the options as mutually exclusive, we as data center managers need to view them as options to deploy when the situation warrants. There are situations where I would seriously consider a containerized solution, but most of the time a traditional datacenter design will be the most cost-effective choice for the vast majority of companies.

Chuck Goolsbee is an executive at colocation provider digital.forest in Seattle, Wash. He has achieved notoriety blogging about the obsolescence of raised floor in the data center and for threatening to gas server designers from Dell with FM200. He is a regular blogger and a member of the site's Advisory Board.
