
Three novel approaches to building a data center

Factors from PUE and build time to accessibility and amenities will influence your data center design and construction decisions. Here are three routes to consider.

Data center building options have diversified to accommodate a range of locations, business needs, power supplies and expansion.

These different approaches to building a data center each demonstrate characteristics suited to a particular use. The traditional facility meets modern IT needs with expandability and flexibility at its heart, while containers and modular prefabricated units enable data centers in new locations, industrial and wild.

An IT pied-à-terre without footprints

The full modular approach was the right call for a data center located in the woods in Greece, deployed as a backup location for a government agency in Athens. The Louros Project facility had strict orders not to disrupt the ecological surroundings of the river for which it is named, said Ioannis Noulis, business unit director at LAMDA Hellix, which ran the project.

The build adheres to a minimal footprint, packing 14 kW racks at a target utilization of 100% into a 400-square-meter (around 4,300-square-foot) facility. It draws power from hydroelectric dams on the river to containerized uninterruptible power supplies (UPSes), and operates with hot aisle containment and a special cooling system that pulls water from the river without disturbing the fish. Underground water supplements the river for cooling in the hottest months.

Because the site is located far from population centers, everything from the servers to the filtration systems must operate without human intervention. Creature comforts were unimportant; small footprint, blending into the environment, and operating reliably on its own were musts.

Building management and data center infrastructure management systems ensure the facility meets its green mandates: a power usage effectiveness (PUE) of around 1.18 and operation on renewable power, both of which are called for in the data center's service-level agreements.
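PUE is simply the ratio of total facility power to the power consumed by IT equipment, so a PUE of 1.18 means only 18% overhead beyond the IT load. The sketch below illustrates the calculation; the 236 kW and 200 kW figures are hypothetical examples, not measurements from the Louros facility.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    A value of 1.0 would mean every watt goes to IT gear; real facilities
    are always above 1.0 because of cooling, UPS losses, lighting, etc.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical example: 236 kW drawn by the whole site for 200 kW of IT load
print(round(pue(236.0, 200.0), 2))  # 1.18
```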

While conventional power hookups are available, the facility has not needed to use them, said Alexandros Bechrakis, director of commercial operations for the data center company.

The project took about six months to deploy and will serve as proof of concept for other modular data centers.

"From our experience, it would take about twice as long to build a traditional data center, and even longer with the additional permits and site location scouting," Bechrakis said. "For this project, [a] containerized data center was the only viable option."

Cost savings weren't a consideration in this scenario -- the modular approach instead enabled a build that otherwise would have consumed too much space and too many resources to be viable. Unlike standardized prefab units, the containers were custom-built to meet the particular demands of the project, whose cost is estimated at 2.5 million euros (about 3 million U.S. dollars).

Traditional building; nontraditional build

Another option for new capacity is to overhaul brownfield space. Colocation provider Keystone NAP, rather than build a data center facility, occupied a 60,000-square-foot former U.S. Steel mill building outside of Philadelphia (see Figure 1) and began installing modular IT capsules on three floors.

"The existing facility -- the size of the beams, thickness of the concrete and the steel in the building -- can't be matched in a greenfield build today," said Shawn Carey, a co-founder of Keystone. Power comes from three existing grid feeds, and cooling from river water and an underground aquifer at the site.

Figure 1. The Keystone NAP data center was once a steel mill complex.

Instead of installing data halls in the building, Keystone NAP treats the space as a "chassis" for modular prefabricated units made by Schneider Electric in 22- or 44-rack (100 kW to 400 kW) increments, the smaller of which takes up just under 1,000 square feet (see Figure 2). Keystone expects most users to remotely manage servers in the racks, although they can access the physical systems when necessary. As the colocation center scales up, the company will connect more of these modular units to power, network and cooling. Different modular units can operate to different service-level agreements.

Figure 2. Keystone's KeyBlock modular data center blends conventional IT setup with space- and time-saving prefabrication.

Prefabricated units aren't the same as the "shipping container" modular data centers most IT folks have seen, Carey said. While self-contained and operational, with isolated cooling infrastructure, UPS, fire suppression and security, the units are not meant to be mobile or dropped out in the elements. The approach simply saves materials, increases segregation and privacy, and improves design-to-build time to about three months for the IT space. Prefabrication cuts costs by about 14% compared with stick-built construction, according to research firm DCD Intelligence.

A new twist on a classic

Data protection and colocation provider Iron Mountain Inc.'s facility in Northborough, Mass., is designed to give enterprise IT teams the same sense of control and closeness to equipment that on-premises facilities offer. This includes areas to rack and stack or repair IT equipment outside of the data hall, as well as a break room and shower facilities for IT teams on a prolonged visit (see Figure 3). Other details, like integrated real-time asset inventory from delivery to installation, reinforce the customer's sense of ownership over operations and the facility.

Figure 3. A break room and other amenities make Iron Mountain's data center feel like it belongs to the colocation customers, mimicking traditional on-site builds.

"Any deployments below six megawatts should really go to colocation centers," said Sam Gopal, director of product management at the company, referring to the PUE and upgrades that colocation facilities offer, in addition to 24/7 operations. "But colocation providers aren't serving enterprises as well as they could be."

Figure 4. The data halls are designed for easy replication, cutting materials waste and construction time.

Although it is a traditional "stick-built" facility, the design is infused with modularity and repeatability for scaling up as demand increases. The 10,000-square-foot, 1.2-megawatt data hall is all white space, with mechanical, cooling and electrical operations kept outside to make room for IT equipment (see Figure 4). Twelve-inch concrete walls wrap the space. Power lines and other piping are color-coded, including the redundant power feeds.

The entire campus took just under 12 months to build. Iron Mountain designed it with expansion in mind, enabling duplicate data halls to spin up within six months without affecting operations in the existing space. Sophisticated temperature controls keep the PUE flat at around 1.5 regardless of how much capacity is filled. With raised-floor cooling, the space can handle up to 20 kW per rack without hot/cold aisle containment, enabling more flexible layouts.
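As a rough sanity check on those numbers, dividing the hall's 1.2 MW capacity by the 20 kW per-rack maximum bounds how many racks could run at full density. The figures come from the article; the arithmetic itself is just illustrative, since real layouts also depend on redundancy, cooling and floor space.

```python
HALL_CAPACITY_KW = 1200  # 1.2-megawatt data hall
MAX_RACK_KW = 20         # maximum per-rack draw without containment

# At full 20 kW density, the power budget supports at most this many racks.
full_density_racks = HALL_CAPACITY_KW // MAX_RACK_KW
print(full_density_racks)  # 60
```

At more typical mixed densities, the same budget of course spreads across many more, lower-draw racks.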
