Facebook's Forest City, North Carolina data center isn't "cool" just because of the outside-air economizer or the colorful local art in the offices. The cool part is the Open Compute Project racks, servers and other equipment -- all of which could be in your data center in the future.
"Open source is the way to go," said Keven McCammon, data center manager for the location. "We've seen that in the software side of IT -- you get more out of it than you individually put in."
Open Compute Project (OCP) servers are standardized and stripped down to the necessities for easier repair and cooling -- no vanity cases here -- but that doesn't mean every row of racks is exactly the same. There are several generations of Open Compute racks in the data halls. Facebook refreshes IT hardware roughly every three years, depending on application and capacity needs.
"Open Compute servers are easier to cool and [are] built to tolerate humidity," McCammon said.
The cold aisle averages 83 degrees Fahrenheit and relative humidity is around 65%. The hot aisle can reach 120 degrees Fahrenheit.
The architecture is also built for failure. During a Facebook tour this month, McCammon popped off the network cables on a server and pulled it out of the rack. Any application attempting to use the resources on that blade simply reroutes to an available server. Even if an entire cluster were to go down, end users would probably not notice any lag, according to one employee at the data center.
"[Web traffic routing servers] can take a failure with no impact because of their redundancy," McCammon said. He then put the server back in place, prompting an automatic scan to verify nothing was broken. "The system then reaccepts it."
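The reroute-on-failure behavior McCammon demonstrated can be sketched as a client-side failover loop. This is a minimal illustration, not Facebook's actual routing code; the server names and handlers are hypothetical stand-ins for replicas behind a load balancer.

```python
import random

def fetch_with_failover(request, servers, max_attempts=3):
    """Try replicas in turn; reroute to the next one on failure.

    `servers` is a hypothetical list of (name, handler) pairs. A real
    deployment would use service discovery and health checks instead.
    """
    candidates = list(servers)
    random.shuffle(candidates)  # spread load across replicas
    last_error = None
    for name, handler in candidates[:max_attempts]:
        try:
            return handler(request)
        except ConnectionError as err:
            last_error = err  # server pulled or down: try the next replica
    raise RuntimeError(f"all replicas failed: {last_error}")

# Simulate one pulled server and one healthy replica.
def down(req):
    raise ConnectionError("server removed from rack")

def up(req):
    return f"served {req}"

print(fetch_with_failover("profile/42", [("web01", down), ("web02", up)]))
```

Because every request can land on any healthy replica, pulling a single blade, as in the demonstration, costs nothing but one retried connection.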
Facebook's homegrown data center management software also works in concert with the vanity-free OCP servers for fast repairs. The system diagnoses potential problem sources before the technician pulls the server. Memory, CPUs and networking cards come off the motherboard without any tools beyond a screwdriver for the heatsinks.
"Even if diagnosis is wrong, with so few parts, it's very quick to find the real problem," McCammon noted.
For provisioning, Facebook doesn't waste time installing one blade at a time. An integrator puts all the servers into a rack, and then the team rolls it into place, connects the power, connects the fiber and boots the servers.
The site, which houses about one-third of Facebook's hundreds of thousands of servers worldwide, runs on roughly one system administrator per 20,000 servers.
One of the three buildings in Forest City is dedicated to cold storage for very old user data. That photo of your breakfast in 2006, McCammon explained, remains available, but moves to an archival tier of storage.
Cold storage saves a lot of power on the back end. Because these racks are filled with standard OCP storage disks that are active only when writing or retrieving data, they require less aggressive cooling; the cold aisles here run even warmer than Facebook's standard. Facebook miniaturized the dehumidifying, filtration and airflow system developed for the traditional data halls into modular units for the cold storage halls.
Between its Forest City and Prineville, Oregon locations, all of Facebook's archival data is moving off higher-performance storage and into this energy-saving tier.
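The tiering decision described above amounts to an age-based policy: data that hasn't been touched in a long time migrates to the archival tier but stays retrievable. A minimal sketch follows; the one-year threshold is an assumption for illustration, not Facebook's actual rule.

```python
from datetime import datetime, timedelta

# Hypothetical threshold: objects untouched for a year move to cold storage.
COLD_AFTER = timedelta(days=365)

def storage_tier(last_accessed, now=None):
    """Return which tier an object belongs on, based on last-access age."""
    now = now or datetime.utcnow()
    return "cold" if now - last_accessed > COLD_AFTER else "hot"

# That 2006 breakfast photo hasn't been viewed in years.
print(storage_tier(datetime(2006, 5, 1)))
```

Either way the object remains addressable; only the hardware serving it, and the power bill behind it, changes.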
Continue your Facebook tour with a look at the data center's facilities, power and cooling.
Meredith Courtemanche is the site editor for SearchDataCenter.com. She edits tips and other content for the site, writes news stories and creates editorial guides.