
Q&A: Data center design with Sun's Rob Snevely

Building or retrofitting a data center takes more than planning what hardware and software to buy. In addition to a lot of hard work, it requires a usable planning methodology and a sound philosophy. Sun's data center design expert gave us a few pointers on how to develop good data center design strategies.

What are the first things someone planning a data center needs to determine?

The first thing to determine is how much physical space you have. The second thing is what we call your in-feed capacities:

  • How much power do you have coming from your utility?
  • How much HVAC capacity?
  • How much bandwidth do you have coming into your building?
  • What are the structural requirements of the building?

If you have to go to the fourth floor, how much load can that floor take? If you're in a high-rise building, you need to make sure there's a freight elevator.


Then you need to figure out the actual equipment load -- power, cooling, bandwidth, weight -- so you can figure out whether the amount of in-feed will match the load going into the room. If the in-feed matches or exceeds the load, life is pretty easy. You don't want to get two-thirds of the way through the design process and realize that you need another megawatt of electricity to the room and you can't have it.

So how do you get a handle on capacity planning?

In the book, we talk about RLUs, or rack location units. An RLU is a way to describe any particular location on the floor -- including its requirements for power, cooling, bandwidth and weight -- so you can do capacity planning. That data lets you figure out whether you're going to exceed the limits of your in-feed resources. It becomes a very modular construct that you can easily replicate. Oftentimes, companies have generic pieces of equipment or generic kinds of racks that they use. If you're setting up a disaster recovery center, your design criteria can remain exactly the same. Doing your first one takes a little more legwork, but it can easily be replicated into the next one.
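As a rough illustration of the RLU idea, you can treat each rack location as a small record of its requirements and sum those records against the room's in-feed capacities. This is a minimal sketch; the field names, rack figures and in-feed numbers are invented for the example, not taken from the book.

```python
from dataclasses import dataclass

@dataclass
class RLU:
    """One rack location's requirements (illustrative fields only)."""
    power_kw: float
    cooling_kw: float
    bandwidth_gbps: float
    weight_kg: float

# Hypothetical in-feed capacities for the room.
IN_FEED = {"power_kw": 800, "cooling_kw": 900, "bandwidth_gbps": 400, "weight_kg": 250_000}

# A generic rack design, replicated across the floor.
generic_rack = RLU(power_kw=8, cooling_kw=9, bandwidth_gbps=4, weight_kg=900)
floor_plan = [generic_rack] * 90  # 90 identical rack locations

totals = {
    "power_kw": sum(r.power_kw for r in floor_plan),
    "cooling_kw": sum(r.cooling_kw for r in floor_plan),
    "bandwidth_gbps": sum(r.bandwidth_gbps for r in floor_plan),
    "weight_kg": sum(r.weight_kg for r in floor_plan),
}

for key, total in totals.items():
    status = "OK" if total <= IN_FEED[key] else "EXCEEDS IN-FEED"
    print(f"{key}: {total:g} of {IN_FEED[key]:g} -> {status}")
```

Because the construct is modular, the same generic rack definition can be reused when you size a disaster recovery site.
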
Sun just released Project Blackbox, which uses chilled-water cooling. Is liquid cooling going to replace more traditional cooling methods?

I think you may see a number of locations moving away from raised-floor environments. One reason is cost. Two is that you start to get very, very dense, high-power machines, and it's difficult to deliver enough air to them solely through the raised floor.
Is there a way to plan for future needs during the design process?

Anyone who tells you what the future of technology is going to be in the next five years is probably not going to have an accurate estimate. But you can do some things.

Let's look at [wiring], for example. If you're delivering a single-phase, 30-amp circuit, depending on code for your location, you would need 10-gauge wire, and you need conduit sized to run that wire. One of the things you can do is oversize that conduit. So six years from now, if you need to go from single-phase, 30-amp power to three-phase, 50-amp power, you would already have a larger conduit in place and could easily pull the new wire through it. It's going to cost you a few cents more per 10 or 20 feet when you're doing your construction, but it's a very easy way to hedge your bet on what you're going to need in the future.
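To see why that upgrade matters, here is a rough back-of-the-envelope comparison of what the two circuits can deliver. The 208 V distribution voltage and the 80% continuous-load derating are assumptions made for the arithmetic, not figures from the interview.

```python
import math

# Rough usable capacity of the two circuits described above.
# Assumptions: 208 V North American distribution, 80% continuous-load derating.
VOLTS = 208
DERATE = 0.8

single_phase_kva = VOLTS * 30 * DERATE / 1000                 # ~5.0 kVA usable
three_phase_kva = math.sqrt(3) * VOLTS * 50 * DERATE / 1000   # ~14.4 kVA usable

print(f"Single-phase 30 A: {single_phase_kva:.1f} kVA usable")
print(f"Three-phase 50 A:  {three_phase_kva:.1f} kVA usable")
```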

The same thing is true with HVAC systems. When you're doing your initial build-out, you may only need twelve 30-ton CRAC units, for example, but in three years you may need sixteen. During the initial construction, you can figure out the placement of those future units, run the plumbing and electrical they would need, and probably even cut the holes in the raised floor and cover them with thick diamond plate that can carry the load structurally.
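As a quick sanity check on that kind of sizing, here is a minimal sketch that converts CRAC tonnage to kilowatts of cooling and compares it with a projected IT heat load. The load figures are invented for the example; the only fixed number is the usual conversion of roughly 3.517 kW per ton of refrigeration.

```python
# Compare installed CRAC capacity with a projected IT heat load.
KW_PER_TON = 3.517  # cooling delivered per refrigeration ton

def cooling_kw(units: int, tons_per_unit: float = 30) -> float:
    """Total cooling capacity, in kW, for a bank of CRAC units."""
    return units * tons_per_unit * KW_PER_TON

day_one_load_kw = 900    # assumed initial IT heat load
future_load_kw = 1500    # assumed load after three years of growth

print(f"12 units: {cooling_kw(12):,.0f} kW of cooling vs. {day_one_load_kw} kW of load")
print(f"16 units: {cooling_kw(16):,.0f} kW of cooling vs. {future_load_kw} kW of load")
```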

The same goes for piping, whether it carries water, glycol or some other coolant: do all the plumbing you think you're going to need beforehand, when you're doing your initial build-out.

You want the minimal amount of construction once the data center is live.

Can virtualization be figured into data center design?

Virtualization can be a really huge cost saver. For example, say you have a 2U machine that is only 20% utilized, you have four of them, and each draws 500 watts. That's 2,000 watts for 8U of rack space. If you can virtualize that workload, you can do the same amount of work in a single 2U box consuming only 500 watts.

If you look at the example we were just talking about, you've just reduced your thermal and power consumption by 75%.
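Here is a toy version of that consolidation math, using the interview's numbers (four 2U hosts at 20% utilization, 500 W each); the 80% target utilization for the consolidated box is an assumption.

```python
import math

def consolidation_savings(hosts: int, utilization: float, watts_per_host: float,
                          target_utilization: float = 0.8) -> dict:
    """Estimate host count and power after packing the same work onto fewer boxes."""
    work = hosts * utilization                        # total work, in "host units"
    hosts_after = math.ceil(work / target_utilization)
    return {
        "hosts_after": hosts_after,
        "watts_before": hosts * watts_per_host,
        "watts_after": hosts_after * watts_per_host,
    }

print(consolidation_savings(hosts=4, utilization=0.2, watts_per_host=500))
# -> {'hosts_after': 1, 'watts_before': 2000, 'watts_after': 500}, a 75% cut
```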

You can go to a white-box retailer or build your own x64- or x86-based machine for about $700. At eight cents per kilowatt-hour, which is what we pay in the Bay Area, you're going to pay as much in power and cooling over one year as you did for the box. If you extrapolate this over the three- to five-year lifespan of the piece of equipment, oftentimes the actual cost of power and cooling can be more expensive than the box. That's a recurring cost. That's money you're going to see every year.
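That claim checks out on the back of an envelope. The sketch below assumes a box drawing about 500 W around the clock and roughly one additional watt of cooling for every watt of IT load; both assumptions are illustrative, and only the $0.08/kWh rate comes from the interview.

```python
# Annual power-and-cooling cost for a ~$700 white-box server.
WATTS = 500                  # assumed average draw
COOLING_OVERHEAD = 1.0       # assumed extra watts of cooling per watt of IT load
RATE_PER_KWH = 0.08          # Bay Area rate quoted in the interview
HOURS_PER_YEAR = 24 * 365

kwh_per_year = WATTS * (1 + COOLING_OVERHEAD) * HOURS_PER_YEAR / 1000
annual_cost = kwh_per_year * RATE_PER_KWH
print(f"{kwh_per_year:,.0f} kWh/year -> ${annual_cost:,.0f} per year")
# -> 8,760 kWh/year -> $701 per year, roughly the price of the box itself
```
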
Haven't these things that you're talking about traditionally been facilities' responsibility?

Yeah, they have. But now the data center's computing requirements are starting to drive so much more power and cooling relative to the entire rest of the building, and there are some people who aren't necessarily thrilled about that. IT and facilities have a symbiotic relationship now, whether they want it or not. They are joined at the hip.

I think one of the reasons for having this simple, modular construct we talk about is to have everybody, IT and facilities, on the same page when you're talking about what's going into the data center.

That changes the skill set a bit for your traditional data center manager.

And for your traditional facilities manager. They both have to understand, at least to a limited extent, the other's world. Because if they don't, IT is going to bring in equipment, and facilities is going to say, "We've got to call the utility and get another three-megawatt feed." So the project that needed to be up in four weeks just got pushed to 18 months. It's a really simple equation: if you don't have the power, you don't have the cooling and you can't connect it to the net, then it doesn't really matter; those machines are just big paperweights.
