When did Cisco start getting interested in green data centers, and why?
It's actually a funny story. About two years ago I was hosting a data center technical advisory board meeting, and I asked people, "What is your biggest problem in the data center?" Being a network guy talking to a bunch of network guys, I was expecting to hear bandwidth, security, something we could correlate to our business. Over 80% of the room said power, space and cooling.
Right around the same time, we asked this question: "How big is your data center?" You can find out a lot about a person by how they answer. Network guys will say how many ports they have. Server guys will say how many systems. But the metric has changed: now it's "I have this many megawatts."
So we're spending a lot of time working on improving the network's ability to improve efficiency. Providing efficient communications and connectivity can improve the efficiency of all the adjacent assets.

Recently you've held events with customers about making their data centers more efficient. What are you saying during these sessions?
We talk about how we can design a data center to be efficient: what we can do today with the technologies we have available, and what we can do in the future. We need to take a systemwide view, including the power, the networking and the silicon. The ability to leverage efficiency through connectivity and virtualization technologies is massive. Getting those teams to work together and talk to each other can drive significant efficiencies. We talk about power supplies -- what is the efficiency of a power supply at a particular utilization? We also talk about airflow and cooling systems, front-to-back vs. side-to-side.

What are some things Cisco will be doing starting this year to solve those problems?
Without speaking of specific products, I can talk about concepts. Servers have become like arachnids: with all the different ports, you end up with seven or eight legs coming out of each server, and cabling costs start getting expensive. What if you could consolidate those? What if we could combine and unify these networks over time, so that instead of a server with seven different ports coming out, a single link could handle multiple networking types? Anywhere you see a server with multiple ports coming out, can they be combined? That's a tremendous opportunity, not just for Cisco but for the entire industry. Also, what if virtualization allows us, in a very science fiction way, to decouple the hardware from the operating system more efficiently? What if during the day your Exchange server runs on its own server, but at night I move it over transparently, so your Exchange server is sitting on a Dell server with 100 others? Then, in the morning, your server is powered back up.

That sounds like server reprovisioning.
Coupled with network provisioning, coupled with storage provisioning -- these things have to link. Cisco and EMC, which owns VMware, have a tremendous relationship. Having that relationship allows us to engage in conversations we wouldn't have had four years ago.

What's going on with Cisco's investment in Nuova Systems?
While we own a majority stake in the company, with an option on the other 20% based on performance milestones for the product line, I really cannot publicly comment on the specifics of the technology they are developing. It is a complementary technology that is strategic to Cisco -- something you would see as adjacent to our other product offerings.

A lot of data center managers I've talked to complain about Cisco equipment blowing hot air out the sides. Can you explain what's going on there?
The industry came out with a specification called ASHRAE TC 9.9. It details how the front-to-back airflow model begets the hot-aisle/cold-aisle design. What you'd like today is for the egress temperature of the air to be around room temperature. So you're generally ingesting air at 60 degrees Fahrenheit and exhausting it at 70 degrees. With the side-to-side model, the next server ingests air that is already 70 degrees and warms it up even more, and that's where you run into problems.
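The cascading effect described above can be sketched numerically. This is a minimal illustration, assuming a fixed 10-degree Fahrenheit rise per chassis (matching the 60-in/70-out figures quoted); the function name and parameters are illustrative, not from Cisco, and real temperature rises vary with load.

```python
def inlet_temps(room_temp_f: float, rise_f: float, chassis_count: int) -> list[float]:
    """Inlet temperature seen by each chassis in a row with side-to-side airflow.

    Chassis 0 ingests room air; each subsequent chassis ingests the
    previous chassis's exhaust, so inlet temperatures climb down the row.
    """
    temps = []
    inlet = room_temp_f
    for _ in range(chassis_count):
        temps.append(inlet)
        inlet += rise_f  # this chassis's exhaust feeds the next one
    return temps

# Four side-by-side chassis: the fourth ingests air at 90 degrees F.
print(inlet_temps(60.0, 10.0, 4))  # [60.0, 70.0, 80.0, 90.0]
```

With front-to-back airflow in a hot-aisle/cold-aisle layout, every chassis would instead ingest at the room's 60 degrees, which is the point of the design.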
The product customers are describing is the Catalyst 6500. It's our core LAN switching product and is commonly used in data centers. We give customers the choice: we have a front-to-back and a side-to-side chassis. Less than 10% are buying the front-to-back chassis.
The reason is that density is another factor for customers. You can fit more ports into a box with side-to-side airflow than with front-to-back airflow. With side-to-side, the whole side of the chassis can serve as the air inlet -- there's no wasted space. With front-to-back, I need a big air inlet in the front, and that takes up 50% of the vertically available real estate in the chassis. We also have partners that can create plenums that turn the air so it comes in the front and leaves out the back, but the downside is that they require space between the racks -- about three or four inches.
If people don't know about the options, frustration sets in, and I think some people don't know about the options we have. We do provide a technology migration program, and we can also recommend a cabinet enclosure. There's also nothing that says you can't turn that rack 90 degrees.
Let us know what you think about the story; e-mail: Mark Fontecchio, News Writer.