
Q&A: Vern Brownell, founder and CTO of Egenera

Vern Brownell has a vested interest in solving complexity problems in the data center. He may have had a hand in creating them.

Vern Brownell, founder of blade server manufacturer Egenera, feels your pain. In fact, he feels somewhat responsible for the predicament your IT department is in. He helped create it.

Brownell, chief technical officer and executive vice president of the Marlboro, Mass.-based Egenera, hasn't always been on the vendor's side of the fence. He spent 11 years running the IT department at Goldman Sachs, installing Sun's first large-scale Unix environment on Wall Street in 1989.


That Sun infrastructure eventually was used to manage not just the trading floor, but Goldman Sachs' business end to end, and included over 10,000 mission-critical servers with 250 Sun Solaris administrators on staff to manage them. The system was functional and could roll out applications quickly, but Brownell knew he'd opened a Pandora's box of management issues.

Brownell recently took the time to explain these feelings of guilt for his role ushering in the distributed computing environment and his attempts, through Egenera, to clean up the mess.

What's the philosophy behind Egenera's blade server design?

Vern Brownell: The reduction of complexity in the data center.

What about the technical aspect?

Brownell: Reducing the elements that are typically around the servers.

How is that accomplished?

Brownell: Blade servers, besides our own, are common in that they are really just miniaturized servers. They have all of the traditional components that a normal server would have. There are obviously CPUs and memory, but there are also disks, Ethernet controllers, NICs, and other things.

Whereas our blades are just CPUs and memory -- all of the other components are software entities. We are the only folks who have taken that approach. It's a lot more difficult and software-intensive [to develop], but I think it does provide the real value in the data center in reducing complexity.

This came to me while doing an analysis of where the fault points were in the processing environment we had [at Goldman Sachs]. We were looking at roughly 10,000 mission-critical Unix servers and we found that a lot of the faults and costs were related not to the CPUs and memory themselves, but more to the components around them.

The data center evolved over time. First there were a few servers. Then they needed to be connected to LANs. Then they needed to be connected to SANs. Each of those brought its own bit of complexity. And then came the ridiculous things like having a separate network to manage the console ports for each of those servers so you could get keyboard-video-mouse access to each server in case something went wrong. It just seemed to me that all of that stuff could be built in or eliminated somehow.

What is the blade's role in the data center?

Brownell: We got lucky or maybe guessed right in the beginning when we chose a less dense form factor. We didn't think a data center would be able to power these gazillions of CPUs in a very small space. If you talk to most data center managers, they can't power these things anyway.

So while our product is very dense compared to non-blade architectures, we tried to design at what we call a usable density level where you can actually have the appropriate power and cooling.

By choosing a larger blade form factor we were able to do four-way Xeon right out of the gate. We just announced our support for the new Opteron product and we'll have multi-core support too. So it's relatively easy for us to enter the high end of the market very quickly with the highest-performing Intel and AMD chips.

Where does Egenera compete in the market?

Brownell: In the whole bladed market I couldn't give you a relevant statistic, but I do believe in the mission-critical blade space we're the leaders, even against IBM and HP. But we don't really compete in the blade space per se. We really replace more of what would have been running in what I call the Punix environment -- the proprietary Unix environment.

The space we play in is not the low-end, doesn't matter if it breaks or not, compute farm HPC-type applications that you see a lot of blades being deployed in today. We play more in the core of the data center in the truly complicated applications. We do a lot of work with Oracle and Sybase and the other database partners we have.

From a former customer's perspective, what other IT vendors are doing something interesting right now?

Brownell: I think there are a lot of interesting things going on in the storage space. We're storage agnostic, so we work with all of the major storage vendors. Information lifecycle management, the recognition that data has different archival purposes -- there are a number of companies that I'm excited about in that space.

How has the data center changed since you were CTO at Goldman Sachs?

Brownell: Whether it's number of servers, amount of storage, number of processors, whatever metric you want to use -- there has been an explosion in the amount of equipment, despite all the best consolidation efforts.


Every data center manager I talk to is trying their best to consolidate, and it's very difficult to do in a traditionally architected environment. Every one of those collections of CPUs in a server is an operating system that has to be patched, managed, upgraded and so on. That is what has got everyone worried.

It used to make me wake up in a cold sweat. You really don't have any control over these environments because of the scale of them. In the past five years it's only gotten worse. In the larger enterprises, you're talking about numbers in the thousands of servers. It becomes overwhelming to people. That's been happening for a long time, but it's become more and more acute in the last five years.

How so?

Brownell: Just to give you some scale, at Goldman they had 250 Sun Solaris systems administrators just to maintain those operating system images and upgrade those servers. You look at those numbers and they're just staggering.

To really reduce the complexities, you have to reduce the number of servers -- which we do through the utility computing aspect of the product. The other side of it is to reduce the glue, the puzzle piece complexity, around the servers. I look at it as a one-two punch. Without doing both of those, you're not really maximizing the benefit of trying to simplify. And we're just stemming the tide.

What's next for Egenera?

Brownell: From a product perspective, we will continue to provide the leading edge of the Intel and AMD roadmap, very quickly and in a mix-and-match way. The 25 blades we've made over the course of three years of production are all compatible and can be mixed and matched in any configuration, running any operating system, Windows, Linux or Solaris. We'll continue to go down that chip agnostic, OS agnostic path.

We're also considering taking the software investment -- the secret sauce of our product -- and allowing it to be used on other people's hardware.

Let us know what you think about the story; e-mail: Matt Stansberry, News Editor
