How can a SPEC server virtualization benchmark measure how good a server is at virtualization?
Those are some of the things we'll be working to find out. End users and members will bring different interests to this benchmark. You can look at it from a hardware-centric point of view: if you're operating in a virtualization environment, what's the best iron to do it on? Then you have software companies looking to compete from the other standpoint: given a particular server, what's the best virtualization software to use on it?

Does that mean there will be multiple virtualization benchmarks?
I think most likely it will be the same benchmark, but as you see the results come out, you'll be able to tell whether they were made for software or hardware performance. I expect it would work a lot like the SPECjAppServer benchmark. You'll see hardware companies that have their own software they may test, but they may also test third-party software. Then you'll see software companies test on a range of hardware products.
The hope is that if you have both the hardware vendors and the software vendors wanting to show what they can do in a range of configurations, then the end user will get the bottom-line information they really need.

How will the group decide how many virtual machines can be used for the benchmark test?
That's something the working group will be grappling with: all the details of the run rules and what constitutes a fair methodology for the tests.

How long do you plan on taking feedback from IT managers?
We'll keep the lines open at any time for someone willing to come in and give us input. Since SPEC is a nonprofit organization made up primarily of hardware and software companies, we get our end-user orientation from how each of us views our own customer base. We often don't get a lot of direct end-user input, so it's good when we can get it.
End-user organizations typically can't afford the time commitment for something outside their core business. In some cases they're willing to tell industry analysts what they think, and the analysts tell us what they're hearing. Sometimes, as with the power benchmarking committee, a group like the Lawrence Berkeley Labs is involved, and they're a useful conduit for us there.

What's the group's timeline for getting this benchmark out the door?
We haven't set the timeline yet. With a new benchmark, SPEC forms the working group first. Then we have an initial three-month target for a report to come back to SPEC that looks at the obstacles to overcome and how we might go about it. At that point we would set a schedule for producing the benchmark.
Let us know what you think about the story; e-mail: Mark Fontecchio, News Writer