Talk about dense.
Dell’s new PowerEdge C6145 server stuffs eight AMD Opteron processors in a single 2U enclosure, making it a standout for high-performance computing (HPC) and, potentially, virtualization.
The C6145 consists of two four-processor AMD-based servers with shared power supplies, fans and sheet metal. Taken together, the two servers supply 96 cores, 10 PCIe expansion slots (six of which are usable) and more than 1 TB of memory using 16 GB DIMMs.
HPC configuration calculus
Packaging two four-processor systems in a single 2U enclosure is preferable for HPC, as opposed to using two separate four-processor 1U systems or a single eight-processor box, Dell said.
“The problem with a four-processor 1U box is that the PCI slots aren’t there,” said Tim Carroll, Dell director and global lead for research computing.
And when it comes to HPC, two four-processor systems are more efficient than a single eight-way system, said Armando Acosta, PowerEdge C product manager.
“HPC customers have massively parallel applications, and chunk workloads into smaller packets distributed over lots of cores, memory and I/O,” Carroll said. “They don’t need a big eight-way.”
Further, most eight-processor systems have much bigger footprints than the C6145. By way of comparison, Dell called out Hewlett-Packard’s eight-way ProLiant DL 980 G7, which occupies 8U and takes up four times as much space as the Dell box. That difference matters most in HPC environments, which tend to put a premium on footprint, Carroll said.
“Because of their densities, HPC tends to push up against the thresholds of data centers’ space and power limits,” Carroll said. The question customers ask, he added, becomes: “How do I reduce my space, power and cooling needs while still getting the compute I need?”
Calling all Opterons, GPGPUs
The C6145 is built around the AMD Opteron 6100 series, including three new models as well as older parts. Systems shipping today can be upgraded to AMD’s upcoming Bulldozer chips, which are expected to begin shipping in the third quarter.
Users should not expect an Intel-based equivalent to the C6145, Acosta said.
“We don’t see an Intel four-processor that can fit into this form factor, because of their thermal footprint,” he said. “AMD’s core count and thermal footprint made this possible.”
The C6145 can also perform general-purpose computation on graphics processing units (GPGPU) by attaching up to two Dell PowerEdge C410x PCIe expansion chassis, for a total of up to 16 GPGPUs.
“GPGPUs are very good at doing things that have to be done over and over again,” Carroll said. He specifically mentioned certain aspects of genomics sequencing and oil and gas exploration.
Virtualization in view?
The C6145 also has the potential to be a good fit for virtual environments, where big memory footprints and network expansion capabilities are important, Dell said.
At first glance, the C6145 would appear to be a fit for virtual environments, said Rick Vanover, a software strategy specialist at Veeam Software and former virtualization architect.
“It’s a natural evolution of processor capabilities, as well as the continued push for extended memory and bus capabilities on servers,” Vanover said. Whitebox manufacturers such as MDS Micro have delivered servers with similar densities, he added.
VMware is planning for these increases in density; the company disclosed at last week’s Partner Exchange show that future vSphere releases will support virtual machines with up to 32 vCPUs and 1 TB of RAM.
As it stands, the C6145 offers better than twice the density of a blade system such as Dell’s own M1000e, which supports up to 16 two-processor blades in 10U. And most IT directors won’t balk at that kind of density if the servers are deployed in sufficient numbers, Vanover said.
“Are people concerned about putting all their eggs in one basket? No,” he said. “If you’re buying this model, you’re probably going to be deploying rows and rows and rows of them, and will design domains of failure so that if one goes down, it doesn’t affect things too badly.”