It's no surprise that most data center admins are confused about what to buy when it comes to server hardware configuration. From 0.5U single-processor systems to 4U GPU-accelerated monsters, the key to choosing new hardware is identifying preferred machine classes, and trying not to be too granular in tying servers to current needs.
Since you're running a cloud infrastructure in the data center, you might consider two capacity methodologies. One option is to run a mix of large and small servers for different workloads -- large, SSD-heavy servers for databases and small, inexpensive systems for Web servers. The other is to buy uniform large machines without specifying the server hardware based on workload type.
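The trade-off between the two methodologies can be illustrated with a toy sizing calculation. A minimal sketch: all workload figures and node sizes below are hypothetical, and demand is sized by DRAM alone for simplicity.

```python
import math

# Toy comparison of mixed vs. uniform fleet sizing.
# All workload figures and node sizes are hypothetical illustrations,
# and demand is binned by DRAM only for simplicity.

def servers_needed(total_gb, node_gb):
    """Servers required to host total_gb of demand on node_gb nodes."""
    return math.ceil(total_gb / node_gb)

db_demand_gb = 4 * 192    # four database workloads at 192 GB each
web_demand_gb = 40 * 8    # forty web workloads at 8 GB each

# Mixed fleet: large SSD-heavy nodes for databases, small nodes for web.
mixed = servers_needed(db_demand_gb, 256) + servers_needed(web_demand_gb, 32)

# Uniform fleet: every workload lands on the large nodes.
uniform = servers_needed(db_demand_gb + web_demand_gb, 256)

print(f"mixed fleet: {mixed} servers, uniform fleet: {uniform} servers")
```

In this toy example the uniform fleet consolidates into fewer boxes, at the cost of buying more DRAM headroom per web workload; which way the numbers fall depends entirely on your actual workload mix.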
A lot of general-purpose computing can be done on relatively small machines, such as a 0.5U single-CPU server or a 1U dual-CPU server. The larger 1U unit is a good choice: it can easily handle 16 CPU cores with normal data center cooling schemes while allowing some flexibility in storage drives and connections.
Motherboards in 1U servers now come with two 10 Gigabit Ethernet (GbE) links, which is enough to support all the VMs installed on the machine. If you're moving your private cloud from virtualized servers to containers, the higher instance density may create pressure to adopt 25 GbE when it becomes available.
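A quick back-of-the-envelope check shows why container density strains the network. This sketch assumes the NIC budget is split evenly across instances, and uses the 64-VM and 128-container densities discussed later for compact nodes:

```python
# Per-instance bandwidth when the NIC budget is split evenly across
# instances (the even split is an assumption; real traffic is bursty).

def per_instance_mbps(links, link_gbps, instances):
    """Megabits per second available to each instance."""
    return links * link_gbps * 1000 / instances

vm_share = per_instance_mbps(2, 10, 64)          # two 10 GbE links, 64 VMs
container_share = per_instance_mbps(2, 25, 128)  # two 25 GbE links, 128 containers
print(f"{vm_share:.1f} Mbps per VM, {container_share:.1f} Mbps per container")
```

Even after doubling instance count, the 25 GbE fabric leaves each container with more headroom than each VM had on 10 GbE.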
How much will that cost me?
The 1U server configuration sells in huge quantities and at lower prices than specialized setups. The low volumes of heavy processing units -- a 4U database server with GPU acceleration, for example -- add a substantial premium. Original design manufacturers (ODMs) may well be an option to consider -- after all, AWS, Google and Microsoft all buy ODM hardware for their clouds.
You also need to address storage hardware configurations, whether local on the server or networked in an array.
For 1U boxes, avoid local storage. Networked storage gets cheaper and faster with each new generation. There are arguments for local direct instance storage, but it inhibits workload orchestration and causes problems when individual servers fail.
Disk storage is a somewhat contentious hardware issue for cloud data centers. A cloud server is stateless, but OS images could reside on a small direct-attached drive, such as an mSATA unit. This is especially true for containers.
Diskless or single-drive-per-server configurations come in 0.5U or twin form factors. With up to 1 TB of DRAM, these configurations can dramatically reduce your server footprint.
Many twin units come with shared power supplies, in clusters of two or four servers. The larger supply needed for a four-server quad cluster is more efficient than a single lower-power supply. Many twins also offer redundant power, but that redundancy is less valuable in cloud infrastructures with orchestrated failover.
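The shared-supply efficiency argument is easy to quantify. A minimal sketch, in which the 90% and 94% efficiency figures and the 200 W load are hypothetical assumptions (larger supplies typically run a few points more efficient at load):

```python
# Illustrative wall-power comparison: shared vs. individual supplies.
# The 90%/94% efficiencies and 200 W load are hypothetical assumptions.

load_w = 200            # DC load per server, watts (assumption)
servers = 4             # four-server quad twin cluster

wall_individual = servers * load_w / 0.90  # four separate lower-power PSUs
wall_shared = servers * load_w / 0.94      # one larger shared PSU
savings_w = wall_individual - wall_shared

print(f"~{savings_w:.0f} W saved per quad cluster")
```

A few tens of watts per cluster is small on one rack, but it compounds across a data center's power and cooling budget.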
Any of these compact units can easily support 64 typical VMs, each with 4 GB of DRAM, and containers reduce the DRAM requirement further: the same hardware configuration that supported 64 VMs can handle 128 container instances.
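The density figures above can be sanity-checked with simple arithmetic. This sketch uses the article's 4 GB-per-VM figure; the 2 GB-per-container figure is a hypothetical assumption, since the text only says containers need less DRAM:

```python
# DRAM-based density check matching the figures in the text.
# 4 GB per VM comes from the article; 2 GB per container is a
# hypothetical assumption (the article only says "less DRAM").

def instances_per_node(dram_gb, per_instance_gb):
    """Instances that fit if DRAM is the binding constraint."""
    return dram_gb // per_instance_gb

node_dram_gb = 256  # a mid-size compact node; twins scale to 1 TB

vms = instances_per_node(node_dram_gb, 4)
containers = instances_per_node(node_dram_gb, 2)
print(f"{vms} VMs or {containers} containers on a {node_dram_gb} GB node")
```

Under those assumptions, a 256 GB node yields exactly the 64-VM and 128-container densities the article cites; in practice, CPU or network can become the binding constraint before DRAM does.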
Databases and big data setups have different needs.
Adding CPU horsepower increases thermal design power. If you need a tier of powerful processors for working with large databases and unstructured data, start with 2U servers, which can handle quad processors with 16 cores. Generally, these need local instance drives -- non-volatile memory express (NVMe) PCIe units. Interconnects could be more 10 GbE links than on the general-purpose server configurations, or InfiniBand or 40 GbE links.
The ultimate big data player is the GPU-based server. Specialized units optimized around the cooling demands of a 175 W GPU card are available, typically in 1U to 3U form factors. After balancing network, memory and horsepower, the 1U configuration is likely the most compact and cost-effective.
About the author:
Jim O'Reilly is a consultant focused on storage and cloud computing. He was vice president of engineering at Germane Systems, where he created ruggedized servers and storage for the U.S. submarine fleet. He has also held senior management positions at SGI/Rackable and Verari; was CEO at startups Scalant and CDS; headed operations at PC Brand and Metalithic; and led major divisions of Memorex-Telex and NCR, where his team developed the first SCSI ASIC, now in the Smithsonian.