In just a few decades, servers have gone from large, UNIX-based systems to smaller, generic, standards-based commodity computing platforms.
The servers that rule the data center today bear little resemblance to early computing systems. The IBM AS/400 Advanced 36 Model 436 exemplified 1990s server technology, with a single-chip processor drawing nearly 18 watts. Today's midrange servers, such as the Dell PowerEdge R420, use multicore processors rated at 80 W or more. On-server memory has quadrupled in capacity and gained resiliency features. Current high-end x86 servers run multiple 10-core processors, hundreds of gigabytes of memory and far more internal storage. Moore's Law marched on from the '90s to today, but that upward trajectory is leveling out.
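The trajectory Moore's Law describes is easy to put in rough numbers. A minimal sketch, assuming the classic formulation of transistor counts doubling about every two years (the function name and the 1995-to-2015 window are illustrative choices, not figures from this article):

```python
def moores_law_factor(years, doubling_period=2.0):
    """Estimated transistor-count growth over a span of years,
    assuming a doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Two decades, roughly the AS/400 era to today's multicore servers
growth = moores_law_factor(2015 - 1995)
print(f"~{growth:.0f}x estimated transistor growth over 20 years")  # ~1024x
```

A 1,000-fold transistor budget in twenty years is what turned a single-chip 1990s midrange box into today's multicore, multi-socket servers; as that doubling cadence slows, the gains must come from elsewhere.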
The next frontier is to match the server architecture to the workload. With raw computing power growing more slowly, the principal expectations for tomorrow's enterprise server types are better scalability and efficiency. Every workload imposes unique computing demands.
The complex instruction sets of x86 processors will yield to reduced instruction set computing (RISC) processors for workloads such as Web servers. A reduced instruction set speeds processor performance while using considerably less energy than commodity servers on the same workload. RISC servers can deliver more computing power to workloads when they need it and scale back when they don't. This is a core requirement for scalable cloud computing, and experimental systems such as Hewlett-Packard Co.'s Project Moonshot show promise.
Future server technologies will enable a modular paradigm, replacing complete rack or blade systems with independent functional modules for processing, memory, I/O and more. This disaggregated approach lets organizations swap out specific computing elements rather than replace complete servers.