Data center systems are poised to change. These converged boxes bring together all the components required to manage a specific workload and are highly tuned to do one thing very well.
Modular data center systems hope to attract commercial IT shops to a completely tailored all-in-one system with a single contract and known performance envelopes for defined workloads. Called "data centers in a box," these systems have developed over time into an ideal environment for remote computing.
A brief history of data center hardware
Data center hardware has evolved from the old-style mainframe with its strict temperature, humidity and particulate envelopes. Towers of distributed x86 servers with broader capabilities but still narrow temperature requirements have recently dominated the market.
Even smaller, higher-density rack-mounted systems that concentrate heat in specific areas have their niche. Blade computers let data center designers assemble compute resources as required, with CPU, storage and networking delivered as separate "blocks" that could be combined to meet specific needs.
But blade computing never really took off as expected. As virtualization became mainstream and cloud computing entered organizations' roadmaps, the focus returned to scaling out using commodity hardware, or "pizza box" servers, to build large estates of compute capability in a data center.
Appliances in the mainstream computing world
Even in scale-out data center strategies, certain workloads require a scale-up approach, supported by systems that include IBM System p and i boxes, Sun's (now Oracle's) UltraSPARC/Solaris systems and HP's Superdome.
But if neither scale-out nor scale-up was the answer, what was?
The majority of organizations ended up with a hybrid estate of scale-out based on commodity boxes with islands of scale-up that weren't quite peer members of the rest of the environment. IBM's mainframe, for example, provides a different approach to specific workloads.
The logical evolution of engineered systems is a "data center in a box." All the technical bits are installed in a standard road container, with power and water plumbed in when the container is delivered on site. Containerized data centers initially attracted attention from remote sites needing computer capability or for temporary use to extend an existing data center facility.
The first vendor to come up with a converged infrastructure approach was Cisco, with its 2009 debut of Unified Computing System (UCS), which combines CPUs, storage and networking engineered as a single tuned appliance for certain Windows Server workloads. Along with its partners VMware and EMC, Cisco formed VCE, which then created the Vblock reference architectures. Dell also introduced its vStart appliances for virtualization and private cloud computing. Each of these engineered systems pulls together the CPU, storage and networking components to create an overall system tuned to a specific workload.
Since each of these appliances is x86-based, the workloads tend to be only Windows or Linux, which left a gap open for IBM. In 2010, IBM developed zEnterprise -- a mainframe that just happened to have some Power CPUs alongside it and software that could intelligently assign a workload to the right platform. Users could add x86 systems to create a multi-workload engine. Unfortunately, this innovation sat in the mainframe camp, largely ignored by distributed computing-based organizations.
IBM then uncrated the PureFlex range of computers -- a mix of x86 and Power CPUs, again engineered with storage and networking in the same box -- with the intelligent workload management software required to ensure the right workload is placed on the right platform at the right time.
Integrated architectures like these are more self-contained today than earlier versions, with targeted cooling and tidy wiring. Proprietary connections inside the systems reduce latencies and boost performance, as long as all external connections remain standardized. Systems can be prepared before they arrive at the data center, cutting installation-to-production time to a few hours.
These containerized systems are now being used as engineered systems in their own right. Microsoft uses a mix of containerized and modular systems in many of its data centers globally. Intel is investigating data centers that require no cooling: a completely sealed container with around 50% more equipment in it than is strictly necessary. Running the container at high temperatures will result in higher equipment failure rates, but with smart design and engineering, the over-supply of equipment should ensure a suitably long life before the container needs replacing.
Not all workloads are the same, and the most important metric for data center performance is the user experience. Ensure each workload runs on the platform best suited to it at the right time. Some workloads will still suit a commodity scale-out platform better, but in dynamic workload environments such as cloud, the defined yet flexible converged data center system will attract higher adoption.
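The "right workload on the right platform" principle can be sketched as a simple scoring exercise. The following Python fragment is purely illustrative -- the platform names, capability dimensions and scoring rule are assumptions for the sake of the example, not any vendor's actual placement logic:

```python
# Hypothetical sketch: match a workload to the platform that best
# covers its needs. Capability scores (1-5) are invented for illustration.
PLATFORMS = {
    "scale_out_x86": {"single_thread": 2, "io_throughput": 3, "elasticity": 5},
    "scale_up_power": {"single_thread": 5, "io_throughput": 4, "elasticity": 2},
    "engineered_appliance": {"single_thread": 4, "io_throughput": 5, "elasticity": 3},
}

def place(workload):
    """Return the platform whose capabilities best cover the workload's needs."""
    def fit(caps):
        # Penalize every dimension where the platform falls short of the need;
        # a score of 0 means the platform meets or exceeds all requirements.
        return sum(min(caps[k] - workload[k], 0) for k in workload)
    return max(PLATFORMS, key=lambda name: fit(PLATFORMS[name]))

# A bursty web tier values elasticity; a database values I/O and
# single-thread speed.
web = {"single_thread": 2, "io_throughput": 2, "elasticity": 5}
db = {"single_thread": 4, "io_throughput": 5, "elasticity": 1}

print(place(web))  # scale_out_x86
print(place(db))   # engineered_appliance
```

A real placement engine would of course weigh cost, licensing and data locality as well, but the core decision is the same: score each platform against the workload's profile and pick the best fit.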
About the author:
Clive Longbottom is the co-founder and service director of IT research and analysis firm Quocirca, based in the U.K. Longbottom has more than 15 years of experience in the field. With a background in chemical engineering, he's worked on automation, control of hazardous substances, document management and knowledge management projects.