
Nano server innovation extends beyond the data center

What's the difference between a nano server and a PC? Timing.

Data center servers depreciate over several years until they hit the e-waste pile, but there's a new concept for compute that can have multiple incarnations.

Nano servers -- not to be confused with microservers -- combine industrialized, modular computing components for use as data center servers. They can then be repurposed as high-performance desktops or tablets, and finish their days as PCs. This means the data center's computing infrastructure refreshes at a rapid pace and combines the purchasing power of the end-user computing and data center IT teams.

Jacob Hall, innovation wrangler for Wells Fargo, a U.S.-based multinational banking and financial services provider, leads and manages the company's internal technology incubator, Wholesale Labs. He's explored the benefits of nano servers and spoke with SearchDataCenter.com about facility operations and performance payoffs of this innovation, server lifecycles and more.

Why nano servers?

Jacob Hall: Organizations buy hundreds of thousands of desktops every year, and many companies refresh end-user devices annually. The numbers are growing even faster for tablets.

Computers get faster and more powerful quickly. Nano servers combine modular blocks of core computing architecture. With this idea, IT shops could buy the newest computers and put them in the place with the best energy efficiency, where computers run 24/7: the data center. They can then take these components out of the data center and use them as a desktop, a tablet, a smart display, a wearable computer, et cetera.

Nano servers modularize the "guts" and combine them into [these other form factors] over time.

In what ways are nano servers more efficient than x86 servers? Aren't tablets and mobile devices usually based on RISC ARM processors, and servers on CISC x86 processors?

Hall: It's far easier to maintain the same computing architecture -- all-x86 or all-ARM -- that an enterprise already uses. The architectural change of modularizing the server units is independent of the processor architecture.


You wouldn't try to replace everything in the data center with nano servers. There's still a place for big iron for heavier workloads. But for virtual desktops, Web servers, high-performance computing environments [and so on], these devices make a lot of sense. A database server can run on a laptop, which means nano servers can even run small database servers, or you can spread data over many nano servers.
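As a rough sketch of what "spreading data over many nano servers" could look like, here is a toy hash-sharding example in Python. The node names and keys are hypothetical, and a production system would use consistent hashing to handle nodes joining and leaving.

```python
# Toy example: spread records across many small nodes by hashing keys.
# Node names and keys are illustrative only.
import hashlib

nodes = ["nano-01", "nano-02", "nano-03", "nano-04"]

def node_for(key: str) -> str:
    # Stable hash, so the same key always maps to the same node.
    digest = hashlib.sha256(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

for key in ["customer:1001", "customer:1002", "order:77"]:
    print(key, "->", node_for(key))
```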

Is that enough computing power? Consider how virtualization carves a processor up into many virtual servers. Virtualization fractionalizes compute resources, and it needs hypervisor software to manage the process. Nano servers do the reverse, pooling smaller amounts of processing power to support workloads without the overhead of workload sharing.
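To make the contrast concrete, here is a back-of-envelope comparison with purely illustrative numbers: virtualization divides one large host into fractions, while the nano server approach aggregates whole small nodes.

```python
# Illustrative capacity math; none of these numbers are benchmarks.

# Virtualization: one large host carved into fractional virtual servers.
host_cores = 32
vm_share = 4                     # cores allotted to each virtual server
vms_per_host = host_cores // vm_share
print(f"1 host of {host_cores} cores -> {vms_per_host} VMs of {vm_share} cores each")

# Nano server aggregation: many whole small nodes pooled per workload.
nano_cores = 4                   # cores in one nano server module
workload_cores = 16              # cores a given workload needs
nodes_needed = workload_cores // nano_cores
print(f"{workload_cores}-core workload -> {nodes_needed} nano servers, no hypervisor layer")
```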

Is it more cost-effective, or does cost not factor into the equation?

Hall: This is a method to run a more energy-efficient data center, which means lower costs. With better performance per watt, [the end] customer experience improves. Essentially, you're transferring the spend on energy and cooling into better performance-per-watt technology. All tech companies should want this to happen.


I call this cold computing, because if you add up the total amount of heat created by traditional compute designs and compare that to the total heat created by a modular compute design, the modular design is colder overall. With modular computing, the workloads that demand the most [resources and fastest response times] rapidly receive better hardware upgrades, rather than waiting years for [them].

Typically, companies will buy a server and depreciate it -- leave the hardware running in the data center for a long time, because it can't be repurposed for another use -- so they run less-efficient devices that output a lot of waste heat and have a higher energy cost to operate.

With [the nano server] concept, you transfer what you would have spent on cooling costs into higher performance sooner. ... A modular compute design that works across many form factors increases the likelihood that compute devices are repurposed out of data centers faster. Waterfalling devices down will also give more people access to the Internet faster -- getting the next billion users online sooner. The processor [could go] from server to desktop to home device to personal device in emerging countries to school computer.
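As a rough sketch of the cold computing arithmetic -- every figure below is an assumption, not a measurement -- better performance per watt shrinks both the direct power draw and the cooling load for the same amount of work:

```python
# Back-of-envelope "cold computing" comparison (all figures assumed).
# A faster refresh buys better performance per watt, so the same work
# dissipates less heat overall.

work_units = 1_000_000          # fixed amount of compute to deliver

old_perf_per_watt = 10          # work units per watt (aging hardware)
new_perf_per_watt = 25          # work units per watt (current-gen hardware)

old_watts = work_units / old_perf_per_watt
new_watts = work_units / new_perf_per_watt

# Every watt consumed becomes heat the facility must remove; total
# facility load scales with a PUE-style multiplier (assumed 1.5 here).
pue = 1.5
print(f"old fleet: {old_watts * pue:,.0f} W of total facility load")
print(f"new fleet: {new_watts * pue:,.0f} W of total facility load")
```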

Where do you see traditional IT hardware vendors in this new design?

Hall: Industrialized computing is the next step beyond commodity computing -- an array of slots; plug in a new piece when an old one breaks.

This could help turn around the hardware vendors' roadmaps -- an inflection point that changes where money is allocated and what people buy.

It is already happening to an extent with microservers. So what's wrong with the microserver design? You can't repurpose them. The rapid investment doesn't continue to pay back.

Why go with consumer-grade over enterprise-grade hardware?

Hall: There are other ways to handle reliability in enterprise data centers than hardware. People run servers without ECC RAM and it works just fine. It is possible to build reliability into the software, and not make everything one-to-one redundant. My [operating system] can move to another machine if the hardware fails, for example. You used to spend a lot of money on a highly reliable server with mirrored RAM and ECC -- for most use cases that isn't necessary, particularly with modern software design. This is a direct way to reduce cost.
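A minimal sketch of that software-level reliability idea, assuming a hypothetical health-check-and-failover loop; real deployments would lean on a cluster manager or orchestrator for this.

```python
# Sketch: reschedule workloads off a failed node instead of relying on
# redundant hardware. All node names and states are hypothetical.

nodes = {"nano-01": "healthy", "nano-02": "healthy", "nano-03": "spare"}
placement = {"web-frontend": "nano-01", "small-db": "nano-02"}

def is_healthy(node: str) -> bool:
    # Stand-in for a real probe (heartbeat, ping, agent check).
    return nodes[node] == "healthy"

def failover(workload: str) -> None:
    # Promote a spare node and move the workload onto it.
    spare = next(n for n, state in nodes.items() if state == "spare")
    nodes[spare] = "healthy"
    placement[workload] = spare
    print(f"{workload} moved to {spare}")

nodes["nano-02"] = "failed"  # simulate a hardware fault

# One pass of the control loop.
for workload, node in list(placement.items()):
    if not is_healthy(node):
        failover(workload)
```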

How do you compare this data center vision to cloud?

Hall: From a corporate perspective, there are challenges with moving to the cloud. Nano servers are the answer to maintaining your own [data center] infrastructure while performing closer to the efficiency of the public cloud.

There aren't many ways to become more efficient, but buying commodity compute on rapid refresh cycles and reducing energy and cooling costs is one. By combining two purchasing teams -- end-user devices and data center -- an IT organization can get more volume from the same IT vendor, improving pricing.

If I had a blank check and someone asked: 'How would you revolutionize computing?', this is what I'd do.

Next Steps

Discover data center innovations from around the world

Data center architects share their latest innovations

This was last published in February 2015

Join the conversation
Leaked slides detail Microsoft's 'Nano server' - the future nucleus of Windows Server
http://www.neowin.net/news/leaked-slides-detail-microsofts-nano-server---the-future-nucleus-of-windows-server
Nano servers are said to combine industrialized, modular computing components for data center use, with the same parts later powering high-performance desktops. That computing infrastructure refreshes rapidly and pools the purchasing power of end-user and data center buyers. The approach also supports internal technology incubation, with payoffs for facility operations, performance and server lifecycles. In essence, nano servers package compute in small units and aggregate them to process workloads.
Deeply skeptical at the extreme ends of the spectrum.  How many tablet vendors even allow you to crack open the case, much less swap parts in and out?

At the same time, deeply hopeful.  I proposed a similar, modular approach informally to IBM more than 20 years ago.  With the right split between "the module" and the interconnect between modules at both hardware and software layers, all kinds of things become possible.

For those still using traditional laptops, why is the docking station only a paperweight when you're not docked?  It could be doing backups, staging patches, joining the corporate grid, etc.  When you dock, it could double (or much more) the capacity of your device (in all IT dimensions) seamlessly.

With different grades of interconnects and tweaked modules, the same thing can scale to very powerful servers with the potential for mainframe-like macro-architectures.

The same core could live in a single-module form in tablets or even phones.  The same docking-to-add-capacity paradigm could create all kinds of new possibilities.  But, swapping them around between the large/mid/small scales still seems somewhat problematic.
This is one way to repurpose a server after it becomes useless in the data center: remove the motherboard and put it in a desktop.