- Alex Barrett, Modern Infrastructure Editor-in-Chief
Thinking about buying a new storage area network? With hyperconverged systems' increased popularity, you might want to reconsider.
Data center infrastructure today mainly consists of standalone virtualization servers connected to scale-up storage arrays over a network, usually Fibre Channel or iSCSI. But a new generation of hyperconverged infrastructure is challenging that model, creating virtual storage area networks (SANs) out of locally attached flash and hard disk drive storage.
Variously dubbed virtual SAN, server SAN and SAN-free storage, this new approach is causing many IT professionals to rethink their assumptions about how to approach their on-premises infrastructure needs -- especially storage.
Today's hyperconverged systems range from software-only products targeted at small and medium-sized businesses to enterprise-grade hardware appliances designed to take on mission-critical workloads, and everything in between. Players include early- and late-stage startups, tier-one server and storage OEMs, and name-brand enterprise software vendors.
And the market is booming. In 2014, analyst firm Wikibon predicted hyperconvergence sales of $487 million. Actual sales turned out to be in the $500 million to $600 million range -- virtually nothing compared with the overall enterprise storage systems market, which IDC put at $36.2 billion in 2014. But virtual SAN sales more than doubled last year. "And this year is when the big players are starting to get into it," said Stu Miniman, Wikibon senior analyst.
The early adopters
Virtual SAN adoption usually starts small, with projects designed to improve IT capabilities at a remote location. Driscoll's, for example, a multibillion-dollar-a-year berry distributor based in Watsonville, Calif., has 50 distribution centers across North America, and the servers at those sites need to connect back to enterprise resource planning and supply chain systems running in the central data center. Having servers or storage fail at the distribution centers is not an option, said Soumitra Ghosh, vice president of infrastructure.
"In a lot of ways, our business is a lot harder than Amazon's because we're dealing with a perishable commodity," Ghosh said. "If we go down, that means we cannot ship the berries, and they have to be thrown away."
So all infrastructure at the remote sites must be highly available, even though there's no on-site IT staff. To that end, Driscoll's upgraded the server and storage at its distribution centers last year. It considered prepackaged hyperconverged appliances but, because it already had suitable servers, the team chose software-only hyperconvergence from Maxta, running on a four-node cluster.
Ghosh is pleased with the performance and manageability of the Maxta stack, and the initial deployment may bear fruit back in the primary data center. "We have seen enough performance out of them that we would consider putting them into our data center," he said, as the infrastructure behind the development environment.
Hyperconverged appliances also find a home in small IT environments that don't have the stomach for a full-fledged SAN. That's what the City of West Chicago found a couple of years ago when it tried VMware virtualization.
"We bought some servers, we bought a SAN, we bought the virtualization licenses," at about $60,000, recalled Peter Zaikowski, IT manager for the city, which virtualized six of its 25 servers. Everything worked fine, but then Zaikowski realized he needed to invest another $100,000 to virtualize the remaining systems. "We're a small city. Having to go back to the City Council and ask for that money wasn't something I wanted to do."
For one-fifth the cost, Zaikowski implemented a three-node cluster of hyperconverged appliances from Scale Computing. Later this year, when the VMware licenses run out, he'll migrate those VMs to the Scale cluster, and repurpose the SAN storage as a backup-to-disk target.
Even some SAN stalwarts choose the hyperconverged path. San Mateo County in California bought a Nutanix cluster for a virtual desktop infrastructure deployment, but has migrated over 500 VMs in its VMware farm to the environment, retiring a variety of server and SAN platforms along the way. "We really liked Fibre [Channel], but it's just a lot more work, a lot more management complexity," said Erik Larson, the storage and virtualization architect for the county. Making matters worse is that people with SAN skills are increasingly hard to come by.
"We just don't have that many people comfortable adding an HBA [host bus adapter] or zoning a LUN [logical unit number]," Larson said. When the project concludes, the only servers still connected to a SAN will be legacy applications running on Unix and IBM System i, he predicted.
The vendor mix
In the short term, the biggest problem potential hyperconverged customers are likely to face is determining from whom to buy. Most, if not all, top IT vendors now have systems to compete with those from startups, many based on VMware's EVO:RAIL platform.
"We're on volley number two," said Arun Taneja, founder of the storage-centric analyst firm Taneja Group. "There are a lot of good products on the market, each with their own differentiator, and they all have a good shot at the market."
And while the hyperconvergence market is by no means mature, "we're not in the infant phase either -- more of a school child," Taneja said. "You can start to see their strengths and personalities."
Storage, compute and DR too
Some hyperconvergence plays offer more than just on-premises compute and storage capacity -- they double as backup and disaster recovery (DR) solutions too. A couple of years ago, The Neenan Co., a design-build construction company in Fort Collins, Colo., needed to upgrade its VMware ESX server farm and its aging Dell EqualLogic SAN. At the same time, George Dial, IT manager at the firm, knew that its DR posture wasn't great.
"If we suffered a loss, we could have been down three or four days if things went wrong," he said.
Dial decided to solve all of those problems at once by purchasing a pair of SimpliVity OmniCubes for its office. It put another pair at a sister real-estate firm in Denver, and set the two clusters to replicate to one another.
Improved DR was the deciding factor. "I looked at buying new HP servers and another EqualLogic, and on paper it was actually less expensive, but [the system] wasn't very smart," Dial said. There was no deduplication of data, the performance was average, and it meant having to administer backups through a separate interface. In contrast, SimpliVity includes deduplication as a baseline feature, performance is great, and all administration -- even backup and DR -- happens through VMware vCenter.
"It was sort of a leap of faith," Dial said. But even though SimpliVity was a very new company at the time, the system has proved itself, and "everybody is over the moon."
Alex Barrett is editor in chief of Modern Infrastructure. Email her at firstname.lastname@example.org.