Making big data platforms work with current IT systems is a fine line that system administrators must learn to walk.
It’s an old joke, but an apt one: How do you make God laugh? Tell him your plans.
You’re training for a marathon when you slip on a patch of ice, twist your ankle and find yourself sidelined for weeks. Or you’ve finally saved enough money to go on an exotic vacation, but your boiler blows up. You’re settling down to watch the big game, and the power goes out. You get the idea.
That’s the way a lot of infrastructure and operations pros view big data, which has landed like a fly in IT’s proverbial soup. In her story “Big Data, Big Changes,” senior news writer Beth Pariseau describes how enterprises have spent the better part of a decade systematically virtualizing, consolidating and generally hardening IT systems. But the sudden popularity of big data analytics has caused an influx of one-off physical servers and local storage, with little to none of the reliability and redundancy that infrastructure professionals have come to expect. Hadoop-based systems are especially vulnerable, she finds, because the directory namespace server (Hadoop’s NameNode), which tracks where all the data throughout the system resides, is architected as a single point of failure.
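Why a single point of failure matters so much comes down to basic availability arithmetic: components that must all be up multiply their availabilities, so one fragile namespace server caps the whole cluster no matter how redundant the data nodes are. The sketch below is illustrative only; the 99% figure and the independent hot-standby assumption are mine, not measured Hadoop numbers.

```python
# Illustrative sketch: serial components multiply their availabilities,
# so a single 99%-available namespace server caps the whole cluster.
# All availability figures here are assumed for illustration.

HOURS_PER_YEAR = 24 * 365  # 8,760

def downtime_hours(availability: float) -> float:
    """Expected downtime per year for a given availability."""
    return (1 - availability) * HOURS_PER_YEAR

single_namenode = 0.99                    # one fragile server
ha_pair = 1 - (1 - single_namenode) ** 2  # two independent servers; either can serve

print(f"single NameNode: {downtime_hours(single_namenode):.1f} h/yr down")
print(f"HA pair:         {downtime_hours(ha_pair):.2f} h/yr down")
```

Under these assumptions, the lone server implies roughly 87.6 hours of expected downtime a year, while even a simple redundant pair cuts that to under an hour, which is why infrastructure pros balk at the one-off, unprotected deployments Pariseau describes.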
Nor do most big data platforms play well with existing management stacks and processes like monitoring or backup. And while outsourcing big data applications to the cloud can alleviate some of these issues, running workloads there comes with its own security and compliance concerns. Meanwhile, IT is scrambling to accommodate these changes, but it’s unlikely that old approaches will fit the bill.
Of course, many failures are IT’s own doing. When Superstorm Sandy struck this fall, we learned just how many businesses ignore disaster recovery best practices and rely on a single Internet provider, said Frances Poeta, president of P&M Computers Inc., an IT consultancy based in Cliffside Park, N.J. That’s just one of the mistakes outlined in my story “Sandy Provides DR Refresher Course” that organizations won’t repeat anytime soon.
But barring disasters, change is happening constantly, rendering most capacity planning, in a word, pointless. In his column “When Chaos Reigns,” contributor Jonathan Eunice schools us on Chaos Theory, which teaches us that “small, seemingly random changes can now greatly change outcomes, and completely reshape longer-term trends.” That has powerful implications for IT professionals, who should set their sights more realistically. “We can rationally extrapolate only over very short time frames: a few months, or a few quarters,” he writes—not the years that IT planners usually gear up for.
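Eunice’s point that “small, seemingly random changes can now greatly change outcomes” can be demonstrated with the logistic map, a textbook chaotic system (an independent illustration, not taken from his column): two starting values differing by one part in a billion diverge completely within a few dozen iterations.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).

def trajectory(x0, r=4.0, steps=100):
    """Iterate the logistic map from x0 and return the whole path."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.200000000)  # "the plan"
b = trajectory(0.200000001)  # a one-in-a-billion perturbation

gap = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {gap[0]:.1e}, max gap over 100 steps: {max(gap):.2f}")
```

The tiny perturbation grows roughly geometrically until the two trajectories bear no resemblance to each other, which is exactly why, as Eunice argues, extrapolation is only rational over short horizons.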
On the bright side, new operational philosophies such as Agile development and DevOps are based on the assumption of constant change, and provide support for handling it gracefully. So when your boss wants to see a long-term capacity plan that incorporates a big data platform, tell him not to worry. You’ve got God on your side.