Big data platforms may not play well under current IT rules

IT must learn to juggle current systems and new big data platforms or face the chaos of disparate systems.


Making big data platforms work with current IT systems is a fine line that system administrators must learn to walk.

It’s an old joke, but an apt one: How do you make God laugh? Tell him your plans.

You’re training for a marathon, when you slip on a patch of ice, twist your ankle and are sidelined for weeks. Or you’ve finally saved enough money to go on an exotic vacation, but your boiler blows up. You’re settling down to watch the big game, and the power goes out. You get the idea.

That’s the way a lot of infrastructure and operations pros view big data, which has landed like a fly in IT’s proverbial soup. In her story “Big Data, Big Changes,” senior news writer Beth Pariseau describes how enterprises have spent the better part of a decade systematically virtualizing, consolidating and generally hardening IT systems. But the sudden popularity of big data analytics has caused an influx of one-off physical servers and local storage, with little to none of the reliability and redundancy that infrastructure professionals have come to expect. Hadoop-based systems are especially vulnerable, she finds, because the directory namespace server (Hadoop’s NameNode), which tracks where all the data throughout the system resides, is architected as a single point of failure.
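
To make that failure mode concrete, here is a minimal toy sketch in Python. It is not Hadoop code, and the class, block IDs and node names are invented for illustration; it simply shows why a lone namespace server takes the whole cluster down with it even when the data replicas survive.

```python
# Toy model (not Hadoop code) of why a single namespace server is a
# single point of failure: block replicas survive on the data nodes,
# but without the metadata that maps files to blocks, nothing is reachable.

class NameNode:
    """Holds the only copy of the file-to-block mapping."""
    def __init__(self):
        self.alive = True
        self.namespace = {}  # file path -> list of (block_id, datanode)

    def register(self, path, blocks):
        self.namespace[path] = blocks

    def locate(self, path):
        if not self.alive:
            raise RuntimeError("namespace server down: data is unreachable")
        return self.namespace[path]


# Replicas of the same block sit intact on two data nodes.
datanodes = {"dn1": {"blk_1": b"..."}, "dn2": {"blk_1": b"..."}}

nn = NameNode()
nn.register("/logs/2013-03-01.log", [("blk_1", "dn1"), ("blk_1", "dn2")])

print(nn.locate("/logs/2013-03-01.log"))  # works while the NameNode is up

nn.alive = False  # simulate the single point of failure
try:
    nn.locate("/logs/2013-03-01.log")
except RuntimeError as err:
    print(err)  # replicas still exist on dn1 and dn2, but nothing can find them
```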

Nor do most big data platforms play well with existing management stacks and processes like monitoring or backup. And while outsourcing big data applications to the cloud can alleviate some of these issues, running workloads there comes with its own security and compliance concerns. Meanwhile, IT is scrambling to accommodate these changes, but it’s unlikely that old approaches will fit the bill.

Of course, many failures are IT’s own doing. When Superstorm Sandy struck this fall, we learned just how many businesses ignore disaster recovery best practices and rely on a single Internet provider, said Frances Poeta, president of P&M Computers Inc., an IT consultancy based in Cliffside Park, N.J. That’s just one of the mistakes outlined in my story “Sandy Provides DR Refresher Course” that organizations won’t repeat anytime soon.

But barring disasters, change is happening constantly, rendering most capacity planning, in a word, pointless. In his column “When Chaos Reigns,” contributor Jonathan Eunice schools us on Chaos Theory, which teaches us that “small, seemingly random changes can now greatly change outcomes, and completely reshape longer-term trends.” That has powerful implications for IT professionals, who should set their sights more realistically. “We can rationally extrapolate only over very short time frames: a few months, or a few quarters,” he writes—not the years that IT planners usually gear up for.
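
Eunice’s point about small changes reshaping long-term trends is easy to demonstrate. The sketch below is my own illustration, not code from his column: it iterates the classic logistic map from two starting points that differ by one part in a million and shows how quickly the two “forecasts” stop agreeing.

```python
# Sensitivity to initial conditions: two trajectories that start almost
# identically diverge completely within a few dozen steps.

def logistic_map(x0, r=3.9, steps=40):
    """Iterate x -> r * x * (1 - x), a standard chaotic toy model."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.500000)
b = logistic_map(0.500001)  # differs by one part in a million

for step in (1, 10, 20, 30, 40):
    print(f"step {step:2d}: {a[step]:.4f} vs {b[step]:.4f}")
# The early steps agree to several decimal places; by the later steps the
# two trajectories bear no resemblance to each other. Short-range
# extrapolation is sound, long-range extrapolation is not.
```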

On the bright side, new operational philosophies such as Agile development and DevOps are based on the assumption of constant change, and provide support for handling it gracefully. So when your boss wants to see a long-term capacity plan that incorporates a big data platform, tell him not to worry. You’ve got God on your side.

About the author:
Alex Barrett is the editor in chief of Modern Infrastructure. Write to her at abarrett@techtarget.com or tweet her at @aebarrett.

This was first published in March 2013
