Anyone in IT knows there’s a fine line between forward-thinking and foolish, and here at Modern Infrastructure, we love to explore that boundary. In fact, sometimes I think that’s my professional raison d’être: ferreting out interesting technologies and projects, and presenting them in the cold light of day so readers can assess and debate their merits.
This month’s issue is filled—I think—with articles about new takes on traditional technology challenges. I’ll let you judge whether they fall into the forward-looking or foolish bins.
In this month’s cover story, contributing editor David Strom takes on software-defined networking, the latest form of network virtualization to whip IT pundits into a frenzy. SDN is being driven by larger-than-life data center operators like Google and Facebook. It faces considerable obstacles from the status quo — but it still has plenty to offer the little people toiling away in regular enterprise data centers. Chief among SDN’s touted benefits: dramatically reducing the time it takes to provision (and de-provision and re-provision) virtual machines, and a possible end to VLAN scalability limitations. Both sound like worthy goals. David also describes some early SDN use cases, some promising products and some important questions to consider as you start thinking about next-generation network architecture.
Another IT bugbear is how to architect applications for better uptime. In her feature, senior news writer Beth Pariseau considers application uptime through the lens of public cloud, IT’s latest silver bullet. Building applications on a public cloud foundation, she writes, has shifted the availability conversation from simply automating backups and restores to building in application resilience. That shift happens when you start with the premise that everything fails eventually and, from there, distribute applications across multiple loosely coupled clouds. But building apps this way requires fundamental changes to how IT operates—for example, switching from standalone relational databases to more distributed ones, or finding creative ways around tall compliance hurdles. It’s possible that this model is overkill for most traditional applications—but it sure does present an interesting alternative to the same old failover clustering and high-availability tools.
In our final installment of stories about wacky ideas that just might have something to them, I look at whether it makes sense to re-platform legacy applications on the public cloud. The number of existing applications that can easily move to the cloud probably isn’t high, but for those that can, cloud’s resilience and low cost could gracefully and affordably extend these aging apps’ useful life while minimizing IT’s management burden. Or so the theory goes.
But enough about what I think is interesting in data centers today. What about you—what’s your vote for most forward-looking technology that you’d like to see us tackle? Are you involved in some bordering-on-foolish projects that you think just might work? By all means, drop me a line at firstname.lastname@example.org.
This was first published in November 2012