ORLANDO, Fla. -- Emerging trends require data centers to adapt, sometimes to major disruptions.
This might call for new systems, tools, processes and procedures to keep the business competitive or enhance user activities. Here at Gartner ITxpo 2014, Gartner's research vice president David J. Cappuccio examined the top 10 emerging IT trends that will change data center infrastructure and operations (I&O) in the near future.
1. More software-defined infrastructure
Whether it's software-defined networking, storage or data centers, software-based tools that connect computing resources and components are dispensing with traditional physical devices hard-wired or hand-configured across the data center. Software-defined anything concentrates I&O management in a single place or tool, either on- or off-premises. These technologies also share a common goal of enhancing workload mobility and traffic flow based on logical rules, allowing workloads to be provisioned and run where they are most effective or efficient.
The move to cloud computing and agile application development makes software-defined infrastructure essential. Organizations will simply be unable to devote IT staff time to manually provisioning and migrating these workloads. Businesses must deploy systems and tools to support software-defined infrastructure elements, and implement processes or procedures to use those tools.
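The "logical rules" Cappuccio describes can be sketched as a small placement function: instead of hand-configuring which box runs a workload, a policy engine picks the site that satisfies the workload's constraints. The site names, capacity figures and rule fields below are illustrative assumptions, not anything from the talk:

```python
# Hypothetical sketch of rule-driven workload placement. A software-defined
# layer evaluates logical rules against an inventory of sites (on- or
# off-premises) instead of relying on hand-wired device configuration.
SITES = {
    "on-prem-east": {"free_cores": 12, "cost_per_core": 1.0, "region": "us-east"},
    "on-prem-west": {"free_cores": 2,  "cost_per_core": 1.0, "region": "us-west"},
    "public-cloud": {"free_cores": 64, "cost_per_core": 2.5, "region": "us-east"},
}

def place(workload):
    """Return the cheapest site that satisfies the workload's rules."""
    candidates = [
        (name, site) for name, site in SITES.items()
        if site["free_cores"] >= workload["cores"]
        and workload.get("region") in (None, site["region"])
    ]
    if not candidates:
        raise RuntimeError("no site satisfies the placement rules")
    # Among feasible sites, prefer the lowest cost per core.
    name, _ = min(candidates, key=lambda ns: ns[1]["cost_per_core"])
    return name

print(place({"cores": 8, "region": "us-east"}))  # picks on-prem-east
```

The same rules can later migrate the workload automatically when capacity or cost changes, which is exactly the manual provisioning work the article says IT staff will no longer have time for.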
2. Continuity rather than recovery
Backup and recovery plans previously were based on business needs and reasonable cost/performance tradeoffs for recovery time and point objectives. As the pace of IT accelerates and users demand always-on performance, the trend is moving toward a seamless integration of business continuity and disaster recovery.
The focus is shifting from the data center to the workload. I&O staff need to protect and migrate workloads to mitigate risk and speed recovery. This often means using multiple data center facilities connected by private backbones across disparate geographical areas. For example, if a network disruption occurs outside of Dallas, critical workloads will migrate to another facility in Seattle, where performance might be just as good.
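In the spirit of the Dallas-to-Seattle example, the continuity logic amounts to hosting each workload at the highest-priority facility that passes a health check. The site names, the health probe and the priority order below are illustrative assumptions:

```python
# Hypothetical sketch of workload failover between geographically
# disparate facilities. In practice the health probe would be a ping,
# heartbeat or API check over the private backbone.
SITES = ["dallas", "seattle"]  # priority order: primary first

def healthy(site, reachable):
    # Stand-in for a real health probe.
    return site in reachable

def run_site(workload, reachable):
    """Return the first healthy site, in priority order, to host the workload."""
    for site in SITES:
        if healthy(site, reachable):
            return site
    raise RuntimeError(f"no healthy site for {workload}")

# Normal operation: the primary facility hosts the workload.
assert run_site("billing", reachable={"dallas", "seattle"}) == "dallas"
# Network disruption outside Dallas: the workload migrates to Seattle.
assert run_site("billing", reachable={"seattle"}) == "seattle"
```

The point of the trend is that this decision happens continuously and automatically, so users see continuity rather than a recovery window.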
3. More integrated systems
Converged infrastructure (CI) is a trend that has been building for several years, and continues to gain attention. The goal is to move away from heterogeneous systems built from individually sourced components and acquire entire infrastructures (server, network and storage systems) that a vendor already integrates, optimizes and vets, such as Cisco UCS.
CI offers simpler deployment and service, and can scale higher and perform faster than heterogeneous systems cobbled together in-house. CI can be a particularly disruptive technology because it displaces all of the data center's current hardware. You'll see these integrated deployments more often in greenfield builds or remote data centers.
4. Disaggregated systems
Disaggregated systems are the opposite of integrated systems, breaking down the traditional packaged server architecture into core functional components that are tied together with optical interconnects (such as Intel's silicon photonics). The idea is to create separate processor, storage, network and memory modules that can be scaled up as the data center's workloads require it. Upgrades cover only select modules that need to be replaced, rather than scrapping an entire system, so capital isn't wasted.
Disaggregation is a trend first embodied by Facebook's Open Compute Project. Open source software is already well-established in today's enterprise, Cappuccio said, and open hardware should follow a similar path.
5. Bimodal software development
With so much emphasis on DevOps and agile software development, it's easy to forget that mission-critical enterprise applications are not agile. An agile team focuses on speed, while the operations team focuses on stability, and this poses two distinctly different sets of software requirements. It doesn't have to be one way or the other -- the two different software models can coexist productively, Cappuccio said.
Slower and more refined development supports mission-critical legacy applications in the traditional data center. Agile or DevOps practices yield faster, more incremental updates for applications where stability isn't so important. This makes it easier to try creative computing platforms (such as public cloud) without jeopardizing critical workloads.
6. A growing Internet of Things
In 2012, there were 17 billion Internet devices. By 2020, there will be 50 billion Internet devices, Cappuccio said. The future of Internet devices centers on small sensors that operate autonomously, discover one another and form their own peer networks, know their own location, and run without batteries. The ultimate goal of this Internet of Things is to offer real-time support and machine learning capabilities.
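The two figures cited above imply a compound annual growth rate of roughly 14% per year. The device counts come from the article; the formula is the standard CAGR calculation:

```python
# Compound annual growth rate implied by the figures Cappuccio cited:
# 17 billion Internet devices in 2012, 50 billion projected for 2020.
devices_2012 = 17e9
devices_2020 = 50e9
years = 2020 - 2012

cagr = (devices_2020 / devices_2012) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 14.4% growth per year
```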
This growth requires dramatic rethinking of the data center infrastructure that captures, stores, processes and reports on data from all of these Internet devices.
7. Hyperconnected users
Users are far more demanding of computing capabilities than in years past, increasingly using social media, mass collaboration, personal networks, collective information (such as Yelp reviews) and location independence in a trend Cappuccio calls hyperconnectivity.
Each hyperconnected user demands increasingly rich media and bandwidth. IT professionals must consider the range of services offered and how those services are provisioned and delivered reliably to satisfy the need for instant and impressive results.
8. Distributed data centers
There is a pendulum in the IT industry that swings between distribution and consolidation of data center sites. Some data centers are trending toward a dispersed model -- particularly in highly distributed businesses like large retail chains, Cappuccio said. Rather than a single central data center, the organization's principal computing operations are performed using a few servers at each site. This helps to spread the power and cooling burden and improves overall organizational resilience: a fault at one site does not cripple the entire organization. These micro data centers are often supported by a single local staffer with little (if any) formal I&O knowledge.
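The resilience argument can be made concrete with a simple calculation. The failure probability and site count below are illustrative assumptions, and the model assumes sites fail independently -- a simplification, since real outages are often correlated:

```python
# Illustrative availability math for distributed vs. centralized sites.
p_site_down = 0.01  # assumed chance any one site is down at a given moment
n_sites = 10        # assumed number of micro data centers

# One central data center: an outage takes down the whole organization.
central_outage = p_site_down

# Ten micro data centers: a fault at one site loses only that site's
# operations; the chance that every site is down simultaneously is
# astronomically small under the independence assumption.
all_sites_down = p_site_down ** n_sites  # about 1e-20

print(central_outage, all_sites_down)
```

The trade-off, as the article notes, is operational: each site is typically staffed by one generalist rather than a trained I&O team.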
9. Nonstop demand
Users have access to more applications and content than ever before. There are 14.4 billion Web pages, 1.3 million iPhone apps, 1.1 million Android apps and an average of four computing devices per user.
The desire for more content and applications has driven up server work, network bandwidth, storage and power use, and this will invariably affect data center capacity and resilience. Yet Cappuccio notes an average of only 1% growth in IT budgets. This means new technologies, more capacity and better automation will be essential to meet future user demand for computing services.
10. Scarcer IT skills
Reduced budgets, cloud services, increasing IT complexity, demands for 24/7 support, faster change cycles, and shorter development times all erode the pool of qualified IT professionals for today's workloads, Cappuccio said. Rather than simply ensuring that skills match upcoming project requirements, fostering a healthy, productive IT environment requires cross-discipline training and engaging IT staff, as well as promoting horizontal thinking across traditional IT silos.
"It's okay to think outside the box," Cappuccio said. "IT people like to learn. They get better and add business value."
To address these emerging challenges, he suggests that IT professionals work to update their skills base and explore software-defined technologies in lab settings or even limited production environments. Don't overlook the explosion of IP addresses spawned by the Internet of Things; IT must be prepared to support a vast number of devices with a high degree of speed and automation. And when moving to cloud technologies such as hybrid clouds, evaluate offerings based on factors like need, price, value and viability. These strategies can help to address IT complexity that will only increase in the years ahead.