data center

What is a data center?

A data center -- also known as a datacenter or data centre -- is a facility composed of networked computers, storage systems and computing infrastructure that businesses and other organizations use to organize, process, store and disseminate large amounts of data. A business typically relies heavily upon the applications, services and data contained within a data center, making it a focal point and critical asset for everyday operations.

Enterprise data centers increasingly incorporate facilities for securing and protecting cloud computing resources, as well as in-house, on-site resources. As enterprises increasingly turn to cloud computing, the boundaries between cloud providers' data centers and enterprise data centers become less clear-cut.

How do data centers work?

A data center facility enables an organization to collect its resources and infrastructure for data processing, storage and communications, which include:

  • systems for storing, sharing, accessing and processing data across the organization;
  • physical infrastructure to support data processing and data communications; and
  • utilities such as cooling, electricity, network access and uninterruptible power supplies (UPSes).

Gathering all these resources in a data center enables the organization to:

  • protect proprietary systems and data;
  • centralize IT and data processing employees, contractors and vendors;
  • apply information security controls to proprietary systems and data; and
  • realize economies of scale by consolidating sensitive systems in one place.

Why are data centers important?

Data centers support almost all computation, data storage and business applications for the enterprise. To the extent that the business of a modern enterprise is run on computers, the data center is the business.

Data centers enable organizations to concentrate their processing power, which in turn lets them concentrate their:

  • IT and data processing personnel;
  • computing and network connectivity infrastructure; and
  • computing facility security.

What are the core components of data centers?

Elements of a data center are generally divided into three categories:

  1. Computation
  2. Enterprise data storage
  3. Networking

A modern data center concentrates an organization's data systems in a well-protected physical infrastructure, including:

  • servers;
  • storage subsystems;
  • networking switches, routers and firewalls;
  • cabling; and
  • physical racks to organize and interconnect IT equipment.

Data center resources usually include:

  • power distribution and supplemental power subsystems;
  • electrical switching;
  • UPSes;
  • backup generators;
  • ventilation and data center cooling systems, such as in-row cooling configurations and computer room air conditioners; and
  • adequate provisioning for network carrier (telecom) connectivity.

All of this demands a physical facility with physical security access controls and sufficient square footage to house the entire collection of infrastructure and equipment.

How are data centers managed?

Data center management spans several distinct areas, including:

  • Facilities management. Managing the physical data center facility can include duties related to the real estate of the facility, utilities, access control and personnel.
  • Data center inventory or asset management. This covers the facility's hardware assets, as well as software licensing and release management.
  • Data center infrastructure management. DCIM lies at the intersection of IT and facility management and is usually accomplished through monitoring of the data center's performance to optimize energy, equipment and floor space use.
  • Technical support. The data center provides technical services to the organization, and as such it must also provide technical support to enterprise end users.
  • Operations. Data center management includes day-to-day processes and services that are provided by the data center.

This image shows an IT professional installing and maintaining high-capacity rack-mounted systems in a data center.

Data center infrastructure management and monitoring

Modern data centers make extensive use of monitoring and management software. Such software, including DCIM tools, lets IT administrators remotely oversee the facility and equipment, measure performance, detect failures and implement a wide array of corrective actions without ever physically entering the data center room.
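
As a rough illustration, the sketch below shows the kind of threshold check such software runs continuously. It is written in Python, and the sensor names and alert limits are hypothetical, not taken from any particular DCIM product.

    # Hypothetical readings collected from facility sensors.
    readings = {
        "rack12_inlet_temp_c": 27.5,   # cold-aisle inlet temperature
        "rack12_humidity_pct": 41.0,
        "ups1_load_pct": 88.0,
    }

    # Alert thresholds (illustrative values, not an industry standard).
    limits = {
        "rack12_inlet_temp_c": 27.0,
        "rack12_humidity_pct": 60.0,
        "ups1_load_pct": 90.0,
    }

    for sensor, value in readings.items():
        if value > limits[sensor]:
            # A real DCIM tool would page an administrator or open a ticket.
            print(f"ALERT: {sensor} = {value} exceeds limit {limits[sensor]}")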

The growth of virtualization has added another important dimension to data center infrastructure management. Virtualization supports the abstraction of servers, networks and storage, allowing every computing resource to be organized into pools without regard to physical location. Administrators can then provision workloads, storage instances and even network configurations from those common resource pools. When administrators no longer need those resources, they can return them to the pool for reuse. Because server, storage and network virtualization are all accomplished through software, the term software-defined data center has gained traction.
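
To make the pooling idea concrete, here is a minimal Python sketch of a shared resource pool that workloads draw from and return to. The class and method names are illustrative, not any vendor's API.

    class ResourcePool:
        """Abstract capacity pooled across many physical hosts."""

        def __init__(self, vcpus, memory_gb, storage_tb):
            self.free = {"vcpus": vcpus, "memory_gb": memory_gb,
                         "storage_tb": storage_tb}

        def provision(self, **request):
            """Reserve capacity for a workload, if the pool can satisfy it."""
            if any(self.free[k] < v for k, v in request.items()):
                raise RuntimeError("insufficient pooled capacity")
            for k, v in request.items():
                self.free[k] -= v
            return request  # a handle representing the allocation

        def release(self, allocation):
            """Return a workload's resources to the pool for reuse."""
            for k, v in allocation.items():
                self.free[k] += v

    pool = ResourcePool(vcpus=512, memory_gb=2048, storage_tb=100)
    vm = pool.provision(vcpus=8, memory_gb=32, storage_tb=1)
    pool.release(vm)  # capacity becomes available to other workloads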

Energy consumption and efficiency

Data center designs also recognize the importance of energy efficiency. A simple data center may need only a few kilowatts of energy, but enterprise data centers can require more than 100 megawatts. Today, the green data center, which is designed for minimum environmental impact through the use of low-emission building materials, catalytic converters and alternative energy technologies, is growing in popularity.

Data centers can also maximize efficiency through their physical layout, using a method known as hot aisle/cold aisle layout. Server racks are lined up in alternating rows, with cold air intakes facing one way and hot air exhausts facing the other, creating alternating hot and cold aisles. The exhausts point toward the air conditioning equipment, which is often placed between the server cabinets in the row or aisle and distributes the cooled air back to the cold aisle. This configuration of the air conditioning equipment is known as in-row cooling.

Organizations often measure data center energy efficiency through a metric called power usage effectiveness (PUE): the ratio of the total power entering the data center to the power used by IT equipment alone. The rise of virtualization has since allowed much more productive use of IT equipment, resulting in higher efficiency, lower energy use and mitigated energy costs. Metrics such as PUE are no longer central to energy efficiency goals, but organizations may still gauge PUE and employ comprehensive power and cooling analyses to better understand and manage energy efficiency.
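
As a worked example, the PUE arithmetic is straightforward; the power figures below are purely illustrative.

    # PUE = total facility power / IT equipment power.
    total_facility_power_kw = 1500.0   # all power entering the data center
    it_equipment_power_kw = 1000.0     # power consumed by IT equipment alone

    pue = total_facility_power_kw / it_equipment_power_kw
    print(f"PUE = {pue:.2f}")  # 1.50: each watt of IT load carries 0.5 W of overhead

    # An ideal facility approaches PUE = 1.0, meaning nearly all incoming
    # power reaches the IT equipment rather than cooling and other overhead.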

Data center tiers

Data centers are not defined by their physical size or style. Small businesses may operate successfully with several servers and storage arrays networked within a closet or small room, while major computing organizations -- such as Facebook, Amazon or Google -- may fill an enormous warehouse space with data center equipment and infrastructure. In other cases, data centers can be assembled in mobile installations, such as shipping containers, also known as data centers in a box, which can be moved and deployed as required.

However, data centers can be defined by various levels of reliability or resilience, sometimes referred to as data center tiers. In 2005, the American National Standards Institute (ANSI) and the Telecommunications Industry Association (TIA) published standard ANSI/TIA-942, "Telecommunications Infrastructure Standard for Data Centers," which defined four tiers of data center design and implementation guidelines.

Each subsequent tier is intended to provide more resilience, security and reliability than the previous tier. For example, a Tier I data center is little more than a server room, while a Tier IV data center offers redundant subsystems and high security.

Tiers can be differentiated by available resources, data center capacities or by uptime guarantees. The Uptime Institute defines data center tiers as follows:

  • Tier I. This is the most basic type of data center, incorporating a UPS. Tier I data centers do not provide redundant systems but should guarantee at least 99.671% uptime.
  • Tier II. These data centers include system, power and cooling redundancy and guarantee at least 99.741% uptime.
  • Tier III. These data centers provide partial fault tolerance, 72 hours of outage protection, full redundancy and a 99.982% uptime guarantee.
  • Tier IV. These data centers guarantee 99.995% uptime -- no more than 26.3 minutes of downtime per year, as the arithmetic sketched below shows -- as well as full fault tolerance, system redundancy and 96 hours of outage protection.

Most data center outages can be attributed to four general categories.
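
The downtime figures attached to each tier follow directly from the uptime percentages; a quick Python sketch of the arithmetic:

    # Convert a tier's uptime guarantee into maximum annual downtime.
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

    for tier, uptime_pct in [("Tier I", 99.671), ("Tier II", 99.741),
                             ("Tier III", 99.982), ("Tier IV", 99.995)]:
        downtime_min = MINUTES_PER_YEAR * (100 - uptime_pct) / 100
        print(f"{tier}: at most {downtime_min:.1f} minutes of downtime per year")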

Data center architecture and design

Although almost any suitable space could conceivably serve as a data center, the deliberate design and implementation of a data center requires careful consideration. Beyond the basic issues of cost and taxes, sites are selected based on a multitude of criteria, such as geographic location, seismic and meteorological stability, access to roads and airports, availability of energy and telecommunications and even the prevailing political environment.

Once a site is secured, the data center architecture can be designed with attention to the mechanical and electrical infrastructure, as well as the composition and layout of the IT equipment. All these issues are guided by the availability and efficiency goals of the desired data center tier.

Data center security and safety

Data center designs must also implement sound safety and security practices. For example, safety is often reflected in the layout of doorways and access corridors, which must accommodate the movement of large, unwieldy IT equipment, as well as permit employees to access and repair the infrastructure.

Fire suppression is another key safety area, and the extensive use of sensitive, high-energy electrical and electronic equipment precludes common sprinklers. Instead, data centers often use environmentally friendly chemical fire suppression systems, which effectively starve a fire of oxygen while mitigating collateral damage to the equipment. Because the data center is also a core business asset, comprehensive security measures and access controls are required. These can include:

  • badge access;
  • biometric access control; and
  • video surveillance.

Properly implemented, these security measures can help detect and prevent malfeasance by employees, contractors and intruders.

What is data center consolidation?

There is no requirement for a single data center: a modern business may use two or more data center installations across multiple locations for greater resilience and better application performance, lowering latency by locating workloads closer to users.

Conversely, a business with multiple data centers may opt to consolidate them, reducing the number of locations to minimize the costs of IT operations. Consolidation typically occurs during mergers and acquisitions, when the acquiring business doesn't need the data centers owned by the acquired business.

What is data center colocation?

Data center operators can also pay a fee to rent server space in a colocation facility. Colocation is an appealing option for organizations that want to avoid the large capital expenditures associated with building and maintaining their own data centers. Today, colocation providers are expanding their offerings to include managed services, such as interconnectivity, enabling customers to connect to the public cloud.

Because many service providers today offer managed services along with their colocation facilities, the definition of managed services becomes blurry, as all vendors market the term in a slightly different way. The important distinction to make is this:

  • Colocation. The organization pays a vendor to house its hardware in a facility. The customer pays for the space alone.
  • Managed services. The organization pays a vendor to actively maintain or monitor its hardware in some way, whether through performance reports, interconnectivity, technical support or disaster recovery.

What is the difference between a data center and the cloud?

Cloud computing vendors provide the same facilities as enterprise data centers, and they offer these services through data centers of their own. The greatest difference between a cloud data center and a typical enterprise data center is one of scale: because cloud data centers serve many different organizations, they can be enormous.

Very large enterprises like Google can require very large data centers, like this Google data center in Douglas County, Ga.

Because enterprise data centers increasingly implement private cloud software, they increasingly look to end users like the services offered by commercial cloud providers.

Private cloud software builds on virtualization to add cloudlike services, including:

  • system automation;
  • user self-service; and
  • billing and chargeback for data center use.

The goal is to allow individual users to provision workloads and other computing resources on demand, without IT administrative intervention.
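
As one small illustration of the chargeback piece, the Python sketch below meters a workload's resource consumption and bills it back to the owning department; the rates and usage figures are hypothetical.

    # Hypothetical internal rates for pooled data center resources.
    RATES = {"vcpu_hours": 0.04, "memory_gb_hours": 0.005,
             "storage_tb_months": 20.0}

    # One 8-vCPU, 32 GB VM with 1 TB of storage, run for a 720-hour month.
    usage = {"vcpu_hours": 8 * 720, "memory_gb_hours": 32 * 720,
             "storage_tb_months": 1}

    charge = sum(RATES[k] * usage[k] for k in usage)
    print(f"Monthly chargeback: ${charge:.2f}")  # $365.60 in this example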

Further blurring the lines between an enterprise data center and cloud computing is the growth of hybrid cloud environments. As enterprises increasingly rely on public cloud providers, they must incorporate connectivity between their own data centers and their cloud providers.

For example, platforms such as Microsoft Azure emphasize the hybrid use of local data centers with Azure or other public cloud resources. The result is not an elimination of data centers, but rather, the creation of a dynamic environment that allows organizations to run workloads locally or in the cloud or to move those instances to or from the cloud as desired.

Evolution of data centers

The origins of the first data centers can be traced back to the 1940s and the existence of early computer systems like the Electronic Numerical Integrator and Computer (ENIAC). These early machines were complex to maintain and operate and had a slew of cables connecting all the necessary components. They were also in use by the military -- meaning specialized computer rooms with racks, cable trays, cooling mechanisms and access restrictions were necessary to both accommodate all the equipment and implement the proper security measures.

However, it was not until the 1990s, when IT operations started to grow more complex and inexpensive networking equipment became available, that the term data center came into use. It became possible to house all of a company's necessary servers in a single room on its premises. These specialized computer rooms were dubbed data centers within the organizations, and the term gained traction.

Around the time of the dot-com bubble in the late 1990s, companies' need for fast, always-on internet presence necessitated larger facilities to house the required networking equipment. It was at this point that data centers became popular and began to resemble those described above.

Over the history of computing, as computers have become smaller and networks bigger, the data center has evolved and shifted to accommodate the necessary technology of the day.

To find out how to build a green, sustainable data center, read "Considerations for sustainable data center design."
