
Mitigate data center storage capacity miscalculations

You can use cloud storage and deduplication to expand storage capacity, prevent application failure and maintain data center performance.

When storage shortages occur, organizations must immediately free up space, then re-evaluate storage usage and data center storage capacity to prevent future shortfalls. Serious or unanticipated shortages can lead to application failures and costly service disruptions.

One short-term fix for exhausted storage capacity is to shift lesser-used data to cloud storage. IT administrators can park data in the cloud to free up space until they can expand their on-premises storage capacity. Migrating duplicate and infrequently accessed data to the cloud frees valuable storage for mission-critical applications.
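As a rough illustration, the Python sketch below offloads files that haven't been accessed in six months to an Amazon S3 bucket, assuming boto3 is installed and AWS credentials are configured. The bucket name, source directory and age threshold are hypothetical placeholders.

import os
import time

import boto3  # AWS SDK for Python; assumes credentials are already configured

# Hypothetical values for illustration; substitute your own bucket and paths.
BUCKET = "example-archive-bucket"
SOURCE_DIR = "/data/shared"
AGE_THRESHOLD_DAYS = 180

s3 = boto3.client("s3")
cutoff = time.time() - AGE_THRESHOLD_DAYS * 86400

for root, _dirs, files in os.walk(SOURCE_DIR):
    for name in files:
        path = os.path.join(root, name)
        # Only offload files that haven't been accessed recently.
        if os.stat(path).st_atime < cutoff:
            key = os.path.relpath(path, SOURCE_DIR)
            s3.upload_file(path, BUCKET, key)  # copy to cloud storage
            os.remove(path)                    # then reclaim the local space

In practice, admins would verify that each upload succeeded before deleting the local copy and would keep a record of where each file went so it can be retrieved later.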

Another option is to prevent the future proliferation of duplicate, old or unnecessary files, which accumulate as data is shared, modified, backed up, archived and redistributed. To curb this growth, admins should limit the number of copies of data shared between environments.

If an organization must share a data set across many environments -- such as production, quality assurance and development -- it may be beneficial to keep the original data on disk for the production environment and use a single copy of that data as the source for all other environments.
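One way to approximate this on a single filesystem is with hard links, so each environment sees its own directory tree while the underlying blocks are stored once. The sketch below assumes a hypothetical layout in which the shared copy lives under /data/master and the QA and development trees link to it.

import os

# Hypothetical layout for illustration: one shared data set, hard-linked
# into per-environment directories on the same filesystem.
MASTER = "/data/master"
ENVIRONMENTS = ["/data/qa", "/data/dev"]

for env in ENVIRONMENTS:
    for root, _dirs, files in os.walk(MASTER):
        rel = os.path.relpath(root, MASTER)
        target_dir = os.path.join(env, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            source = os.path.join(root, name)
            target = os.path.join(target_dir, name)
            if not os.path.exists(target):
                os.link(source, target)  # hard link: no extra blocks consumed

Hard links only work within one filesystem, and a file modified in place in one environment changes in all of them, so this approach suits read-mostly reference data.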

Long-term technologies for data center storage capacity

Many organizations simply add more disks to build storage capacity, but some long-term measures can prevent recurring storage shortages. Admins should monitor their data centers to ensure that they add drives that meet their space, performance and scalability needs, and they must observe their storage tiering requirements. Monitoring also gives an organization insight into storage resource utilization.

For example, Nagios XI can monitor file size, disk usage and file count. Additional options that track data center storage include AppDynamics' Storage Performance Monitoring product and SolarWinds' Data Center Monitoring tool. After installing a monitoring tool, admins should keep it running so it can alert them when storage capacity runs low.
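Even without a commercial tool, a scheduled script can provide a basic capacity alert. This is a minimal sketch using only Python's standard library; the mount points and 90% threshold are hypothetical, and a real deployment would send the alert to email or a monitoring API rather than print it.

import shutil

# Hypothetical mount points and threshold for illustration.
MOUNT_POINTS = ["/", "/data"]
ALERT_THRESHOLD = 0.90  # alert when a volume is more than 90% full

for mount in MOUNT_POINTS:
    usage = shutil.disk_usage(mount)
    used_fraction = usage.used / usage.total
    if used_fraction >= ALERT_THRESHOLD:
        # In practice, feed this to a pager, email or monitoring system.
        print(f"ALERT: {mount} is {used_fraction:.0%} full "
              f"({usage.free // 2**30} GiB free)")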

Admins must also decide what type of storage devices their organization needs. Most applications don't need a top-of-the-line solid-state drive; many run properly on a traditional hard disk drive.

Many applications work well with Serial-Attached SCSI (SAS) drives, whereas snapshots, backups and archival storage are often better suited to Serial Advanced Technology Attachment (SATA) drives. Purchasing a traditional drive can save an organization money and offer increased storage space.

Other long-term options for data center storage capacity include data deduplication and thin provisioning.

Data deduplication eliminates redundant data so that only one instance of it is stored on disk. Post-process deduplication removes duplicate data after it has been stored, replacing each redundant copy with a pointer path to the original data. Dedupe is often available as a native feature of modern storage arrays, but admins can also apply it to other storage resources with a third-party deduplication tool.
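The following sketch illustrates the post-process approach at the file level: it hashes each file's contents, keeps the first instance and replaces later duplicates with hard links, which stand in for the pointer path described above. The target directory is a hypothetical placeholder, and array-based deduplication typically works on blocks inside the storage system rather than on whole files.

import hashlib
import os

DATA_DIR = "/data/archive"  # hypothetical directory to deduplicate

seen = {}  # content hash -> path of the first (kept) instance

for root, _dirs, files in os.walk(DATA_DIR):
    for name in files:
        path = os.path.join(root, name)
        # Reads each file whole for simplicity; chunk the reads for large files.
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest in seen:
            # Duplicate content: replace the file with a hard link,
            # the "pointer" back to the original instance.
            os.remove(path)
            os.link(seen[digest], path)
        else:
            seen[digest] = path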

Thin provisioning allocates disk storage based on the minimum space each application currently requires, whereas traditional provisioning reserves up front all the storage an application may need over time. Because thin provisioning backs only the space an application actually uses, admins must add physical storage as usage grows. Thin provisioning frees up space initially, but it's primarily a way to defer spending rather than to increase total data center storage capacity.
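The difference is easiest to see in the allocation arithmetic. This sketch uses hypothetical capacities to compare how much physical disk each approach consumes up front.

# Conceptual illustration of thin vs. thick provisioning (hypothetical numbers).
APPS = {"web": 500, "db": 1000, "analytics": 2000}      # GiB each app may eventually need
CURRENT_USE = {"web": 60, "db": 240, "analytics": 150}  # GiB actually written so far

thick = sum(APPS.values())        # traditional: reserve the full amount up front
thin = sum(CURRENT_USE.values())  # thin: back only the blocks written so far

print(f"Thick provisioning reserves {thick} GiB of physical disk up front")
print(f"Thin provisioning backs {thin} GiB now; add disks as usage grows")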
