IT engineers understand that storage is a valuable asset in any environment. One of the biggest problems facing virtual deployments is storage space. The issue is not the lack of storage, but data storage management. This results in wasted time, wasted space and wasted capital. This tip helps administrators to understand storage needs, as well as what they can do to monitor and manage their data center's storage capacity.
By knowing and understanding what the virtual machines (VMs) and workloads in an environment require, an IT engineer can make the storage infrastructure last longer and run more efficiently. However, virtualization is constantly growing, and allocating storage remains a daunting task. Fortunately, new technology advancements can help with today's data storage management tasks, including better integration of storage and server virtualization, easier workload movement, on-the-fly formatting, better data security, superior thin provisioning and better dynamic provisioning tools.
But the underlying challenge remains. How do you manage and allocate storage with these new storage capabilities, and how can you dynamically manage workload storage requirements within a virtual environment?
Understanding VM storage requirements
Planning a deployment will help save time and money, and avoid headaches in the future. Before deploying a physical storage environment, it is essential to understand what demands will be placed on it. Let's examine how to work with space requirements within a virtual infrastructure. Each environment is unique; therefore, some simple questions can help outline a data storage management plan:
- The business needs to understand the scope of virtualization in the environment. Will it have a majority of its system virtualized, or will it run just a few VMs?
- More users, more services and more applications require computing resources that the environment will need to accommodate. Where do you want to go with the environment, now and in the future?
Once a plan is established, the engineering team needs to understand what type of storage solution it will be rolling out. Some VMs require a set parameter for their storage requirements, while others can operate more dynamically. Two schemes predominate across most virtual machine monitor (VMM) implementations: pre-allocated storage, known as fat provisioning, and flexible, on-demand storage, known as thin provisioning. Fat provisioning reserves the full amount of storage up front, much of which may rarely be used, while thin provisioning doesn't allocate space until it's actually needed.
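The difference between the two schemes can be illustrated with a small sketch. The class and numbers below are purely illustrative, not a real hypervisor API: a fat-provisioned disk consumes its full size on the datastore immediately, while a thin-provisioned disk's physical footprint grows only as the guest writes data.

```python
# Illustrative model of fat vs. thin provisioning (not a real hypervisor API).

class VirtualDisk:
    def __init__(self, capacity_gb, thin=False):
        self.capacity_gb = capacity_gb  # size the guest OS sees
        self.thin = thin
        # Fat provisioning reserves everything up front;
        # thin provisioning starts with essentially no physical footprint.
        self.physical_gb = 0 if thin else capacity_gb

    def write(self, gb):
        """Simulate the guest writing gb of new data."""
        if self.thin:
            # Thin disks grow on demand, up to their provisioned capacity.
            self.physical_gb = min(self.capacity_gb, self.physical_gb + gb)
        # A fat disk's footprint never changes: it was pre-allocated.

fat = VirtualDisk(100)               # consumes 100 GB on the datastore now
thin = VirtualDisk(100, thin=True)   # consumes ~0 GB until data is written
thin.write(20)

print(fat.physical_gb)   # 100
print(thin.physical_gb)  # 20
```

The trade-off this models is the one described above: fat provisioning wastes idle capacity, while thin provisioning risks overcommitment if growth isn't monitored.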
Once a workload is established and all appropriate research has been done, it's time to determine how storage will be assigned to it. More importantly, this is the time to determine how much storage the workload will need, because this is where the allocation process starts.
Dynamic storage distribution
Administrators can now monitor, allocate and manage space requirements for all VMs through a hypervisor's interface. vSphere, XenServer and Hyper-V now ship with sophisticated graphical user interfaces (GUIs), each providing a great deal of information. For example, an administrator can see the connected storage repository, how it is being utilized and the space requirements for each VM. Each new hypervisor update expands this storage-link capability to include more vendors, more features and more control over storage directly at the GUI level.
Keeping track of your resource pool is key to maintaining systems. Dynamic space allocation is nothing new. This feature has been available in most leading hypervisors for a few versions now. However, there are certain best practices for this data storage management tactic:
- Set an alarm for your space requirements. Adding additional space is not difficult; in reality, it can be accomplished with about three mouse clicks. The challenge is knowing how much space there is to allocate and whether the data store is running low. To resolve this problem, an engineer should set alarms within the hypervisor to properly manage thin provisioning. For some hypervisors, the alarm feature is new, but it is extremely important. These alarms can be customized to trigger alerts at certain thresholds so that an IT administrator can take the actions required to prevent an out-of-space issue. Alarms can be set to trigger when a data store's usage reaches a given percentage of capacity, or when the data store is overcommitted by a given percentage.
- Document and monitor the environment. Every major hypervisor’s GUI is advanced enough that any IT engineer should be able to look at the storage repository and have a solid idea of where they stand on space. However, when working with space requirements, data storage management is a never-ending process that requires attention at all times. Running out of space is not a pleasant issue to deal with and, for the most part, can be avoided by auditing and maintaining the storage environment.
- Keep the storage and hypervisor infrastructure updated. Watching over the workload is an important ongoing task, and keeping an eye on the storage hardware and hypervisor software is just as vital. New hardware and software releases promise better support and feature sets that help IT engineers manage their environments. Small changes, such as alerts and alarms, can go a long way in managing space needs.
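The alarm logic in the first practice above can be sketched roughly as follows. The threshold values and function names are hypothetical; real hypervisors implement these checks natively in their alarm features.

```python
# Hypothetical datastore alarm check, mirroring the two thresholds an
# administrator might configure in a hypervisor GUI. Numbers are illustrative.

USAGE_WARN_PCT = 75        # warn when the datastore is 75% full
OVERCOMMIT_WARN_PCT = 150  # warn when provisioned space is 150% of capacity

def datastore_alerts(capacity_gb, used_gb, provisioned_gb):
    """Return a list of alert strings for one datastore."""
    alerts = []
    usage_pct = 100 * used_gb / capacity_gb
    overcommit_pct = 100 * provisioned_gb / capacity_gb
    if usage_pct >= USAGE_WARN_PCT:
        alerts.append(f"usage at {usage_pct:.0f}% of capacity")
    if overcommit_pct >= OVERCOMMIT_WARN_PCT:
        alerts.append(f"overcommitted at {overcommit_pct:.0f}% of capacity")
    return alerts

# A 1 TB datastore: 800 GB written, 1.6 TB provisioned to thin disks.
print(datastore_alerts(1000, 800, 1600))
```

Both conditions matter: a thin-provisioned datastore can look half empty by usage while being dangerously overcommitted by provisioned capacity.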
Best practices and cautions
Every environment is unique; therefore, space requirements can span a tremendous range. However, there are some best practices and notes of caution for data storage management that every IT engineer should keep in mind:
- Nothing is ever set in stone. Modifying the size of a VM is very common. Some VMs cannot be changed; their space requirements are preset either by the IT manager or by the vendor. For the most part, however, a VM running in a storage pool can have its storage space modified, and administrators can add disk space as needed.
- Always monitor your VMs. As mentioned earlier, it's important to know which resources VMs are using at any given moment. Watching VMs perform over time and noting when storage demands fluctuate allows an engineer to properly distribute resources when and where they are needed.
- Know your workloads. Never assume that an application or workload will always run the same. With service packs, additional users and changes in the overall environment, certain workloads can require more storage at different times.
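The last two points above amount to a simple trend check on per-VM storage use. This is a toy sketch; in practice the samples would come from the hypervisor's monitoring API, and the VM names, numbers and threshold here are made up.

```python
# Toy trend check: flag VMs whose storage use is growing quickly.
# Samples would come from a hypervisor API in practice; values are made up.

def growth_per_sample(samples_gb):
    """Average GB of growth between consecutive usage samples."""
    if len(samples_gb) < 2:
        return 0.0
    deltas = [b - a for a, b in zip(samples_gb, samples_gb[1:])]
    return sum(deltas) / len(deltas)

vm_usage = {
    "web01": [40, 41, 41, 42],     # stable workload
    "db01": [100, 120, 145, 170],  # growing fast, e.g. after a service pack
}

for vm, samples in vm_usage.items():
    rate = growth_per_sample(samples)
    if rate > 5:  # illustrative threshold: >5 GB growth per interval
        print(f"{vm}: growing {rate:.1f} GB per interval, review allocation")
```

A check like this catches the "never assume a workload stays the same" problem early, before a thin-provisioned datastore hits an out-of-space condition.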
Keep your eyes on the storage prize
Allocating storage dynamically, or utilizing thin provisioning, is a great way to conserve storage resources by giving a workload only what it needs at the time, and then modifying the requirement later. There is no automatic solution for dynamically pushing out storage needs to a VM or workload. Using alerts, alarms and notifications within a hypervisor GUI, administrators can constantly monitor their storage repositories and know what they can utilize and what they cannot. While dynamically assigning storage on a per-need basis is a wonderful technology, it can potentially lead to errors, causing out-of-space conditions that could affect VM availability. But, when storage requirements are understood and managed correctly, IT engineers will become more effective at allocating dynamic space to their workloads.
ABOUT THE AUTHOR: Bill Kleyman, MBA, MISM, is an avid technologist with experience in network infrastructure management. His engineering work includes large virtualization deployments as well as business network design and implementation. Currently, he is the Director of Technology at World Wide Fittings Inc., a global manufacturing firm with locations in China, Europe and the United States.