
Data center monitoring tools and tips to keep IT in the know

It's important to monitor your data center to ensure the highest security, performance and scalability. Learn how to make the process as efficient as possible with these five tips.

Data center administrators have a long to-do list when it comes to infrastructure monitoring. Between server and equipment monitoring -- and, in some cases, mainframe monitoring -- the practice can be difficult to juggle, especially in a large data center. But monitoring is an essential task: with the right data, you can increase security and scalability, automate efficiently and better align resources with capacity needs.

Here are five tips to bring you up to speed on different data center monitoring tools and strategies.

Form a proactive IT monitoring strategy

Instead of scrambling to fix a problem after it occurs, data center admins should strive to be proactive, anticipating issues before end users even notice. But that can be difficult to do without the right data center monitoring tools and strategy, according to expert Jim O'Reilly. To minimize disruption, find the optimal time to upgrade a legacy system by gathering and analyzing data on storage, networking and server operations. When it comes to storage upgrades, do your research while choosing between solid-state drives, Serial Advanced Technology Attachment (SATA) bulk drives or networked storage. And when it's finally time to purchase, use trend data to justify the investment in more drives. Monitor networking closely, too -- latency and carrying-capacity trends can reveal a need to restructure workloads.
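
As a rough illustration of using trend data this way, the sketch below fits a least-squares slope to daily latency samples and flags steady growth. The sample readings and the 0.5 ms/day alerting threshold are illustrative assumptions, not figures from the article.

```python
# Sketch: fit a simple linear trend to daily latency samples to spot
# steady growth that may justify restructuring workloads or buying capacity.
# The data and threshold below are hypothetical.

def latency_trend(samples):
    """Return the least-squares slope (ms per day) of latency samples."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

daily_latency_ms = [12.1, 12.4, 12.9, 13.5, 14.2, 15.0, 15.9]  # hypothetical
slope = latency_trend(daily_latency_ms)
if slope > 0.5:  # assumed threshold: latency growing faster than 0.5 ms/day
    print(f"Latency rising {slope:.2f} ms/day -- review workload placement")
```

The same slope calculation works for carrying-capacity or storage-growth series; the point is to let a trend, rather than an outage, trigger the upgrade conversation.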

Choose your mainframe monitoring tools wisely

Because the mainframe produces an enormous amount of detailed data, monitoring it presents a special challenge. Three types of mainframe tools can help overcome that challenge, according to mainframe expert Robert Crawford. Real-time monitors offer an obvious advantage: live views into the mainframe system. They add overhead, however, and, if used incorrectly, can skew performance results. Admins can use near-time monitors to retroactively troubleshoot issues, but these monitors are less suited to problems that require finer data summarization intervals. Lastly, post-processors, such as MXG, can track trends, summarize data and plan capacity -- but they work on data that isn't available immediately and is typically a day old.
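
To see why summarization intervals matter, consider a toy example: the same raw measurements averaged at 60-second versus 15-second granularity. The function and sample data are illustrative only; real post-processors such as MXG work on SMF records, not lists like this.

```python
# Sketch of the summarization-interval trade-off: a short spike that is
# visible at fine granularity averages away at coarse granularity.
from collections import defaultdict

def summarize(records, interval_secs):
    """Average (timestamp_secs, cpu_pct) records into fixed-width intervals."""
    buckets = defaultdict(list)
    for ts, cpu in records:
        buckets[ts // interval_secs * interval_secs].append(cpu)
    return {start: sum(v) / len(v) for start, v in sorted(buckets.items())}

records = [(0, 40), (15, 95), (30, 42), (45, 41)]  # hypothetical CPU samples
print(summarize(records, 60))  # one coarse bucket: the 95% spike averages away
print(summarize(records, 15))  # finer buckets keep the spike visible
```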

Monitor servers remotely

As ROBO and edge data centers increase in popularity, remote server monitoring becomes a viable option for many admins. Three data center monitoring tools enable this remote capability, according to expert Stephen Bigelow. The Intelligent Platform Management Interface (IPMI) -- supported in server offerings from companies such as Hewlett Packard Enterprise (HPE) and Dell -- is tailored for remote server monitoring and gives admins insight into a system's temperature, power supply and other metrics, as well as inventory data for replaceable units.

The Integrated Dell Remote Access Controller (iDRAC) supports basic IPMI functionality, but also includes advanced features and integrated options, including system inventory, health monitoring and remote storage device configuration for Dell servers. The third option, HPE's Integrated Lights-Out (iLO), builds on IPMI technology and incorporates features that enhance remote server monitoring and administration, such as API access and advanced health checks.
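
In practice, IPMI sensor data often arrives as the pipe-separated text that a tool such as `ipmitool sensor` emits over the network. The sketch below parses a sample of that format into a dictionary an alerting script could act on. The sample text is hypothetical, and real column layouts vary by BMC and firmware.

```python
# Sketch: parse pipe-separated IPMI sensor output (ipmitool-style) into a
# dict keyed by sensor name. The SAMPLE text is hypothetical.

SAMPLE = """\
CPU Temp         | 45.000     | degrees C  | ok
System Fan 1     | 5040.000   | RPM        | ok
PS1 Voltage      | 12.100     | Volts      | ok
"""

def parse_sensors(text):
    sensors = {}
    for line in text.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) >= 4:
            name, value, unit, status = parts[:4]
            sensors[name] = {"value": float(value), "unit": unit, "status": status}
    return sensors

readings = parse_sensors(SAMPLE)
print(readings["CPU Temp"])
```

A real script would feed this parser the captured output of an ipmitool query against the server's management controller rather than a hardcoded string.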

Use daily event logging tools to track server activity

Windows event log files contain valuable information, but they're tedious to comb through across multiple servers. Fortunately, a wide range of log file monitoring tools exists -- but IT teams should choose one carefully to meet their needs, according to expert Adam Fowler. Free options include Microsoft's Windows Event Viewer -- a basic tool that collects and reads log files from multiple machines -- as well as syslog and the ELK stack, which are more complex and offer more advanced features. For example, the ELK stack, which includes tools such as Elasticsearch and Logstash, lets admins collect app-specific logs from SQL and other sources.

If you're willing to pay, SolarWinds Log & Event Manager and Splunk -- available on premises or as a service -- provide detailed information and alerts, but they may require a larger IT team to manage.
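
The core job all of these tools automate is parsing and aggregating raw log lines. As a minimal sketch, the snippet below matches classic BSD-style syslog lines and counts events per program. The regex and sample lines are simplified assumptions; RFC 5424 messages and Windows event logs use different formats.

```python
# Sketch: parse BSD-style syslog lines and count events per program.
# The sample log lines are hypothetical.
import re
from collections import Counter

LINE_RE = re.compile(
    r"^(?P<ts>\w{3}\s+\d+\s[\d:]{8})\s(?P<host>\S+)\s"
    r"(?P<prog>[\w\-/]+)(?:\[(?P<pid>\d+)\])?:\s(?P<msg>.*)$"
)

LOGS = """\
Jan 12 06:25:33 web01 sshd[4210]: Failed password for invalid user admin
Jan 12 06:25:35 web01 sshd[4210]: Failed password for invalid user admin
Jan 12 06:26:01 web01 CRON[4301]: (root) CMD (run-parts /etc/cron.hourly)
"""

counts = Counter(
    m.group("prog") for line in LOGS.splitlines() if (m := LINE_RE.match(line))
)
print(counts)  # e.g. Counter({'sshd': 2, 'CRON': 1})
```

Doing this by hand for one file is easy; doing it continuously across dozens of servers, with alerting, is what the paid and ELK-style tools add.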

Take advantage of Linux tools

When properly configured, Linux data center monitoring tools can help you run a data center smoothly. For example, Cacti is a free, open source tool that lets users check services at any time interval and presents the information in readable graphs. Nagios, another free tool, is a good fit for admins in complex environments who need to monitor data such as network traffic and temperature. However, neither tool is a drop-in system, according to expert Jack Wallen. Expect hours of configuration work -- but put in the effort and you'll end up with a tool customized to your specific needs.
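
Part of that customization in Nagios is writing your own checks, which follow the standard plugin convention: print a one-line status and exit with 0 for OK, 1 for WARNING or 2 for CRITICAL. The sketch below shows the shape of such a check; the thresholds and temperature reading are illustrative assumptions.

```python
# Sketch of a custom Nagios-style check using the standard plugin exit
# codes (0=OK, 1=WARNING, 2=CRITICAL). Thresholds and the reading are
# hypothetical values.
OK, WARNING, CRITICAL = 0, 1, 2

def check_temperature(celsius, warn=30.0, crit=35.0):
    """Return an (exit_code, status_line) pair in Nagios plugin format."""
    if celsius >= crit:
        return CRITICAL, f"CRITICAL - temperature {celsius:.1f}C >= {crit:.1f}C"
    if celsius >= warn:
        return WARNING, f"WARNING - temperature {celsius:.1f}C >= {warn:.1f}C"
    return OK, f"OK - temperature {celsius:.1f}C"

code, message = check_temperature(27.5)  # hypothetical sensor reading
print(message)
# A real plugin would finish with sys.exit(code) so Nagios sees the status.
```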

Next Steps

Get to know your data center monitoring system

Here's how a data center monitoring tool can help C-level execs

Integrate infrastructure monitoring tools
