Check out the rest of our Server Month resources.
New server technology has helped transform data centers from big and bulky to slick and streamlined.
Administrators at smaller IT shops have begun to replace older desktop-style machines with compact 1U server systems. These smaller boxes can do much more than their predecessors: they offer faster processor speeds, better memory allocation, more storage capacity and better overall hardware performance.
But how does an administrator know he's gotten the most out of his servers? When deploying a new server, an administrator can't simply push the power button and walk away. Advancements in the underlying hardware and its capabilities require data center admins to consider several key configuration changes and their impact on the surrounding data center environment.
Optimizing servers with BIOS technologies
BIOS technologies have evolved from just setting the date, time and boot drive to a comprehensive control system that allows an administrator to make major modifications to underlying system components. When deploying a server -- even a simple 1U machine -- it's important to power on the device and verify the BIOS settings. By examining the default settings, an administrator can familiarize himself with the technology and identify overlooked options, which are helpful in smaller data center environments.
Management support within BIOS. Whether a data center is equipped with PowerEdge, ProLiant or even Super Micro systems, those servers are almost certainly built on modern, server-class motherboards. These newer motherboards almost always support access to the BIOS through a serial (RS-232) connection or an IPMI network port.
For an environment that demands strong performance and fast configuration times, this feature is key. Over a serial connection, an administrator gets a full view of the POST (power-on self-test) output, which helps with configuring BIOS settings remotely and quickly. But data center managers should be careful when making modifications to the BIOS. Whether changes are made in BIOS Setup at the console or over the serial port, some mistakes can leave the device unable to boot.
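As a sketch of what that remote access looks like in practice, the function below builds an ipmitool serial-over-LAN command line; the BMC hostname and user name are placeholders, and the IPMI password is taken from the environment rather than the command line:

```python
def sol_activate_cmd(bmc_host: str, user: str) -> list:
    """Build an ipmitool serial-over-LAN (SOL) command line.

    bmc_host and user are placeholders for the target server's BMC
    address and IPMI account. The -E flag makes ipmitool read the
    password from the IPMI_PASSWORD environment variable, so it never
    appears in the process list.
    """
    return [
        "ipmitool",
        "-I", "lanplus",    # IPMI v2.0 RMCP+ LAN interface
        "-H", bmc_host,     # BMC network address
        "-U", user,         # IPMI user name
        "-E",               # password from IPMI_PASSWORD
        "sol", "activate",  # attach to the serial console (POST, BIOS Setup)
    ]

# The command an admin would run to watch POST output remotely
# (hostname is an invented example).
print(" ".join(sol_activate_cmd("bmc-rack1.example.com", "admin")))
```

Once the SOL session is attached, the POST output and BIOS Setup screens appear in the terminal just as they would on a local serial console.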
BIOS boot-up on power-on. Server-class motherboards let an administrator control what happens when power is applied or restored. Many lower-end machines (which are not recommended for data center deployments) default to staying off after a power loss; from an IT perspective, that automatic shutdown behavior is a major liability. From a deployment point of view, the wrong power-restore setting quickly becomes a nuisance: if an IT administrator needs to power-cycle a machine remotely using a master switch, for example, a server that stays off after power returns makes the job difficult -- if not impossible. While inside the BIOS, verify that the machine is set to boot and start automatically whenever it is connected to power. This configuration helps avoid complications.
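The power-restore behavior is usually visible both in BIOS Setup and through IPMI. As an illustrative sketch (the sample output below is invented, and field names can vary by BMC), an admin could parse `ipmitool chassis status` output and confirm the policy is `always-on`:

```python
# Invented sample of `ipmitool chassis status` output for illustration.
SAMPLE_STATUS = """\
System Power         : on
Power Overload       : false
Power Restore Policy : always-off
Last Power Event     :
"""

def power_restore_policy(status_text: str) -> str:
    """Extract the 'Power Restore Policy' field from chassis status output."""
    for line in status_text.splitlines():
        if line.startswith("Power Restore Policy"):
            return line.split(":", 1)[1].strip()
    return "unknown"

policy = power_restore_policy(SAMPLE_STATUS)
if policy != "always-on":
    # The fix on most BMCs: ipmitool chassis policy always-on
    print(f"warning: power restore policy is '{policy}', expected 'always-on'")
```

On most BMCs the policy can then be corrected without a trip to the rack with `ipmitool chassis policy always-on`.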
BIOS security. To prevent unauthorized changes to a server's BIOS configuration, set a BIOS password. In very small data centers with five or fewer servers, each device is all the more critical to company operations. Newer server BIOS revisions allow an admin to password-protect a machine so that it is more difficult to break into. But remember, a BIOS password is not foolproof: there are ways to bypass it, although they are more difficult than circumventing standard operating system credentials.
Optimizing memory and processor speed. Newer BIOS software includes extensive feature sets that allow admins to make significant changes to a machine's processor and memory-timing characteristics. In theory, this can make a server work faster. But caution is required: aggressive timing tweaks can destabilize the processor(s) or memory and cause a server crash. In extreme cases, a system can overheat, which can damage the processor(s) or other system board components.
There may be dozens of CPU-related BIOS settings to work with. Examples include the hardware prefetcher, which can stream data and instructions from main memory to the L2 cache to improve CPU performance. On Intel processors with VT (Virtualization Technology) extensions, VT can be enabled to improve processor performance under virtualization. EIST (Enhanced Intel SpeedStep Technology) can be enabled to allow automatic processor voltage and clock changes that reduce power consumption.
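Whether VT actually got enabled can be confirmed from the operating system once the server boots. A minimal sketch, assuming a Linux-style /proc/cpuinfo flags line (the sample line below is illustrative): `vmx` indicates Intel VT, `svm` indicates the AMD equivalent, and `est` marks EIST support.

```python
def virtualization_flags(cpuinfo_text: str) -> set:
    """Return the hardware-virtualization flags present in a /proc/cpuinfo
    'flags' line: vmx for Intel VT, svm for AMD-V."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return {"vmx", "svm"} & set(line.split(":", 1)[1].split())
    return set()

# Illustrative flags line; 'vmx' means VT is enabled, 'est' marks EIST support.
SAMPLE_CPUINFO = "flags\t\t: fpu vme msr pae sse sse2 ht vmx est tm2"
print(virtualization_flags(SAMPLE_CPUINFO))  # {'vmx'}
```

An empty result on hardware known to support VT usually means the feature is present but disabled in BIOS.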
And there are also BIOS settings that affect memory performance. The Memory Mode setting, for example, can cause memory to work independently or enter a mirrored state for greater resilience. Demand scrubbing is a memory error-correction scheme that allows a processor to write corrected data back into the memory where it was read. Interleave options affect the way that data is spread across the memory installed in the system.
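Interleaving is easiest to see with a toy address map. The sketch below is purely illustrative (the channel count and cache-line size are assumptions, not any particular controller's layout); it shows how consecutive cache lines land on different channels, which is what lets the memory controller overlap accesses:

```python
LINE_SIZE = 64   # bytes per cache line (typical value, illustrative)
CHANNELS = 4     # number of interleaved memory channels (illustrative)

def channel_for(addr: int) -> int:
    """Map a physical address to a memory channel under simple
    cache-line interleaving."""
    return (addr // LINE_SIZE) % CHANNELS

# Four consecutive cache lines spread across all four channels,
# then the pattern wraps around.
print([channel_for(a) for a in range(0, 4 * LINE_SIZE, LINE_SIZE)])  # [0, 1, 2, 3]
```

With interleaving disabled, all of those accesses would queue up behind one channel instead.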
Remember that all of these CPU and memory settings can have a profound impact on server performance and stability, so it's important to understand how each feature affects the system and change only one setting at a time so that unexpected BIOS configuration problems can be isolated, identified and corrected quickly.
Working with server heating and cooling
Newer server technology now comes with sensors that read the temperature of certain parts of the machine, primarily the processor and its respective heat sink. Depending on the need, these sensors may instruct server fans to turn on or off or adjust their speed. You may also be able to dictate when these fans turn on, how often and at what heat tolerance.
Sensors will detect when a computer reaches dangerous temperatures; anything over 175 degrees Fahrenheit (about 79 degrees Celsius) sustained over a period of time is considered dangerous. In these cases, sensors can be configured to immediately shut down a computer to avoid damaging internal server components. When a server's thermal stress approaches dangerous levels, IPMI and other management tools can alert an administrator. Early warning and intervention from a technician is better than allowing an unexpected server shutdown.
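That kind of early warning is straightforward to script on top of IPMI. As a hedged sketch (the sensor names and readings below are invented; real `ipmitool sensor` output varies by platform and reports in Celsius), the fragment flags any temperature sensor above the 175°F danger mark:

```python
DANGER_F = 175.0  # danger threshold from the text, in degrees Fahrenheit

def f_to_c(deg_f: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

# (sensor name, reading in degrees C) -- invented sample readings.
SAMPLE_READINGS = [
    ("CPU1 Temp", 62.0),
    ("CPU2 Temp", 85.0),
    ("Ambient",   24.0),
]

def overheated(readings, danger_c=f_to_c(DANGER_F)):
    """Return the names of sensors whose reading exceeds the danger threshold."""
    return [name for name, deg_c in readings if deg_c > danger_c]

print(overheated(SAMPLE_READINGS))  # ['CPU2 Temp']  (85 C > ~79.4 C)
```

A cron job running a check like this against live sensor data can page a technician well before the BIOS thermal trip forces a shutdown.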
Security at the motherboard level
Prior to deploying a new server, an administrator should understand the potential risks of making a machine live. Viruses and rootkits have become advanced enough to attack components on the motherboard itself and render a server nearly unrecoverable. Microcontroller technology embedded on the motherboard provides a higher level of security for a server. Hardware such as the Trusted Platform Module (TPM) stores keys, passwords and digital certificates, protecting data from external software attacks and physical theft. Motherboard manufacturers now place these chips directly on the board. Enabling TPM makes a server more secure by creating a platform that requires authentication against an integrated security solution -- a 2,048-bit key in this case.
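Once the operating system is up, an admin can verify that the BIOS actually exposed the TPM. A minimal sketch for Linux, where an enabled TPM appears under /sys/class/tpm; the path is injectable here so the check can be exercised without hardware:

```python
import os

def tpm_devices(sysfs_root: str = "/sys/class/tpm"):
    """List TPM device nodes (e.g. 'tpm0') that the kernel has exposed.

    Returns an empty list if the sysfs directory is absent, which on a
    real server usually means the TPM is missing or disabled in BIOS.
    """
    try:
        return sorted(os.listdir(sysfs_root))
    except FileNotFoundError:
        return []

if __name__ == "__main__":
    devs = tpm_devices()
    print(f"TPM devices: {devs or 'none exposed by firmware'}")
```

An empty result on a board known to carry a TPM chip is the cue to go back into BIOS Setup and enable it.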
Other server configuration options
There can be numerous other important settings, not discussed above, requiring an engineer’s attention prior to server deployment. These can include RAID configurations (e.g., RAID 0 versus RAID 5), enabling or disabling the virtualization features on the processors, verifying RAM settings, and running scheduled system checks from within BIOS.
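The RAID 0 versus RAID 5 decision, for instance, comes down to capacity versus resilience. A quick back-of-the-envelope sketch (drive counts and sizes are illustrative):

```python
def usable_tb(level: int, drives: int, drive_tb: float) -> float:
    """Usable capacity for RAID 0 (striping, no redundancy) versus
    RAID 5 (one drive's worth of capacity consumed by parity)."""
    if level == 0:
        return drives * drive_tb
    if level == 5:
        if drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (drives - 1) * drive_tb
    raise ValueError("only RAID 0 and RAID 5 are sketched here")

# Four 2 TB drives: RAID 0 yields 8 TB with no fault tolerance,
# RAID 5 yields 6 TB and survives a single drive failure.
print(usable_tb(0, 4, 2.0), usable_tb(5, 4, 2.0))  # 8.0 6.0
```

The arithmetic is trivial, but running it before racking the server forces the capacity-versus-resilience trade-off to be made deliberately rather than by controller default.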
BIOS technology has taken a huge leap forward, allowing an engineer to make granular changes to server hardware. Referencing user guides and white papers for the specific servers in use will help an organization make the best use of its devices. Server hardware configurations are unique to the environment in which they are deployed. Taking the time to learn and understand the machine will enhance its performance and, most importantly, extend the life of the server.