Maximizing your investment in server hardware often means configuring the machine to deliver the best possible performance. One way to maximize a server’s performance is to adjust the BIOS settings. This tip explains which BIOS settings are most important to configure to optimize system performance and improve power management.
A few disclaimers on BIOS settings
Of course, if it is possible to get better performance by adjusting a few BIOS settings, then that raises the question of why the manufacturer did not design the machine to use the best possible BIOS settings by default. In some cases, high-performance settings may affect the server’s stability. In other cases, improving performance may increase the server’s temperature, energy consumption, or both. In any case, you should always remember that additional performance may come at a price.
Before I discuss BIOS settings that can be adjusted, I need to point out that every server is different. The make, model, architecture and age of a server all affect the BIOS settings that are available. As such, the settings that I will point out may not be available on every server.
Non-uniform memory access
Non-uniform memory access (NUMA) is a technology that links a series of nodes together via a high-speed interconnect. The basic idea is that each CPU has its own built-in memory controller that directly links to memory that is considered to be local to that CPU. A CPU can access memory within its own node (local) or within another node (remote). Local memory access is faster than remote memory access, because remote memory access requires data to be transferred across a NUMA interconnect.
A technology called node interleaving offsets the performance hit associated with remote memory access by striping data across the memory controllers of all the nodes. Some systems enable node interleaving by default within the system BIOS, but servers acting as virtualization hosts usually perform better with node interleaving disabled.
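On a Linux host you can check, without rebooting into the BIOS, how the firmware is presenting memory to the operating system. A minimal sketch using the kernel's sysfs interface (exact behavior varies by BIOS and kernel; the `numactl --hardware` command, if installed, reports the same information):

```shell
# Count the NUMA nodes the firmware exposes to the OS.
# With node interleaving enabled, the BIOS typically presents a single
# interleaved node; with it disabled, you usually see one node per socket.
if [ -d /sys/devices/system/node ]; then
    nodes=$(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null | wc -l)
    msg="NUMA nodes visible to the OS: $nodes"
else
    msg="NUMA topology not exposed via sysfs on this system"
fi
echo "$msg"
```

A multi-socket server reporting only one node is a hint that node interleaving may be enabled in the BIOS.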
Power management
Few BIOS settings have as big an effect on overall performance as the power management settings. Unfortunately, many power management settings are vendor specific, so you may have to check your server vendor's website for its recommendations.
The first power management feature that you should look for is demand-based scaling (DBS). DBS automatically adjusts the processor's clock speed, increasing performance when additional processing power is needed and saving power during periods of low CPU usage.
Many servers control DBS through power management profiles. The default behavior is usually to let the operating system (OS) control the processor frequency scaling, but doing so requires a bit of CPU overhead. Not all OSes support this type of power management, which can be especially problematic for servers running bare-metal hypervisors. If you are trying to get the best possible server performance, then look for a power management profile geared toward performance rather than power conservation.
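When the BIOS hands frequency control to the OS, a Linux host exposes the active scaling policy through the cpufreq sysfs interface. A quick way to see which policy is in effect (paths are standard kernel interfaces, but whether they appear depends on the BIOS profile and driver):

```shell
# Report the CPU frequency scaling governor for each cpufreq policy.
# "performance" pins cores at maximum frequency; "ondemand", "schedutil",
# or "powersave" let the OS scale frequency on demand (DBS-style behavior).
found=0
for f in /sys/devices/system/cpu/cpufreq/policy*/scaling_governor; do
    [ -r "$f" ] || continue
    found=1
    echo "$f: $(cat "$f")"
done
[ "$found" -eq 1 ] || echo "cpufreq not exposed (firmware may be controlling scaling)"
```

If no cpufreq policies appear at all, the BIOS profile is likely keeping frequency control in firmware rather than handing it to the OS.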
Simultaneous multithreading
Many servers with Intel Corp.'s Xeon processors support simultaneous multithreading (SMT), which Intel brands as Hyper-Threading Technology. SMT presents each physical core to the OS as two logical processors, making the CPU appear to have twice as many cores as it actually does.
While Intel claims that SMT improves performance by as much as 30%, SMT may actually hurt performance if the server is used as a virtualization host. This is particularly true for VMs that are only allocated a single logical processor or for environments in which CPU cores are overcommitted.
Most servers that support SMT have this feature enabled by default, but it can be disabled at the BIOS level. You might consider benchmarking your server with SMT enabled, and then with it disabled, to determine which setting yields the best performance.
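Before rebooting into the BIOS, you can confirm from the OS side whether SMT is currently active. A sketch for Linux hosts (newer kernels expose a global flag; on older kernels you would instead compare thread sibling lists per core):

```shell
# Determine whether SMT/Hyper-Threading is active from the OS side.
smt_file=/sys/devices/system/cpu/smt/active
if [ -r "$smt_file" ]; then
    status=$(cat "$smt_file")   # 1 = SMT active, 0 = inactive
else
    status="unknown (kernel does not expose $smt_file)"
fi
echo "SMT active: $status"
```

Run your benchmark once with this reporting 1 and once with it reporting 0 (after toggling the BIOS setting) to compare the two configurations fairly.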
Turbo Boost and C-states
There are a few different BIOS features that affect the speed of a server's CPU cores. One such feature is Turbo Boost, which is found on some Intel Xeon servers. Turbo Boost works similarly to overclocking, in that it allows CPU cores to run faster than their base frequency.
Turbo Boost, which is sometimes disabled by default, tends to be a safe feature to use because it will only increase CPU core frequency if the CPU is consuming less than its rated power and is operating below its rated temperature.
The frequency increase that Turbo Boost yields depends on the number of CPU cores that are active, but it often amounts to two or three frequency steps. In any case, all of the active cores within the CPU run at the same frequency.
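On a running Linux host you can check whether the firmware left turbo enabled. A sketch covering the two common sysfs locations (which one exists depends on the frequency-scaling driver in use):

```shell
# Check whether Turbo Boost is enabled, from the OS side.
# The intel_pstate driver exposes no_turbo (1 = turbo disabled); the
# generic cpufreq driver exposes boost (1 = turbo enabled) instead.
if [ -r /sys/devices/system/cpu/intel_pstate/no_turbo ]; then
    nt=$(cat /sys/devices/system/cpu/intel_pstate/no_turbo)
    if [ "$nt" -eq 0 ]; then state=enabled; else state=disabled; fi
elif [ -r /sys/devices/system/cpu/cpufreq/boost ]; then
    b=$(cat /sys/devices/system/cpu/cpufreq/boost)
    if [ "$b" -eq 1 ]; then state=enabled; else state=disabled; fi
else
    state="not reported by this kernel or driver"
fi
echo "Turbo/boost: $state"
```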
If you are considering using Turbo Boost, you should check to make sure that the BIOS C-state feature is disabled. C-states are a power-saving feature found on some Intel Xeon servers. The feature works by dropping the voltage of CPU cores, thereby reducing the core frequency. When the frequency of one core is reduced, the frequencies of all the active cores on that CPU are reduced as well. Therefore, if you are trying to get the maximum processing power from your server, you should avoid any configuration that could result in cores running at a reduced frequency.
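You can verify from the OS which C-states the firmware is advertising. A sketch using the Linux cpuidle sysfs interface (if the BIOS disables C-states, typically only shallow states such as POLL or C1 appear here):

```shell
# List the idle (C-) states the OS can request on CPU 0.
dir=/sys/devices/system/cpu/cpu0/cpuidle
count=0
if [ -d "$dir" ]; then
    for s in "$dir"/state*/name; do
        [ -r "$s" ] || continue
        count=$((count + 1))
        echo "C-state: $(cat "$s")"
    done
fi
echo "Idle states visible: $count"
```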
As you can see, there are a number of different BIOS settings that can be configured to optimize a server’s performance. Of course, note that doing so may increase power consumption and cause the server to run at a higher temperature.
About the expert
Brien Posey is a seven time Microsoft MVP with two decades of IT experience. During that time he has published many thousands of articles and has written or contributed to dozens of IT books. Prior to becoming a freelance writer, Posey served as CIO for a national chain of hospitals and healthcare facilities. He has also worked as a network administrator for some of the nation’s largest insurance companies and for the Department of Defense at Fort Knox.
This was first published in July 2011