
Solving Linux server hangs stemming from kernel issues

When Linux kernel issues cause functional machines to hang, determining the type of hang -- interruptible or non-interruptible -- is imperative to troubleshooting the situation.

Real kernel problems are relatively rare, but they do occur. The Linux kernel itself is very stable, mostly due to the thorough testing procedures new kernels undergo before release. But you can still run into trouble if, for instance, you try to load a closed source module that is not supported by the kernel developers.

When a kernel problem occurs on a functional machine, the machine will appear frozen and won't respond. If this happens, the first step is to find out what kind of hang you are dealing with. There are two kinds: interruptible hangs and non-interruptible hangs. To tell them apart, press the Caps Lock key. If the Caps Lock light toggles on or off, you have an interruptible hang, which is good news because it leaves you several options. If it doesn't, you have a non-interruptible hang.

The best thing to do when you have an interruptible hang is to dump a stack trace of the responsible process. By analyzing such a stack trace, your support team may be able to find out exactly what happened. To do this, you must have the Magic SysRq feature enabled; it provides several key sequences that help in obtaining the stack trace. To see whether it is enabled, read the file /proc/sys/kernel/sysrq: it contains 1 if the feature is enabled and 0 if it is not. If it is disabled, add the line kernel.sysrq = 1 to /etc/sysctl.conf so that SysRq is enabled automatically at every boot.


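The check and the persistent setting can both be handled from the shell. The commands that change the setting require root, so they are shown as comments for you to apply deliberately:

```shell
# Check whether Magic SysRq is enabled: 1 means fully enabled,
# 0 means disabled (some distributions use a nonzero bitmask instead)
cat /proc/sys/kernel/sysrq

# As root, enable it for the running kernel:
#   sysctl -w kernel.sysrq=1
# and make the setting survive reboots by adding this line to /etc/sysctl.conf:
#   kernel.sysrq = 1
```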
If an interruptible hang is occurring, use the "Alt+Print Screen+t" key sequence to tell your system to dump a stack trace. The stack trace will be written to syslog, so you can safely reboot and read it from there. Listing 1 gives an example of what a stack trace typically looks like:

Listing 1: A stack trace can help you in troubleshooting interruptible hangs

[ 1451.314592]  [ ] do_ioctl+0x78/0x90
[ 1451.314596]  [ ] vfs_ioctl+0x22e/0x2b0
[ 1451.314599]  [ ] rwsem_wake+0x4d/0x110
[ 1451.314603]  [ ] sys_ioctl+0x56/0x70
[ 1451.314607]  [ ] sysenter_past_esp+0x6b/0xa1
[ 1451.314616]  =======================
[ 1451.314617] console-kit-d S f3ddbde8     0  6784      1
[ 1451.314619]    f3d64b80 00000086 00000002 f3ddbde8 f3ddbde0 00000000 c04980e0 c049b480
[ 1451.314623]    c049b480 c049b480 f3ddbdec f3d64cc4 c35a3480 ffffd253 00000000 000000ff
[ 1451.314626]    00000000 00000000 00000000 0000003a 00000001 c35aa000 00005607 c027858a
[ 1451.314630] Call Trace:
[ 1451.314640]  [ ] vt_waitactive+0x5a/0xb0
[ 1451.314643]  [ ] default_wake_function+0x0/0x10
[ 1451.314123]   .jiffies                       : 114039
[ 1451.314124]   .next_balance                  : 0.114020
[ 1451.314126]   .curr->pid                     : 0
[ 1451.314127]   .clock                         : 247950.082330
[ 1451.314128]   .idle_clock                    : 0.000000
[ 1451.314140]   .prev_clock_raw                : 1451264.185399
[ 1451.314141]   .clock_warps                   : 0
[ 1451.314142]   .clock_overflows               : 92068
[ 1451.314143]   .clock_deep_idle_events        : 0
[ 1451.314145]   .clock_max_delta               : 9.999478
[ 1451.314146]   .cpu_load[0]                   : 0
[ 1451.314147]   .cpu_load[1]                   : 0
[ 1451.314148]   .cpu_load[2]                   : 0
[ 1451.314149]   .cpu_load[3]                   : 0
[ 1451.314140]   .cpu_load[4]                   : 0
[ 1451.314141]
[ 1451.314141] cfs_rq
[ 1451.314142]   .exec_clock                    : 0.000000
[ 1451.314143]   .MIN_vruntime                  : 0.000001
[ 1451.314145]   .min_vruntime                  : 9571.283382
[ 1451.314146]   .max_vruntime                  : 0.000001
[ 1451.314147]   .spread                        : 0.000000
[ 1451.314149]   .spread0                       : -3276.906118
[ 1451.314150]   .nr_running                    : 0
[ 1451.314151]   .load                          : 0
[ 1451.314152]   .nr_spread_over                : 0
[ 1451.314153]
[ 1451.314153] cfs_rq
[ 1451.314154]   .exec_clock                    : 0.000000
[ 1451.314156]   .MIN_vruntime                  : 0.000001
[ 1451.314157]   .min_vruntime                  : 9571.283382
[ 1451.314158]   .max_vruntime                  : 0.000001
[ 1451.314160]   .spread                        : 0.000000
[ 1451.314161]   .spread0                       : -3276.906118
[ 1451.314162]   .nr_running                    : 0
[ 1451.314163]   .load                          : 0
[ 1451.314164]   .nr_spread_over                : 0
[ 1451.314166]
[ 1451.314166] runnable tasks:
[ 1451.314167]   task  PID   tree-key  switches  prio   exec-runtime   sum-exec  sum-sleep
[ 1451.314168] -----------------------------------------------------------------------------------
[ 1451.314172]
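If the console keyboard is unreachable but you can still log in remotely, the same dump can be triggered through the /proc filesystem instead of the Alt+Print Screen+t key sequence. This is a standard kernel interface, though writing to it requires root and an enabled SysRq:

```shell
# Confirm the SysRq trigger interface exists on this kernel
ls -l /proc/sysrq-trigger

# As root, trigger the same task-state dump as Alt+Print Screen+t:
#   echo t > /proc/sysrq-trigger
# The task list and stack traces end up in the kernel ring buffer
# and in syslog, e.g.:
#   dmesg | less
```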

The best thing to do with a stack trace like this is to hand it to an organization that specializes in this kind of troubleshooting. Analyzing it yourself requires extensive knowledge of the C programming language and goes far beyond the scope of this article. The support organization behind your Linux distribution should be able to identify the offending process or kernel module and tell you why it caused the system hang.

In many cases, system hangs are caused by tainted, or non-supported, kernel modules. It's easy to find out whether your kernel is tainted: cat /proc/sys/kernel/tainted returns a nonzero value on a tainted kernel and 0 otherwise. Many kernel modules that come from commercial organizations and are not released under the GPL are considered tainted modules. Try to avoid such modules and use open source modules instead.
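On a 2.6-era kernel like the one shown in Listing 1, the taint state can be read directly from procfs. The value is actually a bitmask, with bit 0 set when a proprietary (non-GPL) module has been loaded:

```shell
# 0 means untainted; any nonzero value is a bitmask explaining
# why the kernel is tainted (bit 0 = proprietary module loaded)
cat /proc/sys/kernel/tainted
```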

If you have an interruptible hang, consider yourself lucky: the stack trace dump should give your support organization enough to work with. A hang where your server is entirely non-responsive makes it much harder to acquire debugging information. If your system hangs this way often, you can force the kernel to generate an oops and dump a stack trace when a lockup occurs. To do this, pass the nmi_watchdog=1 boot option to the kernel on the GRUB kernel line. The NMI watchdog will then poll your CPU every five seconds. If the CPU responds, nothing happens; if it doesn't, the kernel's NMI handler generates an oops and dumps it to the console. To capture that output, it is useful to connect a serial console to your server. Be aware, though, that running your kernel with the NMI watchdog enabled is bad for performance. Do it only if you have no other choice.
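As an illustration, this is how the option would be appended on a GRUB legacy system. The kernel version, root device, and serial-console parameters below are examples, not values taken from this article:

```
# /boot/grub/menu.lst -- append nmi_watchdog=1 to the kernel line;
# console=ttyS0 additionally routes kernel output to a serial console
title Linux
    root (hd0,0)
    kernel /boot/vmlinuz-2.6.27 root=/dev/sda1 nmi_watchdog=1 console=ttyS0,115200
    initrd /boot/initrd-2.6.27
```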

If non-interruptible hangs never occurred until you added a new piece of hardware, it is very likely that the hardware is causing them. Try to configure your server without that piece of hardware to avoid the problem.

You now know how to distinguish between the different kinds of hangs that can occur in Linux, along with how to get debugging information in such a situation. In the next article in this series, you'll learn how to handle severe problems with your file system.

This was last published in September 2009
