The kernel is the essential center of a computer operating system (OS). It is the core that provides basic services for all other parts of the OS. It is the main layer between the OS and hardware, and it helps with process and memory management, file systems, device control and networking.
A kernel is often contrasted with a shell, which is the outermost part of an OS that interacts with user commands. Kernel and shell are terms used more frequently in Unix OSes than in IBM mainframe or Microsoft Windows systems.
A kernel is not to be confused with a basic input/output system (BIOS), which is an independent program stored on a chip within a computer's circuit board.
Typically, a kernel includes an interrupt handler that carries out all requests or completed input/output (I/O) operations that compete for the kernel's services; a scheduler that determines which programs share the kernel's processing time and in what order; and a supervisor that actually gives use of the computer to each process when it is scheduled.
A kernel might also include a manager for the OS' address spaces in memory or storage. The manager shares the address spaces among all components and other users of the kernel's services. Other parts of the OS, as well as application programs, request a kernel's services through a set of program interfaces known as system calls.
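To make the system call idea concrete, here is a minimal sketch in Python using the standard `os` module, whose functions are thin wrappers around the kernel's system calls (this is an illustration of the concept, not kernel code):

```python
import os

# os.write wraps the kernel's write() system call: the process asks the
# kernel to copy bytes to file descriptor 1 (standard output).
os.write(1, b"hello from user space\n")

# os.getpid wraps the getpid() system call; only the kernel knows which
# process identifier it assigned to this process.
print("kernel-assigned pid:", os.getpid())
```

Even a simple `print` ultimately reaches the kernel the same way: the language runtime buffers the text, then issues a `write` system call on the process's behalf.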
Device drivers help kernels execute actions. A device driver is a piece of code that manages a particular device; it is loaded when the device is attached to the system, such as over USB, or installed through a software download.
Device drivers help close the gap between user applications and hardware, as well as streamline the code's inner workings. To ensure proper functionality, the kernel must have a device driver embedded for every peripheral present in the system.
There are several types of device drivers, each addressing a different kind of data transfer. The main types are:
- Character device drivers implement the open, close, read and write operations and give user space access to data as a stream.
- Block device drivers provide device access for hardware that transfers randomly accessible data in fixed blocks.
- Network device drivers transmit data packets for hardware interfaces that connect to external systems.
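The character driver contract above can be sketched in Python as a user-space model (this is an illustrative analogy; a real Linux driver implements these entry points as C callbacks that the kernel invokes, and the class name here is invented for the example):

```python
class CharDeviceModel:
    """User-space model of a character device driver's open/read/write/close
    entry points. The kernel would call these on behalf of a process."""

    def __init__(self):
        self._buf = bytearray()   # stands in for the device's data stream
        self._open = False

    def open(self):
        self._open = True

    def write(self, data: bytes) -> int:
        assert self._open, "device not opened"
        self._buf.extend(data)    # driver pushes bytes toward the hardware
        return len(data)

    def read(self, count: int) -> bytes:
        assert self._open, "device not opened"
        data, self._buf = bytes(self._buf[:count]), self._buf[count:]
        return data

    def close(self):
        self._open = False


dev = CharDeviceModel()
dev.open()
dev.write(b"stream of bytes")
print(dev.read(6))   # a character device hands data back as a stream
dev.close()
```

Note that a character device yields bytes sequentially; a block device, by contrast, would expose fixed-size, randomly addressable blocks.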
Because the OS needs the kernel's code continuously, that code is usually loaded into a protected area of memory so it is not overwritten by less frequently used parts of the OS.
History and development of the kernel
Before the kernel, developers coded actions directly to the processor, instead of relying on an OS to complete interactions between hardware and software.
The first attempt at an OS that passed messages via a kernel came in 1969 with the RC 4000 Multiprogramming System. Programmer Per Brinch Hansen found it was easier to create a nucleus and then build an OS on top of it than to convert existing OSes to be compatible with new hardware. This nucleus -- or kernel -- contained all the source code needed for communication and support systems, eliminating the need to program the CPU directly.
After RC 4000, Bell Labs researchers started work on Unix, which radically changed OS development and kernel integration. The goal of Unix was to create smaller utilities that do specific tasks well instead of having system utilities try to multitask. From a user standpoint, this simplifies creating shell scripts that combine simple tools.
As Unix adoption increased, the market saw a variety of Unix-like OSes, including Berkeley Software Distribution (BSD), NeXTSTEP and Linux. Unix's structure spread the idea that it was easier to build an OS around a kernel that reused software and ran on consistent hardware than to depend on a bespoke time-shared system with no portable OS.
Unix brought OSes to more individual systems, but researchers at Carnegie Mellon expanded kernel technology. From 1985 to 1994, they developed the Mach kernel. Unlike BSD, Mach is OS-agnostic and supports multiple processor architectures. Researchers made it binary-compatible with existing BSD software, so it could be used immediately while experimentation continued.
The Mach kernel's original goal was to be a cleaner version of Unix and a more portable version of Carnegie Mellon's Accent interprocess communication (IPC) kernel. Over time, the kernel brought new features, such as ports and IPC-based programs, and ultimately evolved into a microkernel.
Shortly after the Mach kernel, in 1987, Vrije Universiteit Amsterdam developer Andrew Tanenbaum released MINIX (mini-Unix) for educational and research use. This distribution featured a microkernel-based structure, multitasking, protected mode, extended memory support and an American National Standards Institute (ANSI) C compiler.
The next major advancement in kernel technology was the Linux kernel. Linus Torvalds developed it as a hobby and first released it in 1991; in 1992, he relicensed it under the GNU General Public License (GPL), making it open source. Version 1.0, released in 1994, contained 176,250 lines of code.
The majority of OSes -- and their kernels -- can be traced back to Unix, but there is one outlier: Windows. With the popularity of DOS and IBM-compatible PCs, Microsoft built its early versions of Windows on top of DOS and later developed the independent NT kernel, which is why writing commands for Windows differs from Unix-based systems.
Types of kernels
The Linux kernel has grown continuously, reaching 20 million lines of code in 2018. At a foundational level, it is layered into several subsystems: the system call interface, process management, the network stack, memory management, the virtual file system, architecture-specific code (arch) and device drivers.
Administrators can port the Linux kernel into their OSes and run live updates. These features, along with the fact that Linux is open source, make it more suitable for server systems or environments that require real-time maintenance.
Beyond Linux, Apple developed the XNU OS kernel in 1996 as a hybrid of the Mach and BSD kernels and paired it with an Objective-C application programming interface (API). Because it is a combination of the monolithic kernel and microkernel, it has increased modularity, and parts of the OS gain memory protection.
Microkernels vs. monolithic kernels
Kernels fall into two main architectures: monolithic and microkernel. The main difference between these types is the number of address spaces they support.
A microkernel separates user services and kernel services into different address spaces, whereas a monolithic kernel implements both in the same address space.
Because it houses kernel and user services together, the monolithic kernel is larger; the microkernel keeps only essential services in kernel space and pushes the rest out to user space.
Communication protocol also differs between the two, with monolithic kernels using a faster system call to execute processes between the hardware and software. Microkernels use message passing, which sends data packets, signals and functions to the correct processes.
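The message-passing model can be sketched in Python as a user-space analogy (not kernel code): the `multiprocessing` module runs the server in a separate process with its own address space, so the only way to reach it is by sending messages over a pipe, much as a microkernel's services communicate via IPC. The `fs_server` name and request format are invented for the example:

```python
from multiprocessing import Pipe, Process

def fs_server(conn):
    """Stands in for a microkernel-style file system service running in its
    own address space: it sees only the messages sent to it."""
    request = conn.recv()          # receive a message, not a function call
    if request["op"] == "read":
        conn.send({"status": "ok", "data": b"file contents"})
    conn.close()

if __name__ == "__main__":
    parent, child = Pipe()
    server = Process(target=fs_server, args=(child,))
    server.start()
    parent.send({"op": "read", "path": "/etc/motd"})  # hypothetical request
    reply = parent.recv()
    print(reply["status"])   # prints "ok"
    server.join()
```

In a monolithic kernel, the equivalent operation would be a direct function call inside the shared kernel address space, which is why it is faster but offers no isolation between services.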
Microkernels provide greater flexibility; to add a new service, admins can modify the user address space. Monolithic kernels require more work because admins must reconstruct the entire kernel to support the new service.
Because their services are isolated in separate address spaces, microkernels are more secure, and the rest of the system remains unaffected if one service fails. Monolithic kernels pose a greater risk because a failure in any one service can bring down the entire system.
A microkernel also keeps less code running in kernel space, shrinking the amount of privileged code in which a bug can crash or compromise the whole system; a monolithic kernel, in exchange, avoids the overhead of message passing between separate services.
Overall, the two designs present a tradeoff: monolithic kernels offer simpler, faster in-kernel communication, while microkernels offer isolation, security and easier customization.