This topic has a total weight of 9 points and contains the following three objectives:
Candidates should be able to utilize kernel components that are necessary to specific hardware, hardware drivers, system resources and requirements. This objective includes implementing different types of kernel images, understanding stable and long-term kernels and patches, as well as using kernel modules.
Candidates should be able to properly configure a kernel to include or disable specific features of the Linux kernel as necessary. This objective includes compiling and recompiling the Linux kernel as needed, updating and noting changes in a new kernel, creating an initrd image and installing new kernels.
Candidates should be able to manage and/or query a 2.6.x or 3.x kernel and its loadable modules. Candidates should be able to identify and correct common boot and run time issues. Candidates should understand device detection and management using udev. This objective includes troubleshooting udev rules.
Candidates should be able to utilise kernel components that are necessary to specific hardware, hardware drivers, system resources and requirements. This objective includes implementing different types of kernel images, identifying stable and development kernels and patches, as well as using kernel modules.
The Linux kernel was originally designed as a monolithic kernel. Monolithic kernels contain drivers for all the various types of supported hardware, regardless of whether a given system actually uses that hardware. As the list of supported hardware grew, so did the amount of code that was never used on any given system. Therefore a mechanism was introduced that allows the kernel to load some hardware drivers dynamically. These loadable device drivers were named "kernel modules".
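On a running system you can see which modules are currently loaded. A minimal sketch; the kernel exposes this list in /proc/modules, and the familiar lsmod command is just a formatted view of that file:

```shell
# The kernel exposes the list of currently loaded modules in
# /proc/modules; lsmod is a formatted view of this same file.
if [ -r /proc/modules ]; then
    head -n 5 /proc/modules
else
    echo "/proc/modules not available (not a standard Linux environment)"
fi
```

Each line of /proc/modules names a module, its size, its use count, and the modules that depend on it.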
Though the Linux kernel can load and unload modules it does not qualify as a microkernel. Microkernels are designed such that only the least possible amount of code is run in supervisor mode -- this was never a design goal for Linux kernels. The Linux kernel is best described as a hybrid kernel: it is capable of loading and unloading code as microkernels do, but runs almost exclusively in supervisor mode, as monolithic kernels do.
It is still possible to build the Linux kernel as a monolithic kernel, but this is rarely done, as updating a device driver then requires a complete recompile of the kernel. Building a monolithic kernel has its advantages too: it may have a smaller footprint, since only the parts you actually need are compiled in, and its dependencies are clearer.
When stored on disk most kernel images are compressed to save space. There are two compressed kernel image types: zImage and bzImage. The two formats have different layouts and loading algorithms. The maximum allowed kernel size for a zImage is 512 KB, a limit that bzImage does not impose. As a result bzImage is the preferred image type for larger kernels. A zImage is loaded in low memory, whereas a bzImage can also be loaded in high memory. Both zImage and bzImage use gzip compression: the "bz" in bzImage refers to "big zImage", not to the "bzip" compression algorithm.
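You can check which image type an installed kernel uses with the file command. A sketch; the path under /boot is an assumption and may differ per distribution:

```shell
# Inspect an installed kernel image to see which type it is; the path
# under /boot is an assumption -- adjust it for your distribution.
kernel=/boot/vmlinuz-$(uname -r)
if [ -f "$kernel" ]; then
    file "$kernel"    # a bzImage reports "Linux kernel ... bzImage"
else
    echo "no kernel image found at $kernel"
fi
```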
There are three different numbering schemes in use for Linux kernels: the original scheme, valid for all kernels up to version 2.6.0, the intermediate scheme for kernels version 2.6.0 up to 3.0 and the new scheme, for kernels 3.0 and later. In the next sections we discuss each of them.
Initially, a kernel version number consisted of three parts: major release number, minor release number and the patch level, all separated by periods.
The major release was incremented when a major change was made to the kernel.
The minor release was incremented when significant changes and additions were made. Even-numbered minor releases, e.g. 2.2, 2.4, were considered stable releases and odd-numbered releases, e.g. 2.1, 2.3, 2.5 were considered to be development releases. They were primarily used by kernel developers and people that preferred bleeding edge functionality at the risk of instability.
The last part of the kernel version number indicated the patch level. As errors in the code were corrected (and/or features were added) the patch level was incremented. A kernel should only be upgraded to a higher patch level when the current kernel has a functional or security problem.
In 2004, after the release of 2.6.0, the versioning system was changed, and it was decided that a time-based release cycle would be adopted. For the next seven years the kernel remained at 2.6 and the third number was increased with each new release (which happened every two or three months). A fourth number was added to account for bug and security fixes. The even-odd numbering system was no longer used.
On 29 May 2011, Linus Torvalds announced the release of kernel version 3.0.0 in honour of the 20th anniversary of Linux. This changed the numbering scheme yet again. It would still be a time-based release system but the second number would indicate the release number, and the third number would be the patch number.
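The components of a version number can be taken apart with plain shell parameter expansion. A small sketch using a made-up release string; on a live system you would start from the output of uname -r:

```shell
# Split a kernel release string into its version components.
# The release string below is a hypothetical example; on a real
# system use: release=$(uname -r)
release="4.19.0-18-amd64"

version=${release%%-*}   # drop the distro-specific suffix -> "4.19.0"
major=${version%%.*}     # first number  -> "4"
rest=${version#*.}       # -> "19.0"
minor=${rest%%.*}        # second number -> "19"
patch=${rest#*.}         # third number  -> "0"

echo "major=$major minor=$minor patch=$patch"
# -> major=4 minor=19 patch=0
```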
Kernel modules are object files (.ko files) produced by the C compiler but not linked into a complete executable. Kernel modules can be loaded into the kernel to add functionality when needed. Most modules are distributed with the kernel and compiled along with it. Every kernel version has its own set of modules.
Modules are stored in a directory hierarchy under /lib/modules/kernel-version, where kernel-version is the string reported by uname -r or found in /proc/version, such as 2.6.5-15smp. Multiple module hierarchies are available under /lib/modules in case multiple kernels are installed.
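A quick way to see which module trees are installed, and which one belongs to the running kernel; a sketch:

```shell
# Each installed kernel keeps its modules in its own subdirectory of
# /lib/modules, named after the string that `uname -r` reports.
echo "running kernel: $(uname -r)"
ls /lib/modules/ 2>/dev/null || echo "/lib/modules not present"
```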
Subdirectories that contain modules of a particular type exist below each kernel-version directory. This grouping is convenient for administrators, but also enables important functionality within the modprobe command.
Typical module types:

block: Modules for a few block-specific devices such as RAID controllers or IDE tape drives.

cdrom: Device driver modules for nonstandard CD-ROM drives.

fs: Drivers for filesystems such as MS-DOS (the msdos.ko module).

ipv4: Includes modular kernel features having to do with IP processing, such as IP masquerading.

misc: Anything that does not fit into one of the other subdirectories ends up here. Note that no modules are stored at the top of this tree.

net: Network interface driver modules.

scsi: Contains driver modules for the SCSI controller.

video: Special driver modules for video adapters.
Module directories are also referred to as tags within the context of module manipulation commands.
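To see this grouping in practice you can list a few module files for the running kernel. A sketch; note that on modern kernels the type directories sit below a kernel/ subdirectory (e.g. kernel/fs, kernel/drivers), and the tree may be absent on minimal systems:

```shell
# List a handful of module files for the running kernel; modern kernels
# nest the type directories under kernel/, e.g. kernel/fs, kernel/drivers.
moddir=/lib/modules/$(uname -r)
if [ -d "$moddir" ]; then
    find "$moddir" -name '*.ko*' | head -n 5
else
    echo "no module tree at $moddir"
fi
```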