This topic has a total weight of 9 points and contains the following three objectives:
Candidates should be able to utilize kernel components that are necessary to specific hardware, hardware drivers, system resources and requirements. This objective includes implementing different types of kernel images, understanding stable and long-term kernels and patches, as well as using kernel modules.
Candidates should be able to properly configure a kernel to include or disable specific features of the Linux kernel as necessary. This objective includes compiling and recompiling the Linux kernel as needed, updating and noting changes in a new kernel, creating an initrd image and installing new kernels.
Candidates should be able to manage and/or query a 2.6.x, 3.x or 4.x kernel and its loadable modules. Candidates should be able to identify and correct common boot and run time issues. Candidates should understand device detection and management using udev. This objective includes troubleshooting udev rules.
Candidates should be able to utilise kernel components that are necessary to specific hardware, hardware drivers, system resources and requirements. This objective includes implementing different types of kernel images, identifying stable and development kernels and patches, as well as using kernel modules.
The Linux kernel was originally designed to be a monolithic kernel. Monolithic kernels contain all drivers for all the various types of supported hardware, regardless of whether your system uses that hardware. As the list of supported hardware grew, so did the amount of code that was never used on any given system. Therefore a mechanism was introduced that allowed the kernel to load some hardware drivers dynamically. These loadable device drivers were named "kernel modules".
Though the Linux kernel can load and unload modules it does not qualify as a microkernel. Microkernels are designed such that only the least possible amount of code is run in supervisor mode – this was never a design goal for Linux kernels. The Linux kernel is best described as a hybrid kernel: it is capable of loading and unloading code as microkernels do, but runs almost exclusively in supervisor mode, as monolithic kernels do.
It is still possible to build the Linux kernel as a monolithic kernel. But it is rarely done, as updating device drivers requires a complete recompile of the kernel. However, building a monolithic kernel has its advantages too: it may have a smaller footprint as you can download and build just the parts you need and dependencies are clearer.
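Whether a given driver ends up inside the kernel image itself or as a loadable module is decided per option at kernel configuration time. A minimal sketch of a .config fragment (the two option names are real mainline kernel symbols, chosen here only as examples):

```
# "y" compiles the driver into the kernel image (monolithic style);
# "m" builds it as a loadable kernel module instead.
CONFIG_EXT4_FS=y
CONFIG_USB_STORAGE=m
```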
When stored on disk most kernel images are compressed to save space. There are two types of compressed kernel images: zImage and bzImage. They have different layouts and loading algorithms. The maximum allowed kernel size for a zImage is 512 KB, whereas a bzImage does not pose this limit. As a result the bzImage is the preferred image type for larger kernels. A zImage will be loaded in low memory, while a bzImage can also be loaded in high memory. Both zImage and bzImage use gzip compression. The “bz” in bzImage refers to “big zImage” – not to the bzip compression algorithm.
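The image type of an installed kernel can be checked with the file command. The path below is typical for x86 installations but distribution-dependent, and the image may be absent in minimal environments, hence the fallback:

```shell
# Report the type of the running kernel's image; on most x86
# systems this prints something like
# "Linux kernel x86 boot executable bzImage".
file /boot/vmlinuz-"$(uname -r)" 2>/dev/null \
    || echo "no kernel image at the expected path"
```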
The numbering scheme used for Linux kernels has changed several times over the years: the original scheme, valid for all kernels up to version 2.6.0; the scheme for kernels from version 2.6.0 up to 3.0; the previous scheme, for kernels 3.0 and later; and the current scheme, starting with version 4.0. In the next sections we discuss each of them.
Initially, a kernel version number consisted of three parts: major release number, minor release number and the patch level, all separated by periods.
The major release was incremented when a major change was made to the kernel.
The minor release was incremented when significant changes and additions were made. Even-numbered minor releases, e.g. 2.2, 2.4, were considered stable releases and odd-numbered releases, e.g. 2.1, 2.3, 2.5 were considered to be development releases. They were primarily used by kernel developers and people who preferred bleeding-edge functionality at the risk of instability.
The last part of the kernel version number indicated the patch level. As errors in the code were corrected (and/or features were added) the patch level was incremented. A kernel should only be upgraded to a higher patch level when the current kernel has a functional or security problem.
In 2004, after the release of 2.6.0, the versioning system was changed, and it was decided that a time-based release cycle would be adopted. For the next seven years the kernel remained at 2.6 and the third number was increased with each new release (which happened every two or three months). A fourth number was added to account for bug and security fixes. An example of this scheme is kernel 2.6.32.27. The even-odd numbering system was no longer used.
On 29 May 2011, Linus Torvalds announced the release of kernel version 3.0.0 in honour
of the 20th anniversary of Linux. This changed the numbering scheme yet again. It
would still be a time-based release system but the second number would indicate the
release number, and the third number would be the patch number.
For test releases the -rc designation is used. Following this scheme, 3.2.84 would refer to a stable kernel release. 3.2-rc4 on the other hand would point to the fourth release candidate of the 3.2 kernel.
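The parts of a version string in the post-3.0 major.minor.patch form can be split with standard shell field splitting; 4.9.2 below is just an arbitrary stable-series example:

```shell
# Split a kernel version string into its components by letting the
# shell split on "."; "4.9.2" is an arbitrary example version.
version="4.9.2"
IFS=. read -r major minor patch <<EOF
$version
EOF
echo "major=$major minor=$minor patch=$patch"
# prints: major=4 minor=9 patch=2
```

The same technique works on the live system by substituting $(uname -r) for the literal string, although distribution kernels often append extra suffixes to the patch field.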
In April 2015 kernel version 4.0.0 was released. The versioning
system didn't change this time. At the time of this writing,
4.9.2 is the latest stable
version available through
https://kernel.org. The 4.x kernel did however introduce
a couple of new features. The possibility to perform
“Live Patching” being one of the more
noteworthy ones. Live patching offers the possibility
to install kernel patches without the need to reboot
the system. This can be accomplished by unloading and
loading appropriate kernel modules. Every time a new Linux
kernel version gets released, it is accompanied by a
changelog. These changelog files
hold detailed information about what has changed in
this release compared to previous versions.
Every Linux distribution comes with a kernel that
has been configured and compiled by the distribution
developers. Most Linux distributions also offer
possibilities to upgrade the kernel binary
through some sort of package system. It is however
also possible to compile a kernel for your system using
kernel sources from the previously mentioned https://kernel.org website. These kernel sources are packed using tar and compressed using the XZ compression algorithm.
XZ is the successor to LZMA and LZMA2. Recent Linux kernels offer built-in support for XZ. Depending on the Linux distribution in use, it might be necessary to install the xz package (or an equivalent) to uncompress xz compressed files.
After having downloaded the kernel sources for one of the available kernels as a .tar.xz archive, these source files may be unpacked using the following command line:
$ tar xvf linux-4.10-rc3.tar.xz
GNU tar needs to be at least version 1.22 for the above command to work.
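The installed tar version can be checked quickly before unpacking; the two-step alternative shown in the comments assumes the same archive name as in the example above:

```shell
# Print the installed GNU tar version; 1.22 or later unpacks
# .tar.xz archives directly, detecting the compression itself.
tar --version | head -n 1

# On systems with an older tar, decompress first, unpack second:
# unxz linux-4.10-rc3.tar.xz
# tar xvf linux-4.10-rc3.tar
```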
Kernel modules are object files (.ko files) produced by the C compiler but not linked into a complete executable. Kernel modules
can be loaded into the kernel to add functionality when needed. Most
modules are distributed with the kernel and compiled along with
it. Every kernel version has its own set of modules.
Modules are stored in a directory hierarchy under /lib/modules/kernel-version, where kernel-version is the string reported by uname -r, such as 2.6.5-15smp. Multiple module hierarchies are available under /lib/modules in case multiple kernels are installed.
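These hierarchies can be inspected directly; the guard below accounts for minimal environments (containers, for example) where /lib/modules may be absent:

```shell
# List the module hierarchies, one per installed kernel, and then
# the subdirectory tree holding the running kernel's modules.
if [ -d /lib/modules ]; then
    ls /lib/modules/
    ls "/lib/modules/$(uname -r)/kernel" 2>/dev/null || true
else
    echo "/lib/modules not present (minimal environment?)"
fi
```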
Subdirectories that contain modules of a particular type exist under the kernel-version directory. This grouping is convenient for administrators, but also enables important functionality within the module manipulation commands.
Typical module types:
block: Modules for a few block-specific devices such as RAID controllers or IDE tape drives.
cdrom: Device driver modules for nonstandard CD-ROM drives.
fs: Drivers for filesystems such as MS-DOS (the msdos.ko module).
ipv4: Includes modular kernel features having to do with IP processing, such as IP masquerading.
misc: Anything that does not fit into one of the other subdirectories ends up here. Note that no modules are stored at the top of this tree.
net: Network interface driver modules.
scsi: Contains driver modules for the SCSI controller.
video: Special driver modules for video adapters.
Module directories are also referred to as tags within the context of module manipulation commands.
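With the older module-init-tools these directory names could be passed as tags, for example "modprobe -l -t net" to list only network modules; that listing mode was removed from the current kmod tools, where a find over the type subdirectory gives a comparable result:

```shell
# Locate the "net" type subdirectory for the running kernel (its
# exact position varies between kernel versions) and list a few of
# the network driver modules stored there.
moddir=$(find "/lib/modules/$(uname -r)" -type d -name net 2>/dev/null | head -n 1)
if [ -n "$moddir" ]; then
    find "$moddir" -name '*.ko*' | head -n 5
else
    echo "no net module directory found (minimal environment?)"
fi
```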