How does the Linux kernel compare to microkernel architectures?

Microkernels run less code in the innermost, most trusted mode than monolithic kernels do. This has several consequences:

  • Microkernels allow non-fundamental features (such as drivers for hardware that is not connected or not in use) to be loaded and unloaded at will. This is mostly achievable on Linux through loadable modules.
  • Microkernels are more robust: if a non-kernel component crashes, it won't take the whole system with it. A buggy filesystem or device driver can crash a Linux system. Linux doesn't have any way to mitigate these problems other than coding practices and testing.
  • Microkernels have a smaller trusted computing base. So even a malicious device driver or filesystem cannot take control of the whole system (for example a driver of dubious origin for your latest USB gadget wouldn't be able to read your hard disk).
  • A consequence of the previous point is that ordinary users can load their own components that would be kernel components in a monolithic kernel.

Unix GUIs are provided by the X Window System, which is userland code (except for part of the video device driver). Many modern Unix variants allow ordinary users to load filesystem drivers through FUSE. Some Linux network packet filtering can be done in userland. However, device drivers, schedulers, memory managers, and most networking protocols remain kernel-only.

A classic (if dated) read about Linux and microkernels is the Tanenbaum–Torvalds debate. Twenty years later, one could say that Linux is very very slowly moving towards a microkernel structure (loadable modules appeared early on, FUSE is more recent), but there is still a long way to go.

Another thing that has changed is the increased relevance of virtualization on desktop and high-end embedded computers: for some purposes, the relevant distinction is not between the kernel and userland but between the hypervisor and the guest OSes.


A microkernel limits the amount of code that runs in kernel mode, as opposed to userspace, to the absolute minimum possible.

If a crash happens in kernel mode, the entire kernel goes down, and that means the entire system goes down. If a crash happens in user mode, just that process goes down. Linux is robust in this regard, but it's still possible for any kernel subsystem to write over the memory of any other kernel subsystem, either purposefully or accidentally.

The microkernel concept puts a lot of what is traditionally kernel mode, such as networking and device drivers, in userspace. Since the microkernel isn't responsible for much, it can also be simpler and more reliable. Think of the way the IP protocol, by being simple and stupid, leads to robust networks: it pushes complexity to the edges and leaves the core lean and mean.


You should read the other side of the issue:

Extreme High Performance Computing or Why Microkernels suck

The File System Belongs In The Kernel

Tags: Linux, Kernel