Disadvantages of Linux kernel modules?

With a monolithic kernel, in theory a single contiguous block of memory can be allocated for the kernel. If modules are loaded (and unloaded) on demand, then it's improbable that all kernel memory will be contiguous, and hence by definition it will be fragmented. The trade-off is that a modular kernel will usually use far less memory than a monolithic kernel. This is certainly true of a default distro monolithic kernel, which likely has many unused drivers built in, though it is less true if you build your own monolithic kernel.
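To make the load/unload model concrete, here is a minimal sketch of a loadable module (the name hello_lkm and its log messages are purely illustrative); code like this can be inserted and removed at runtime, whereas the same code built into a monolithic image is resident from boot:

    /* hello_lkm.c -- minimal loadable-module sketch (illustrative only) */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Example of code that can be loaded and unloaded on demand");

    static int __init hello_lkm_init(void)
    {
        pr_info("hello_lkm: loaded into (possibly non-contiguous) kernel memory\n");
        return 0;               /* 0 = success; the module stays resident */
    }

    static void __exit hello_lkm_exit(void)
    {
        pr_info("hello_lkm: unloading, its memory is freed again\n");
    }

    module_init(hello_lkm_init);
    module_exit(hello_lkm_exit);

Built with the usual obj-m makefile rule, it is loaded with insmod/modprobe and removed with rmmod; the equivalent code compiled into the monolithic image would simply be part of the kernel from boot onward.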

Some subsystems simply don't lend themselves to being modular: memory management is clearly one, SMP support another. Process scheduling isn't modular, but the I/O scheduling strategy is. Other subsystems, such as TCP/IP, aren't modular, most likely because of complex interdependencies.
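As a rough illustration of how the I/O scheduling strategy is pluggable per block device (assuming a device called sda; the path and program name below are just examples), a small userspace program can read and rewrite the device's scheduler file in sysfs:

    /* ioschtool.c -- sketch: show, and optionally set, the I/O scheduler of one
     * block device. Assumes a device named sda; adjust the path for your system.
     */
    #include <stdio.h>

    #define SCHED_FILE "/sys/block/sda/queue/scheduler"

    int main(int argc, char *argv[])
    {
        char buf[256];
        FILE *f = fopen(SCHED_FILE, "r");

        if (!f) { perror(SCHED_FILE); return 1; }
        if (fgets(buf, sizeof(buf), f))
            printf("current: %s", buf);   /* the active scheduler is shown in [brackets] */
        fclose(f);

        if (argc > 1) {                   /* e.g. ./ioschtool mq-deadline */
            f = fopen(SCHED_FILE, "w");
            if (!f) { perror(SCHED_FILE); return 1; }
            fprintf(f, "%s\n", argv[1]);  /* the kernel rejects unknown schedulers */
            fclose(f);
        }
        return 0;
    }

There is no analogous knob for swapping out the process scheduler, because that policy is compiled into the kernel itself.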

Another problem with modules is that they don't work well when they drive the device you need to boot from; solutions like an initrd work around this.

A final consideration is security: allowing loadable modules is a potential risk, e.g. a loadable-kernel-module rootkit like knark. See http://www.la-samhna.de/library/rootkits/index.html and http://www.sans.org/security-resources/malwarefaq/Ptrace.php . You can reduce this risk by enforcing signed modules (supported since kernel 3.7), loading a lockdown module (one that disables further module loading) last, or other hardening.
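One concrete example of such hardening is the kernel.modules_disabled sysctl: once set to 1 it cannot be cleared until reboot, so no further modules can be loaded. A minimal sketch (run as root, and only after everything you need is already loaded):

    /* disable_modules.c -- sketch: flip the one-way kernel.modules_disabled sysctl.
     * Afterwards the running kernel refuses to load (or unload) any further modules
     * until the next reboot. Must be run as root.
     */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/kernel/modules_disabled", "w");

        if (!f) {
            perror("/proc/sys/kernel/modules_disabled");
            return 1;
        }
        fputs("1\n", f);        /* 0 -> 1 is the only transition the kernel allows */
        return fclose(f) ? 1 : 0;
    }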


Yes, the reason that essential components (such as mm) cannot be loadable modules is that they are essential -- the kernel will not work without them.

I can't find any references claiming that the effect of memory fragmentation with regard to loadable modules is significant, but this part of the LKM HOWTO might be interesting reading for you.

I think the question is really part and parcel of the issue of memory fragmentation generally, which happens on two levels: the fragmentation of real memory, which the kernel mm subsystem manages, and the fragmentation of virtual address space, which may occur with very large applications (and which I'd presume is mostly the result of how they are designed and compiled).

With regard to the fragmentation of real memory, I do not think this is possible at finer than page size (4 KB) granularity. So if you were reading 1 MB of virtually contiguous space that is actually 100% fragmented across 256 separate pages, there may be roughly 255 extra minor operations involved. In that bit of the HOWTO we read:

The base kernel contains within its prized contiguous domain a large expanse of reusable memory -- the kmalloc pool. In some versions of Linux, the module loader tries first to get contiguous memory from that pool into which to load an LKM and only if a large enough space was not available, go to the vmalloc space. Andi Kleen submitted code to do that in Linux 2.5 in October 2002. He claims the difference is in the several per cent range.

Here the vmalloc space, which (much like the memory of userspace applications) is contiguous only in virtual addresses, is what is potentially prone to fragment into pages. This is simply the reality of contemporary operating systems (they all manage memory via virtual addressing). We might infer from this that virtual addressing could represent a performance penalty of "several per cent" in userland as well, but insofar as virtual addressing is necessary and inescapable in userland, that penalty is only relative to something completely theoretical.
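To make the kmalloc-pool versus vmalloc distinction from the quoted passage concrete, here is a hedged module sketch (the name and sizes are arbitrary): kmalloc() hands back physically contiguous memory from that pool, while vmalloc() hands back memory that is contiguous only in virtual addresses and may be scattered across physical pages:

    /* contig_demo.c -- sketch contrasting the two kernel allocators discussed above */
    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <linux/slab.h>      /* kmalloc()/kfree(): physically contiguous     */
    #include <linux/vmalloc.h>   /* vmalloc()/vfree(): only virtually contiguous */

    MODULE_LICENSE("GPL");

    static void *kbuf;           /* backed by contiguous physical pages          */
    static void *vbuf;           /* may be scattered across physical pages       */

    static int __init contig_demo_init(void)
    {
        kbuf = kmalloc(64 * 1024, GFP_KERNEL);   /* small allocation from the kmalloc pool */
        vbuf = vmalloc(8 * 1024 * 1024);         /* large allocation, stitched together page by page */
        if (!kbuf || !vbuf) {
            kfree(kbuf);                         /* both are safe to call on NULL */
            vfree(vbuf);
            return -ENOMEM;
        }
        pr_info("contig_demo: kmalloc buffer at %p, vmalloc buffer at %p\n", kbuf, vbuf);
        return 0;
    }

    static void __exit contig_demo_exit(void)
    {
        kfree(kbuf);
        vfree(vbuf);
    }

    module_init(contig_demo_init);
    module_exit(contig_demo_exit);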

There is the possibility of further compounding fragmentation through the fragmentation of a process's virtual address space (as opposed to the real memory behind it), but that would never apply to kernel modules (whereas the issue in the previous paragraph apparently could).

If you want my opinion, it is not worth much contemplation. Keep in mind that even with a highly modular kernel, the most used components (fs, networking, etc.) will tend to be loaded very early and remain loaded, hence they will almost certainly sit in a contiguous region of real memory, for what it is worth (which might be a reason not to pointlessly load and unload modules).


I don't know what other disadvantages there might be to compiling code as a loadable kernel module rather than directly into the kernel image, but to use your particular example, suppose that memory management were built as a kernel module rather than into the kernel binary image itself.

  • How would the kernel allocate the memory for an initial RAM disk?
  • How would the kernel allocate the memory for the structures necessary to mount a file system?
  • How would the kernel allocate the memory in which to load the memory management module?
  • How would the memory management module know what memory has already been claimed by other parts of the kernel (possibly other loadable kernel modules which were needed to access the memory management module)?

As you can see, this immediately opens a whole potential can of worms. Task scheduling is another similarly core kernel concept.

From another perspective, what useful work could the kernel do without the memory management module? It's not like a hardware driver which can simply be disabled if the hardware isn't installed on the system; it's a basic feature that the kernel itself needs to even bootstrap.

While true microkernels have certain benefits from a separation-of-concerns and code readability/understandability point of view, even they need certain things inside the kernel itself to work at all. Memory and task handling are among the core concepts of a multitasking operating system kernel -- and as is illustrated by the above list, are for all intents and purposes required for anything to work at all. Trying to separate them into separately loaded components would, if nothing else, add a large amount of complexity for no real gain (since everyone would be loading those modules anyway).