What is the difference between kernel drivers and kernel modules?

A kernel module is a bit of compiled code that can be inserted into the kernel at run-time, such as with insmod or modprobe.

A driver is a bit of code that runs in the kernel to talk to some hardware device. It "drives" the hardware. Most every bit of hardware in your computer has an associated driver.¹ A large part of a running kernel is driver code.²

A driver may be built statically into the kernel file on disk.³ A driver may also be built as a kernel module so that it can be dynamically loaded later. (And then maybe unloaded.)
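In Linux kernel configuration terms, that is the difference between answering y and m for a driver's config option. As a sketch, using the Intel e1000e network driver's real config symbol as the example, the relevant .config lines look like this:

# Build the driver into the kernel image itself:
CONFIG_E1000E=y

# ...or build it as a loadable module, producing e1000e.ko:
CONFIG_E1000E=m

Only one of the two can be chosen at a time, and not every driver offers both; in make menuconfig, the options that can be built as modules are the ones shown with angle brackets (<M>/<*>).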

Standard practice is to build drivers as kernel modules where possible, rather than link them statically to the kernel, since that gives more flexibility. There are good reasons not to, however:

  • Sometimes a given driver is absolutely necessary to help the system boot up. That doesn't happen as often as you might imagine, due to the initrd feature.

  • Statically built drivers may be exactly what you want in a system that is statically scoped, such as an embedded system. That is to say, if you know in advance exactly which drivers will always be needed and that this will never change, you have a good reason not to bother with dynamic kernel modules.

  • If you build your kernel statically and disable Linux's dynamic module loading feature, you prevent run-time modification of the kernel code. This provides additional security and stability at the expense of flexibility.

Not all kernel modules are drivers. For example, a relatively recent feature in the Linux kernel is that you can load a different I/O scheduler. Another example is that the more complex types of hardware often have multiple generic layers that sit between the low-level hardware driver and userland, such as the USB HID driver, which implements a particular element of the USB stack, independent of the underlying hardware.


Asides:

  1. One exception to this broad statement is the CPU chip, which has no "driver" per se. Your computer may also contain hardware for which you have no driver.

  2. The rest of the code in an OS kernel provides generic services like memory management, IPC, scheduling, etc. These services may primarily serve userland applications, as with the examples just given, or they may be internal services used by drivers or other intra-kernel infrastructure.

  3. The one in /boot, loaded into RAM at boot time by the boot loader early in the boot process.


To answer your specific question about the lspci output, the "kernel driver" line refers to which driver is currently bound to the card, in this case the proprietary nvidia driver. The "kernel modules" line lists all of the drivers known to be capable of binding to this card. Here, the proprietary driver shows up under a different name, probably due to how lspci found the driver and its filename versus the name coded into the driver itself.
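For illustration, lspci -k output for such a card typically looks something like the following (the device string and module names here are made up for the example and will differ from system to system):

01:00.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060]
        Kernel driver in use: nvidia
        Kernel modules: nvidiafb, nouveau, nvidia

The first indented line names the driver currently bound to the device; the second lists every installed module that claims to support it.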


A kernel module may not be a device driver at all

"Kernel driver" is not a well defined term, but let's give it a shot.

This is a kernel module that does not drive any hardware, and thus could not be reasonably considered a "device driver":

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

/* Runs once when the module is loaded, e.g. by insmod. */
static int __init myinit(void)
{
    printk(KERN_INFO "hello init\n");
    return 0;
}

/* Runs once when the module is unloaded, e.g. by rmmod. */
static void __exit myexit(void)
{
    printk(KERN_INFO "hello exit\n");
}

module_init(myinit);
module_exit(myexit);

After building it, you can load it with:

insmod hello.ko

and it prints hello init to dmesg.
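Assuming the module was built as hello.ko and loaded as above, the message can be checked and the module unloaded again with:

dmesg | tail
rmmod hello

after which hello exit shows up in dmesg as well.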

There are, however, kernel modules that are not device drivers, but are actually useful, e.g., modules that expose kernel debugging / performance information.

Device drivers are usually also kernel modules.

An example of something that is a "device driver" is a bit harder to give, since it requires hardware to drive, and hardware descriptions tend to be complicated.

Using QEMU or other emulators, however, we can construct software models of real or simplified hardware, which is a great way to learn how to talk to it. Here is a simple example of a minimal PCI device driver: https://github.com/cirosantilli/linux-kernel-module-cheat/blob/6788a577c394a2fc512d8f3df0806d84dc09f355/kernel_module/hello.c
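To give the flavor, here is a rough skeleton of what a PCI driver for such an emulated device looks like. This is a sketch only, not a complete or tested driver; the vendor/device IDs and names below are placeholders you would replace with your device's values:

#include <linux/io.h>
#include <linux/module.h>
#include <linux/pci.h>

MODULE_LICENSE("GPL");

/* Placeholder IDs; a real driver matches the IDs its hardware reports. */
#define MY_VENDOR_ID 0x1234
#define MY_DEVICE_ID 0x11e8

static const struct pci_device_id my_ids[] = {
    { PCI_DEVICE(MY_VENDOR_ID, MY_DEVICE_ID) },
    { 0, },
};
MODULE_DEVICE_TABLE(pci, my_ids);

/* Mapped view of the device's first memory region (BAR 0). A real driver
 * would keep per-device state via pci_set_drvdata() instead of a global. */
static void __iomem *regs;

/* Called by the PCI core when it binds this driver to a matching device. */
static int my_probe(struct pci_dev *dev, const struct pci_device_id *id)
{
    int ret;

    ret = pci_enable_device(dev);
    if (ret)
        return ret;

    regs = pci_iomap(dev, 0, 0); /* map BAR 0, whole length */
    if (!regs) {
        pci_disable_device(dev);
        return -ENOMEM;
    }

    /* Talking to the hardware is now just memory-mapped I/O. */
    pr_info("hello_pci: register 0 reads 0x%x\n", ioread32(regs));
    return 0;
}

/* Called when the device goes away or the module is unloaded. */
static void my_remove(struct pci_dev *dev)
{
    pci_iounmap(dev, regs);
    pci_disable_device(dev);
}

static struct pci_driver my_pci_driver = {
    .name     = "hello_pci",
    .id_table = my_ids,
    .probe    = my_probe,
    .remove   = my_remove,
};
module_pci_driver(my_pci_driver);

When the PCI core finds a device whose IDs match the table, it calls my_probe, which maps the device's registers and starts talking to them with ioread/iowrite; unbinding or unloading triggers my_remove.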

We then see that on x86, talking to hardware comes down to:

  • in and out instructions, e.g., https://stackoverflow.com/questions/3215878/what-are-in-out-instructions-in-x86-used-for/33444273#33444273
  • handling interrupts by registering handlers with the CPU

Those operations cannot, in general, be done from userland, as explained at: What is difference between User space and Kernel space? There are, however, some exceptions: https://stackoverflow.com/questions/7986260/linux-interrupt-handling-in-user-space.

The kernel then offers higher-level APIs to make such hardware interaction easier and more portable (the sketch after this list shows the interrupt side):

  • request_irq to handle interrupts
  • ioreadX and IO memory mapping
  • even higher level interfaces for popular protocols like PCI and USB
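
For example, here is a rough sketch (untested; the IRQ number and names are illustrative) of how a module registers an interrupt handler with request_irq:

#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

/* IRQ line to attach to; illustrative only. Pick one that exists on your
 * system (see /proc/interrupts) and pass it as irq=N at insmod time. */
static int irq = 1;
module_param(irq, int, 0444);

/* Non-NULL cookie required for IRQF_SHARED; identifies this handler. */
static int dev_cookie;

/* Runs in interrupt context each time the line fires. A real handler would
 * check whether its own device raised the interrupt and return IRQ_NONE
 * otherwise, so that other handlers sharing the line get a chance. */
static irqreturn_t my_handler(int irq_nr, void *dev_id)
{
    pr_info("hello_irq: irq %d fired\n", irq_nr);
    return IRQ_HANDLED;
}

static int __init myinit(void)
{
    /* IRQF_SHARED asks to coexist with whatever driver already owns the
     * line; this can still fail if that driver did not allow sharing. */
    return request_irq(irq, my_handler, IRQF_SHARED, "hello_irq", &dev_cookie);
}

static void __exit myexit(void)
{
    free_irq(irq, &dev_cookie);
}

module_init(myinit);
module_exit(myexit);

If the registration succeeds, the name hello_irq appears in /proc/interrupts next to the chosen line, and the handler's messages show up in dmesg whenever it fires.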