Are there any architectures currently out there that use hardware-enforced process isolation? What would it take to add that to x86?

Actually, almost all of the CPUs on the market, save for the very small ones meant for low-power embedded devices, offer "hardware-enforced isolation". It is called an MMU (Memory Management Unit). In a nutshell, the MMU splits the address space into individual pages (typically 4 or 8 kB each; it depends on the CPU architecture and version), and whenever some piece of code accesses a page, the MMU enforces access rights and maps the access to a physical address (or not -- this is how "virtual memory" works). At any time, the CPU informs the MMU whether the current code is "user code" or "kernel code", and the MMU uses that information to decide whether the access shall be granted.
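
To make this concrete, here is a rough sketch (in C, with a deliberately simplified field layout) of how a 32-bit x86 page-table entry encodes those access rights, and of the check the MMU performs in hardware on every access; real entries carry more flags (accessed, dirty, no-execute on 64-bit, and so on):

```c
#include <stdint.h>
#include <stdbool.h>

/* Simplified view of a 32-bit x86 page-table entry. */
#define PTE_PRESENT  (1u << 0)   /* page is mapped to physical memory   */
#define PTE_WRITABLE (1u << 1)   /* writes allowed                      */
#define PTE_USER     (1u << 2)   /* user-mode code may access the page  */

/* Rough equivalent of the decision the MMU makes, in hardware,
 * on every single memory access. */
bool access_allowed(uint32_t pte, bool is_user_mode, bool is_write)
{
    if (!(pte & PTE_PRESENT))
        return false;                      /* page fault: not mapped      */
    if (is_user_mode && !(pte & PTE_USER))
        return false;                      /* user code touching a kernel page */
    if (is_write && !(pte & PTE_WRITABLE))
        return false;                      /* write to a read-only page   */
    return true;
}

/* The physical frame the page maps to (top 20 bits of the entry). */
uint32_t physical_frame(uint32_t pte)
{
    return pte & 0xFFFFF000u;
}
```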

The access rights and the mapping to physical addresses of each page are configured in special tables in memory, which the kernel makes available to the MMU (basically by writing the physical address of the main table into a dedicated register). By switching the MMU configuration, the kernel implements the notion of a process: each process has its own address space, and when the kernel decides that the CPU shall be granted to a process, it does so by making that process's MMU configuration the active one.
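
As an illustration, on x86 that "dedicated register" is CR3, and switching address spaces boils down to loading the next process's top-level page table into it (this only works in kernel mode, of course). The `struct process` below is purely hypothetical bookkeeping for a toy kernel, not any particular OS's data structure:

```c
#include <stdint.h>

/* Hypothetical per-process bookkeeping: each process owns its own set of
 * page tables; page_dir_phys is the physical address of its top-level table. */
struct process {
    uintptr_t page_dir_phys;
    /* ... saved registers, scheduling state, etc. ... */
};

/* On x86, the root of the page tables lives in CR3. Loading it makes that
 * process's address space the active one; from the MMU's point of view,
 * this is the heart of a context switch. */
void switch_address_space(const struct process *next)
{
    __asm__ volatile ("mov %0, %%cr3" : : "r" (next->page_dir_phys) : "memory");
}
```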

This is about as hardware-enforced as these things can get. If you wanted software-only isolation enforcement, then you would have to look at things like Java or C#/.NET: strong typing, array bounds checks and garbage collection allow distinct pieces of code to cohabit with isolation and without the help of an MMU.
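
What such a runtime does, conceptually, is insert a check like the following around every array access (sketched here in C to stay consistent with the other snippets; in Java the failing case would throw `ArrayIndexOutOfBoundsException` instead of aborting):

```c
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* An array that carries its own length, as managed runtimes do. */
struct checked_array {
    size_t length;
    int    data[];
};

/* The bounds check a managed runtime inserts before every access:
 * code can never reach memory outside the objects it legitimately owns,
 * which is what provides isolation without an MMU. */
int checked_get(const struct checked_array *a, size_t index)
{
    if (index >= a->length) {
        fprintf(stderr, "out-of-bounds access rejected\n");
        abort();
    }
    return a->data[index];
}
```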


MMU-based process isolation works well in practice -- processes cannot alter or even see the pages of other processes. The last major operating systems where this was not done properly were the Windows 95 family (up to and including the infamous Windows Millennium Edition, released in 2000).

The trouble begins when you realize that complete isolation is useless: application processes must, at some point, be able to interact with the hardware, to save files, send data over the network or display images. Therefore, there must be some specific gateways that allow some data to flow in and out of the isolated address space of each process, under the strict control of an arbitration system that maintains coherence and allocates hardware resources to processes; that arbitration system is exactly what is known as an "Operating System". The "gateways" for escaping isolation are usually called system calls.
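
From the process's side, such a gateway looks like this on Linux: the process hands the kernel a system-call number and arguments, and the kernel performs the actual I/O on its behalf. A minimal example, using the raw `syscall()` wrapper so the mechanism stays visible:

```c
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const char msg[] = "hello from an isolated process\n";

    /* Equivalent to write(1, msg, sizeof msg - 1), spelled out as a raw
     * system call: the process cannot touch the terminal itself, so it
     * asks the kernel to do it. */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}
```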

Right now, the OS is software, and it has bugs, because every significant piece of software has bugs. Some of these bugs allow a maliciously written process to impact other processes in bad ways; these are known as "security holes". However, making a "fully hardware OS" would not solve anything; in fact, it would probably make things worse. Hardware has bugs too, and the source of bugs is that what the developer is trying to do is complex. Doing it in hardware only makes bug-fixing a lot harder, so it does not improve the security situation at all.

Thus, to achieve better isolation between processes, the solution is not to throw more hardware at the problem. There is already enough of the stuff (and maybe too much). What is needed is a reduction in complexity, which really means a thorough pruning and redesign of the list of system calls. A basic Linux kernel offers more than 300 different system calls! That makes for a lot of work when trying to prevent security holes. Unfortunately, removing system calls breaks compatibility with existing code.
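
For a small taste of what a drastically reduced system-call surface feels like, though per process rather than kernel-wide: Linux's seccomp "strict" mode leaves only `read()`, `write()`, `exit()` and `sigreturn()` available to the calling process and kills it on anything else. This is an existing mitigation, not the redesign argued for above, but it shows how narrow the gateway can be made:

```c
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/seccomp.h>

int main(void)
{
    /* From this point on, only read, write, exit and sigreturn are allowed;
     * any other system call terminates the process with SIGKILL. */
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0)
        return 1;

    const char msg[] = "still allowed: write()\n";
    write(1, msg, sizeof msg - 1);

    /* Raw exit(); glibc's _exit() would use exit_group(), which strict
     * mode does not permit. */
    syscall(SYS_exit, 0);
    return 0;
}
```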