Is the "page-to-disk" feature Linus talks about in his autobiography essentially the concept of swapping we use today?

Yes, this is effectively swapping. Quoting the release notes for Linux 0.12:

Virtual memory.

In addition to the "mkfs" program, there is now a "mkswap" program on the root disk. The syntax is identical: "mkswap -c /dev/hdX nnn", and again: this writes over the partition, so be careful. Swapping can then be enabled by changing the word at offset 506 in the bootimage to the desired device. Use the same program as for setting the root file system (but change the 508 offset to 506 of course).

NOTE! This has been tested by Robert Blum, who has a 2M machine, and it allows you to run gcc without much memory. HOWEVER, I had to stop using it, as my diskspace was eaten up by the beta-gcc-2.0, so I'd like to hear that it still works: I've been totally unable to make a swap-partition for even rudimentary testing since about christmastime. Thus the new changes could possibly just have backfired on the VM, but I doubt it.
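The "same program" the note alludes to amounted to a two-byte patch of the boot image. Purely as an illustration, here is a minimal sketch of such a patch tool in C; the little-endian device-word encoding (major << 8 | minor, e.g. 0x0302 for the second partition of the first hard disk) is an assumption about early Linux conventions, not something quoted from the notes:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical patch tool: write a 16-bit device word at a given
     * offset of the boot image (506 for the swap device, 508 for the
     * root device, per the 0.12 release notes). */
    int main(int argc, char *argv[])
    {
        if (argc != 4) {
            fprintf(stderr, "usage: %s bootimage offset devword\n", argv[0]);
            fprintf(stderr, "e.g.:  %s Image 506 0x0302\n", argv[0]);
            return 1;
        }

        long offset = strtol(argv[2], NULL, 0);
        unsigned dev = (unsigned)strtoul(argv[3], NULL, 0);
        /* Device-word layout is assumed: low byte first (little-endian). */
        unsigned char word[2] = { dev & 0xff, (dev >> 8) & 0xff };

        FILE *f = fopen(argv[1], "r+b");
        if (!f) { perror(argv[1]); return 1; }
        if (fseek(f, offset, SEEK_SET) != 0 || fwrite(word, 1, 2, f) != 2) {
            perror("patch");
            fclose(f);
            return 1;
        }
        fclose(f);
        return 0;
    }

Invoked with offset 506 it would set the swap device; with offset 508, the root device, as the note describes.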

In 0.12, paging is used for a number of features, not just swapping to a device: demand-loading (pages of a binary are loaded only when they are first accessed) and page sharing (common pages are shared between processes).
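As an illustration of demand-loading from user space, here is a minimal sketch using POSIX mmap(): the mapping is established instantly, but each page's contents are read from disk only on first access, which is the same mechanism the kernel applies to binaries:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc != 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* The mapping is set up immediately; no file data is read yet. */
        unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        long page = sysconf(_SC_PAGESIZE);
        unsigned long sum = 0;
        for (off_t off = 0; off < st.st_size; off += page)
            sum += p[off];   /* first touch of each page demand-loads it */

        printf("touched %ld pages, checksum %lu\n",
               (long)((st.st_size + page - 1) / page), sum);
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }

Watching the process's resident set (e.g. via mincore() or /proc) while the loop runs shows the pages arriving one by one.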


Yes, that's exactly the concept known as paging or swapping. (A long time ago these terms had slightly different meanings, but in the 21st century, they're synonymous except perhaps in the context of some non-Unix operating systems.)

To be clear, swapping wasn't an innovative feature: most “serious” Unix systems had it, and the feature is older than Unix. What swapping did for Linux was to turn it into a “serious” Unix, whereas MINIX was meant for educational purposes.

Swapping today is still the same concept. The heuristics for deciding which pages to write out, and when, have become a lot more complex, but the basic principle remains.
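As a taste of what such heuristics look like, here is a toy sketch of the classic "clock" (second-chance) replacement policy. It illustrates the general idea only; it is not how any particular kernel picks its victims today:

    #include <stdbool.h>
    #include <stdio.h>

    #define NFRAMES 8

    struct frame {
        int  page;        /* which page occupies this frame */
        bool referenced;  /* hardware "accessed" bit, cleared as the hand passes */
    };

    static struct frame frames[NFRAMES];
    static int hand;

    /* Sweep the clock hand: give referenced frames a second chance,
     * evict the first frame found with its referenced bit clear. */
    int pick_victim(void)
    {
        for (;;) {
            if (!frames[hand].referenced) {
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            frames[hand].referenced = false;
            hand = (hand + 1) % NFRAMES;
        }
    }

    int main(void)
    {
        for (int i = 0; i < NFRAMES; i++)
            frames[i] = (struct frame){ .page = i, .referenced = (i % 2 == 0) };

        int v = pick_victim();
        printf("evicting page %d from frame %d\n", frames[v].page, v);
        return 0;
    }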


Swapping is a concept predating virtual memory and even memory protection: it simply means writing a process out to disk to make room for another. The original Unix had two quirks in that regard: "shared text" programs, which kept only one copy of the program code in memory and swapped out just the data segment; and the "fork" system call, which swapped a process image out to disk without replacing the in-memory image, instead keeping a copy (the child) running.

Page-to-disk, as opposed to swapping, allows processes to run that do not fit into physical memory. It requires memory protection, mapping of virtual addresses to physical addresses, and a restartable page-fault mechanism: an access to an unmapped virtual address aborts the current instruction, the fault handler establishes a mapping from that virtual address to a suitably chosen physical page, and the aborted instruction is then resumed.
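That dance can even be imitated from user space on a modern system, which makes the sequence concrete. The sketch below assumes Linux/POSIX signal semantics (calling mprotect() from a signal handler is not strictly portable): a page is kept unmapped, the faulting access raises SIGSEGV, the handler supplies the mapping, and the kernel restarts the aborted instruction:

    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static char *region;
    static long  pagesize;

    /* "Page in" the faulting page, then return: the kernel restarts the
     * instruction that was aborted by the fault. */
    static void handler(int sig, siginfo_t *si, void *ctx)
    {
        (void)sig; (void)ctx;
        char *page = (char *)((uintptr_t)si->si_addr & ~(uintptr_t)(pagesize - 1));
        if (mprotect(page, pagesize, PROT_READ | PROT_WRITE) != 0)
            _exit(1);
    }

    int main(void)
    {
        pagesize = sysconf(_SC_PAGESIZE);

        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_sigaction = handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        /* A page with no access rights: any touch faults. */
        region = mmap(NULL, pagesize, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (region == MAP_FAILED) { perror("mmap"); return 1; }

        region[0] = 'x';   /* faults, handler maps the page, write restarts */
        printf("survived the fault, read back '%c'\n", region[0]);
        return 0;
    }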

UNIX was able to run on 68000 processors (including swapping) without an MMU, and it made good use of an MMU, where available, for memory protection; but it took the 68010 to provide the mechanisms for resuming an instruction after a page fault.

The 80386 was in many respects a crummy and outdated design, but its built-in MMU and its ability to take and recover from page faults made it immediately more viable for UNIX-like systems that did not merely swap but actually paged to disk.

It is sort of a historical irony that this great sacrifice of silicon to the gods of modern systems (a full-fledged MMU and a fault-restartable CPU design took quite a bit of die space) was mainly taken up by a hobbyist, while the "big fish" like Xenix and OS/2 eventually fell by the wayside.

While you can call "nothing paged in and not scheduled to run" the same as "swapped out", it is not the all-or-nothing proposition that the original meaning of "swapped" was.

The difference got lost in the decades since: demand paging was so much more useful, and scaled so much better, than ordinary whole-process swapping that it replaced it once the necessary CPU and MMU features became commonplace. But the slowdown and thrashing associated with either made for a similar look and feel.

Tags: Linux, Swap, Minix