Run a computer without RAM?

At some point this gets into the question of what even counts as "RAM." There are many CPUs and microcontrollers with enough on-chip memory to run small operating systems with no separate RAM chips attached; this is relatively common in the world of embedded systems. So, if you're just referring to not having any separate RAM chips attached, then yes, you can do it with many current chips, especially those designed for the embedded world. I've done it myself at work. However, since the only real difference between addressable on-chip memory and separate RAM chips is the location (and, obviously, latency), it's perfectly reasonable to consider the on-chip memory to itself be RAM. If you're counting that as RAM, then the number of current, real-world processors that would actually run without RAM is greatly reduced.

If you're referring to a normal PC, no, you can't run it without separate RAM sticks attached, but that's only because the BIOS is designed to refuse to boot with no RAM installed (which is, in turn, because all modern PC operating systems require RAM to run; x86 machines typically don't let you directly address the on-chip memory, which is used solely as cache).

Finally, as Zeiss said, there's no theoretical reason you can't design a computer to run without any RAM at all, aside from a couple of registers. RAM exists solely because it's cheaper than on-chip memory and much faster than disks. Modern computers have a hierarchy of memories that range from very fast but small to large but slow (a small benchmark sketch after the list makes these tiers visible). The normal hierarchy is something like this:

  • Registers - Very fast (they can be operated on by CPU instructions directly, generally with no additional latency) but usually very small: a 64-bit x86 core has only 16 general-purpose registers, for instance, each able to store a single 64-bit number. Register files are kept small because registers are very expensive per byte.
  • CPU Caches - Still very fast (an L1 hit typically costs only a few cycles) and significantly larger than registers, but still much smaller (and much faster) than normal DRAM. CPU cache is also much more expensive per byte than DRAM, which is why it's typically much smaller. Also, many CPUs have a hierarchy even within the cache: smaller, faster caches (L1 and L2) alongside larger, slower ones (L3).
  • DRAM (what most people think of as 'RAM') - Much slower than cache (access latency tends to be dozens to hundreds of clock cycles) but much cheaper per byte and, therefore, typically much larger than cache. DRAM is still, however, many times faster than disk access (usually hundreds to thousands of times faster).
  • Disks - These are, again, much slower than DRAM, but also generally much cheaper per byte and, therefore, much larger. Additionally, disks are usually non-volatile, meaning that they allow data to be saved even after a process terminates (as well as after the computer is restarted).
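
To make those tiers visible, here's a minimal pointer-chasing benchmark sketch in C (it assumes a POSIX system for clock_gettime; the buffer sizes and iteration count are arbitrary choices, not tuned to any particular CPU). Each load depends on the previous one, so the average time per load roughly tracks the latency of whichever level of the hierarchy the working set fits in:

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <time.h>

    static volatile size_t g_sink;          /* keeps the loop from being optimized out */

    static uint64_t rng_state = 88172645463325252ull;
    static uint64_t rng(void)               /* xorshift64: plenty good for shuffling */
    {
        rng_state ^= rng_state << 13;
        rng_state ^= rng_state >> 7;
        rng_state ^= rng_state << 17;
        return rng_state;
    }

    static double ns_per_load(size_t *next, size_t iters)
    {
        struct timespec t0, t1;
        size_t i = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t k = 0; k < iters; k++)
            i = next[i];                    /* each load depends on the previous one */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        g_sink = i;
        return ((t1.tv_sec - t0.tv_sec) * 1e9 +
                (t1.tv_nsec - t0.tv_nsec)) / (double)iters;
    }

    int main(void)
    {
        /* working sets from 16 KiB (fits in L1) up to 64 MiB (mostly DRAM) */
        for (size_t bytes = 16u << 10; bytes <= 64u << 20; bytes <<= 2) {
            size_t n = bytes / sizeof(size_t);
            size_t *next = malloc(n * sizeof *next);
            if (!next) return 1;
            for (size_t i = 0; i < n; i++) next[i] = i;
            /* Sattolo's shuffle: one big random cycle, which defeats the prefetcher */
            for (size_t i = n - 1; i > 0; i--) {
                size_t j = (size_t)(rng() % i);
                size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
            }
            printf("%8zu KiB: %6.2f ns/load\n", bytes >> 10,
                   ns_per_load(next, 10000000));
            free(next);
        }
        return 0;
    }

On a typical desktop you'd expect the ns/load figure to jump each time the working set outgrows L1, L2, and then L3; the exact numbers vary widely between CPUs.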

Note that the entire reason for memory hierarchies is simply economics. There's no theoretical reason (not within computer science, at least) why we couldn't have a terabyte of non-volatile registers on a CPU die. The issue is that it would just be insanely difficult and expensive to build. Having hierarchies that range from small amounts of very expensive memory to large amounts of cheap memory allows us to maintain fast speeds with reasonable costs.


It would be theoretically possible to design a computer to operate with very little RAM (a few registers' worth) or none at all (look up the definition of a Turing machine, which can even be constructed inside a suitably large and fast implementation of Conway's Game of Life).
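
For a feel of what "a few registers and no RAM" means, here's a minimal Turing-machine-style sketch in C: the control unit keeps only two scalars (a state and a head position), and everything else lives on a sequential tape. The particular machine and tape contents are invented for illustration; it increments a binary number in place:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char tape[] = "01011";                 /* binary 11; LSB at the right end */
        int head = (int)strlen(tape) - 1;      /* head position: one "register"   */
        int state = 0;                         /* machine state: the other one    */

        while (state == 0 && head >= 0) {      /* state 0 = "still carrying"      */
            if (tape[head] == '0') { tape[head] = '1'; state = 1; } /* absorb carry */
            else                   { tape[head] = '0'; head--;    } /* propagate it */
        }
        printf("%s\n", tape);                  /* prints 01100, i.e. binary 12    */
        return 0;
    }

The machine never does a random access; it only looks at the cell under the head, which is exactly why such designs are so slow in practice.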

The reason all real-world computers use RAM is, first, historical: core memory (the precursor of RAM; non-volatile, though reads were destructive) predates the magnetic disk, though it came after the magnetic drum, punched cards, and paper tape. The punched card dates back, in primitive form, to 1801 (yes, the start of the 19th century: Jacquard looms used punched cards to weave a color pattern of arbitrary complexity decades before even Babbage's Difference Engine or Hollerith's tabulators). Secondly, RAM (like core memory), being electronic, is a great deal faster than any device that depends on physical movement of the storage medium to present the data to a read/write mechanism.

A system of similar complexity to a modern Windows or Linux computer, running without RAM (like a true Turing machine), would take days just to start up and hours to update the screen for a graphical interface at modern resolutions. Even a text-only operating system comparable to CP/M or early versions of DOS would take a very long time to reach its initial command prompt.


ALL modern, standard, general-purpose CPUs fundamentally work like this (a toy simulator of the loop follows the list):

  • CPU maintains a register that points in its address space to the next instruction
  • CPU fetches whatever is at that address and increments that register
  • If instruction needs additional information, like a destination address or other operand, it is also fetched
  • CPU executes instruction
  • If instruction is a jump, call, return, return-from-interrupt or branch, it may modify the register that points to the next instruction.
  • Repeat
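
As a hedged illustration of that loop, here's a toy fetch-decode-execute simulator in C. The one-register instruction set (LOADI, ADDI, JNZ, PRINT, HALT) is entirely made up; the point is just that the CPU blindly fetches from whatever its instruction pointer addresses, advances it, and lets jumps rewrite it:

    #include <stdio.h>

    enum { HALT, LOADI, ADDI, JNZ, PRINT };    /* hypothetical opcodes */

    int main(void)
    {
        /* program: acc = 3; loop { print acc; acc += -1 } while acc != 0 */
        int mem[] = { LOADI, 3, PRINT, ADDI, -1, JNZ, 2, HALT };
        int pc = 0, acc = 0, running = 1;

        while (running) {
            int op = mem[pc++];                /* fetch, then advance pc */
            switch (op) {
            case LOADI: acc  = mem[pc++]; break;   /* operand also fetched */
            case ADDI:  acc += mem[pc++]; break;
            case PRINT: printf("%d\n", acc); break;
            case JNZ:   { int t = mem[pc++];       /* a taken jump rewrites pc */
                          if (acc != 0) pc = t; } break;
            case HALT:  running = 0; break;
            }
        }
        return 0;
    }

Here mem[] plays the role of the CPU's address space: the loop has no idea whether a given address is backed by RAM, ROM, or something else.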

"CPU fetches whatever is at that address and increments that register"

What can "live" in an address space?

  • Nothing (reads may return zeros or random data, or may cause the CPU to lock up)
  • RAM (motherboard RAM, RAM on a PCI device such as a graphics adapter, etc.)
  • ROM
  • Registers of an I/O device (this includes "internal" I/O devices like the CPU's local APIC; see the sketch after this list)
  • A portion of the CPU's own cache, on modern CPUs that allow a "cache as RAM" mode, so that part of the cache appears in the address space
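
As a sketch of what the "I/O device registers" case looks like from software, here's how bare-metal C typically reaches a memory-mapped device. The addresses and bit layout below are invented for illustration (a real device documents its own in a datasheet); volatile tells the compiler every access must really go out to the device, not to a cached copy:

    #include <stdint.h>

    #define UART_STATUS ((volatile uint32_t *)0x10000004u)  /* hypothetical address */
    #define UART_DATA   ((volatile uint32_t *)0x10000000u)  /* hypothetical address */
    #define RX_READY    (1u << 0)                           /* invented bit layout  */

    static uint8_t uart_read_byte(void)
    {
        while ((*UART_STATUS & RX_READY) == 0)
            ;                               /* spin until a byte arrives        */
        return (uint8_t)*UART_DATA;         /* load goes to the device, not RAM */
    }

On a hosted OS you'd reach such registers through a driver or by mapping them into your address space (e.g. via mmap), not through a raw pointer cast like this.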

Notice "hard disk" is not in that list. The hard disk is not directly connected to the CPU. Data comes to and forth of the hard disk by way of an I/O device (SATA host adapter) connected to the CPU.

The I/O device uses DMA to load/save data to/from the hard disk. This means the I/O device reads and writes RAM directly - without CPU intervention - and it also relies on RAM being there. If the data has not been loaded into RAM by the I/O device, the CPU has no chance of seeing it.

So you cannot have the CPU fetch instructions directly from the hard disk.
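
To make the DMA handoff concrete, here's a heavily hedged sketch with an entirely invented disk controller (register addresses, names, and bits are all made up): the CPU's only role is to tell the device which sector to read and which RAM address to fill; the device then writes RAM on its own, and only afterwards can the CPU see the data:

    #include <stdint.h>

    #define DMA_SRC_LBA  ((volatile uint64_t *)0x20000000u)  /* hypothetical */
    #define DMA_DST_ADDR ((volatile uint64_t *)0x20000008u)  /* hypothetical */
    #define DMA_CTRL     ((volatile uint32_t *)0x20000010u)  /* hypothetical */
    #define DMA_START    (1u << 0)
    #define DMA_DONE     (1u << 1)

    static void read_sector_into_ram(uint64_t lba, void *ram_buffer)
    {
        *DMA_SRC_LBA  = lba;                             /* which disk sector */
        *DMA_DST_ADDR = (uint64_t)(uintptr_t)ram_buffer; /* where in RAM      */
        *DMA_CTRL     = DMA_START;                       /* device takes over */
        while ((*DMA_CTRL & DMA_DONE) == 0)
            ;   /* the CPU waits; it never touches the disk data directly.
                 * Only after the device has written RAM can the CPU fetch
                 * or read what was on disk. */
    }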


What happens during a page fault is:

  • CPU attempts to access a page of memory that is marked as swapped out in the CPU's page tables (which are themselves always present in RAM).
  • This access causes a page fault exception in the CPU.
  • The CPU, now in kernel mode, looks at the page the faulting process was trying to access.
  • The kernel notices a user process is trying to access a swapped-out page, and invokes the normal I/O path to swap that page back in from disk. This is the same path that would be used when loading or saving any other data from disk; it's not different just because the CPU is paging in swapped memory.
  • The CPU then hands control back to the interrupted process, which continues as though nothing had happened.

So the CPU needing to get data from the disk because memory was swapped out is no different: the page still has to be brought into RAM before the CPU can see it.
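
As a sketch of that path, here's what a page-fault handler's core logic might look like in C. Every helper name here is invented for illustration (real kernels, e.g. Linux's do_page_fault, are far more involved), but the shape matches the steps above:

    #include <stdint.h>

    struct trap_frame;                      /* saved CPU state (arch-specific) */
    struct pte;                             /* one page-table entry            */

    /* All of these helpers are invented stand-ins for real kernel machinery. */
    extern struct pte *walk_page_tables(uintptr_t fault_addr);
    extern int         pte_is_swapped_out(const struct pte *pte);
    extern uint64_t    pte_swap_slot(const struct pte *pte);
    extern void       *alloc_physical_frame(void);
    extern void        disk_read(uint64_t swap_slot, void *frame);
    extern void        pte_map(struct pte *pte, void *frame);
    extern void        return_from_trap(struct trap_frame *tf);

    void page_fault_handler(struct trap_frame *tf, uintptr_t fault_addr)
    {
        /* steps 1-3: the faulting address is looked up in the page tables,
         * which themselves live in RAM */
        struct pte *pte = walk_page_tables(fault_addr);

        if (pte_is_swapped_out(pte)) {
            /* step 4: the kernel runs the ordinary disk-I/O path to bring
             * the page back into RAM, then marks it present again */
            void *frame = alloc_physical_frame();
            disk_read(pte_swap_slot(pte), frame);
            pte_map(pte, frame);
        }

        /* step 5: resume the interrupted process as if nothing happened */
        return_from_trap(tf);
    }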