What happens in UNIX/Linux when a program is bigger than the size of memory?

1. Virtual memory
The system will ensure that processes get the requested amount of memory even when it exceeds physical memory. To make this possible, the kernel gives each process a virtual address space as large as the architecture can handle, independent of the RAM actually installed. E.g. on a 32-bit machine, the kernel gives every process 2^32 bytes, i.e. 4 GB, of virtual addresses by default.
2. Overcommitting
There is also something called overcommit in Linux, whereby the kernel accepts memory allocation requests far larger than the physical memory available. Overcommitting makes the kernel allocate virtual memory without any guarantee that corresponding physical memory will ever be allocated.
3. Swap space
When the process actually starts using that much memory, the kernel scans for unused memory pages, as well as pages belonging to lower-priority processes or processes that are not currently running. It writes this data out to the swap space on the secondary storage device, and frees up those pages for your process. This is called page stealing.

By continually repeating step 3, i.e. swapping pages in and out, the kernel gives the process the illusion of having all the memory it requested, which may be more than is physically available. Now, since you mentioned an embedded system, we have to consider whether swap is enabled on the system or not. If it is, all three points above apply. If it is not, points 1 and 2 still apply, but your process will probably either crash or get killed by the OOM (Out-Of-Memory) killer. The kernel may also use the OOM killer to kill off other processes to free up more pages for your process if it deems fit; however, this will happen only if there is no swap space.


Nothing in particular will happen; it is treated just the same as any other process.

Despite popular belief, a program's code and data are not loaded as a whole when the program is started. Only a small subset, essentially its entry point (ELF tables, the main function, the initial stack), is loaded, and everything else is loaded on demand, i.e. paged in. This happens whenever code or data being accessed is not in a page currently in physical memory.

Similarly, when there is pressure on RAM, less-used pages are swapped out to disk to free space.

If the size of available RAM plus the size of the swap area happens to be too small for all running programs' pages to fit, the behavior is OS-dependent:

  • Linux and other OSes which overcommit virtual memory will more or less randomly kill some processes to free space.

  • Non-overcommitting OSes like Solaris won't allow new processes to start and will refuse new memory reservations (malloc) from existing processes.
