Is it possible to allocate a large amount of virtual memory in Linux?

Possibly, but you may need to configure the system to allow it:

The Linux kernel supports the following overcommit handling modes

0 - Heuristic overcommit handling. Obvious overcommits of address space are refused. Used for a typical system. It ensures a seriously wild allocation fails while allowing overcommit to reduce swap usage. root is allowed to allocate slightly more memory in this mode. This is the default.

1 - Always overcommit. Appropriate for some scientific applications. Classic example is code using sparse arrays and just relying on the virtual memory consisting almost entirely of zero pages.

2 - Don't overcommit. The total address space commit for the system is not permitted to exceed swap + a configurable amount (default is 50%) of physical RAM. Depending on the amount you use, in most situations this means a process will not be killed while accessing pages but will receive errors on memory allocation as appropriate.

Useful for applications that want to guarantee their memory allocations will be available in the future without having to initialize every page.

The overcommit policy is set via the sysctl `vm.overcommit_memory'.

So, if you want to allocate more virtual memory than you have physical memory, then you'd want:

# in shell
sysctl -w vm.overcommit_memory=1
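
Note that sysctl -w only changes the running kernel; to make it persistent, put vm.overcommit_memory = 1 in /etc/sysctl.conf or a file under /etc/sysctl.d/. If you want to check the current mode from a program, it is exposed as a plain text file under /proc. A minimal C++ sketch (error handling kept to the bare minimum):

// Print the current overcommit mode (0, 1 or 2) read from /proc.
// Minimal sketch; real code would report failures more carefully.
#include <fstream>
#include <iostream>

int main() {
    std::ifstream f("/proc/sys/vm/overcommit_memory");
    int mode = -1;
    if (f >> mode)
        std::cout << "vm.overcommit_memory = " << mode << "\n";
    else
        std::cerr << "could not read /proc/sys/vm/overcommit_memory\n";
}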

RLIMIT_AS The maximum size of the process's virtual memory (address space) in bytes. This limit affects calls to brk(2), mmap(2) and mremap(2), which fail with the error ENOMEM upon exceeding this limit. Also automatic stack expansion will fail (and generate a SIGSEGV that kills the process if no alternate stack has been made available via sigaltstack(2)). Since the value is a long, on machines with a 32-bit long either this limit is at most 2 GiB, or this resource is unlimited.

So, you'd want:

// needs <sys/resource.h>
struct rlimit limit = {
    .rlim_cur = RLIM_INFINITY,
    .rlim_max = RLIM_INFINITY,
};
setrlimit(RLIMIT_AS, &limit);   // setrlimit takes a pointer to the struct
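
Note that an unprivileged process may only raise its soft limit up to the current hard limit; raising the hard limit itself requires CAP_SYS_RESOURCE. A checked sketch that only raises the soft limit (the function name is just for illustration):

// Raise the address-space soft limit as far as the hard limit allows.
#include <sys/resource.h>
#include <cstdio>

int raise_address_space_limit() {
    struct rlimit rl;
    if (getrlimit(RLIMIT_AS, &rl) != 0) {
        std::perror("getrlimit");
        return -1;
    }
    rl.rlim_cur = rl.rlim_max;   // allowed without privilege
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        std::perror("setrlimit");
        return -1;
    }
    return 0;
}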

Or, if you cannot give the process permission to do this itself, you can configure the limit persistently in /etc/security/limits.conf, which affects all processes of a given user or group.
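
For example, entries along these lines remove the address-space limit for one user (someuser is a placeholder; the value for the "as" item is in KB, or the keyword unlimited):

# /etc/security/limits.conf
# <domain>   <type>   <item>   <value>
someuser     soft     as       unlimited
someuser     hard     as       unlimited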


Ok, so mmap seems to support ... but it requires a file descriptor. ... could be a win but not if they have to be backed by a file ... I don't like the idea of attaching to a file

You don't need to use a file-backed mmap; there's MAP_ANONYMOUS for that.

I did not know what number to put in to request

Then pass a null pointer as the address argument and let the kernel pick one. Example:

mmap(nullptr, 256*GB, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0)
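
In real code you'd check the result against MAP_FAILED and release the reservation with munmap when done. A minimal sketch (the size is computed with a 64-bit constant so it doesn't overflow, as discussed further down):

// Sketch: reserve 256 GiB of anonymous virtual memory and release it again.
#include <sys/mman.h>
#include <cstddef>
#include <cstdio>

int main() {
    const std::size_t size = 256ULL << 30;   // 256 GiB, computed in 64-bit arithmetic
    void* p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        std::perror("mmap");
        return 1;
    }
    // Pages are only backed by physical memory once they are touched.
    munmap(p, size);
    return 0;
}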

That said, if you've configured the system as described, new should work just as well as mmap: it will typically go through malloc, which in turn uses mmap for large allocations like this.


Bonus hint: you may benefit from using HugeTLB pages.
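
If you go down that route, an anonymous mapping can request huge pages with MAP_HUGETLB. A hedged sketch; it assumes huge pages have already been reserved on the system, otherwise the mmap call fails:

// Sketch: a 1 GiB anonymous mapping backed by huge pages.
#include <sys/mman.h>
#include <cstddef>
#include <cstdio>

int main() {
    // Huge pages must be reserved first, e.g.: sysctl -w vm.nr_hugepages=512
    // (512 x 2 MiB = 1 GiB with the default x86-64 huge page size).
    const std::size_t size = 1ULL << 30;     // must be a multiple of the huge page size
    void* p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        std::perror("mmap(MAP_HUGETLB)");
        return 1;
    }
    munmap(p, size);
    return 0;
}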


The value of 256*GB does not fit into the range of a 32-bit integer type. Try uint64_t as the type of GB:

constexpr uint64_t GB = 1024*1024*1024;

or, alternatively, force 64-bit multiplication:

char* p = new char[256ULL * GB];

OT: I would prefer this definition of GB:

constexpr uint64_t GB = 1ULL << 30;
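
A quick sanity check (just a sketch) that the arithmetic really happens in 64 bits:

#include <cstdint>

constexpr std::uint64_t GB = 1ULL << 30;
// 256 * GB is evaluated in uint64_t, so it is exactly 2^38 rather than wrapping around.
static_assert(256 * GB == (1ULL << 38), "256*GB must not overflow");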

As for the virtual memory limit, see this answer.