MongoDB: RAM requirements

The real reason you can't do as you ask (limit the memory) is that MongoDB doesn't manage the memory it uses directly - it lets the OS do it. MongoDB just memory-maps all its data and then has the OS page it in and out of memory as needed. As a result, there is no way to directly manage the amount of memory used until MongoDB implements this in a completely different way, or the OS allows it (and that has not been possible in Linux since the 2.4 kernel days).
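
You can observe this behaviour from the database itself via serverStatus. Here is a minimal sketch using pymongo (the connection string is a placeholder; the "mapped" figure is only reported by the MMAP-based engines):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    mem = client.admin.command("serverStatus")["mem"]

    # resident and virtual are reported in MB
    print("resident MB:", mem.get("resident"))
    print("virtual  MB:", mem.get("virtual"))
    print("mapped   MB:", mem.get("mapped", "n/a (non-MMAP engine)"))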

The only way to truly segregate resources at present is to use a virtualization solution and isolate MongoDB in its own VM. Yes, there are overheads involved (though hypervisors have gotten a lot better), but at the moment that is the price to be paid for that level of resource control.

In terms of the OOM Killer, even with no other processes on the host, as long as your data set and indexes together exceed available memory, MongoDB can hit OOM Killer issues. This is because of how the data gets paged out of memory - if there is no memory pressure (nothing else wants resident memory), and you keep adding/touching new data and indexes, then the process will eventually grow to consume all available RAM. Hence the recommendation to always configure some swap when running MongoDB:

https://docs.mongodb.com/manual/administration/production-notes/#swap
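
If you want to check that the host actually has swap configured, a quick sketch on Linux is to read SwapTotal from /proc/meminfo (Linux-only; this is just an illustration, not a substitute for the production notes above):

    def swap_total_kb(path="/proc/meminfo"):
        """Return configured swap in kB, or 0 if none is set up."""
        with open(path) as f:
            for line in f:
                if line.startswith("SwapTotal:"):
                    return int(line.split()[1])  # value is reported in kB
        return 0

    if swap_total_kb() == 0:
        print("No swap configured - mongod is more exposed to the OOM Killer")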

Of course, least recently used (LRU) data will be paged out first, and other processes can take up resident memory as well, but the concept still applies unless you load your data set into memory and it then stays static. The best thing to do if you are worried is to get the host into MMS and track the usage over time:

http://mms.mongodb.com

Update: August 2015

Since I wrote this answer things have moved on somewhat and the information is a little out of date. For example, Linux now has cgroups and related technologies (Docker containers for example) that have matured to the point that they allow you to better isolate and limit the resources (including memory) consumed by any process in a production environment, even one that uses memory mapping like MongoDB.
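
As an illustration, here is a sketch of starting MongoDB in a memory-capped container using the Docker SDK for Python (the docker package, image tag, container name, and the 2g limit are all assumptions, not recommendations):

    import docker

    client = docker.from_env()
    container = client.containers.run(
        "mongo",             # official MongoDB image (placeholder tag)
        detach=True,
        mem_limit="2g",      # hard memory cap, enforced via cgroups
        memswap_limit="2g",  # memory + swap cap; equal values allow no extra swap
        name="mongo-capped",
    )
    print(container.name, container.status)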

Additionally, with the advent of storage engines beyond MMAPv1, such as WiredTiger in MongoDB 3.0+, you can use built-in functionality to limit the cache size for MongoDB. Hence, the RAM requirements now really do depend on how you choose to configure MongoDB, what environment you run it in, and what storage engine you choose.
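
For example, the WiredTiger cache ceiling can be set at startup with the --wiredTigerCacheSizeGB option (or storage.wiredTiger.engineConfig.cacheSizeGB in the config file) and read back from serverStatus. A sketch with pymongo (the connection string is a placeholder; the key names are the WiredTiger cache statistics exposed by serverStatus):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    cache = client.admin.command("serverStatus")["wiredTiger"]["cache"]

    print("cache ceiling GB:", cache["maximum bytes configured"] / 1024**3)
    print("cache in use  GB:", cache["bytes currently in the cache"] / 1024**3)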


MongoDB will use available free memory for caching, and swap to disk as needed to yield memory to other applications on the same server. For the best performance you'll want to have enough RAM to keep your indices and frequently used data ("working set") in memory.
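
To get a feel for whether the working set fits, you can compare data and index sizes against the RAM you plan to provision. A rough sizing sketch with pymongo (the connection string is a placeholder, and dbStats figures are only an estimate of what the working set could grow to):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    for name in client.list_database_names():
        stats = client[name].command("dbStats")
        print(f"{name}: dataSize={stats['dataSize'] / 1024**2:.1f} MB, "
              f"indexSize={stats['indexSize'] / 1024**2:.1f} MB")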

Helpful reading:

  • MongoDB FAQ: Does MongoDB require a lot of RAM?
  • MongoDB Wiki: Checking Server Memory Usage

Some things have changed in MongoDB over the years.

TL;DR

If the MMAPv1 storage engine is used, the working set must fit in RAM. https://docs.mongodb.com/manual/faq/diagnostics/#must-my-working-set-size-fit-ram

If the WiredTiger storage engine is used, you do not need to worry about whether the working set fits in RAM. https://docs.mongodb.com/manual/faq/diagnostics/#memory-diagnostics-for-the-wiredtiger-storage-engine

Memory Diagnostics for the WiredTiger Storage Engine

Must my working set size fit RAM?

No.

How do I calculate how much RAM I need for my application?

With WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache.

Changed in version 3.2: Starting in MongoDB 3.2, the WiredTiger internal cache, by default, will use the larger of either:

  • 60% of RAM minus 1 GB, or
  • 1 GB.
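
A worked example of that default (just the arithmetic from the quote above; the RAM figures are arbitrary):

    def default_wt_cache_gb(ram_gb):
        """MongoDB 3.2 default: the larger of (60% of RAM - 1 GB) and 1 GB."""
        return max(0.6 * ram_gb - 1, 1)

    print(default_wt_cache_gb(16))  # 8.6 GB on a 16 GB host
    print(default_wt_cache_gb(2))   # 1 GB floor on a small host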
