Can I run an Oracle server without swap?

Disabling swap is a good idea if

  • your software can deal with out-of-memory conditions gracefully, or limits itself to avoid OOM situations in the first place
  • consistent performance is critical (once a system starts swapping, latency increases, often badly enough to make the system effectively useless for many applications)
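If you do decide to run without swap, remember that swapoff -a only lasts until the next boot; you also have to remove or comment out the swap entries in /etc/fstab. A minimal sketch (Linux-only, plain Python) that verifies the live state by reading /proc/swaps:

    # Minimal sketch: check whether any swap areas are currently active.
    # /proc/swaps lists one active swap area per line after the header row.

    def active_swap_devices():
        with open("/proc/swaps") as f:
            lines = f.read().splitlines()
        return lines[1:]  # skip the "Filename Type Size Used Priority" header

    if __name__ == "__main__":
        devices = active_swap_devices()
        if devices:
            print("Swap is still enabled:")
            for line in devices:
                print(" ", line)
        else:
            print("No active swap areas.")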

This sort of thing often comes up with databases. I see it more with NoSQL databases, but relational databases face the same challenge.

There is nothing in the OS that requires swap to be present. When memory truly runs out, Linux invokes the OOM killer, which kills the process it deems most expendable (by default, roughly the biggest memory consumer). You don't want to get to that point, so tune Oracle to use only ~90% of memory, leaving some for the system daemons and a margin for error. "Free" memory also gets used by the kernel to cache disk I/O, which is a huge performance win, so letting the database itself consume ever more memory will eventually slow overall system performance enough to be counter-productive.
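As a back-of-the-envelope illustration of that sizing rule (the 90% figure is my ballpark, not an Oracle-published number), here's a sketch that reads MemTotal from /proc/meminfo and splits it into a database budget and headroom:

    # Rough sketch: derive a memory budget for the database from MemTotal,
    # leaving headroom for system daemons and the kernel page cache.
    # The 0.9 factor is a ballpark, not an Oracle-recommended value.

    def mem_total_kib():
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1])  # reported in kB (KiB)
        raise RuntimeError("MemTotal not found in /proc/meminfo")

    total_gib = mem_total_kib() / (1024 * 1024)
    db_budget_gib = total_gib * 0.9           # cap the database around 90%
    headroom_gib = total_gib - db_budget_gib  # daemons, margin, page cache

    print(f"Total RAM:       {total_gib:6.1f} GiB")
    print(f"Database budget: {db_budget_gib:6.1f} GiB")
    print(f"Headroom:        {headroom_gib:6.1f} GiB")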

Even on systems with a fraction of the memory mentioned in the question, if the application is a database, a cache, or a similar system, I'd default to no swap at this point.

Authorities

So that you're not just relying on my word for it:

Cassandra

DataStax explains for Cassandra:

You must disable swap entirely. Failure to do so can severely lower performance. Because Cassandra has multiple replicas and transparent failover, it is preferable for a replica to be killed immediately when memory is low rather than go into swap. This allows traffic to be immediately redirected to a functioning replica instead of continuing to hit the replica that has high latency due to swapping. If your system has a lot of DRAM, swapping still lowers performance significantly because the OS swaps out executable code so that more DRAM is available for caching disks.

Riak

Basho explains for Riak:

Ideally, you should disable swap to ensure that Riak’s process pages are not swapped. Disabling swap will allow Riak to crash in situations where it runs out of memory. This will leave a crash dump file, named erl_crash.dump, in the /var/log/riak directory which can be used to determine the cause of the memory usage.

MySQL

Percona sits on the fence and provides useful caveats for both sides of the question. MariaDB disagrees with disabling swap entirely:

While some disable swap altogether, and you certainly want to avoid any database processes from using it, it can be prudent to leave some swap space to at least allow the kernel to fall over gracefully should a spike occur. Having emergency swap available at least allows you some scope to kill any runaway processes.
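If you follow that line and keep some emergency swap, a common compromise (my suggestion, not something either vendor mandates) is to set vm.swappiness to 1 so the kernel treats swap strictly as a last resort. A minimal sketch that applies it at runtime; to persist across reboots you'd put vm.swappiness = 1 in /etc/sysctl.conf:

    # Sketch: discourage swapping without removing the emergency swap space.
    # vm.swappiness=1 tells the kernel to swap only under heavy memory pressure.
    # Writing requires root; persist via /etc/sysctl.conf to survive reboots.

    SWAPPINESS = "/proc/sys/vm/swappiness"

    with open(SWAPPINESS) as f:
        print("current vm.swappiness:", f.read().strip())

    with open(SWAPPINESS, "w") as f:
        f.write("1")

    with open(SWAPPINESS) as f:
        print("new vm.swappiness:    ", f.read().strip())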

ServerFault

A well-received answer here includes:

I personally find a swappy system worse than a crashed system. A crashed system would trigger a standby backup server to take over much sooner. And in an active-active (or load balanced setup) a crashed system would be taken out of rotation much sooner. A win for the no-swap system again.

That answer has 22 upvotes today and is 4 years old. You can also see some other answers there extolling the value of swap, but there's no indication they're running databases. They don't have as many upvotes either. :)

Squid

While they don't overtly recommend disabling swap, the Squid developers say:

Squid tends to be a bit of a memory hog. It uses memory for many different things, some of which are easier to control than others. Memory usage is important because if the Squid process size exceeds your system's RAM capacity, some chunks of the process must be temporarily swapped to disk. Swapping can also happen if you have other memory-hungry applications running on the same system. Swapping causes Squid's performance to degrade very quickly.

That's what you don't want to happen to your database.

Redis

While Redis officially recommends swap, the users don't buy it:

First disable swap - Redis and swap don't mix easily and this can certainly cause slowness.

Hadoop

As seen in the top-voted answer on the Hortonworks community:

For slave/worker/data hosts which only have distributed services you can likely disable swap. With distributed services it's preferred to let the process/host be killed rather than swap. The killing of that process or host shouldn't affect cluster availability. Said another way: you want to "fail fast" not to "slowly degrade."

[....]

For masters, swap is also often disabled though it's not a set rule from Hortonworks and I assume there will be some discussion/disagreement. Masters can be treated somewhat like you'd treat masters in other, non-Hadoop, environments.

The fear with disabling swap on masters is that an OOM (out of memory) event could affect cluster availability. But that will still happen even with swap configured, it just will take slightly longer. Good administrator/operator practices would be to monitor RAM availability, then fix any issues before running out of memory. Thus maintaining availability without affecting performance. No swap is needed then.

I like this because it is talking about a Java app, yet it reaches many of the same conclusions about databases mentioned above. It also mentions monitoring, which is very helpful when tuning high-performance applications. If you don't have numbers to compare, everything is based on feelings, which are much harder to compare. Make graphs for every measurable metric, from application-level latency and throughput down to CPU, disk, memory, and network usage. Those provide the bulk of the real data you have to make decisions on.
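In that spirit, here is a bare-bones sketch of the kind of check your monitoring might run. The 10% threshold is an arbitrary example; in practice you'd feed these numbers into whatever graphing and alerting stack you already use:

    # Bare-bones memory watchdog sketch: warn before the OOM killer has to act.
    # MemAvailable is the kernel's estimate of memory available for new work
    # without swapping (present in /proc/meminfo since Linux 3.14).

    def meminfo_kib(field):
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(field + ":"):
                    return int(line.split()[1])
        raise RuntimeError(f"{field} not found in /proc/meminfo")

    total = meminfo_kib("MemTotal")
    available = meminfo_kib("MemAvailable")
    pct = 100 * available / total

    print(f"MemAvailable: {available // 1024} MiB ({pct:.1f}% of total)")
    if pct < 10:  # arbitrary example threshold
        print("WARNING: low memory -- investigate before the OOM killer does")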