Burst memory usage in Java

The Sun/Oracle JVM does not return unneeded memory to the system. If you give it a large maximum heap size, and you actually use that heap space at some point, the JVM won't give it back to the OS for other uses. Other JVMs will do that (JRockit used to, but I don't believe it does any more).
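You can observe this from inside the process: the standard Runtime API reports how much memory the JVM has claimed from the OS versus how much of that it is actually using. A minimal sketch (the class name is just for illustration):

    public class HeapStats {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024 * 1024;
            // totalMemory() is what the JVM has claimed from the OS; after a
            // usage spike it stays high even if most of it is free internally
            System.out.println("max (-Xmx): " + rt.maxMemory() / mb + " MB");
            System.out.println("claimed:    " + rt.totalMemory() / mb + " MB");
            System.out.println("free:       " + rt.freeMemory() / mb + " MB");
        }
    }

Run it after a burst of allocation and you'll typically see the "claimed" figure stay at its high-water mark even though most of it is "free".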

So, for Oracle's JVM you need to tune your app and your system for peak usage; that's just how it works. If the memory you're using can be managed with byte arrays (such as when working with images), then you can use mapped byte buffers instead of Java byte arrays. Mapped byte buffers are allocated straight from the system and are not part of the heap. When you free these objects up (and, I believe, once they have been GC'd, though I'm not sure), the memory will be returned to the system. You'll likely have to experiment with that, assuming it's applicable to your case at all.
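As a rough sketch of that approach, a file-backed mapped buffer behaves like a large, off-heap byte array. The file name and size below are arbitrary, and exactly when the mapping is released is up to the GC:

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class MappedBufferDemo {
        public static void main(String[] args) throws IOException {
            Path scratch = Path.of("scratch.bin");
            try (FileChannel ch = FileChannel.open(scratch,
                    StandardOpenOption.CREATE,
                    StandardOpenOption.READ,
                    StandardOpenOption.WRITE)) {
                // 256 MB mapped straight from the OS, outside the Java heap
                MappedByteBuffer buf =
                        ch.map(FileChannel.MapMode.READ_WRITE, 0, 256L * 1024 * 1024);
                buf.putInt(0, 42);               // use it like a big byte array
                System.out.println(buf.getInt(0));
                // once 'buf' becomes unreachable and is eventually collected,
                // the mapping is unmapped and the memory goes back to the OS
            }
        }
    }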


Once the program terminates, does the memory usage go down in Task Manager on Windows? I think the memory is being released, but is not shown as released by the default task monitors of the OS you are using. See this related question on C++: Problem with deallocating vector of pointers


... but it always seems to me that once Java touches some memory, it's gone forever. You will never get it back.

It depends on what you mean by "gone forever".

I've also heard it said that some JVMs do give memory back to the OS when they are ready and able to. Unfortunately, given the way that the low-level memory APIs typically work, the JVM has to give back entire segments, and it tends to be complicated to "evacuate" a segment so that it can be given back.

But I wouldn't rely on that ... because there are various things that could prevent the memory from being given back. The chances are that the JVM won't give the memory back to the OS. But it is not "gone forever" in the sense that the JVM will continue to make use of it. Even if the JVM never approaches the peak usage again, all of that memory will help to make the garbage collector run more efficiently.

In that case, you have to make sure your peak memory is never too high, or your application will continually eat up hundreds of MB of RAM.

That is not true. Assuming that you are adopting the strategy of starting with a small heap and letting it grow, the JVM won't ask for significantly more memory than the peak memory. The JVM won't continually eat up more memory ... unless your application has a memory leak and (as a result) its peak memory requirement has no bound.
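Concretely, that strategy is just a matter of the standard heap-sizing flags; for example (MyApp is a hypothetical main class):

    # start with a 64 MB heap and let it grow on demand, up to a 2 GB ceiling
    java -Xms64m -Xmx2g MyApp

The JVM only claims more memory from the OS as the application's demand pushes the heap towards that ceiling.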

(The OP's comments below indicate that this is not what he was trying to say. Even so, it is what he did say.)


On the topic of garbage collection efficiency, we can model the cost of a run of an efficient garbage collector as:

cost ~= (amount_of_live_data * W1) + (amount_of_garbage * W2)

where W1 and W2 are (we assume) constants that depend on the collector. (Actually, this is an over-simplification. The first part is not a linear function of the number of live objects. However, I claim that it doesn't matter for the following.)

The efficiency of the collector can then be stated as:

efficiency = cost / amount_of_garbage_collected

which (if we assume that the GC collects all of the garbage) expands to

efficiency ~= (amount_of_live_data * W1) / amount_of_garbage + W2.

When the GC runs,

heap_size ~= amount_of_live_data + amount_of_garbage

so

efficiency ~= W1 * (amount_of_live_data / (heap_size - amount_of_live_data) )
              + W2.

In other words:

  • as you increase the heap size, the efficiency tends to a constant (W2), but
  • you need a large ratio of heap_size to amount_of_live_data for this to happen.
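Plugging in some made-up numbers makes this concrete. Assume 100 MB of live data and, purely for illustration, W1 = W2 = 1:

    heap_size =  200 MB:  efficiency ~= 1 * (100 / 100)  + 1 = 2.0
    heap_size =  600 MB:  efficiency ~= 1 * (100 / 500)  + 1 = 1.2
    heap_size = 1100 MB:  efficiency ~= 1 * (100 / 1000) + 1 = 1.1

(Remember that "efficiency" here is cost per unit of garbage collected, so lower is better.) The cost per collected byte approaches W2, but only once the heap is many times larger than the live data.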

The other point is that for an efficient copying collector, W2 covers just the cost of zeroing the space occupied by the garbage objects in 'from' space. The rest (tracing, copying of live objects to 'to' space, and zeroing the 'from' space that they occupied) is part of the first term of the initial equation; i.e. covered by W1. What this means is that W2 is likely to be considerably smaller than W1 ... and that the first term of the final equation remains significant for longer.

Now obviously this is a theoretical analysis, and the cost model is a simplification of how real garbage collectors work. (And it doesn't take account of the "real" work that the application is doing, or the system-level effects of tying down too much memory.) However, the maths tells me that from the standpoint of GC efficiency, a big heap really does help a lot.


Some JVMs do not, or are not able to, release previously acquired memory back to the host OS when it is no longer needed, because doing so is a costly and complex task. The garbage collector only applies to the heap memory within the Java virtual machine, so it does not give memory back (free() in C terms) to the OS. For example, if a big object is no longer used, the GC will mark its memory as free within the JVM's heap, but that memory will not be released to the OS.

However, the situation is changing: newer collectors such as ZGC will return unused memory to the operating system.
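For example, on JDK 15 or later (where ZGC is production-ready), something like the following should exhibit that behaviour. The uncommit delay shown is ZGC's documented default, and MyApp is a hypothetical main class:

    # ZGC uncommits heap memory that has been unused for ZUncommitDelay seconds
    java -XX:+UseZGC -Xms256m -Xmx4g -XX:ZUncommitDelay=300 MyApp

Note that uncommitting will never shrink the heap below the -Xms value, so keep the initial heap small if you want memory handed back.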