Java: enough free heap to create an object?

freeMemory() isn't quite right. You'd also have to add maxMemory() - totalMemory(). For example, assuming you start the VM with a max heap of 100M, the JVM may at the time of your method call have taken only 50M from the OS. Of that, let's say 30M is actually in use by the JVM. That means freeMemory() will report about 20M free (roughly, because we're only talking about the heap here), but if you try to make your larger object, the JVM will attempt to grab the other 50M its contract allows it to take from the OS before giving up and throwing an error. So you'd actually (theoretically) have 70M available.

To make this more complicated, the 30M reported as in use in the above example includes objects that may be eligible for garbage collection. So you may actually have even more memory available: if the JVM hits the ceiling, it will first try to run a GC to free more memory.
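A minimal sketch of that computation (Runtime is the standard java.lang API; the result is still only an estimate, since it ignores garbage a GC could reclaim):

Runtime rt = Runtime.getRuntime();
// heap the JVM may still grow into, plus heap already allocated but unused
long presumableFree = rt.maxMemory() - rt.totalMemory() + rt.freeMemory();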

You can try to get around this by manually triggering a System.gc(), except that that's not such a terribly good thing to do, because

- it's not guaranteed to run immediately

- it will stop everything in its tracks while it runs

Your best bet (assuming you can't easily rewrite your algorithm to deal with smaller memory chunks, or write to a memory-mapped file, or do something less memory-intensive) might be to make a safe rough estimate of the memory needed and ensure that it's available before you run your function, along the lines of the sketch below.
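For instance, a guard like this (hasRoomFor and SAFETY_MARGIN are hypothetical names for illustration, not an existing API):

static final long SAFETY_MARGIN = 16L * 1024 * 1024; // hypothetical cushion, tune to taste

static boolean hasRoomFor(long estimatedBytes) {
    Runtime rt = Runtime.getRuntime();
    long presumableFree = rt.maxMemory() - rt.totalMemory() + rt.freeMemory();
    return presumableFree > estimatedBytes + SAFETY_MARGIN;
}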


I don't believe there's a reasonable, generic approach to this that could safely be assumed to be 100% reliable. Even the Runtime.freeMemory approach is vulnerable to the fact that you may actually have enough memory after a garbage collection, but you wouldn't know that unless you forced a GC. And then there's no foolproof way to force a GC either. :)

Having said that, I suspect if you really did know approximately how much you needed, and did run a System.gc() beforehand, and you're running in a simple single-threaded app, you'd have a reasonably decent shot at getting it right with the freeMemory() call, something like the sketch below.
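Roughly this (remembering that System.gc() is only a request; roughBytesNeeded and runMemoryHungryStep are hypothetical placeholders for your own estimate and code):

System.gc(); // a request, not a guarantee
long free = Runtime.getRuntime().freeMemory();
if (free > roughBytesNeeded) { // roughBytesNeeded: your own rough estimate
    runMemoryHungryStep();
}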

If any of those constraints fail, though, and you get the OOM error, you're back at square one, and therefore probably no better off than just catching the Error subclass. While there are some risks associated with this (Sun's VM does not make a lot of guarantees about what happens after an OOM... there's some risk of internal state corruption), there are many apps for which just catching it and moving on with life will leave you with no serious harm.
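If you do go that route, the pattern is simply a try/catch around the risky allocation (processInMemory and fallBackToSlowerApproach are hypothetical stand-ins for your two paths):

try {
    int[] data = new int[hugeSize]; // the allocation you're unsure about
    processInMemory(data);
} catch (OutOfMemoryError e) {
    // the VM makes few guarantees about its state after an OOME,
    // so keep this recovery path as simple as possible
    fallBackToSlowerApproach();
}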

A more interesting question in my mind, however, is why there are cases where you do have enough memory to do this and others where you don't. Perhaps some more analysis of the performance trade-offs involved is the real answer?


Definitely, catching Error is the worst approach. An Error is thrown when there is NOTHING you can do about it. You can't even write a log entry; poof, it's "... Houston, we lost the VM".

I didn't quite get the second reason. Was it bad because it is hard to relate SOME_MEMORY to the operations? Could you rephrase it for me?

The only alternative I see is to use the hard disk as the memory (RAM/ROM as in the old days). I guess that is what you're pointing at with your "else slower, less demanding approach".

Every platform has its limits; Java supports as much RAM as your hardware is willing to give (well, actually as much as you allow it by configuring the VM). In Sun's JVM implementation that is done with the -Xmx option, for instance:

java -Xmx8g some.name.YourMemConsumingApp

Of course, you may still end up trying to perform an operation that takes 10 GB of RAM. If that's your case, then you should definitely swap to disk.

Additionally, using the strategy pattern could make for nicer code, although here it may look like overkill:

if (isEnoughMemory(SOME_MEMORY)) {
    strategy = new InMemoryStrategy();
} else {
    strategy = new DiskStrategy();
}

strategy.performTheAction();

But it may help if the "else" involves a lot of code and looks bad. Furthermore, if somehow you can use a third approach (like using a cloud for processing), you can add a third Strategy:

...
strategy = new ImaginaryCloudComputingStrategy();
...

:P
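For completeness, a minimal sketch of what those pieces might look like (MemoryStrategy and both implementations are hypothetical names matching the snippets above, not an existing API):

interface MemoryStrategy {
    void performTheAction();
}

class InMemoryStrategy implements MemoryStrategy {
    public void performTheAction() { /* do all the work in RAM */ }
}

class DiskStrategy implements MemoryStrategy {
    public void performTheAction() { /* stream through a temporary file instead */ }
}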

EDIT

After getting the problem with the second approach: if there are times when you don't know how much RAM is going to be consumed, but you do know how much you have left, you could use a mixed approach (RAM when you have enough, ROM [disk] when you don't).

Suppose this theoretical problem.

Suppose you receive a file from a stream and don't know how big it is.

Then you perform some operation on that stream (encrypt it, for instance).

If you used RAM only it would be very fast, but if the file is large enough to consume all your app's memory, then you have to perform part of the operation in memory, then swap to a file and save temporary data there.

The VM will GC when running out of memory; you get more memory back and then you process the next chunk. And this repeats until the whole stream is processed:

while (!isDone()) {
    if (isMemoryLow()) {
        // e.g. Runtime.getRuntime().freeMemory() < SOME_MEMORY, plus some other validations
        swapToDisk(); // and make sure the swapped resources become GC'able
    }

    byte[] array = new byte[PREDEFINED_BUFFER_SIZE];
    process(array);
}

cleanUp();
