High Memory Usage Using Python Multiprocessing

One potential problem here is that results can come back in any order, but because you're reading them in submission order, the multiprocessing machinery has to buffer every result that arrives early. The higher num_tasks is, the more results it potentially has to hold in memory waiting for your for f in tasks loop to reach them.

In the worst case, the results are computed in exactly reverse order. In that case, all the results must be held in memory by the multiprocessing module before your for f in tasks loop can process the first one.
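
To make the worst case concrete, here is a small illustration (the slow_task function and its sleep times are contrived so that the first-submitted task finishes last):

import multiprocessing as mp
import time

def slow_task(delay):
    time.sleep(delay)
    return delay

if __name__ == "__main__":
    pool = mp.Pool(processes=4)
    # Task 0 sleeps longest, so it finishes last even though it was
    # submitted first.
    tasks = [pool.apply_async(slow_task, (d,)) for d in (4, 3, 2, 1)]
    for f in tasks:
        # The first get() blocks until task 0 finishes; meanwhile the
        # results of tasks 1-3 sit buffered in the pool's result queue.
        print(f.get())
    pool.close()
    pool.join()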

That said, the amount of memory the processes are using does seem higher than I'd expect in this case (more than just storing the 1000-10000 numbers returned by the calculate() function should require), but maybe there's just a high constant overhead per buffered worker result.

Have you tried passing the callback parameter to apply_async, so you can process each result immediately as it completes, or using imap_unordered, so results are handed back to you as soon as they're ready?
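
A minimal sketch of both approaches, assuming the calculate() function from the question (the handle_result name and the pool sizes are just placeholders):

import multiprocessing as mp

def calculate(num):
    return sum(i * i for i in range(num))

def handle_result(result):
    # Runs in a result-handler thread of the parent process as each
    # task completes, so nothing accumulates in submission order.
    print(result)

if __name__ == "__main__":
    # Option 1: a callback on apply_async consumes each result on arrival.
    pool = mp.Pool(processes=2)
    for i in range(1000):
        pool.apply_async(calculate, (i,), callback=handle_result)
    pool.close()
    pool.join()

    # Option 2: imap_unordered yields results in completion order, so the
    # parent never buffers a long backlog of out-of-order results.
    pool = mp.Pool(processes=2)
    for result in pool.imap_unordered(calculate, range(1000)):
        print(result)
    pool.close()
    pool.join()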


I did a lot of research and couldn't find a way to fix the problem per se, but there is a decent workaround that prevents the memory blowout for a small cost, which is especially worthwhile for long-running server-side code.

The solution is essentially to restart individual worker processes after a fixed number of tasks. The Pool class in Python accepts maxtasksperchild as an argument: specifying maxtasksperchild=1000 limits each child process to 1000 tasks, after which the pool replaces it with a fresh one. By choosing a prudent value for the maximum tasks, you can balance the maximum memory consumed against the startup cost of restarting worker processes. The Pool is constructed as:

pool = mp.Pool(processes=2, maxtasksperchild=1000)

I am putting my full solution here so it can be of use to others!

import multiprocessing as mp
import time

def calculate(num):
    # Build the list of squares, sum it, then drop the list.
    squares = [i * i for i in range(num)]   # avoid shadowing the num argument
    s = sum(squares)
    del squares     # deleting the list explicitly is optional
    return s

if __name__ == "__main__":

    # fix is in the following line #
    pool = mp.Pool(processes=2, maxtasksperchild=1000)

    time.sleep(5)
    print("launching calculation")
    num_tasks = 1000
    tasks = [pool.apply_async(calculate, (i,)) for i in range(num_tasks)]
    for f in tasks:
        print(f.get(5))     # wait up to 5 seconds for each result
    print("calculation finished")
    time.sleep(10)
    print("closing pool")
    pool.close()
    print("closed pool")
    print("joining pool")
    pool.join()
    print("joined pool")
    time.sleep(5)
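
As a side note, nothing stops you from combining this with the imap_unordered suggestion from the other answer; a sketch, assuming the same calculate function:

import multiprocessing as mp

def calculate(num):
    return sum(i * i for i in range(num))

if __name__ == "__main__":
    # maxtasksperchild caps per-worker memory growth, while imap_unordered
    # hands back results in completion order so the parent never buffers a
    # long backlog of out-of-order results.
    pool = mp.Pool(processes=2, maxtasksperchild=1000)
    for result in pool.imap_unordered(calculate, range(1000)):
        print(result)
    pool.close()
    pool.join()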