Using 100% of all cores with the multiprocessing module

To use 100% of all cores, do not repeatedly create and destroy processes: startup and teardown are pure overhead during which cores sit idle.

Create a few processes per core and link them with a pipeline.

At the OS level, all pipelined processes run concurrently.

The less you write (and the more you delegate to the OS), the more likely you are to use as many resources as possible.

python p1.py | python p2.py | python p3.py | python p4.py ...

will make maximal use of your CPU.
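For illustration, each stage of such a pipeline is just a script that reads records from stdin, does its share of the work, and writes results to stdout. The sketch below is a hypothetical stage; the file name p2.py and the uppercasing "work" are placeholders, not part of the original answer:

# p2.py -- a minimal pipeline stage (hypothetical example)
import sys


def work(line):
    # Placeholder for the real per-record computation.
    return line.upper()


if __name__ == '__main__':
    for line in sys.stdin:
        sys.stdout.write(work(line))

Each stage blocks only when its stdin is empty or its stdout is full, so as long as data is flowing the OS keeps every stage, and thus every core, busy.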


You can use psutil to pin each process spawned by multiprocessing to a specific CPU:

import multiprocessing as mp
import psutil


def spawn():
    procs = []
    n_cpus = psutil.cpu_count()  # logical CPUs by default
    for cpu in range(n_cpus):
        # Pass each child the single CPU it should be pinned to.
        p = mp.Process(target=run_child, kwargs={'affinity': [cpu]})
        p.start()
        procs.append(p)
    for p in procs:
        p.join()
        print('joined')


def run_child(affinity):
    proc = psutil.Process()  # handle to the current (child) process
    print(f'PID: {proc.pid}')
    print(f'Affinity before: {proc.cpu_affinity()}')
    proc.cpu_affinity(affinity)  # pin this process to the given CPU
    print(f'Affinity after: {proc.cpu_affinity()}')


if __name__ == '__main__':
    spawn()
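Pinning one worker per logical CPU stops the scheduler from migrating workers between cores, which can improve cache locality for CPU-bound work. If you would rather pin one worker per physical core, psutil.cpu_count(logical=False) returns the physical count.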

Note: psutil.Process.cpu_affinity is not available on macOS; the snippet above works on Linux and Windows.