Using python's multiprocessing on slurm

Your current code will run 10 processes on 5 processors, on a SINGLE node — the one where you start it. It has nothing to do with SLURM yet.

You will have to SBATCH the script to SLURM.

If you want to run this script on 5 cores with SLURM, modify the script like this:

#!/usr/bin/python3

#SBATCH --output=wherever_you_want_to_store_the_output.log
#SBATCH --partition=whatever_the_name_of_your_SLURM_partition_is
#SBATCH -n 5 # 5 cores

import sys
import os
import multiprocessing

# Necessary to add cwd to path when the script is run
# by SLURM (since it executes a copy)
sys.path.append(os.getcwd())

def hello():
    print("Hello World")

jobs = []
for j in range(10):
    p = multiprocessing.Process(target=hello)
    jobs.append(p)
    p.start()

# Wait for all processes to finish
for p in jobs:
    p.join()

And then execute the script with

sbatch my_python_script.py

on one of the nodes where SLURM is installed.

However, this will allocate your job to a SINGLE node as well, so the speed will be the very same as if you just ran it on a single node without SLURM.

I don't know why you would want to run it on different nodes when you have just 5 processes; it will be faster to just run on one node. If you allocate more than 5 cores at the beginning of the Python script, then SLURM will allocate more nodes for you.
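For example, a header requesting 20 cores might look like this (the log path and partition name are placeholders, just as in the script above):

```python
#!/usr/bin/python3

#SBATCH --output=wherever_you_want_to_store_the_output.log
#SBATCH --partition=whatever_the_name_of_your_SLURM_partition_is
#SBATCH -n 20 # 20 cores; if no single node has that many free,
              # SLURM may spread the allocation over several nodes
```

Keep in mind that Python's multiprocessing can still only use the cores of the node it runs on, even if the allocation spans several nodes.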


Just a hint: you need to understand what a core, thread, socket, CPU, node, task, job, and job step are in SLURM.

If there is absolutely no interaction between the copies of your script, just use:

srun -n 20 python serial_script.py

SLURM will allocate resources for you automatically.
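The serial script itself needs no SLURM-specific code. If you want each of the 20 copies to do different work, you can read the task's rank from the SLURM_PROCID environment variable that srun sets for every task (the per-chunk logic here is a hypothetical illustration):

```python
import os

# srun sets SLURM_PROCID to the task's rank (0..19 for -n 20).
# Default to 0 so the script also runs outside of SLURM.
task_id = int(os.environ.get("SLURM_PROCID", "0"))

# Hypothetical: each copy handles its own slice of the work.
print(f"Task {task_id} processing chunk {task_id}")
```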

If you want to run 4 tasks on 4 nodes, with each task using 5 cores, you can use this command:

srun -n 4 -c 5 -N 4 --cpu_bind=verbose,nodes python parallel_5_core_script.py

It will run 4 tasks (-n 4) on 4 nodes (-N 4). Each task will have 5 cores (-c 5). The --cpu_bind=verbose,nodes option binds each task to its own node (nodes) and prints the actual CPU binding (verbose).
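As a sketch of what parallel_5_core_script.py might look like (the script name and the work function are assumptions), each of the 4 tasks runs its own independent 5-worker pool:

```python
import multiprocessing

def square(x):
    # Stand-in for the real per-item work.
    return x * x

if __name__ == "__main__":
    # Match the pool size to the 5 cores requested with -c 5.
    with multiprocessing.Pool(processes=5) as pool:
        results = pool.map(square, range(10))
    print(results)
```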

However, there might be some weird behavior in CPU binding if your SLURM is configured differently from mine; sometimes it is very tricky. And Python's multiprocessing module does not seem to work well with SLURM's resource management, as indicated in your link.