Multithreading problems when running Julia on a Slurm cluster

Hi! I am confused about running Julia code on a Slurm cluster.
I am solving a nonlinear PDE, so I have to solve it step by step. In each step a linear system Ax = b (from FEM, so A is sparse) is solved with `\`, which under the hood uses the SuiteSparse library. On my personal laptop and on the desktop in my office, `\` is automatically multithreaded even when run with `julia -t 1`.
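
For context, each incremental step essentially does the following (a minimal sketch; the random diagonally dominant matrix just stands in for my assembled FEM stiffness matrix):

using LinearAlgebra, SparseArrays

# Stand-in for the assembled sparse FEM stiffness matrix
n = 10_000
A = sprandn(n, n, 1e-3) + 100I
b = rand(n)

# `\` on a SparseMatrixCSC dispatches to SuiteSparse (UMFPACK here), whose
# dense kernels call BLAS and may therefore use several threads even when
# Julia itself is started with `julia -t 1`.
x = A \ b

@show Threads.nthreads()      # Julia threads (set by -t / --threads)
@show BLAS.get_num_threads()  # BLAS threads (a separate setting)
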
Today I tried to run my code on a Slurm cluster. When I requested 4 cores on a single node, the output file was weird: it looks like my code is running on each core independently. Attached are my script for submitting the job to the cluster and the output file.

#!/bin/bash
#SBATCH -J Test 
#SBATCH -p intel  
#SBATCH -N 1 
#SBATCH -n 4 
#SBATCH --ntasks-per-node=4   

hostname
srun julia -t 4 StaMmdSys.jl
sleep 100

Here is the weird output file, in which each incremental step and iteration step is computed several times:

cpui03
Num Ste 1
Num Ste 1
Num Ste 1
Num Ste 1
  Num Ite 1
    ResErr 1.1815291149415338e-26 LoaFac 31.340781306503
    Max Dam 0.0 Min Deg 1.000000000000001
Num Ste 2
  Num Ite 1
    ResErr 1.1815291149415338e-26 LoaFac 31.340781306503
    Max Dam 0.0 Min Deg 1.000000000000001
Num Ste 2
  Num Ite 1
    ResErr 1.1815291149415338e-26 LoaFac 31.340781306503
    Max Dam 0.0 Min Deg 1.000000000000001
  Num Ite 1
    ResErr 5.613078015512632e-27 LoaFac 62.681562613006
    Max Dam 0.0 Min Deg 1.000000000000001
Num Ste 2
Num Ste 3
  Num Ite 1
    ResErr 1.1815291149415338e-26 LoaFac 31.340781306503
    Max Dam 0.0 Min Deg 1.000000000000001
  Num Ite 1
    ResErr 5.613078015512632e-27 LoaFac 62.681562613006
    Max Dam 0.0 Min Deg 1.000000000000001
Num Ste 3
Num Ste 2
  Num Ite 1
    ResErr 3.098702727267662e-7 LoaFac 94.022343919509
    Max Dam 0.0008378798567051027 Min Deg 0.983676759929105
Num Ste 4
  Num Ite 1
    ResErr 3.098702727267662e-7 LoaFac 94.022343919509
    Max Dam 0.0008378798567051027 Min Deg 0.983676759929105
Num Ste 4
  Num Ite 1
    ResErr 1.565185266046758e-5 LoaFac 125.36136059678219
    Max Dam 0.010911503021434944 Min Deg 0.8866271918773956
  Num Ite 1
    ResErr 5.613078015512632e-27 LoaFac 62.681562613006
    Max Dam 0.0 Min Deg 1.000000000000001
Num Ste 3
  Num Ite 1
    ResErr 5.613078015512632e-27 LoaFac 62.681562613006
    Max Dam 0.0 Min Deg 1.000000000000001
Num Ste 3
  Num Ite 1
    ResErr 1.565185266046758e-5 LoaFac 125.36136059678219
    Max Dam 0.010911503021434944 Min Deg 0.8866271918773956
  Num Ite 1
    ResErr 3.098702727267662e-7 LoaFac 94.022343919509
    Max Dam 0.0008378798567051027 Min Deg 0.983676759929105
  Num Ite 1
    ResErr 3.098702727267662e-7 LoaFac 94.022343919509
    Max Dam 0.0008378798567051027 Min Deg 0.983676759929105
  Num Ite 2
    ResErr 1.645042974830353e-7 LoaFac 125.17614239795542
    Max Dam 0.01105091853894029 Min Deg 0.8842490683695091
Num Ste 4
Num Ste 4
  Num Ite 2
    ResErr 1.645042974830353e-7 LoaFac 125.17614239795542
    Max Dam 0.01105091853894029 Min Deg 0.8842490683695091
Num Ste 5
  Num Ite 1
    ResErr 2.9130673485754505e-5 LoaFac 156.467617106916
    Max Dam 0.04471510355125102 Min Deg 0.7560656522648532
  Num Ite 1
    ResErr 1.565185266046758e-5 LoaFac 125.36136059678219
    Max Dam 0.010911503021434944 Min Deg 0.8866271918773956
Num Ste 5
  Num Ite 2
    ResErr 1.645042974830353e-7 LoaFac 125.17614239795542
    Max Dam 0.01105091853894029 Min Deg 0.8842490683695091
Num Ste 5
  Num Ite 1
    ResErr 1.565185266046758e-5 LoaFac 125.36136059678219
    Max Dam 0.010911503021434944 Min Deg 0.8866271918773956
  Num Ite 1
    ResErr 2.9130673485754505e-5 LoaFac 156.467617106916
    Max Dam 0.04471510355125102 Min Deg 0.7560656522648532
  Num Ite 2
    ResErr 3.927810503996626e-6 LoaFac 155.8450166806799
    Max Dam 0.056704006161101106 Min Deg 0.7223674694734912
  Num Ite 2
    ResErr 3.927810503996626e-6 LoaFac 155.8450166806799
    Max Dam 0.056704006161101106 Min Deg 0.7223674694734912
  Num Ite 3
    ResErr 5.410640813519663e-7 LoaFac 155.657378580163
    Max Dam 0.060797662055178815 Min Deg 0.7137250018025755

A normal output file should look like this (obtained from the cluster with only 1 core):

cpui03
Num Ste 1
  Num Ite 1
    ResErr 1.1815291149415338e-26 LoaFac 31.340781306503
    Max Dam 0.0 Min Deg 1.000000000000001
Num Ste 2
  Num Ite 1
    ResErr 5.613078015512632e-27 LoaFac 62.681562613006
    Max Dam 0.0 Min Deg 1.000000000000001
Num Ste 3
  Num Ite 1
    ResErr 3.098702727267662e-7 LoaFac 94.022343919509
    Max Dam 0.0008378798567051027 Min Deg 0.983676759929105
Num Ste 4
  Num Ite 1
    ResErr 1.565185266046758e-5 LoaFac 125.36136059678219
    Max Dam 0.010911503021434944 Min Deg 0.8866271918773956
  Num Ite 2
    ResErr 1.645042974830353e-7 LoaFac 125.17614239795542
    Max Dam 0.01105091853894029 Min Deg 0.8842490683695091
Num Ste 5
  Num Ite 1
    ResErr 2.9130673485754505e-5 LoaFac 156.467617106916
    Max Dam 0.04471510355125102 Min Deg 0.7560656522648532
  Num Ite 2
    ResErr 3.927810503996626e-6 LoaFac 155.8450166806799
    Max Dam 0.056704006161101106 Min Deg 0.7223674694734912
  Num Ite 3
    ResErr 5.410640813519663e-7 LoaFac 155.657378580163
    Max Dam 0.060797662055178815 Min Deg 0.7137250018025755
Num Ste 6
  Num Ite 1
    ResErr 7.935999964334881e-5 LoaFac 186.77680857130252
    Max Dam 0.1238986630949555 Min Deg 0.5772076022373513
  Num Ite 2
    ResErr 1.1999112834906476e-5 LoaFac 185.54765605633034
    Max Dam 0.15347340755070013 Min Deg 0.5289334799119432
  Num Ite 3
    ResErr 1.5354390771604877e-6 LoaFac 185.0418806205562
    Max Dam 0.16195976821638491 Min Deg 0.5141634286765151
  Num Ite 4
    ResErr 4.0455113498668363e-7 LoaFac 184.83622847734313
    Max Dam 0.16516550863977345 Min Deg 0.5067114390938846

I wonder: do I have to modify my code to run it properly on a Slurm cluster? Or should I submit the job to the cluster in a different manner?

This is how I’ve set up my Slurm scripts:

#SBATCH --mail-type=END,FAIL
#SBATCH --mem-per-cpu=4GB
#SBATCH --ntasks=1
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4

julia --threads ${SLURM_CPUS_PER_TASK} scripts/my_script.jl

and then scripts/my_script.jl would be my Julia program that uses several threads. It looks like mine has 1 task and 4 CPUs per task, while yours has 4 tasks. Not sure if that is the problem.
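
If it helps with debugging, a tiny script like this (a hypothetical check_threads.jl, submitted with the same sbatch header) shows whether the allocation is what you expect:

using LinearAlgebra

# Should print exactly once with --ntasks=1, and once per task otherwise
println("host = ", gethostname())
println("Julia threads = ", Threads.nthreads())        # from --threads / -t
println("BLAS threads  = ", BLAS.get_num_threads())    # separate BLAS setting
println("SLURM_CPUS_PER_TASK = ", get(ENV, "SLURM_CPUS_PER_TASK", "unset"))
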


My Slurm scripts look similar to @kopperud’s, if that is of any help.

BLAS is multithreaded and has its own separate controls. You might want to do something like this when you plan to multithread yourself:

using LinearAlgebra
BLAS.set_num_threads(1)

There is also an environment variable, but I forget how to spell it.
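
(If I remember right, for the default OpenBLAS that ships with Julia it is OPENBLAS_NUM_THREADS.) A rough, untested sketch of combining both settings in the sbatch script, assuming --cpus-per-task is set as in @kopperud’s example:

export OPENBLAS_NUM_THREADS=1                        # limit BLAS to one thread
julia -t ${SLURM_CPUS_PER_TASK} scripts/my_script.jl # Julia's own threads get the allocated CPUs
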

Thanks for your reply! It works for me :)

Yes it works!

Does BLAS also apply to SparseMatrixCSC? I used to think that solving the sparse linear system is handled by SuiteSparse.

The question has already been answered, but since this also bit me in the past, I just wanted to add some background, which might help others. The original weird output is related to the combination of --ntasks-per-node=4 and the srun command in the original script.

If you tell sbatch to allocate 4 tasks, it will make sure you can run up to 4 “things” (for lack of a better word, but it’s what is called a “process” or “job” in other contexts) at the same time on the allocated node(s). The “tasks” are not (directly) the number of available threads or cores; those are called CPUs by Slurm. But each task will require at least one CPU (you can set the number with --cpus-per-task).
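
Schematically (a sketch, not taken from the original script):

#SBATCH --ntasks=4          # up to 4 "things" can run at the same time
#SBATCH --cpus-per-task=2   # each task gets 2 CPUs (cores/hardware threads)
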

srun then runs job steps and, by default, uses the same number of tasks that you specified in the script. So the single call to srun in the original script simply does the same thing four times in parallel (e.g. srun echo "hello" prints four "hello"s if you have --ntasks=4, or four "hello"s per allocated node if you have --ntasks-per-node=4).
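
To make the duplication concrete (again just a toy sketch):

#!/bin/bash
#SBATCH --ntasks-per-node=4

# A bare srun starts the command once per allocated task:
srun echo "hello"   # prints "hello" four times
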

You could specify the number of tasks for each srun command manually, e.g.

...
#SBATCH --ntasks=4
...

# this will do the computation only once and have `--cpus-per-task` many threads available
srun -n 1 julia my_script.jl

# or if you want to start four independent calculations in the background (in parallel) and wait for them to finish
srun -n 1 julia my_script1.jl &
srun -n 1 julia my_script2.jl &
srun -n 1 julia my_script3.jl &
srun -n 1 julia my_script4.jl &
wait

Some useful links and excerpts from the Slurm docs:


Thanks for your clear explanation! It is indeed helpful to me :grinning:
