Help setting up Julia on a cluster

I am trying Julia for the first time on an HPC cluster at my university. Julia v0.6 is already installed, and I have a few questions:

  1. How do I install packages into a custom folder on the cluster (other than .julia/v0.6), and without an internet connection?
  2. The cluster uses PBS for resource management. I found the ClusterManagers.jl package, but I am not sure what the workflow is. Do I still need to write a PBS script and call Julia from there?
2 Likes

To address 1, assuming you have a networked file system, you should be able to put your packages in a directory accessible by every node and then add

prepend!(LOAD_PATH, ["/some/path"])

in a .juliarc.jl file on the head node.
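If you also want Pkg itself to install packages into that shared location rather than ~/.julia, I believe setting JULIA_PKGDIR in the same file works on v0.6. A minimal sketch (the paths are placeholders; adjust for your cluster):

# ~/.juliarc.jl (Julia 0.6) -- runs at startup
ENV["JULIA_PKGDIR"] = "/some/path/.julia"    # Pkg installs and finds packages here instead of ~/.julia
prepend!(LOAD_PATH, ["/some/path"])          # extra directories searched by using/import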

1 Like

https://docs.julialang.org/en/latest/manual/packages/#Offline-Installation-of-Packages-1

@juliohm I asked a similar question recently. You might find this useful:

1 Like

@juliohm WRT your second question, an expert will be along in a minute.
The examples from ClusterManagers.jl follow the paradigm that you start a Julia session, then use ClusterManagers to submit a job (or start processes locally using affinity). That job will be started with the number of cores you request, in the queue you request, and with the other resources you request in the qsub_env.
I think I may well have this totally wrong, but if you want this to run entirely in batch mode you would either have to start the main script with the master process in a terminal inside a screen session and leave it running,
or you could qsub the main/master script, which in turn submits the ClusterManagers job (see the sketch below).
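For the second option, the master script might look something like this (an untested sketch; check the ClusterManagers.jl README for the exact keyword arguments):

# master.jl -- untested sketch: run this on the head node or qsub a tiny job that runs it;
# ClusterManagers then generates and submits the qsub request for the workers.
using ClusterManagers
addprocs_pbs(48)          # ask PBS for 48 worker processes and wait for them to connect
@everywhere println("worker $(myid()) running on $(gethostname())")
rmprocs(workers())        # release the workers when done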

1 Like

I recently went through the same learning process on my university’s cluster, and your second question was the hardest part for me. I found the easiest way was to submit to one of the queues using a PBS submit script that tells Julia where to find all the processors via the PBS nodefile. Here’s a minimum working example PBS script:

#!/bin/bash
#PBS -l nodes=4:ppn=12,walltime=00:05:00
#PBS -N test_julia
#PBS -q debug

echo PBS: node file is $PBS_NODEFILE
julia --machinefile=$PBS_NODEFILE /path/to/your/home/dir/test_julia.jl
echo "finished"

I called this file test_julia.pbs. When you submit this job (i.e., by running qsub test_julia.pbs from your login prompt) the Julia process starts up with all the processors (48 in this case) available, as if you’d started it on your laptop with julia -p 2 or whatever. For completeness, here’s test_julia.jl, which has minimum working examples for basic batch-processing tasks:

println("Hello from Julia")
np = nprocs()
println("Number of processes: $np")

for i in workers()
    host, pid = fetch(@spawnat i (gethostname(), getpid()))
    println("Hello from process $(pid) on host $(host)!")
end

tasks = randn(np * 30)

@everywhere begin
    function foo(x)
        return x * 4
    end
end

results = pmap(foo, tasks)

println(round(results, 3))

for i in workers()
    rmprocs(i)
end

Hope that helps!

16 Likes

Thank you @ElOceanografo, that is really helpful already. I can give it a try and get things moving while I digest the workflow, possibly with ClusterManagers.jl.

1 Like

@ElOceanografo, I am getting an error saying that the file /home/juliohm/.ssh cannot be created. This makes sense because on the cluster I am trying to use, users cannot write to /home. Did anyone have a similar issue?

Also, to request more than one node on a cluster, do we need to install MPI.jl? Or has MPI.jl been deprecated in favor of ClusterManagers.jl?

If MPI.jl is required, that is bad news. It only supports Julia v0.4 according to the README.

The file permission error is something I would ask your cluster’s admin about. On the cluster I’m using, we’re given write access to our home directories, so I didn’t run into this problem.

As to your second question, MPI.jl is not required. You don’t need to use ClusterManagers either if you don’t want to. As I understand it, when you run addprocs_pbs() from that package, it just auto-generates the appropriate PBS commands within Julia and sends them off via qsub. I tried this approach and couldn’t get it to work…addprocs_pbs would hang. I didn’t spend much time investigating why, because the slightly-less-elegant solution I posted above, with a separate submit script, worked fine.

I’m not actually an expert in any of this, though, so if other, more-informed people want to chime in that would be great.

I asked the cluster’s admin and he mentioned that the cluster doesn’t use SSH for communication, which I think is bad news for me. Julia will only be able to use one node in this case. Please correct me if I am wrong.

He also mentioned that an MPI manager is required to have processes running across different nodes. Is there any documentation on how these concepts are connected?

Ah…you may have to use MPI after all, in that case. I would see if you can get any of the MPI.jl examples here to run; you’ll use a qsub submit script that looks something like this:

#!/bin/bash
#PBS -l nodes=4:ppn=12,walltime=00:05:00
#PBS -N test_julia
#PBS -q debug

module load openmpi
mpirun -np 48 julia /path/to/your/home/dir/01-hello.jl
echo "finished"

You may need to change module load openmpi to refer to whatever version of MPI is available on your cluster (or you may be able to omit it if MPI is loaded by default). I don’t know if this will work, but I have used a similar pattern to successfully run Python programs using mpi4py, the package which inspired MPI.jl.
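For reference, 01-hello.jl is essentially a plain MPI hello world. A rough sketch of it using the old MPI.jl API (double-check against the examples in the repo):

# hello_mpi.jl -- minimal MPI.jl hello world (sketch; see the MPI.jl examples for the exact API)
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)      # this process's rank within the communicator
nranks = MPI.Comm_size(comm)    # total number of MPI ranks launched by mpirun
println("Hello from rank $rank of $nranks on $(gethostname())")
MPI.Barrier(comm)
MPI.Finalize()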

Yes, I used mpi4py in the past. I assumed that most people doing HPC in Julia were using it as well, but apparently all these threads refer to SSH communication? What about the Celeste project? Do you know if they used MPI?

It looks like they used a new-to-me library called Gasp.jl, which wraps a C library, also called Gasp, which does rely on MPI.

It looks like I misinterpreted the type of parallel computing that Julia supports natively. The documentation compares Julia’s message passing with MPI message passing, which is not a fair comparison at all.

MPI is about distributed-memory systems where nodes communicate through high-performance hardware like InfiniBand technology. As far as I can tell, Julia’s parallel computing features do not support any of that.

So all the beautiful pmap, @parallel and related functions are useless in an HPC cluster with hundreds of nodes? I think we need a generalization of parallel programming paradigms in a package that implements the pmap operation on arbitrary pools of processes. I will try to work on this if I find the time; pmap covers more than 80% of the use cases out there for embarrassingly parallel tasks, and it looks like there is no clean solution yet.

Yes, it does. It is distributed memory across multiple nodes. You can configure how it communicates in more detail through the ClusterManager, I think.
But even if you just open it up with a machine file, it’ll be distributed. Here’s an example of doing it the machine-file way:
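Roughly like this from within Julia (a sketch; it assumes a PBS node file and passwordless SSH between the nodes, which is what the SSH-based workers need):

# sketch: add one worker per entry in the PBS node file, then use pmap as usual
machines = strip.(readlines(ENV["PBS_NODEFILE"]))   # one hostname per allocated core
addprocs(machines)                                   # launches a Julia worker on each entry via SSH
results = pmap(x -> x^2, 1:100)                      # pmap now distributes work across all nodes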

But of course you can add processes any of the other documented ways and it will distribute as well. @oxinabox had a good recent blog post on a few different methods for doing this:

http://white.ucc.asn.au/2017/08/17/starting-workers.html

1 Like

Hi @ChrisRackauckas, thank you for the links, but what about the SSH issue? It seems that not all clusters allow nodes to communicate via SSH. I am going to contact the cluster’s admin again to ask about this in more detail.

Can confirm what @ChrisRackauckas said, since I’ve run code on multiple processors using pmap myself. It does sound like the SSH restriction may be the issue on your cluster…so following up with the admin is probably the best next step.

I want to add an update and solution to this thread.

The answer by @ElOceanografo is great, especially if you are interested in fine control over the cluster’s resource manager. I would say that the cleanest solution nowadays is the ClusterManagers.jl package.

Basically, the package defines functions addprocs_slurm(), addprocs_pbs(), etc., which can be used in place of the built-in addprocs(). When you start Julia, instead of asking for processes with addprocs(), you simply do:

using ClusterManagers

addprocs_pbs(100) # request 100 processes in the cluster using PBS

# REST OF SCRIPT GOES HERE

This will take care of submitting the job with the appropriate resource manager without having to write the job script manually.

2 Likes

Hello, does anyone have experience running Julia on a cluster under the Torque scheduler? ClusterManagers with addprocs_pbs() does not work for me.

Have you tried this in Julia 1.0.0? I am unable to figure it out, though the docs have something about the cluster manager interface. I can’t find examples of how to implement it. Any ideas?