ClusterManagers package not connecting for PBS batch system

I am having issues getting the ClusterManagers package to work on an SGI cluster. I have referenced the following discussion heavily but to no avail:

At this point, I have edited the ~/.julia/v0.6/ClusterManagers/src/qsub.jl file so that my qsub statement works: the new job launches and runs, but my current Julia session never connects to it. In reviewing the qsub.jl script, I noticed that it waits for a SET of files to be created, with base names defined by this line in the script:

filename(i) = isPBS ? "$home/julia-$(getpid()).o$id-$i" : "$home/julia-$(getpid()).o$id.$i"

However, when I look in my home directory, only one file has been created for the whole node, not individual files for each core. Since the filename is supposed to vary with $i for each core on the node, this single file can never satisfy the set of files the script is waiting for.
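
As I read it, the waiting logic is roughly the following (my paraphrase, not the exact source; np is the requested worker count and filename is the line quoted above):

# Paraphrased sketch of the wait loop in qsub.jl: one output file is
# expected per array-job task, and the manager blocks until each file
# exists before it connects to that worker.
for i in 1:np
    fname = filename(i)
    while !isfile(fname)   # never satisfied when only one combined file is written
        sleep(1.0)
    end
    # ... worker connection info is then read from fname ...
end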

So at this point, without being enough of an expert to trace through the entire ClusterManagers program, I am not sure what to do next.

Is there some simple way to get my PBS job to write a file for each core?

Or is there a simple mod I can make to the qsub.jl file to get it to connect and move on without having individual files for each core?

Just for reference, my edits to the qsub.jl file are as follows:

Changed the line:

qsub -N $jobname -j oe -k o -t 1-$np $queue $qsub_env $res_list :

to:

qsub -l select=1:ncpus=36:mpiprocs=36 -q debug -A ERDCV02221SPS -N $jobname -j oe -k o -l walltime=0:30:00 :

I will also note that I haven't been able to figure out what the -k option does in the PBS line. It does not appear as an option in our machine's documentation, but qsub does accept it, since the job enters run status.

I know parallel stuff on different systems can sometimes be tough to diagnose, but I'd be very grateful for some help. I'm one of the few Julia users on our machine at this point, and our machines typically rank in the top 50 worldwide, so there's definite potential for Julia expansion here if I can demonstrate to colleagues that it's a viable alternative.

V/R,

-Bob Browning

Bump…

It seems that something is going on on the cluster side, since there should be an output file for each core. You can test that without ClusterManagers like this:

qsub -N mytest -j oe -k o -t 1-5 -cwd ./test

Five files should be generated in the directory you run it from.

If I enter the command exactly as you’ve shown it, I get the following error:

qsub -N mytest -j oe -k o -t 1-5 -cwd ./test
qsub: invalid option -- 't'
usage: qsub [-a date_time] [-A account_string] [-c interval]
[-C directive_prefix] [-e path] [-f ] [-h ] [-I [-X]] [-j oe|eo] [-J X-Y[:Z]]
[-k o|e|oe] [-l resource_list] [-m mail_options] [-M user_list]
[-N jobname] [-o path] [-p priority] [-P project] [-q queue] [-r y|n]
[-S path] [-u user_list] [-W otherattributes=value...]
[-v variable_list] [-V ] [-z] [script | -- command [arg1 ...]]
qsub --version

I know that the command will not work as-is anyway without other input that our machine requires. Our system expects a select statement specifying the number of nodes and cores per node, as well as the queue (debug, standard, etc.), the requested walltime, and the account ID for our allocation.

After adding all of this, I tried the command again and got the same error; the -t option is not recognized by our system:

qsub -l select=1:ncpus=36:mpiprocs=5 -l walltime=01:00:00 -q debug -A MYALLOCATIONID -N mytest -j oe -k o -t 1-5 -cwd ./test
qsub: invalid option -- 't'
usage: qsub [-a date_time] [-A account_string] [-c interval]
[-C directive_prefix] [-e path] [-f ] [-h ] [-I [-X]] [-j oe|eo] [-J X-Y[:Z]]
[-k o|e|oe] [-l resource_list] [-m mail_options] [-M user_list]
[-N jobname] [-o path] [-p priority] [-P project] [-q queue] [-r y|n]
[-S path] [-u user_list] [-W otherattributes=value...]
[-v variable_list] [-V ] [-z] [script | -- command [arg1 ...]]
qsub --version

Thoughts? Suggestions?

The -t option stands for array jobs. No other ideas. Perhaps using the cluster without ClusterManagers would be better in this situation.

@Janis_Erdmanis that doesn't work either. Without ClusterManagers it hangs and fails if I try to run on more than one node. It will run in parallel on a single node when launched with julia -p in the PBS script. Basically, it appears to work only with shared memory and does not work with MPI.

I have tried using the --machinefile option, but that hasn’t worked either.

In addition, none of the documentation I've found on the web for PBS/qsub mentions the -t option. We have multiple large clusters across the country set up with PBS, so it's really strange that ClusterManagers relies on a PBS feature that nobody in my organization seems to have heard of.

Does anyone know who the current primary maintainer of ClusterManagers is? I'm wondering if someone could put me in touch with them, and whether we might be able to modify the script to work on my system.

I just learned how to use the machinefile option on my PBS cluster. Perhaps its configuration is similar to yours. To begin with, I started an interactive job with the command qsub -l nodes=2:ppn=2. In the session I checked that my nodefile was not empty (sometimes an issue for me when submitting jobs) by executing cat $PBS_NODEFILE. Then I tried to ssh into one of the nodes, e.g. ssh n05-01, to see whether I could, but it asked me for a password, so I had to set up passwordless login with ssh-keygen. Once the system was configured properly, julia --machinefile $PBS_NODEFILE from the interactive session worked flawlessly.
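
A quick way to check the passwordless setup from Julia inside the interactive job is something like the following (just a sketch; it assumes $PBS_NODEFILE is set, as it is inside a PBS job):

# Check that every allocated node is reachable without a password prompt.
# `ssh -o BatchMode=yes` fails instead of prompting, so success here means
# julia --machinefile / addprocs will be able to reach the node.
hosts = unique(readlines(ENV["PBS_NODEFILE"]))
for h in hosts
    ok = success(`ssh -o BatchMode=yes $h true`)
    println(h, " => ", ok ? "reachable" : "NOT reachable; check ssh keys")
end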

Personally, I find this method nicer than using ClusterManagers, as it is faster and simpler, so I am now considering putting some commands together to get the same behavior as the ClusterManagers package.

So an alternative to the ClusterManagers approach, one that does not rely on array jobs, is as follows:

### Initializing cluster

# Remove any stale machinefile left over from a previous run.
isfile("machinefile") && rm("machinefile")

# Submit a job whose script dumps the allocated node list to `machinefile`
# and then keeps the allocation alive for as long as that file exists.
open(pipeline(`qsub -l nodes=2:ppn=2`), "w", STDOUT) do io
    println(io, "cat \$PBS_NODEFILE > machinefile; while [ -f machinefile ]; do sleep 1; done")
end

# Poll until the job starts and the machinefile appears.
print("Queuing")
while !isfile("machinefile")
    sleep(1)
    print(".")
end
println("")

# One worker per machinefile entry (PBS repeats each hostname once per core).
cores = [(i, 1) for i in readlines("machinefile")]
clusterworkers = addprocs(cores; topology=:master_slave, dir="~", exename="julia")

### Doing some calculation

pmap(x -> run(`hostname`), 1:10)

### Killing the job and workers
rmprocs(clusterworkers)
rm("machinefile")    # deleting the file lets the submitted job's loop exit

Looking at PBS or Slurm machinefiles, there is a really nice Python utility which parses those files.
Note to self:
a) develop enough google skills to find this package
b) maybe code it in Julia

ps. Regarding ClusterManagers.jl, specifically for PBS: it looks to me like the integration is valid for OpenPBS.
In the past year or so (probably longer than that), PBS Pro has been open-sourced under a dual model - a bit like Julia, really. So I would expect OpenPBS to become a lot less popular.

Here is the Python code I thought of:
https://www.nsc.liu.se/~kent/python-hostlist/

Perhaps it seems trivial, but it can be important when using batch queuing systems. The batch system gives you a list of hostnames and the number of cores you have been allocated on each host; you may have to translate this into a different format that is understood by the code you wish to run, and that is where this utility proves valuable.
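
As a small illustration, collapsing a PBS-style machinefile (one hostname repeated per allocated core) into the (host, count) pairs that addprocs accepts only takes a few lines of Julia; a sketch, assuming that one-hostname-per-line format:

# Turn a PBS-style machinefile into (host, count) tuples for addprocs.
function parse_machinefile(path)
    counts = Dict{String,Int}()
    for host in readlines(path)
        counts[host] = get(counts, host, 0) + 1
    end
    return [(host, n) for (host, n) in counts]
end

# e.g. addprocs(parse_machinefile("machinefile"); topology=:master_slave)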