Running a process on several nodes of a cluster

A related question: I have been using `qsub` to submit jobs on a cluster. If I need to use multiple nodes with the Distributed.jl package, do I need to use a different command?
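For context, a minimal PBS (Torque) job script that requests multiple nodes and launches a distributed Julia job might look like the sketch below. This is an assumption-laden example, not a verified recipe for any particular cluster: the resource line, walltime, and script name `myscript.jl` are placeholders, and many sites require extra setup (e.g. loading a module for Julia).

```shell
#!/bin/bash
#PBS -l nodes=2:ppn=8        # hypothetical request: 2 nodes, 8 cores each
#PBS -l walltime=01:00:00
#PBS -N julia-distributed

cd "$PBS_O_WORKDIR"          # PBS starts jobs in $HOME by default

# $PBS_NODEFILE lists the hostnames allocated to this job, one line per core.
# julia --machine-file starts one worker process per line, over SSH, so the
# workers end up spread across all allocated nodes.
julia --machine-file "$PBS_NODEFILE" myscript.jl   # myscript.jl is a placeholder
```

You would then submit this script with `qsub` as usual (`qsub job.pbs`); the multi-node part is handled inside the job by Julia, not by a different submission command.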

Hi. Are you still working on this problem? I understand you are on PBS Torque. As far as I can tell from the code you provided, you are currently using only the cores available on a single node. To utilize several nodes, you first have to start worker processes on all of them.

I fully agree with @mkitti that more info would be useful, as the differences between clusters are sometimes significant. Based on my own experience with PBS, for example, I was not able to use the PBS features of ClusterManagers.jl because of the way my login node was configured. I had more luck with the `julia --machine-file` option that is described here: GitHub - Arpeggeo/julia-distributed-computing: The ultimate guide to distributed computing in Julia. Even so, it was not a smooth ride, since my cluster used csh.

So, to sum up: nothing is easy in the Julia HPC world, but it is definitely worth a try. I believe an authoritative guide on running distributed jobs is provided by @vchuravy here: Struggling to Run Distributed I/O Operations on SLURM Cluster - #2 by vchuravy. Unfortunately, it uses SLURM, not PBS.
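To make the `--machine-file` route concrete, here is a small sketch of what such a file looks like and how it is used. The hostnames `node001`/`node002` are made up; on PBS you normally don't write the file yourself, because the scheduler already provides one in `$PBS_NODEFILE`:

```shell
# A machine file is just one host per line; a line may optionally be
# prefixed with a worker count, e.g. "4*node001" for 4 workers on that host.
printf '%s\n' node001 node002 > machines.txt
cat machines.txt

# Start Julia with one worker per entry (requires passwordless SSH to the
# nodes) and check the worker count:
# julia --machine-file machines.txt -e 'using Distributed; println(nworkers())'
```

The actual launch lines are commented out here because they only work inside a cluster with SSH access between nodes; the point is just the file format that `--machine-file` consumes.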