As indicated in the title, I would like to run many (thousands of) computational experiments on an HPC cluster using Julia code. Note that I'm new to Julia, so hopefully there are better options than the ones I suggest here. Since I previously worked with C++, the procedure there was easy: compile a statically linked binary and send the jobs to the cluster queue, which processes as many jobs in parallel as it has capacity for. Note that my main focus is on tracking the speed of the algorithms, excluding JIT compilation times.
As far as I know, there are some options:
1) compiling the code with PackageCompiler and sending the jobs to the queue as described above. However, it seems that PackageCompiler offers no way to link/compile statically (?). On my local machine everything works fine and fast (with the dynamic libs); however, the software versions on the cluster are out of date, so I would need to link all libs statically (since I cannot update the cluster). (A sketch of my build script follows after this list.)
2) installing Julia on the cluster…
2a) …and sending “julia my_main.jl arg1 arg2 …” to the queue. This is a mess, since the run times are very bad and not competitive due to JIT compilation in each instance.
2b) …opening a REPL environment -> starting a mini-instance to get everything precompiled -> starting the “real” instance. This is also very messy, since it produces a lot of unnecessary workload. (A sketch of doing this warm-up inside the job script itself also follows below.)
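For option 1, this is roughly the build script I use; a minimal sketch, assuming an experiment package `MyExperiments` that defines the `julia_main()::Cint` entry point PackageCompiler's app mode requires (`precompile_run.jl` is a hypothetical warm-up script):

```julia
# build_app.jl -- minimal PackageCompiler sketch; MyExperiments and
# precompile_run.jl are placeholders for my actual project
using PackageCompiler

create_app("MyExperiments", "MyExperimentsCompiled";
           # script that exercises the hot code paths so they are
           # compiled into the app ahead of time
           precompile_execution_file="precompile_run.jl",
           force=true)  # overwrite an existing output directory
```

As far as I can tell, the resulting app bundles Julia's runtime libraries and is relocatable, but it still links them dynamically and still depends on the system's base libraries (e.g. glibc), which is exactly where the outdated cluster bites.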
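For 2a/2b, the warm-up can at least be moved into the job script itself, so the timed measurement excludes JIT compilation without a separate REPL session. A minimal sketch, where `solve` stands in for my actual algorithm:

```julia
# run_experiment.jl -- call as: julia run_experiment.jl <instance-size>
# solve() is a placeholder for the algorithm under test
function solve(n)
    s = 0.0
    for i in 1:n
        s += sqrt(i)
    end
    return s
end

solve(10)                          # small warm-up run triggers JIT compilation
n = parse(Int, ARGS[1])
t = @elapsed result = solve(n)     # timed run now excludes compilation
println("n = $n, result = $result, runtime = $t s")
```

This still pays the compilation cost once per job, so it does not fix the wasted cluster time, but at least the measured runtimes are clean.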
From what I have read about Julia, it is said to be a kind of ‘newcomer language’ for scientific computing, and I think it is quite natural in many scientific disciplines to perform a large number of computational experiments. So I wonder how you perform such experiments, or what the best practice is in your opinion?
(For now, the best option for me would be 1): a statically linked binary that can be executed on different machines without any dependencies.)
Thank you in advance for your remarks.