I found this and yes, it already sounds ridiculous.
I don’t understand the message you are trying to convey.
Ever tried to convince management to install Julia if all they know is Python and Fortran?
If Julia were pre-installed by the HPC vendors, this would help a lot with public perception.
Maybe it isn’t good enough or you didn’t know enough about it to be convincing?
No, I didn’t have to, since, as I said, it’s already installed in most systems I use. And downloading the official binaries takes relatively little effort if it isn’t there already.
Julia v1.6 is available on Fugaku, currently number 1 in the TOP500 list. Does that help the public perception?
As part of the public: I’m convinced now
Giordano:
That is interesting. Thanks for that info.
I did not know that Julia is pre-installed on the Fugaku high performance super computer.
Then I guess the public perception wasn’t much influenced by that
I wonder how many of the other TOP500 have Julia.
It’s not so much about having Julia installed as about offering support for it, if you ask me. Anyway, to give you a few more examples: Julia is available at NERSC (USA), CSCS (Switzerland), and JSC (Germany), and these are all Tier 1 supercomputing centers. And I could name more.
Personally I’m working hard (in collaboration with others) on establishing Julia in the German HPC landscape. For example, there will be a Julia for HPC course at the Tier 1 supercomputing center HLRS in Stuttgart later this year. (@Schneeschaufel since you have a German alias, may I ask which supercomputer you’re working on?)
Hi Carsten
I am not in Germany (I live in an English speaking country though).
We are in the process of getting a new HPC system, but I am not allowed to talk about any details during the post-tender process (it is not classified as military, though).
Plus configuring it correctly w.r.t. MPI, SLURM, CUDA, parallel I/O, etc., to make sure Julia can actually take full advantage of the hardware and (low-level) software. I’m always amazed to see the amount of complexity you need to go through as a user to get, say, GPU-to-GPU transfers working at full speed. I don’t work on those things myself, but my colleagues often have to guide users through compiling and running their programs efficiently.
That’s why I think just downloading Julia on a supercomputer and running it will not immediately give you all the performance you want, especially when running large multi-node jobs.
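As a quick sanity check for the GPU-to-GPU case, MPI.jl can report whether the MPI library it loaded was built with CUDA support, which is a prerequisite for passing GPU buffers to MPI calls directly. A minimal sketch, assuming MPI.jl is installed and picks up the intended library:

```julia
# Minimal sketch: ask MPI.jl whether the loaded MPI library is CUDA-aware,
# i.e. whether buffers on the GPU can be handed to MPI calls directly.
using MPI

MPI.Init()
println("CUDA-aware MPI: ", MPI.has_cuda())
MPI.Finalize()
```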
@carstenbauer Hi! I’m co-organizing the Julia for HPC webinar at SURF; you spoke to Abel.
Why? Keep in mind that Julia is a compiler: the performance of the code it generates doesn’t depend on how Julia itself was compiled. There can be a small improvement in the performance of Julia’s runtime, but the benchmarks in Compiling Julia using LTO+PGO - #5 by stabbles showed negligible speedup when compiling Julia and all of its dependencies with -march=native. You mention large multi-node jobs, but if you’re referring to MPI, the package MPI.jl does dynamic loading, so compiling Julia locally doesn’t have any advantage over using the prebuilt generic Julia binaries. Or were you thinking of something else?
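As a quick way to see which library that dynamic loading actually picked up, one can ask MPI.jl at run time; a minimal sketch (the exact function name may differ between MPI.jl versions):

```julia
# Minimal sketch: print which MPI implementation MPI.jl dynamically loaded.
using MPI

MPI.Init()
println(MPI.Get_library_version())   # e.g. "Open MPI v4.1.x, ..."
MPI.Finalize()
```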
No, I did not mean it is necessary to compile Julia locally. One issue is that, at least on our system, there is quite a range of environment combinations a user can choose from: Intel MPI versus OpenMPI, HDF5 (tied to the MPI version), srun versus mpiexec, etc. For example, even OpenMPI alone comes in three flavors, compiled with GCC, the Intel compilers, and the NVIDIA HPC compilers. So depending on their needs (and sometimes restrictions imposed by the software they use), users will load certain versions of these modules, and I’m still not entirely sure Julia will pick up the correct shared libraries in all cases and work as expected, but I haven’t looked closely at that yet.
For example, looking at Configuration · MPI.jl, it seems you do need to build MPI.jl with the correct module loaded, which in turn means a user will need to make sure the same module is loaded when actually running a Julia program under that MPI (a minimal sketch of this workflow follows below). But again, I haven’t actually gone through these steps myself yet, partly because it takes quite a bit of time to figure out which libraries Julia uses from the system, which come from the jlls it downloads itself, whether these conflict with system libraries, whether all combinations work, etc.
Edit: in short, correct configuration of the environment is part of the challenge
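To make that workflow concrete, here is a minimal sketch of building MPI.jl against a system MPI. The module name is site-specific, and the JULIA_MPI_BINARY mechanism is the one documented for MPI.jl versions of that era (newer MPI.jl versions use MPIPreferences instead):

```julia
# Minimal sketch, assuming the desired MPI module is already loaded in the
# shell (e.g. `module load openmpi` -- the name is site-specific):

# tell MPI.jl to use the system MPI library instead of its bundled one
ENV["JULIA_MPI_BINARY"] = "system"

using Pkg
Pkg.build("MPI"; verbose=true)

# The same module must then be loaded whenever a Julia program runs under
# that MPI, e.g. in the job script: `srun -n 128 julia my_program.jl`.
```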
As for the structure of the documentation, I am absolutely not implying that Julia should mimic Matlab (after all, the reason for my migrating to Julia is that it does things in its own way), but for completeness I would like to share here that the majority of their toolboxes have these four pieces in the documentation section:
I am confident that a better system can be designed and implemented, but this one works fairly well, I think.
Oh, OK, configuration is certainly an important component! Note that the documentation of the development version of MPI.jl also has notes for sysadmins of HPC clusters.
BTW, we have a Julia HPC working group meeting every fourth Tuesday of the month, and how to simplify configuration is a recurring topic. It’d be great if you could join us to bring in your experience or share your thoughts! See also JuliaHPC Meeting (we don’t use Google Meet anymore, but the document with the agenda should have the link for joining the call).
Does this course delve deeply into the HPC side of things, and is it possible to participate online even if you are not in Stuttgart?
No. This is an in-person workshop which, given that it’s the first event at HLRS, is primarily intended for people with basic HPC knowledge (in any language) who are not particularly familiar with Julia for HPC. There’ll be a website announcement soon.
I don’t think confidence in the language should be tied to the choice of not including a particular package in the official docs. Rust has basically the same strategy when it comes to packages: it doesn’t include fundamental packages in the standard library. Not even rand is in the standard library, nor is tokio, which is used for async stuff. And let me tell you, Rust is very far from being a language people have little faith in, since it is gradually being adopted everywhere. Besides, I’ve also had a similar experience with Rust, having to dig through the source code to understand how libraries work. It’s the nature of new, fast-growing languages. I’m not saying “deal with it”, but it’s nothing unheard of.
Julia and Rust are targeting very different audiences. Julia in many ways caters to scientific and numerical computing, where many users are not professional developers.
What may need to happen is the creation of Julia distributions, in the same sense that there are Linux distributions. These distributions could include a wider range of standard packages as well as custom system images with those packages compiled in. A distribution could, for example, include Plots.jl and GR.jl built in and be optimized to minimize time to first plot.
Fortunately it’s easy to do this yourself
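For instance, one way to build such a custom system image is with PackageCompiler.jl; a minimal sketch, where the sysimage path and the warm-up script name are just placeholders:

```julia
# Minimal sketch: bake Plots.jl and GR.jl into a custom system image so that
# plotting code is compiled ahead of time, cutting time-to-first-plot.
using PackageCompiler

create_sysimage([:Plots, :GR];
                sysimage_path = "plots_sysimage.so",
                # a small script that exercises typical plotting calls
                precompile_execution_file = "warmup_plots.jl")

# Start Julia with the custom image:
#   julia --sysimage plots_sysimage.so
```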