Blog post about my experiences with Julia

Oh, ok, configuration is certainly an important component! Note that the documentation of the development version of MPI.jl also has notes for HPC cluster sysadmins.

BTW, we have a Julia HPC working group meeting every fourth Tuesday of the month, and how to simplify configuration is a recurring topic. It’d be great if you could join us to bring your experience or share your thoughts! See also JuliaHPC Meeting (we don’t use Google Meet anymore, but the document with the agenda should have the link for joining the call).


Does this course delve deeply into the HPC side of things, and is it possible to participate online even if you are not in Stuttgart?

No. This is an in-person workshop which, given that it’s the first event at HLRS, is primarily intended for people with basic HPC knowledge (in any language) who are not particularly familiar with Julia for HPC. There’ll be a website announcement soon.

I don’t think confidence in the language should be tied to the choice of not including a particular package in the official docs. Rust has basically the same strategy when it comes to packages: it doesn’t include fundamental packages in the standard library. Not even rand is in the standard library, or tokio, which is used for async stuff. And let me tell you, Rust is very far from being a language people have little faith in, since it is gradually being used everywhere. Besides, I’ve also had a similar experience with Rust, having to dig into the source code to understand how libraries work. It’s the nature of new, fast-growing languages. I’m not saying “deal with it”, but it’s nothing unheard of.


Julia and Rust are targeting very different audiences. Julia in many ways caters to scientific and numerical computing, where many users are not professional developers.

What may need to happen is the creation of Julia distributions in the same sense that there are Linux distributions. These distributions could include a wider range of standard packages as well as custom system images with those packages compiled in. A distribution could include Plots.jl and GR.jl built-in for example and be optimized to minimize time to first plot.


Fortunately it’s easy to do this yourself


I found a solution to the infamous error message on Arch Linux when running Julia from the official repo.

ERROR: LoadError: InitError: could not load library "/home/volker/.julia/artifacts/cb7fc2801ca0133a5bdea4bc4585d07c08284cfa/lib/" cannot open shared object file: No such file or directory

The reason is that the library is packaged into /usr/lib/julia on Arch Linux; however, that path is not in LD_LIBRARY_PATH.
So Arch Linux users should add a line to ~/.julia/config/startup.jl.


But actually this should be done in the JLLWrappers package, see Fix `` not found on archlinux by sukanka · Pull Request #42 · JuliaPackaging/JLLWrappers.jl · GitHub


I know

Hardcoding paths that are necessary only on a distribution is not a solution, it’s an indication that said distribution is not doing something right.


You are right. So I now set ENV["LD_LIBRARY_PATH"] in ~/.julia/config/startup.jl.
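For anyone who hits the same issue, here is a minimal sketch of what such a startup.jl entry could look like (the /usr/lib/julia path is Arch-specific, and this is a workaround rather than a proper fix):

```julia
# ~/.julia/config/startup.jl
# Workaround sketch for Arch Linux's distribution-packaged Julia:
# prepend the distribution's library directory (assumed here to be
# /usr/lib/julia) so the shared libraries can be found at load time.
ENV["LD_LIBRARY_PATH"] = "/usr/lib/julia:" * get(ENV, "LD_LIBRARY_PATH", "")
```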

First I expected this was going to be like zverovich’s “Giving up on Julia” post but it turned out to be closer to viralinstruction’s fantastic “What’s bad about Julia”, and I bet this thread’s suggestions and edits had a lot to do with it. I do have a couple edits to suggest in the Performance section, though I expect I came a bit late for it.

I think it would be important to use the more established term “compilation/compiler latency” so people can look up more about this fascinating topic; also maybe throw in the informal “time to first plot” because of its common usage, though personally I find that to be misleadingly focused on plotting.

It is also not correct to say that interpreted languages do not compile; for example, CPython compiles source to bytecode, and though the compilation does far less work and runs really fast, it actually does do a little constant folding. It would be more accurate to say that interpreted languages “solve” compilation latency by doing much less compilation at the cost of performance optimization. Going further, most compiled languages deal with compilation latency by providing different levels of optimization and by saving compilation.

To Julia’s credit, built-in compilation-saving does happen to a significant degree; the blog post itself notes that “precompilation” cuts down using Plots; using DifferentialEquations from 350s on the first load to 20s on subsequent loads. Compiler latency is also being improved from several different angles. The Julia blog posts on reducing method invalidations are quite interesting, and PackageCompiler’s sysimages are a way to save compilation.
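As a concrete illustration of the sysimage route, here is a hedged sketch of baking Plots into a custom sysimage with PackageCompiler.jl (the output filename is just an example):

```julia
# Build-configuration sketch: requires PackageCompiler.jl in the
# active environment, and the build itself takes several minutes.
using PackageCompiler

create_sysimage([:Plots]; sysimage_path = "plots_sysimage.so")

# Start Julia with the custom sysimage to skip most of Plots'
# compilation latency:
#   julia --sysimage plots_sysimage.so
```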

I understand how editing structs fits into the compilation latency and REPL restart, but it’s a very difficult problem (see Github issue #18 for Revise.jl) that does deserve its own paragraph. Revise.jl uses backedges to track methods that get invalidated (need recompilation) because some method is edited, and this has some overhead. Type constructors are called a LOT more than methods e.g. Int(x), so tracking structs the exact same way is just infeasible bloat. This also means that if a struct could be edited, it would invalidate a lot more, possibly approaching the effect of a full restart. Scratch all that, maybe just mention Revise.jl being able to update methods but not structs via source code edits (I see that the use of includet to track source files was already clarified upthread).


This is not the reason for why redefining structs is hard. Julia uses the layout of a struct in the generated code and to optimize array allocation. We can redefine code, but can’t redefine already allocated data :slight_smile:

So in essence a fully qualified MyModule.MyType can’t be changed, but you can replace the module and the contained type.
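A tiny sketch of what that looks like in practice (module and struct names are made up):

```julia
module M
    struct Point
        x::Int
    end
end
p = M.Point(1)

# Re-evaluating the module definition is allowed (with a
# "replacing module M" warning) and replaces the contained type:
module M
    struct Point
        x::Int
        y::Int
    end
end
q = M.Point(1, 2)

# ...but instances created before the replacement keep the old type:
typeof(p) == typeof(q)  # false
```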


Oh I see, I admit my understanding of the struct editing situation is spotty at best, and I was recalling this thread in particular. I take back my suggestion to edit the blog post to explain the difficulty of struct editing, though it would be worth adding a sentence to mention Revise.jl and the fact it handles editing of methods but not structs.

Replacing the module is also mentioned in Revise.jl’s issue #18 as problematic. It’s not particular to structs, preexisting code that uses the old module will still use the old module even if its name is attached to a new module, and if you attempt to using the new module via the same name, you’ll get identifier conflict warnings and “both ModuleName and ModuleName export …” errors. And as you mentioned, there’s the problem of the obsolete types’ persistent instances (a problem that is also true for the more dynamic interpreted languages). But I do wonder why there aren’t module backedges+invalidations+reruns as a way to deal with edited structs; it just seems like an obvious development from the discussion in issue #18, and rerunning affected modules seems better than a full restart.

But this is starting to be a bit removed from the main topic, so if anybody wants to clarify this stuff for laypeople like me, feel free to split this to another topic.


you should just use juliaup :wink:

Thanks, but I use this trick because I don’t want to use Julia from the AUR.
Currently my Julia runs just fine.

juliaup just installs and manages the official binaries (from the official Julia page). But of course it’s just a suggestion. You should choose what and where to run as you please :wink:


Does there exist a writeup/reference for what you consider the “right workflow”? I’d be interested to learn more.


Docstrings for the requested function are soon to be merged:

Startup times for DiffEq will be greatly improved after ArrayInterface’s Requires usage is fully removed, which has just one remaining piece:

And now that we have a whole system for high level errors at the pre-solve stage, I added a few more. For example:

More should be coming soon as well.


Like @patrick, I’d also be interested in any references or documents that explain this workflow (I’m starting with Julia here, and concerned about making some of these avoidable mistakes).

@rdiaz02 and @patrick

I didn’t answer before because I don’t have anything carefully prepared.

I do have some short notes here:

and here:

But reading such notes (and, IMHO, reading these instructions in general) doesn’t give the user the right idea. A small demonstration is usually much better. A small video illustrating the use of Revise is here.

But, basically, what one needs is to keep the REPL open and develop the code mostly in files, which are tracked by Revise. Put the code in functions, even when it is just plotting something, like:

# file: myplot.jl
using Plots
function myplot(data)
    plot(data, linewidth=2, color=:blue, label="test")
end

Because then you can, in the REPL, use something like:

julia> using Revise # normally started always by default in your startup.jl file

julia> includet("./myplot.jl")

julia> using DelimitedFiles # needed for readdlm

julia> data = readdlm("mydata.dat")

julia> myplot(data) # this gets you the first plot, and may take some seconds

# change something in the `myplot` function above

julia> myplot(data) # this is fast now, and tuning the plot is easy

# change the data

julia> data2 = readdlm("./mynewdata.dat")

julia> myplot(data2)


With that, the responsiveness of the development workflow is really good, better than other alternatives, because you don’t need to worry anymore about compiling anything, rerunning the script, etc. Just modify the function (myplot in this case) in the corresponding file, and run it again in the REPL.

If you are developing a package, then the same thing holds, but includet is replaced by using YourPackage, with the package installed as a dev package in the environment.