Julia in Linux distributions

@iwelch: @nalimilan on this thread does marvellous work doing just that for CentOS/Fedora via ‘COPR’ repositories.

However, I agree with @Tamas_Papp that there should be distro-packaged versions of Julia.
That is only a starting point, though — i.e. it sets a low bar for entry to Julia.
You then need to move on to downloaded versions.

Perhaps I wasn’t clear then — I was arguing that they are not worth it.


@Tamas_Papp I did understand your reply, and it was me who was not being clear. Sorry.

Perhaps it is appropriate here for me to share some experience from building HPC clusters.
In that field the cluster admins have to build a complete software stack, from the InfiniBand/Omni-Path drivers, through various MPI flavours, to the application software. Here ‘build’ also covers commercial applications, which would be installed rather than built from source.

This is managed using packages such as Easybuild https://easybuild.readthedocs.io/en/latest/
and Spack https://spack.io/
One of the difficulties in HPC is that a particular application may be built several times — with different compilers and different MPI flavours. So on one cluster you can choose which implementation you use, by adjusting your PATH using Modules http://modules.sourceforge.net/
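What Modules does under the hood is essentially prefix swapping on PATH (and related variables). A minimal sketch of that mechanism, with hypothetical install prefixes for two MPI builds:

```shell
# Modules essentially manages PATH for you; the mechanism is prefix swapping.
# The install prefixes below are hypothetical, for illustration only.
export MPI_HOME=/opt/openmpi-4.0-gcc        # flavour 1: OpenMPI built with GCC
export PATH="$MPI_HOME/bin:$PATH"           # mpicc now resolves to this build
export MPI_HOME=/opt/openmpi-4.0-intel      # switching flavour = a different prefix...
export PATH="$MPI_HOME/bin:$PATH"           # ...taking precedence on PATH
```

A real Modules setup does the same thing via `module load openmpi/...`, which also handles unloading the previous flavour cleanly.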

The distributions package MPI flavours, but these tend to be out of date, and you continually see this on the OpenMPI mailing list: “I am working with version 1.xxx”, the reply being “well, use the latest version please”.
Another example is a recent kernel update made available by Red Hat. A small bug disabled RDMA in that kernel, which affected many HPC sites. No-one particularly blamed Red Hat there — the developers don’t have access to InfiniBand hardware, so they could not test.


Now that I am on my high horse… The attitude of “I will only work with the packages shipped with Ubuntu, and I will only consider Ubuntu” drives me mental. I can of course accept counter-arguments here.
Remember that Ubuntu made a very conscious decision and put a lot of effort into capturing the mindset of the web scale generation - the developers there did not stumble upon Ubuntu as being a good solution. Neither did your friend pass it on to you in a viral fashion. Ubuntu marketed themselves as the distro of choice.

I will give one small specific — discussing the choice of Ubuntu with someone, he said “We use it cos they ship TensorFlow in their repositories”.
WTF??? That’s all well and good if you are wanting to try something out, and I am all for it.
But if you are doing something business critical, and you need the latest version or a bugfix, then you need a highly competent sysadmin (for example, me). Someone who can build TensorFlow using a knife and fork in a Force 10 gale on any Linux OS. Someone who has the tools to determine where the problem is when your model won’t run.


I’ve just pushed a new version of the official Fedora package using an ILP64 OpenBLAS.


Julia can be obtained via the cross-distribution package manager, Nix, which presently provides Julia 1.0.1. To install Nix itself run:

curl https://nixos.org/nix/install | sh

then install Julia with:

nix-env -i julia

Once Nix is added to your PATH (with export PATH=$HOME/.nix-profile/bin:$PATH), you can launch Julia with julia. Nix is very handy for getting packages not provided by your distro, or provided by your distro but with bugs.


Please don’t recommend that people install things by piping curl output directly to sh. There are many valid ways of installing Nix; that isn’t a good one — especially since this is a public forum, meaning there are certainly people here who don’t understand the full security implications of that bit of scripting.

EDIT: Please don’t tell people to ever pipe things from the network directly to sh.


OK, which method of installing Nix would you recommend that should work on all modern Linux distros? I chose that method because it’s the most distribution-agnostic way of installing it; many distros don’t have it in their official repos.


I think the important part is having the ability to review the script before you run it. Therefore, first download with curl, do whatever auditing of the script you care to do, and then run it.
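The download-audit-run workflow described above can be sketched as follows. A local stand-in file plays the role of the remote script so the sketch runs offline; in practice the first step would be `curl -fsSL -o install.sh https://nixos.org/nix/install`.

```shell
# Safer pattern than `curl ... | sh`: fetch to a file, audit it, then run it.
# A local stand-in replaces the network download so this sketch runs offline.
printf '#!/bin/sh\necho installed\n' > install.sh  # stand-in for the curl step
cat install.sh                                     # step 2: audit the script
sh install.sh                                      # step 3: run only after review
```

The key point is the pause between fetching and executing: you get to read (or checksum) exactly the bytes that will run.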

Has anyone used the MKL-linked Julia on Arch Linux?

If they’re the cautious type there’d be nothing stopping them from viewing https://nixos.org/nix/install in a browser first.

That doesn’t eliminate the concern; it’s fairly easy for an attacker to serve something different to a browser than to a curl user-agent. Never run untrusted code in a non-sandboxed context without first checking that it’s what it says on the tin. This is why distro package managers check signatures on packages.

EDIT: even that script verifies the checksum after downloading a tarball. The same (or a higher) level of thoroughness needs to be applied to distribution of the install script itself.


Interesting (paranoia, not saying unjustified).

You justify why each distro’s update method is better (is there none that is foolproof across distros? I’ve not looked much into Snap and Flatpak; I assume they work). Compared to Windows, at least with a [self-extracting] .exe, that curl method isn’t any worse… I’ve not looked into whether Windows users have anything better. I know of .msi files, just not whether they have the same issues.

Back on topic to Julia: Windows users are offered nothing better than such an .exe file… (yes, OK, compile from source). And I notice that, unlike for other platforms, there isn’t even a GPG file for Windows at the download page. I guess many (most?) [Windows] users don’t care or wouldn’t know what to do with it.

Users should be able to trust “us”, but we shouldn’t be telling them (implicitly) that they should, or that it’s a good idea to trust us.

To be really paranoid would be not to trust the actual Julia binary on Linux, Windows, or any platform. A user could check the .sh file, but that wouldn’t guard against a virus/malware in the Julia binary itself. So are we just talking about degrees of trust (or guarding against other distributors) — just guarding against the easiest ways to get hacked?


I try to keep Julia in openSUSE Tumbleweed as close as possible to the latest version. In Leap, it is not possible due to some old libraries. Right now, we have 1.0.1, and probably 1.0.2 by the end of next week. So, if you are using Tumbleweed, you just need:

zypper in julia julia-devel julia-doc

Arch Linux uses pacman, which uses GPG keys to verify the trust of the binary downloads:


sudo pacman -S julia
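The verify-before-install idea behind pacman’s GPG checks can be sketched as below. pacman actually uses detached GPG signatures checked against its keyring; a checksum stands in here so the sketch runs offline, and the file names are illustrative.

```shell
# Sketch of verify-before-install: the publisher ships a digest alongside
# the package, and the client refuses the package if the bytes differ.
# (pacman does this with GPG signatures; a checksum stands in here.)
printf 'package-bytes' > julia.pkg       # stand-in for a downloaded package
sha256sum julia.pkg > julia.pkg.sha256   # what the publisher would ship
sha256sum -c julia.pkg.sha256            # client-side check; fails on tampering
```

GPG signatures go further than a bare checksum: they also prove *who* published the digest, which is why the distro keyring matters.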

Leap is brand new (and the other, Tumbleweed, is the developer/rolling release).

I’m just curious (I have no need for openSUSE): why are the old libraries a problem? I assume it’s the old [distro/Linux] problem of dynamic linking against system libraries. Is it by choice? I actually thought Julia wanted to statically link a lot, if not all, at least for now.

I think that’s the problem solved with containers: e.g. Flatpak and Snap.


Flatpak is available for openSUSE (and other major Linux distros), while Snap is not yet, it seems (though it is supported by other major distros).

As always, there seem to be two or more solutions in Linux. Not sure if Flatpak is better, but it is the one available for openSUSE currently.

Does Julia need to support it and/or Snap as the way forward?


The problem is that we should not bundle libraries into packages, as per openSUSE guidelines. In Julia, some things cannot be avoided, such as LLVM. Leap 15.0 is new, but uses some old, stable libraries that cannot build newer Julia versions. When 15.1 is out, it will have Julia 1.0, but it is very difficult to keep Julia updated in Leap. Leap 15.0 has Julia 0.6.2.

Can confirm. Arch Linux has a package for Julia that is kept up-to-date with the latest version. I’ve experienced no problems with it so far. Works as I’d hope it would.


As you say, they are just guidelines that you break anyway for LLVM, so why not do that for the other problematic libraries/dependencies too? You can always relax this later, and it’s the only option for supporting Julia 1.x.x in current Leap? Or, as I said, use Flatpak (where, as with Snap, I think the point is to avoid the repositories and dynamic linking).

I see no difference between LLVM and the other libraries in theory; there’s even USE_SYSTEM_LLVM in the Makefile. Isn’t there such a flag for all the other problematic libraries too?

It’s just that LLVM is heavily patched. When upstream LLVM catches up, I expect you’ll want to use the system LLVM.
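For concreteness, here is a sketch of a Make.user asking Julia’s build to link distro-provided libraries instead of bundling its own. The flag names follow the USE_SYSTEM_* convention visible in Julia’s Makefile; which ones are actually viable depends on the distro’s library versions.

```shell
# Sketch of a Make.user enabling system libraries for a Julia source build.
# Whether each flag works depends on the distro's library versions.
cat > Make.user <<'EOF'
# System LLVM is only viable once it carries Julia's patches:
USE_SYSTEM_LLVM=1
USE_SYSTEM_BLAS=1
USE_SYSTEM_LAPACK=1
EOF
# then, from the Julia source tree:
#   make -j"$(nproc)"
```

This is also roughly what distro packagers toggle when deciding what to bundle versus what to take from the system.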