Cranelift: a faster alternative to LLVM

The Rust compiler can now build code with either LLVM or Cranelift. Cranelift is not nearly as developed as LLVM (e.g., SIMD is only partially supported), but it has the benefit of compiling quite a bit faster, and according to this blog post, the plan for Rust is to use it to speed up debug builds.

Because a lot of work in Julia is going into reducing compile times, I thought some people here might be interested. It would almost certainly be a lot of work for Julia to add this as a backend, but it might be worth considering at some point.

Cheers, and be well,

Kevin

11 Likes

As far as I know, LLVM is not considered a major part of compile time for Julia; most of the time goes to inference and specialization. As those get faster, though, this could become pretty important.

2 Likes

I heard about another compiler infrastructure from the LLVM project: MLIR.

6 Likes

In the last Ask Us Anything, Jeff Bezanson said that he has only two concerns about Julia, and the second is that LLVM keeps getting slower and slower (at 8:45), but I don't precisely understand the nature of this problem. I mean, many different things in LLVM could have become slower.

Have you tried contacting him to ask about that?

I’m not a compiler person, so I don’t know the reasons for it, but I have become pretty concerned with what has happened with the LLVM “ecosystem”. It seems like just about every project has its own patched version of LLVM and has to move heaven and earth to ever update its LLVM version, first because of the patching, and second because of the serious performance regressions that I have also heard Jeff and others mention a number of times. This has led to the opposite of the situation we want: different things that use LLVM are incompatible.

For example, has anyone tried a Rust-Julia interface? I haven’t seen one, and as far as I know there are major technical obstacles to it, though it’s obviously something we should want. Another problem I’ve noticed is with the cling C++ interpreter, which also uses a patched LLVM; that may be a major obstacle to the high-energy-physics community using Julia, which is pretty upsetting, because that community would probably get quite excited about it.

The reason I mention this here is: does anyone qualified to say so think that either Cranelift or MLIR is a realistic candidate for addressing some of the issues that have caused this LLVM fracturing and incompatibility?

5 Likes

I don’t think that is true at all.

3 Likes

I’m very interested in these things; can you give me an idea of where I can read more about them?

Maybe you’re referring to a different kind of interface, but I’ve used the following package to call Julia functions from Rust with good results:

Calling Rust functions from Julia is of course even easier.
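For concreteness, here is a minimal, hypothetical sketch of that direction: a Rust function exported with the C ABI from a `cdylib` crate, which Julia can then call through `ccall`/`@ccall`. The crate name and library path below are made up for illustration.

```rust
// Hypothetical cdylib crate (set crate-type = ["cdylib"] in Cargo.toml).
// `extern "C"` plus #[no_mangle] gives the function a plain C symbol that
// Julia's ccall/@ccall can look up in the shared library.
#[no_mangle]
pub extern "C" fn rust_add(a: f64, b: f64) -> f64 {
    a + b
}

// From the Julia side (path is illustrative):
//   rust_add(x, y) = @ccall "target/release/librust_add.so".rust_add(x::Cdouble, y::Cdouble)::Cdouble
```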

5 Likes

Interesting; perhaps I’m misunderstanding what I thought the issues were here. I’ve definitely read various things about compatibility difficulties between different patched versions of LLVM, but I only have a vague idea of what those are. It would be great if someone who knows better could come along and give a more detailed explanation of what can go wrong in those cases.

1 Like

Clashing versions of LLVM are definitely an issue in the GPU world; I’ve encountered problems with both OpenCL.jl and our AMDGPU ROCm external libraries because the libraries they call into use a different LLVM build.

Also, MLIR can only really make compile times worse, because it would first run MLIR passes and then still run all of our LLVM passes (although I could be wrong about this).

2 Likes

Sorry for necroing this thread, but LLVM getting slower over time is starting to get some attention. You can read a very good blog post on this here: Make LLVM fast again.

3 Likes

Let’s assume we have the resources and determination to write a custom backend for Julia. How much better could we do in terms of compile-time speed and run-time speed? (Or what specific features could we include that are hard to do with LLVM?) On the other hand, what are the drawbacks of going this route in the long term?

In terms of benefits, I think a good comparison would be with WebAssembly, at least when it comes to the benefits/drawbacks of LLVM IR (see Why not just use LLVM bitcode as a binary format?).

2 Likes

A member of the Rust community has written a blog post about the ongoing effort to understand which parts of Rust compilation are the bottleneck.

To better understand compilation times, there is a website with graphs showing how long each compilation step takes for hundreds of Rust packages:


https://perf.rust-lang.org/compare.html

I wonder if there is something similar for Julia packages.

The Rust community is still developing a Cranelift backend as an alternative to the LLVM backend for faster prototyping. So I wonder what steps Julia would have to take to adopt a Cranelift backend. Would it require, for example, rewriting JuliaSyntax.jl in Rust?

No.
JuliaSyntax.jl does not emit LLVM IR.
It’d require reworking things like julia/src/codegen.cpp at master · JuliaLang/julia · GitHub

I’d suggest first trying to start Julia with -O1 or -O0; cutting down the amount of work LLVM does can speed things up.
You probably won’t see that much benefit, though.

If you want to profile Julia compilation, I suggest looking at Tracy.jl.

4 Likes

It is probably easier to use a C interface for Cranelift than to introduce Rust into Julia itself. There unfortunately isn’t anything official, but GitHub - coffeebe4code/craneliftc: cranelift compatible c api's looks like it could be workable.
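To give a flavor of what such a backend would have to drive through that C layer, below is a rough sketch (not tied to any existing Julia code) of JIT-compiling a trivial `add` function with Cranelift’s own Rust crates (`cranelift-jit`, `cranelift-frontend`, `cranelift-module`), loosely following the cranelift-jit-demo. Exact signatures vary between Cranelift versions, so treat this as illustrative only.

```rust
use cranelift_codegen::ir::{types, AbiParam, InstBuilder};
use cranelift_frontend::{FunctionBuilder, FunctionBuilderContext};
use cranelift_jit::{JITBuilder, JITModule};
use cranelift_module::{default_libcall_names, Linkage, Module};

fn main() {
    // Set up a JIT module that owns the generated machine code.
    let mut module = JITModule::new(JITBuilder::new(default_libcall_names()).unwrap());

    // Build the signature: fn(i64, i64) -> i64.
    let mut ctx = module.make_context();
    ctx.func.signature.params.push(AbiParam::new(types::I64));
    ctx.func.signature.params.push(AbiParam::new(types::I64));
    ctx.func.signature.returns.push(AbiParam::new(types::I64));

    // Emit the Cranelift IR for `a + b` into a single basic block.
    let mut fb_ctx = FunctionBuilderContext::new();
    let mut b = FunctionBuilder::new(&mut ctx.func, &mut fb_ctx);
    let block = b.create_block();
    b.append_block_params_for_function_params(block);
    b.switch_to_block(block);
    b.seal_block(block);
    let (x, y) = (b.block_params(block)[0], b.block_params(block)[1]);
    let sum = b.ins().iadd(x, y);
    b.ins().return_(&[sum]);
    b.finalize();

    // Declare, define, and finalize the function, then call the machine code.
    let id = module
        .declare_function("add", Linkage::Export, &ctx.func.signature)
        .unwrap();
    module.define_function(id, &mut ctx).unwrap();
    module.clear_context(&mut ctx);
    module.finalize_definitions().unwrap();
    let ptr = module.get_finalized_function(id);
    let add = unsafe { std::mem::transmute::<*const u8, fn(i64, i64) -> i64>(ptr) };
    println!("2 + 3 = {}", add(2, 3)); // prints 5
}
```

A Julia backend would have to do something like this for every method instance, lowering Julia’s SSA IR to Cranelift IR instead of to LLVM IR as codegen.cpp does today.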

Cranelift has moved along quite a bit since this thread was originally created; the current claim is roughly an order of magnitude faster compilation with run-time performance in the same order of magnitude, which roughly matches my experience using it for my Rust projects (taking into account that the frontend is a significant part of the compile time).

It would be really interesting to see an experiment using it as a backend for Julia; unfortunately, the scale of the work is pretty massive, probably months of effort just to get “Hello, World!” compiling. Maybe this could be a good GSoC project idea at some point.

The recent LWN post is pretty interesting as well, https://lwn.net/SubscriberLink/964735/6d5585155449e50a/.

1 Like