Plans to provide Julia IR for deployment

Hey dear community :slight_smile:

I know there is PackageCompiler, but I also know that it has its limitations. I know that other managed languages like C# and Java ship some form of IR. Is something like this possible with all the dynamism of Julia? And if so, are there any projects like that?

TL;DR: Ship IR + source code and get great compile times.
(The IR could possibly grow over time for a greater speedup.)

Isn’t what you are asking for exactly what PackageCompiler.jl does? What is it lacking for you?

It seems to me that PackageCompiler.jl creates a sysimage with binaries for exactly one system. IR is still a bit more flexible: it’s Julia code parsed into a data structure and type-inferred as much as possible.

Julia doesn’t have just one form of IR; it has multiple intermediate representations for different stages of the compiler pipeline. What you are probably referring to as “IR” here is usually called “typed IR”, i.e. the output you get in the REPL from @code_typed. None of Julia’s IRs are really designed for portability, though. Even the highest-level IR, the abstract syntax tree (i.e. Exprs and atoms), is not fully portable, because, depending on whether you are on a 32- or 64-bit platform, integer literals will be stored as either Int32 or Int64.
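
To make the stages concrete, here is a small sketch (the macros are the real ones from InteractiveUtils; exact output varies by Julia version and platform):

```julia
using InteractiveUtils  # loaded automatically in the REPL

f(x) = x + 1

dump(:(x + 1))       # surface AST: an Expr whose literal 1 is an Int64 (Int32 on a 32-bit platform)
@code_lowered f(1)   # lowered IR
@code_typed f(1)     # typed IR after inference -- the "IR" discussed above
@code_llvm f(1)      # LLVM IR
@code_native f(1)    # native machine code
```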

The hard part about this is that there are lots of hardware specifics that determine strategies for execution. Constant folding them is often critical for performance.
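
For example (a minimal sketch; `Sys.WORD_SIZE` and `sizeof(Int)` are real platform constants, but the exact typed IR you see depends on your machine and Julia version):

```julia
using InteractiveUtils

# Platform specifics are compile-time constants, so inference folds them away.
word_bytes() = sizeof(Int)                       # 8 on a 64-bit machine, 4 on 32-bit
widen_if_needed(x::Int) = Sys.WORD_SIZE == 64 ? x : Int64(x)

@code_typed word_bytes()        # body is just `return 8` (on a 64-bit machine)
@code_typed widen_if_needed(1)  # the platform check and the dead branch are gone
```

That baked-in, platform-specific folding is exactly why typed IR isn’t portable across machines.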

Would it be possible to store the different IRs? And if something isn’t lowered to the right IR, just use the source?

The precompiled .ji files already store IR. Shipping precompiled .ji files is definitely of interest, but it’s not the biggest win, since even after that one still needs to do a lot of compilation at runtime.

Execution on the JVM/CLR is (a) not notorious for being all that snappy, and (b) to the extent that it is fast, it’s not because of the time saved by converting source to bytecode in advance. JVM/CLR execution of bytecode is fast because of the billions (trillions?) of dollars’ worth of developer-hours that have gone into making it fast over several decades. The runtimes do fancy tricks like executing bytecode in trace mode, keeping track of how often everything runs and what the types of things are, then generating highly optimized versions of code that runs frequently enough, and then (here’s the tricky part) doing on-stack replacement of the slow version that’s currently executing with the fast version. We could do that in Julia too, but it’s hard to implement, and starting from bytecode rather than source doesn’t help much.
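
As a concrete illustration of the existing mechanism (a minimal sketch; `MyPkg` and `sumsq` are made-up names, but `precompile` is the real Base function), explicit `precompile` statements make the inferred IR for specific signatures land in the package’s .ji cache when the package is precompiled:

```julia
# Hypothetical package module; the inferred IR for the signatures below is
# written into MyPkg's .ji precompile cache instead of being re-inferred
# in every session.
module MyPkg

sumsq(xs) = sum(x -> x^2, xs)

precompile(sumsq, (Vector{Float64},))
precompile(sumsq, (Vector{Int},))

end # module
```

Even with the inferred IR cached like this, a lot of work still happens at runtime, which is why it isn’t the biggest win.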
