Getting a reusable LLVM IR representation of a module (or a script), modifying the LLVM IR, and plugging it back into Julia




I need a way to statically compile a module written in Julia to LLVM IR. For my project I would modify this LLVM IR representation of the module, and I would then like to plug it back into the Julia system image and run it. What is the best way forward?


I suppose there are two parts to this question.

  1. I need help statically compiling a Julia module to LLVM IR. I have tried the approach from the Julia Computing blog post Static and AOT compiled Julia to generate a system image in LLVM bitcode format (using --output-bc). The problem is that the generated image is just a wrapper: the actual compiled module code is stored as a byte array in this file, and I am not sure whether that byte array is a native representation of the system image or a bitcode representation.
    Also, I do not think the @code_llvm macro would work, as it modifies the actual LLVM code. It is useful for taking a look at the LLVM representation, but not if I want to link it back into the rest of the Julia code.

  2. How do I embed an LLVM bitcode representation of a Julia module in the system image, and how do I link it with the rest of the Julia code?

Does the bitcode system image file generated with the --compile=all --output-bc flags actually use an LLVM IR representation?

Hi there,

you have a couple of options. Let me answer number 2 first. There’s a little-known option to llvmcall: you can just pass it a raw Function * pointer.
That’s pretty much the easiest way to invoke arbitrary LLVM IR from within Julia. Alternatively, you could compile your IR to a shared library and ccall it.
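For reference, here is a minimal sketch of the more common IR-string form of Base.llvmcall (the raw Function * pointer form mentioned above works similarly, but requires a pointer obtained from the LLVM C API first):

```julia
# Minimal sketch: invoking hand-written LLVM IR from Julia via Base.llvmcall.
# In the string form, the arguments are available as %0, %1, ... in the IR body,
# and Julia wraps the body in a function with the declared signature.
function add_one(x::Int32)
    Base.llvmcall("""
        %res = add i32 %0, 1
        ret i32 %res
        """, Int32, Tuple{Int32}, x)
end

add_one(Int32(41))  # returns Int32(42)
```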

For 1, there are some functions in the C API to help you with this, depending on what exactly you want. See e.g.
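From the Julia side, the same reflection machinery is exposed through InteractiveUtils.code_llvm, which can write the IR for a specific method instance to any IO stream. A small sketch of capturing it as a string:

```julia
using InteractiveUtils

# Capture the LLVM IR that Julia generates for Int64 addition.
buf = IOBuffer()
code_llvm(buf, +, (Int64, Int64))
ir = String(take!(buf))

occursin("add i64", ir)  # the integer add instruction appears in the IR
```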

As for the .bc file, it does actually contain the LLVM IR. However, it also contains a binary blob of serialized Julia ASTs, which is a bit scary to look at the first time.



Over at CUDAnative, for 1 we use _dump_function, together with a codegen hook to catch the IR of recursive compilations. Not the cleanest approach though…


Thanks for the reply.
I assume you modify the IR and then compile it for targets of your choice. How do you compile it for your target when it contains references to so many Julia internals? Can you provide an outline of the compilation flow you use?


We don’t modify the IR after the fact, but instead try to make Julia emit GPU-compatible code (i.e. without references to internal stuff). For example, we use CodegenParams to disable some incompatible language features, and CodegenHooks to override others.

Not perfect yet, but already much better than where we were a year ago. I hope to further improve these mechanisms and make codegen a bit friendlier for external back-ends like these.



I am able to get the LLVM IR by building the system image with the --output-bc --compile=all flags, and I am able to edit the LLVM bitcode representation of functions. How can I compile this modified file into a system image that can then be used with the julia -J flag?
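My current plan is a rough sketch modeled on how the stock sysimage is linked; the paths, library names, and flags below are assumptions and will need adapting to the local Julia build:

```shell
# Sketch (untested assumptions): lower the edited bitcode back to a native
# object with llc, then link it into a shared library like the stock sysimage.
llc -filetype=obj -o sys.o sys.bc

# Link against libjulia; the library path here is a placeholder for your build.
cc -shared -o sys.so sys.o -L"$JULIA_LIB_DIR" -ljulia

# Load the rebuilt image.
julia -J sys.so -e 'println("loaded modified sysimage")'
```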