Should Julia use MLIR in the future?

To add onto this, it’s worth pointing out that MLIR and Mojo are not magic. The “ML” in MLIR stands for “multi-level”; think Inception for LLVM rather than machine learning. In fact, LLVM IR is itself an MLIR “dialect”. Most Mojo code running on CPU (and likely a lot running on GPU) is “lowered” from higher-level dialects into LLVM IR, while all Julia code is lowered into LLVM IR from a higher-level IR that is not an MLIR dialect.
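You can see Julia’s side of that pipeline directly from the REPL. As a minimal illustration (any small function works):

```julia
using InteractiveUtils  # already loaded for you in the REPL

f(x) = 2x + 1

@code_lowered f(1)  # Julia's own high-level IR (not an MLIR dialect)
@code_typed f(1)    # the same IR after type inference
@code_llvm f(1)     # the LLVM IR it is ultimately lowered to
```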

This should offer some insight into why just making Julia use MLIR instead of LLVM IR directly wouldn’t change much. Does that mean there’s no benefit to using MLIR? No. You could imagine how a library like Coil.jl could benefit if Julia IR were an MLIR dialect. Other non-LLVM MLIR dialects help Mojo achieve functionality like auto-vectorization at the language level, whereas the Julia ecosystem has to deal with issues like “Why is LoopVectorization deprecated?” because trying to integrate with the compiler from the outside is far more fragile.
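To make the LoopVectorization point concrete, here’s roughly what using it looks like (`@turbo` is its real macro; the dot product is a stock example from its docs). The key thing to notice is that it rewrites your loop at macro-expansion time rather than hooking into a compiler-level dialect, which is exactly why it’s fragile as compiler internals change:

```julia
using LoopVectorization  # deprecated for Julia 1.11+, per the thread linked above

function turbo_dot(a, b)
    s = zero(eltype(a))
    # @turbo transforms this loop into SIMD-friendly code up front;
    # it cannot see or participate in later compiler passes.
    @turbo for i in eachindex(a, b)
        s += a[i] * b[i]
    end
    return s
end

turbo_dot(rand(100), rand(100))
```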

Lastly, I should point out that neither Mojo nor the Julia ecosystem supports TPUs right now. There have been attempts to get Julia code running on TPUs, but see above about fragile compiler integration. Additionally, TPUs speak a very limited set of high-level array operations and nothing else. Given that most Julia and Mojo code is written at a significantly lower level, using constructs (e.g. complex loops, lazy conditionals with side effects) that are not supported in the XLA IR TPUs consume, it’s unlikely either will get great language-level support any time soon. The best path is likely via some high-level DSL, which is basically what JAX is for Python.
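For a feel of the gap, compare these two hypothetical Julia snippets: the first maps cleanly onto the whole-array operations XLA IR can express, while the second has data-dependent control flow that XLA simply has no vocabulary for:

```julia
# XLA-friendly: nothing but high-level array operations
predict(W, b, x) = tanh.(W * x .+ b)

# Not XLA-friendly: a scalar loop whose trip count depends on the data
function count_below(x, threshold)
    n = 0
    for v in x
        v > threshold && break  # lazy, data-dependent early exit
        n += 1
    end
    return n
end
```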
