Reactant is emphatically not an autodiff tool. Of course you can use Enzyme from inside it, just like you can use Enzyme within Julia today (it will simply use EnzymeMLIR instead of EnzymeLLVM).
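For instance, something along these lines (a minimal sketch, assuming the `Reactant.to_rarray`/`@compile` workflow and Enzyme's `gradient` API; `sum_abs2` is just a toy function):

```julia
using Reactant, Enzyme

sum_abs2(x) = sum(abs2, x)

# Wrap the Enzyme call so the whole gradient computation gets traced
# and compiled, going through EnzymeMLIR rather than EnzymeLLVM.
grad_sum_abs2(x) = Enzyme.gradient(Reverse, sum_abs2, x)

x = Reactant.to_rarray(rand(Float32, 16))

g = @compile grad_sum_abs2(x)
g(x)
```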
Reactant is a tool to compile Julia code in a way that preserves high-level structure and semantics (e.g. understanding that Base.mul! is a matmul, and not just a bunch of for loops), gets rid of type instabilities, and generally compiles away performance and usability pitfalls in Julia. It does so by extending the existing abstract interpretation machinery in Julia's compiler, and by compiling to MLIR as a way to preserve and optimize this information.
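You can see the preserved structure by inspecting the generated IR (a minimal sketch, assuming Reactant's `@code_hlo` macro; the exact output will vary):

```julia
using Reactant, LinearAlgebra

function matmul!(C, A, B)
    mul!(C, A, B)  # understood as a matmul, not a nest of for loops
    return C
end

A = Reactant.to_rarray(rand(Float32, 8, 8))
B = Reactant.to_rarray(rand(Float32, 8, 8))
C = Reactant.to_rarray(zeros(Float32, 8, 8))

# The emitted StableHLO should contain a single dot_general-style op,
# because the matmul semantics survive all the way into MLIR.
@code_hlo matmul!(C, A, B)
```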
Unlike JAX, working in the Julia compiler itself lets Reactant generally work well with existing code. In particular, multiple dispatch means that you can keep using your existing `Base.mul!(c, a, b)` and do not need to rewrite it as `jax.numpy.matmul(a, b)`. Mutation is supported, existing CUDA kernels are supported, control flow works nicely (with ongoing work here), etc.
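As a small illustration of the mutation point, an in-place update written in ordinary Julia compiles as-is (a minimal sketch; `addsq!` is a hypothetical name):

```julia
using Reactant

# Ordinary mutating Julia code; no functional-style rewrite required.
addsq!(acc, x) = (acc .+= x .^ 2; acc)

acc = Reactant.to_rarray(zeros(Float32, 8))
x   = Reactant.to_rarray(rand(Float32, 8))

addsq_compiled = @compile addsq!(acc, x)
addsq_compiled(acc, x)
```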
That said, Reactant is not "JAX for Julia". JAX is a fantastic project that brings JIT compilation, autodiff, and partial evaluation to Python. Reactant aims to answer a completely separate question, "can we make it easier/better for people to write effective Julia code" (which we do by building a significantly more powerful compiler). It incorporates hundreds of tensor-level optimizations (e.g. transpose(matmul(A, B)) → matmul(transpose(B), transpose(A))); for a complete list see Enzyme-JAX/src/enzyme_ad/jax/TransformOps/TransformOps.td at main · EnzymeAD/Enzyme-JAX · GitHub. These optimizations have been shown to provide double-digit speedups for JAX code.

In contrast to JAX, Reactant aims to make it easy to take existing Julia code and have it automatically run on your favorite backend (CPU, GPU, TPU, etc.), including a distributed cluster thereof, without needing to rewrite your code. This includes automatically rewriting CUDA.jl kernels to effectively run faster on GPUs and even CPUs (see Reactant.jl/test/integration/cuda.jl at b506bfcf77ee68e60ccbbb649089f65f69bd709c · EnzymeAD/Reactant.jl · GitHub and https://dl.acm.org/doi/pdf/10.1145/3572848.3577475 for more information). Reactant gets rid of unnecessary allocations for temporaries and fuses loops together.

Reactant also enables you to use Julia code inside other languages (see Exporting Lux Models to Jax (via EnzymeJAX & Reactant) | Lux.jl Docs) and makes it easy to use other languages from Julia (Reactant.jl/test/integration/python.jl at main · EnzymeAD/Reactant.jl · GitHub), all while preserving nice things like autodiff, optimizations, and multi-device execution. These additional optimizations and usability improvements make autodiff like EnzymeMLIR much faster and more effective (which is one reason we started the project).
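To make the transpose/matmul rewrite concrete (a sketch only; whether the transpose actually gets folded depends on the optimization passes that run):

```julia
using Reactant

# Algebraically, transpose(A * B) == transpose(B) * transpose(A),
# so the standalone transpose can be folded into the matmul itself.
f(A, B) = transpose(A * B)

A = Reactant.to_rarray(rand(Float32, 4, 8))
B = Reactant.to_rarray(rand(Float32, 8, 4))

# Inspect the optimized IR to see whether a separate transpose survives.
@code_hlo f(A, B)
```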
I strongly disagree with your take on the two-language problem here. If anything, Reactant is trying to help Julia with the two-language problem, by making it so that you can write as much code in Julia as possible and still be able to run it effectively. Many of the problems that Reactant solves (type instabilities, burdensome allocations/GC time, etc.) are quite hard, and presently the best solutions we have for them are tools that try to find where they happen (GitHub - MilesCranmer/DispatchDoctor.jl: The dispatch doctor prescribes type stability, for type instabilities; GitHub - JuliaLang/AllocCheck.jl, for allocations; etc.) and then force the user to significantly rewrite their code into a contorted subset of Julia that avoids these issues.
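To illustrate the "find it, then rewrite it yourself" workflow those tools impose (a minimal sketch, assuming DispatchDoctor's `@stable` macro; the function is a toy example):

```julia
using DispatchDoctor: @stable

# @stable errors when the annotated function is called type-unstably,
# telling you where to go rewrite your code by hand.
@stable function unstable_relu(x)
    x > 0 ? x : 0   # Float64-or-Int result => type instability
end

unstable_relu(1.5)  # throws a TypeInstabilityError
```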
Instead, Reactant solves this problem by building a compiler, one that aims to let you keep writing your favorite Julia code.
Of course the inner Reactant compiler isn't pure Julia (though if you look at GitHub - EnzymeAD/Reactant.jl: Optimize Julia Functions With MLIR and XLA for High-Performance Execution on CPU, GPU, TPU and more., GitHub's language statistics round to saying most of our code is actually pure Julia); it goes through C++ because core compiler libraries like MLIR/XLA/LLVM are written in C++. The Julia compiler is written in C++ for the very same reason (julia/src/cgutils.cpp at 99fd5d9a92190e826bc462d5739e7be948a3bf44 · JuliaLang/julia · GitHub). Julia's matrix libraries call Fortran code (libopenblas). The CUDA compiler is written in C++, and CUDA.jl provides the bindings that let you use it from Julia. I could go on…
The point of the two-language problem is that users have to write code in multiple languages to be effective. You are welcome to try using Julia without ever touching the core utilities that let users avoid the two-language problem (like BLAS, CUDA.jl, etc.), but that would likely mean you stop using Julia altogether (especially since the Julia compiler itself wouldn't be usable).
In short, Reactant tries to make it such that you can write your favorite pure Julia code, like perhaps

```julia
function foo(model, x, y)
    neural_net(model, mul(x, ocean_predict(y)))
end
```
and have it automatically run fast/effectively on your laptop and computing cluster (thanks to compiler-based linear algebra optimizations, parallel optimizations, device optimizations, autodiff optimizations, etc).
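In that spirit, running it might look something like this (a sketch only; `neural_net`, `ocean_predict`, `mul`, and `make_model` are the hypothetical pieces from above, and I'm assuming the `Reactant.to_rarray`/`@compile` workflow):

```julia
using Reactant

model = make_model()  # hypothetical: whatever your framework builds
x = Reactant.to_rarray(rand(Float32, 128, 128))
y = Reactant.to_rarray(rand(Float32, 128, 128))

# One compile call; Reactant handles device placement, loop fusion,
# temporary-allocation removal, etc. across CPU/GPU/TPU backends.
foo_compiled = @compile foo(model, x, y)
foo_compiled(model, x, y)
```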