Bend: a new GPU-native language

In contrast, to be clear, we plan to maintain and grow Reactant.jl.

It’s still early and we’re experimenting, but I do expect it to be the one that grows into a mature library.


Yeah, though it’s not the first pass at this. There have been a few other projects for MLIR integration, such as

This one seems to be more reasonably scoped though. I do expect someone will take another pass at Julia-wide MLIR in the future, but a down-scoped version is a much better idea for the shorter term.

But:

It presently operates as a tracing system based off of Cassette

I presume that will change to the abstract interpreter at some point? Tapir.jl has made the transition, and it seems to be going quite well now.


Yes, indeed there are plenty of other MLIR packages (I, for example, assisted @vchuravy with brutus). Reactant is not meant to be the one MLIR package to end all MLIR experimentation (just as there are plenty of other packages in Julia that touch LLVM).

However, it has a very specific goal: providing a (somewhat more limited in scope) compilation function that generates a fast executable. This addresses precisely the lack of a fusing compiler within Julia, something numerous fields need but which ML folks often falsely attribute to a limitation of AD systems. An AD system’s job is to compute derivatives, not to fuse kernels so they run as fast as possible. The latter is what Reactant does, and it can be used without derivatives (but it complements AD, hence the similar name).
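To illustrate what a fusing compiler does, independently of AD, here is a sketch by analogy using JAX’s `jit` (this is not Reactant’s API; it just shows the concept: tracing a whole function and compiling it so elementwise operations fuse into one kernel rather than materializing intermediate arrays):

```python
# Sketch by analogy with jax.jit, NOT Reactant's actual API.
# jit traces f into a graph and hands it to XLA, which can fuse the
# elementwise ops (sin, *, +) and the reduction into a single kernel.
import jax
import jax.numpy as jnp

def f(x):
    # Unfused, each of these ops would materialize an intermediate array.
    return jnp.sum(jnp.sin(x) * 2.0 + 1.0)

f_jit = jax.jit(f)   # compile the whole expression as one fused program

x = jnp.ones(4)
result = float(f_jit(x))   # same value as f(x), but from the fused kernel
```

Note that nothing here involves derivatives; the fusion is purely a compilation concern, which is the point being made above. A gradient transform can then be layered on top of the same compiled function.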

The internals are intentionally WIP. For reference, the first versions of Enzyme.jl used Cassette; now we use the abstract interpreter. What Reactant will do internally is likewise a work in progress.


Even if most of those projects are mere experiments that never result in a production-ready product, I still see value in including them on a website. For example, it might help readers discover which projects are more mature or more likely to become production-ready.