[ANN] Announcing Torch.jl

I am excited to share some new development in the Julia ML stack. We are announcing Torch.jl, which wraps PyTorch's existing CUDA kernels and makes them available from Julia. Torch.jl also exposes the kernels through integration with Flux.jl and Zygote.jl (for differentiation), so it requires minimal handling on the user side: most operations can be called like any other Julia function. Moving a model to Torch works the same way as moving a model to the GPU with gpu(model): via torch(model).
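To make that concrete, here is a minimal sketch of the workflow; the ResNet model from Metalhead and moving the input array with torch(...) are assumptions for illustration, not necessarily the exact API:

```julia
using Flux, Metalhead   # Metalhead is assumed here just to provide a sample model
using Torch: torch      # the mover mentioned above, analogous to Flux's gpu

model = ResNet()        # an ordinary Flux model on the CPU
tmodel = torch(model)   # weights now live in Torch (CUDA) tensors

# Assuming inputs can be moved the same way, the forward pass is unchanged:
x = torch(rand(Float32, 224, 224, 3, 1))
y = tmodel(x)
```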

We have also written a blog post about it, which goes into more depth and discusses its intended use and development.

We are excited to see folks use it and to see more development in the area! Please file issues on the issue tracker for cases that aren’t covered yet; we would also love to hear about any features you’d like to see!

Cheers,
@dhairyagandhi96, @MikeInnes


Wow, nice! I have been waiting for something like this and will try it out! :blush: But there is an elephant in the room: have you benchmarked it against PyTorch in Python?


Excited to try this out… just need 3D support (5D tensors :grin: )

Can you please let me use Torch.jl without assuming a CUDA-enabled device is present?


That requires building against the CPU binaries and lazily loading the artifact, which in itself is pretty simple to do, but it means more work to switch out the CUDA-capable binaries when you need them. It’s basically a matter of copying this build script and including a CPU-only target, roughly as sketched below.
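As a rough illustration, a hypothetical BinaryBuilder recipe with a CPU-only target might look like this; the name, source hash, CMake flags, and platform list are placeholders, not the actual build script:

```julia
# Hypothetical sketch of a CPU-only libtorch build recipe.
# All names, the source hash, and the flags are placeholders.
using BinaryBuilder

name = "TorchCPU"
version = v"1.4.0"
sources = [GitSource("https://github.com/pytorch/pytorch.git",
                     "0000000000000000000000000000000000000000")]  # placeholder hash

# -DUSE_CUDA=OFF is the key difference from the CUDA-capable build.
script = raw"""
cd pytorch
cmake -B build -DUSE_CUDA=OFF -DCMAKE_INSTALL_PREFIX=${prefix}
cmake --build build --target install
"""

platforms = [Platform("x86_64", "linux")]           # CPU-only target
products = [LibraryProduct("libtorch", :libtorch)]
dependencies = Dependency[]

build_tarballs(ARGS, name, version, sources, script, platforms, products, dependencies)
```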


Are you planning to add it to Torch.jl?

So, how does it compare to Knet.jl and Flux.jl in terms of capabilities and speed?

It’s a fairly easy change. I’d very much appreciate contributions back to the project, but I’m tied up until JuliaCon, after which I can take a look.


This doesn’t replace Flux or Knet; rather, in its current form it makes the core kernels available, with a tie-in to support the NNlib interface.
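For instance, here is a minimal sketch of what that tie-in enables; treating torch(...) on plain arrays and the NNlib dispatch shown here as assumptions for illustration:

```julia
using NNlib
using Torch: torch

# Assuming torch(...) also moves plain arrays, NNlib operations on the
# resulting tensors dispatch to the wrapped Torch kernels rather than
# the generic Julia fallbacks:
x = torch(rand(Float32, 28, 28, 1, 1))   # WHCN image batch
w = torch(rand(Float32, 3, 3, 1, 8))     # 3x3 filters, 1 -> 8 channels
y = NNlib.conv(x, w)                     # convolution via the Torch kernel
```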

For a deeper dive, I recommend checking out the blog post linked in the OP, where the capabilities and performance are compared in more detail.

This is exciting. Is anything like it planned for TensorFlow as well?


It already exists: https://github.com/malmaud/TensorFlow.jl, but it’s not being updated. Its maintainers prefer Flux.jl (or Flax.py).

This is pretty exciting. Will this be considerably less “experimental” than Flux.jl?
Not to be critical of Flux, but in my experience, about one in five times I needed it for a project, some essential functionality was broken in a way that made it unusable. When it works, it’s glorious! But having Torch around would be pretty nice for those rough spots, so we don’t all have to flock to Python between releases/fixes/etc.


TensorFlow.jl wraps TensorFlow, but it doesn’t provide Flux/Zygote integration, does it?

That is correct (to my knowledge). The closest equivalent would be https://github.com/FluxML/XLA.jl, which uses the same acceleration backend as TensorFlow and JAX. I imagine (hope) that there will be some more focused design work around normalizing and polishing all of these Flux backends after JuliaCon.


Very nice work, thanks! I would be interested in trying Torch.jl with my AlphaZero.jl package. :slight_smile:

However, it seems to me that Torch.jl is not compatible with Flux 0.11 yet. I filed an issue about this.