I was wondering how simple it could be to implement a straightforward reverse-mode AD for machine learning and quantum physics in Julia, so I tried to write my own last weekend.
The answer: in only about 200~400 lines (including docstrings), you can get an AD with the basic functions defined in DiffRules and broadcast support, with reasonable performance (it may actually be the fastest at the moment XD).
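To give a flavor of what such a small tracker-based reverse-mode AD looks like, here is a stripped-down sketch (illustrative only, not YAAD's actual code, and hand-coding two rules instead of pulling them from DiffRules): a tracked value remembers the inputs and local derivatives that produced it, and backward! walks that record in reverse, accumulating gradients.

# A tracked value: its value, a gradient slot, and (input, local derivative) pairs
struct Variable{T}
    value::T
    grad::Base.RefValue{T}
    parents::Vector{Any}
end
Variable(x) = Variable(x, Ref(zero(x)), Any[])

track(value, parents...) = Variable(value, Ref(zero(value)), Any[parents...])

# two hand-written rules; a real package would generate these from DiffRules
Base.:*(a::Variable, b::Variable) = track(a.value * b.value, (a, b.value), (b, a.value))
Base.sin(a::Variable)             = track(sin(a.value), (a, cos(a.value)))

# reverse pass: accumulate the seed times each local derivative into the parents
function backward!(y::Variable, seed = one(y.value))
    y.grad[] += seed
    for (x, d) in y.parents
        backward!(x, seed * d)
    end
end

a, b = Variable(2.0), Variable(3.0)
y = sin(a * b)
backward!(y)
a.grad[]   # == 3.0 * cos(6.0)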
How long do you think it will take to reach feature parity with torch? Also, based on this experience, can you see yourself switching to Julia entirely at some point? What about practitioners who are not comfortable with writing their own AD? Which machine learning package would you advise them to learn: PyTorch or some Julia package?
It will depend on how many contributors there are; I won't implement things I don't need at the moment (like multi-GPU support for conv, and RNN units). PyTorch is actually quite similar to Chainer, but PyTorch has a more active community.
I'm a physicist working on machine learning, which means the machine learning community sometimes doesn't care about what we need and we have to implement it ourselves, e.g. complex number support. It can take a very long time (years, even) to merge new things into the main tree of a large project like PyTorch. That is painful and not actually necessary for researchers; check the issue and progress here:
I'm still working on this because of our legacy dependencies in the lab. However, I have personally switched to Julia entirely, along with my lab-mates and collaborators, and I have built several packages that I need for research:
And more in private.
Some of them (e.g. QuHamiltonian.jl) are not really possible to implement in Python (or it would be quite hard, even with the ast module). Most of the Python packages we write at the moment are just for the public and for non-experts who are not interested in coding at all. Binding C++ to Python is a nightmare compared to Julia, even with pybind11.
Furthermore, Julia has the best support for tensor networks among all languages. Python only has an ITensor wrapper, but Julia has TensorOperations.jl and another upcoming package by Jutho, and the author of ITensor is also writing a Julia version of it.
I'm actually writing this AD package because of a practical problem: a recent model implemented in PyTorch is too slow, and I cannot use a batched trace in PyTorch because it doesn't have one (I don't want to write a C++ extension, and even if I wrote one it could still be slower because of the Python wrapper). I cannot just use a for loop in Python either, because that is slow, and the lattice libraries in Python are slow as well. In just a few days I made my own model about 10x faster (on CPU) compared to PyTorch, with almost the same syntax.
And I cannot just port Jutho's TensorOperations.jl entirely to Python: the metaprogramming it relies on does not look possible to implement in Python (or you would end up creating your own DSL beneath Python, like many other Python packages do).
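For reference, the batched trace mentioned here is the kind of operation that is only a few lines of plain Julia and runs at native loop speed (the function name is mine, not from any package):

# trace of each square slice A[:, :, k] of a stack of matrices
function batched_tr(A::AbstractArray{T,3}) where {T}
    n, m, b = size(A)
    @assert n == m "each slice must be square"
    out = zeros(T, b)
    @inbounds for k in 1:b, i in 1:n
        out[k] += A[i, i, k]
    end
    return out
end

batched_tr(rand(4, 4, 32))   # 32 traces, no interpreter overhead in the loop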
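For readers who have not seen it, TensorOperations.jl's @tensor macro is the kind of metaprogramming being referred to: an index expression is rewritten into contraction code when the macro expands, for example:

using TensorOperations

A, B = rand(4, 4), rand(4, 4)
@tensor C[i, j] := A[i, k] * B[k, j]            # matrix product from index notation
@tensor D[i, l] := A[i, j] * B[j, k] * C[k, l]  # a chained contraction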
If you are really a "practitioner" in this situation, I believe you will choose Julia (if you don't want to write your own AD, adding a custom operator in Zygote.jl is faster than in PyTorch on CPU, and you can use mine in the future) rather than write your own PyTorch C++ extension with its C++ interface.
Being a practitioner is not a reason to be lazy: if there is a package that is good enough, use it; if there is not, write one.
I don't suggest "learning" ANY machine learning package, because what you should learn is the algorithms and theory. Most machine learning packages are designed to be intuitive enough that, as long as you are familiar with the theory, you will know how to use them. If you don't know how to use one, it is either because you don't actually know the theory or how the algorithm works, or because the package author should change their interface.
But if someone says "I don't want to learn any theory, I just want to call a function and run a new deep learning algorithm with it", they will probably need a time machine and a black hole computer.
I can use Flux.jl/Knet.jl/PyTorch/TensorFlow or just write from scratch, whichever approach turns out to be the fastest. I don't actually see much difference between those packages; people are building similar interfaces with different implementations now.
Thanks for the interesting and well-written blog post. AD implementation is indeed a good use case to demo the power of a language.
However, I wonder if we have too much of a good thing: at the moment there are at least 5 reverse-mode AD libraries in various stages of being experimental, minimally maintained while waiting for an experimental one to be usable in production, targeting specific use cases/communities (e.g. ML), or catching up to 0.7/1.0 (these are not exclusive). AFAIK all of them have outstanding bugs that require some compromise or extra work on the part of the user. As you have shown, Julia makes it easy to write a minimal AD library; the difficult part is maintaining one that is robust and performant across various use cases.
An outsider looking at the reverse-mode AD landscape in Julia could wonder what compels people to write yet another library for this, and whether this reflects a problem with the language.
I don't know too much about AD, but was wondering which of these ambitious points are addressable (or already addressed?) within the Julia ecosystem, or whether they are even on the radar. Or, is there anything the Swift folks aim to be able to do that would pose problems for Julia?
(if this is too off-topic, I can make a separate thread)
Yes, there are a lot of AD packages under development in Julia. And as you said, what I wanted was a simple and straightforward AD for practical use.
However, I don't think this reflects a problem with the language; rather, it reflects an advantage: while struggling with learning how to add a new operator in C++, one can write a fast and usable AD with only a few lines of Julia.
The other AD packages in Julia have different goals, e.g. Zygote aims to provide source-to-source AD by extending the compiler, which is definitely better and harder to implement.
The older but more mature one, AutoGrad, which was ported from its Python version, is as slow as its Python version since it is not written in a very Julian way (e.g. some of the types are not parametric, which the performance tips advise against), and you have to generate derivatives with a primitive macro, which I personally do not prefer. But it is being refactored. (See the sketch below for what the parametric-types point means in practice.)
And some other attempts tried to implement source-to-source AD with macros, or with overloading (actually multiple dispatch via traits).
But yes, as you said, we probably want something usable and easy for the user, and that's probably what YAAD is going to do next. Because it is tiny, it won't be hard to fix future bugs. Because it makes use of multiple dispatch, one can easily extend it by defining only one or two methods. And because it tries to mimic the interface of a popular package, PyTorch (it can't mimic v0.4's tensor though), it won't be hard to switch to; while waiting for more promising packages like Zygote and Capstan, we might just use it first.
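For readers unfamiliar with the performance tip being referenced, here is a minimal illustration (not AutoGrad's actual types): an untyped field forces dynamic dispatch and boxing on every access, while a parametric field lets the compiler specialize.

struct SlowNode
    value              # untyped field, i.e. Any
end

struct FastNode{T}
    value::T           # concrete for each T, so the compiler can specialize
end

sumvalues(nodes) = sum(n.value for n in nodes)

slow = [SlowNode(rand()) for _ in 1:10^5]
fast = [FastNode(rand()) for _ in 1:10^5]
# sumvalues(slow) boxes and dynamically dispatches on every element;
# sumvalues(fast) is type-stable.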
I believe that not only for AD but also in other areas, one can use Julia to implement something tiny but usable.
I'll write an ANN later, when I add more operators to YAAD.
This was discussed on Slack. I don't actually think we need to change the language to adapt to AD.
The Cassette-based ADs in Julia will directly extend the compiler to be able to mark and differentiate expressions without tweaking the language. Making AD first-class might bite those who don't actually need it.
Do you know if there is a simple "pros and cons" page somewhere? Otherwise the risk is that, while Julia is a dream language for AD for the expert developer, since there are many options and it's very easy to roll your own, the less technically knowledgeable developer ends up a bit confused about what to use for their library code.
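For a flavor of that mechanism, here is a tiny Cassette example (not an AD, just the call-interception hook such an AD would build on):

using Cassette

Cassette.@context TraceCtx

# print every call made while running under the context; a Cassette-based AD
# could use the same hook to record operations without changing the language
Cassette.prehook(::TraceCtx, f, args...) = println("calling ", f, args)

Cassette.overdub(TraceCtx(), x -> sin(x) + 2x, 1.0)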
It is always nice to have options. However, the AD landscape is very fragmented. Only ForwardDiff.jl is robust; with the reverse-mode packages it is very easy to run into problems using seemingly trivial code.
To be fair, doing AD while preserving the generic code is very difficult, as it highlights all the problems of result type computation etc.
Autograd has a lot of untyped stuff in its graph-building types and it has a macro for defining primitives. This makes it work on pretty much everything, but the untyped parts reduce efficiency. However, on something like a neural net where the matrix multiplies take all of the time, the small amount of dynamic dispatch won't matter and it's a good choice. On functions with a lot of small subfunction calls, this will be a non-trivial performance difference.
ReverseDiff and Flux are very similar. They are the reverse mode of ForwardDiff and use types to essentially trace a computation graph. Mike and Jarrett can duke it out, but to me it seems ReverseDiff applies in more places, though that has changed over time. YAAD also uses tracker types, with a very simple implementation, but it is probably more similar to these two than not. However, tracker types only trace the branch that the values take. So while you can compile the computation graph and keep it with ReverseDiff, repeated applications of the gradient are only correct if what it traced is appropriate for the new value. This is a pretty fundamental limitation if you want to build a graph once and spend time optimizing/compiling it for re-use (see the sketch below).
Zygote is source-to-source, and its paper describes how it can get a performance advantage by allowing all branches to compile and optimize at once. Capstan works via Cassette, which is essentially a form of source-to-source transformation using Cassette overdubbing. Again, Mike and Jarrett are working on things that are probably more similar than different here, for similar reasons but for different applications. Zygote already exists, though, while Cassette/Capstan is still more of a near-future thing. However, while tracker-based systems are easy to control (you just define a new dispatch on the type that says what the derivative is), I am not sure how customizable source-to-source is. Here's a challenge problem that can give it an issue:
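A hedged sketch of that limitation, assuming ReverseDiff's tape API (GradientTape, compile, gradient!): the compiled tape replays whatever branch was recorded, so gradients at inputs that take the other branch are silently wrong.

using ReverseDiff

# f branches on its input; a recorded tape only remembers the branch taken
f(x) = x[1] > 0 ? sum(abs2, x) : sum(x)

tape  = ReverseDiff.GradientTape(f, [1.0, 2.0])      # records the x[1] > 0 branch
ctape = ReverseDiff.compile(tape)

ReverseDiff.gradient!(zeros(2), ctape, [ 3.0, 2.0])  # fine: same branch as recorded
ReverseDiff.gradient!(zeros(2), ctape, [-3.0, 2.0])  # silently wrong: replays the recorded branch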
const x = Vector{Float64}(undef, 4)   # preallocated cache array
function f!(z, y, x)
    x .= 2 .* y
    z .= sin.(x)
    nothing
end
g!(z, y) = f!(z, y, x)
# Challenge: autodiff the map y -> z computed by g!(z, y)
I am not sure how Zygote would know how to handle the cache array, while with a type you can create a dual cache system that works with type-based AD via multiple dispatch. Capstan might be able to handle this because it's using Cassette, which is essentially a flexible and overridable source-to-source engine, but that remains to be seen.
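A rough sketch of the "dual cache via dispatch" idea (illustrative only; real implementations also preallocate the dual-typed buffer instead of allocating it on the fly): dispatch on the element type of the input so dual/tracked numbers never get written into the Float64 buffer.

struct PlainCache{A}
    buf::A
end

# fast path: plain floats reuse the preallocated buffer
get_tmp(c::PlainCache, y::AbstractArray{<:AbstractFloat}) = c.buf
# AD pass: the eltype carries derivative info, so hand back a matching buffer
get_tmp(c::PlainCache, y::AbstractArray) = similar(y)

const cache = PlainCache(zeros(4))

function f!(z, y)
    x = get_tmp(cache, y)   # dispatch picks the right buffer for y's eltype
    x .= 2 .* y
    z .= sin.(x)
    nothing
end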
So for now, Zygote.jl is awesome if it works for your code. If not, ReverseDiff and Flux are good to go to, and ReverseDiff can store/compile the computation graph if appropriate to get similar speeds to Zygote, but you have to be careful with the application. Autograd you can easily get working on pretty much anything, but there's a dispatch cost associated with it. Capstan and Cassette might be a beautiful system in the near future for both AD and customizing the source transformation, but it's not here yet and I'm not sure most Julia users will actually know how to write overdubs.
For now, I always find ForwardDiff and ReverseDiff robust enough to send through big codes (entire differential equation solvers) with ease, and am waiting to see what happens with source-to-source.
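As a toy illustration of the kind of thing that works, here is ForwardDiff pushed through a hand-written Euler loop (a small stand-in for a real solver, not taken from any package):

using ForwardDiff

# forward Euler on dx/dt = -p*x, differentiated with respect to the parameter p
function final_state(p; x0 = 1.0, dt = 0.01, steps = 100)
    x = x0 * one(eltype(p))    # promote the state so it can hold Dual numbers
    for _ in 1:steps
        x += dt * (-p[1] * x)
    end
    return x
end

ForwardDiff.gradient(final_state, [0.5])   # d(final state)/dp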
I thought Flux was using ReverseDiff directly, which is not actually true; that's why I didn't test it in the post (ReverseDiff is not actively maintained anymore). Thanks @MikeInnes for mentioning Flux's AD here. I would be happy to help if we could make a similarly separated AD package in the future.
Yes, I implemented YAAD in a way very similar to Flux's AD, mixed with similar conventions from PyTorch (both backend and frontend, which may make it easier for PyTorch users to adapt). I'm just hoping we can have a separate package for Flux's AD now!
While waiting for Capstan and Zygote, we need something to use at the moment.
I tried something similar a while back (here) but stopped because the language was changing in each version. Are you willing to accept pull requests? It would be great if you could create some more issues for the plans you have in mind in the github repository.
I'm still considering what to do with YAAD.jl, since Flux's Tracker actually looks more optimized. I will probably either mimic Flux's Tracker (e.g. move it out of Flux), or keep using an extremely simple AD with reasonable performance (not the fastest now, haha).
I'll file some issues under YAAD.jl's repo later, along with an ANN here on Discourse. And I'm definitely happy to accept PRs!
Several packages that need Flux's AD (e.g. Omega and Turing) just depend on Flux directly. There isn't much downside to that since there's not much else to Flux anyway (basically just some layer definitions), so the advantage of splitting it out is relatively minimal.
That said, we will likely split it out once Zygote and Capstan are ready to be used as the default AD. But this is not going to be the case for a few months at the least.