Not quite yet: getting there. Forward mode works on Optim though.
Has there been any progress on this? I'm planning to work on a similar thing soon.
The docs have optimisation examples: Unconstrained optimization · ImplicitDifferentiation.jl.
Thanks for the link, that looks quite interesting, but I'm not sure that's what I need. When there is no "implicitness", I'd expect there would just be forward and backward rules directly for calls to some form of optimize? Is there something like that already done? Or is there a non-obvious problem with that? My optimization problem is, very roughly, \arg\min_x f(x, \arg\min_y g(x, y)).
Sorry for being late to the party, but the idea was that with ImplicitDifferentiation.jl, you can compute gradients of x \longmapsto \arg\min_y g(x, y), which can then be fed to the outer optimization loop.
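For concreteness, here is a rough sketch of that setup. The `g` and `f` below are toy stand-ins for your actual inner and outer objectives, and it uses the two-argument `ImplicitFunction(forward, conditions)` constructor from the linked docs example; the exact signature may differ between package versions.

```julia
# Rough sketch of f(x, argmin_y g(x, y)) with a differentiable inner solve.
using ImplicitDifferentiation, Optim, Zygote

g(x, y) = sum(abs2, y .- x)              # hypothetical inner objective
f(x, y) = sum(abs2, x) + sum(abs2, y)    # hypothetical outer objective

# Inner solve: y(x) = argmin_y g(x, y), computed with Optim.
function forward(x)
    res = Optim.optimize(y -> g(x, y), zero(x), LBFGS())
    return Optim.minimizer(res)
end

# Optimality condition ∇_y g(x, y) = 0 at the inner minimizer
# (written analytically here; for a general g, compute it with AD).
conditions(x, y) = 2 .* (y .- x)

# Makes x ↦ argmin_y g(x, y) differentiable via the implicit function theorem.
implicit = ImplicitFunction(forward, conditions)

# Outer objective and its gradient, which now flows through the inner argmin.
F(x) = f(x, implicit(x))
∇F!(G, x) = copyto!(G, Zygote.gradient(F, x)[1])

# Feed F and ∇F! to the outer optimizer.
x0 = rand(3)
result = Optim.optimize(F, ∇F!, x0, LBFGS())
```

The outer gradient here is taken with Zygote, but forward mode through `implicit` should also be possible if that suits the outer optimizer better.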