A hacky guide to using automatic differentiation in nested optimization problems

Not quite yet: getting there. Forward mode works on Optim though.


Has there been any progress on this? I'm planning to work on a similar thing soon.

Check out GitHub - gdalle/ImplicitDifferentiation.jl: Automatic differentiation of implicit functions


The docs have optimisation examples: Unconstrained optimization · ImplicitDifferentiation.jl.

Thanks for the link, that looks quite interesting, but I'm not sure it's what I need. When there is no "implicitness", wouldn't there just be forward and backward rules directly for calls to some form of optimize? Is there something like that already done? Or is there a non-obvious problem with that? My optimization problem is, very roughly, \arg\min_{x} f(x, \arg\min_{y} g(x, y)).
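
In code, the shape of the problem is very roughly the following (toy objectives standing in for the real f and g). Composing two optimize calls directly like this leaves the outer solver without gradient information about the inner argmin, which is why I'm asking about AD rules for optimize:

```julia
using Optim

# toy stand-ins for the real objectives
g(x, y) = sum(abs2, y .- x)            # inner objective, minimized over y
f(x, y) = sum(abs2, y) + sum(abs2, x)  # outer objective

# inner solve: y*(x) = argmin_y g(x, y)
inner(x) = Optim.minimizer(optimize(y -> g(x, y), zeros(length(x)), LBFGS(); autodiff = :forward))

# outer solve: argmin_x f(x, y*(x)); derivative-free here because d(inner)/dx is not available to AD
outer = optimize(x -> f(x, inner(x)), ones(2), NelderMead())
```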

Sorry for being late to the party, but the idea was that with ImplicitDifferentiation.jl, you can compute gradients of x \longmapsto \arg\min_y g(x, y), which can then be fed to the outer optimization loop.
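
A minimal sketch of that pattern, with a toy g whose inner solution is just y*(x) = x so the result is easy to check. This assumes the two-argument ImplicitFunction(forward, conditions) API where implicit(x) returns the solution y; the exact signatures have changed a bit across package versions, so check the current docs:

```julia
using ImplicitDifferentiation, Optim, ForwardDiff

# toy inner and outer objectives
g(x, y) = sum(abs2, y .- x)       # inner: minimized over y, so y*(x) = x
f(x, y) = sum(abs2, y) + sum(x)   # outer: evaluated at (x, y*(x))

# black-box inner solve: y*(x) = argmin_y g(x, y)
function forward(x)
    res = optimize(y -> g(x, y), zeros(length(x)), LBFGS(); autodiff = :forward)
    return Optim.minimizer(res)
end

# optimality conditions: ∇_y g(x, y) = 0 at the inner optimum (written out by hand here)
conditions(x, y) = 2 .* (y .- x)

# differentiable wrapper: gradients come from the implicit function theorem,
# not from differentiating through the solver iterations
implicit = ImplicitFunction(forward, conditions)

# outer objective as a plain, AD-friendly function of x
F(x) = f(x, implicit(x))

ForwardDiff.gradient(F, [1.0, 2.0])   # == 2 .* x .+ 1 here, since y*(x) = x
```

The key point is that forward can be any black-box inner solver (Optim here); the derivatives of the inner argmin come from its optimality conditions, and the resulting gradient of F is what you hand to the outer optimizer.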
