A hacky guide to using automatic differentiation in nested optimization problems

Not quite yet: getting there. Forward mode works on Optim though.
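
A minimal sketch of what this can look like: pushing a ForwardDiff dual number through an inner `Optim.optimize` call and reading the derivative of the minimiser off the dual part. The toy objective `g` and the wrapper `inner_min` are illustrative names (not from this thread), and the sketch assumes the chosen solver runs with dual-valued numbers throughout.

```julia
using Optim, ForwardDiff

# Toy inner objective; its minimiser has the closed form y*(x) = x^2 / 1.1,
# so the derivative below can be checked by hand: dy*/dx = 2x / 1.1.
g(x, y) = (y - x^2)^2 + 0.1 * y^2

# Solve the inner problem for a given x. Supplying the y-gradient by hand
# keeps only one layer of dual numbers inside the solver.
function inner_min(x::T) where {T<:Real}
    obj(y)      = g(x, y[1])
    grad!(G, y) = (G[1] = 2 * (y[1] - x^2) + 0.2 * y[1]; G)
    res = optimize(obj, grad!, [zero(T)], BFGS())
    return Optim.minimizer(res)[1]
end

# Brute-force forward mode through the whole BFGS iteration.
ForwardDiff.derivative(inner_min, 1.5)   # ≈ 2 * 1.5 / 1.1 ≈ 2.727
```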

Has there been any progress on this? I’m planning to work on a similar thing soon.

Check out [gdalle/ImplicitDifferentiation.jl](https://github.com/gdalle/ImplicitDifferentiation.jl): automatic differentiation of implicit functions.

The docs have optimisation examples: Unconstrained optimization · ImplicitDifferentiation.jl.

Thanks for the link, that looks quite interesting, but I’m not sure it’s what I need. When there is no “implicitness”, I’d expect there to simply be forward- and reverse-mode rules defined directly for calls to some form of `optimize`. Is there something like that already, or is there a non-obvious problem with it? My optimization problem is, very roughly, \arg\min_{x} f(x, \arg\min_{y} g(x, y)).
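
For a problem with that nested structure, one hand-rolled sketch (using only ForwardDiff, so nothing is assumed about ImplicitDifferentiation.jl's API) is to apply the implicit function theorem at the inner optimum: since ∂g/∂y = 0 at y*(x), we get dy*/dx = -(∂²g/∂y²)⁻¹ ∂²g/∂y∂x, and the outer gradient is ∂f/∂x + ∂f/∂y · dy*/dx. The toy objectives and the names `ystar` and `dh_dx` below are illustrative.

```julia
using Optim, ForwardDiff

f(x, y) = (x - 1)^2 + y^2          # outer objective (scalars for brevity)
g(x, y) = (y - x^2)^2 + 0.1 * y^2  # inner objective, minimised over y

# Inner solve y*(x) = argmin_y g(x, y), treated as a black box.
function ystar(x)
    res = optimize(y -> g(x, y[1]), [0.0], BFGS(); autodiff = :forward)
    return Optim.minimizer(res)[1]
end

# d/dx of h(x) = f(x, y*(x)) via the implicit function theorem:
# at the inner optimum ∂g/∂y = 0, hence dy*/dx = -(∂²g/∂y²)⁻¹ ∂²g/∂y∂x.
function dh_dx(x)
    y = ystar(x)
    gy(a, b) = ForwardDiff.derivative(t -> g(a, t), b)   # ∂g/∂y
    gyy = ForwardDiff.derivative(t -> gy(x, t), y)        # ∂²g/∂y²
    gyx = ForwardDiff.derivative(t -> gy(t, y), x)        # ∂²g/∂y∂x
    dydx = -gyx / gyy
    fx = ForwardDiff.derivative(t -> f(t, y), x)          # ∂f/∂x
    fy = ForwardDiff.derivative(t -> f(x, t), y)          # ∂f/∂y
    return fx + fy * dydx
end

# Outer problem argmin_x f(x, y*(x)), fed the hand-built gradient.
outer = optimize(x -> f(x[1], ystar(x[1])),
                 (G, x) -> (G[1] = dh_dx(x[1])),
                 [0.5], BFGS())
```

Roughly speaking, ImplicitDifferentiation.jl automates this step: you give it the inner solver and the optimality conditions, and it produces the corresponding AD rules, so the outer optimizer does not need a hand-built gradient.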