Finding a saddle point

@cortner Note that this problem has an explicit saddle separation: `min_x max_y f(x, y)`. That simplifies things a lot, and might actually be a good basis for thinking about robust algorithms.
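
A minimal sketch of what exploiting that structure could look like: plain gradient descent-ascent on a toy objective (the objective `f`, step size `alpha`, and helper names below are illustrative assumptions, not anything from this thread):

```julia
# Toy saddle problem: f has a saddle at the origin (assumed for illustration)
f(x, y)  = x^2 - y^2 + x * y
fx(x, y) = 2x + y        # ∂f/∂x
fy(x, y) = x - 2y        # ∂f/∂y

# Descend in the min variable x, ascend in the max variable y
function descent_ascent(x, y; alpha = 0.1, iters = 10_000, tol = 1e-10)
    for _ in 1:iters
        gx, gy = fx(x, y), fy(x, y)
        hypot(gx, gy) < tol && break
        x -= alpha * gx    # gradient descent step in x
        y += alpha * gy    # gradient ascent step in y
    end
    return x, y
end

descent_ascent(1.0, -0.5)   # → (≈0.0, ≈0.0), the saddle
```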

@amrods Essentially yes. But you don't know how to compute derivatives of F(x) = ‖∇f(x)‖, so you don't want to do a Newton-like iteration. Also, Newton doesn't distinguish minima, saddles, or maxima; it just sees critical points (so if you start near a minimum it will converge to that). OTOH, the flipped-gradient iteration converges locally to saddles (and not to minima or maxima), so Anderson acceleration on top of it (https://github.com/JuliaNLSolvers/NLsolve.jl#anderson-acceleration) may be a good bet (although it can still end up converging to any critical point).
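
To make that concrete, here's a hedged sketch of Anderson-accelerated flipped-gradient iteration with NLsolve.jl. Per the README, `method = :anderson` accelerates the fixed-point iteration zₙ₊₁ = zₙ + beta·F(zₙ), so passing the flipped gradient as the residual F makes the underlying map exactly the descent-ascent step above (the toy objective and parameter values are again assumptions for illustration):

```julia
using NLsolve

# Flipped gradient of f(x, y) = x² - y² + x*y: negate the x-component so
# the fixed-point map descends in x and ascends in y.
function flipped_grad!(F, z)
    x, y = z
    F[1] = -(2x + y)     # -∂f/∂x : descend in x
    F[2] = x - 2y        #  ∂f/∂y : ascend in y
end

res = nlsolve(flipped_grad!, [1.0, -0.5];
              method = :anderson, m = 5, beta = 0.1)
res.zero   # ≈ [0.0, 0.0], the saddle
```

As noted above, the accelerated iteration only looks for zeros of the residual, so it can still land on a minimum or maximum rather than a saddle.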