Using NonlinearSolve with autodiff

Hello,

I was trying to test NonlinearSolve.jl with automatic differentiation backends like Enzyme.jl or Zygote.jl, but the following simple code fails:

```julia
using NonlinearSolve, Enzyme

f(x, p) = p.a * x^2 - p.b
prob = NonlinearProblem(f, 0.0, (a = 1.0, b = 2.0))
solve(prob, NewtonRaphson(; autodiff = AutoEnzyme()))
```

It returns zero, which is not a correct solution (the roots are x = ±√2). Using the default NewtonRaphson() works.

I think it might be due to using u0 = 0.0 instead of u0 = [0.0]. I can’t tell from the docs whether the scalar form is supported in the general NonlinearProblem (but you can use StaticArrays to speed things up).

x = 0 looks to be a local minimum. The difference could be finite differencing vs. exact differentiation.

Enzyme and Zygote perform exact differentiation, so I don’t think that’s it.

Note that f'(x₀) = 0 here, so you might want to use another initial guess. Even using the in-place version, I find that NewtonRaphson stalls on this problem.
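For illustration, here is a sketch with a nonzero initial guess (using the OP's problem but the default autodiff, so Enzyme isn't needed):

```julia
using NonlinearSolve

f(x, p) = p.a * x^2 - p.b

# At u0 = 0.0 the derivative f'(0) = 2 * p.a * 0 = 0, so the Newton step
# u - f(u) / f'(u) divides by zero and the iteration can't move off the
# initial guess. Starting where f' is nonzero converges normally:
prob = NonlinearProblem(f, 1.0, (a = 1.0, b = 2.0))
sol = solve(prob, NewtonRaphson())
sol.u  # ≈ sqrt(2)
```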

All solvers support a scalar form with the out-of-place version. It uses the same code path as static arrays.

Though generally, for a scalar nonlinear solve, SimpleNonlinearSolve.jl will be a little faster.
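As a sketch of that suggestion, SimpleNewtonRaphson from SimpleNonlinearSolve.jl handles the scalar problem directly (again starting away from the flat point):

```julia
using SimpleNonlinearSolve

f(x, p) = p.a * x^2 - p.b
prob = NonlinearProblem(f, 1.0, (a = 1.0, b = 2.0))
sol = solve(prob, SimpleNewtonRaphson())
sol.u  # ≈ sqrt(2)
```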


Why? Since they are exact, they get stuck at the zero derivative in the Newton-Raphson iteration. On the other hand, if finite differences are used (except perhaps central differences), the estimated derivative can be nonzero.
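A quick numeric illustration of that point (the step size h here is just an arbitrary choice): a forward difference at x = 0 gives a small but nonzero slope estimate, while a central difference gives exactly zero, matching the true derivative.

```julia
f(x) = x^2 - 2

h = 1e-6
forward = (f(0 + h) - f(0)) / h          # = h, small but nonzero
central = (f(0 + h) - f(0 - h)) / (2h)   # = 0 exactly, since f is even
exact   = 2 * 0.0                        # true derivative at x = 0
```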

Oh right, my bad. Usually, when people say “finite differentiation vs exact” they either mean “autodiff is performing finite differentiation” (which is wrong) or “exact differentiation would be better” (which in this case happens to also be wrong due to the flat gradient) but you meant neither of those ^^

My comparison was about the OP getting two different results, one from exact derivatives via autodiff and the other without autodiff, but in retrospect I phrased my last sentence the other way around.


Thanks! That was just a wrong initial condition, together with the use of “exact” differentiation.