Specifying analytical Jacobian in NonlinearProblem à la Optim.only_fg!

When minimizing a costly function f(x) using Optim.jl and providing the gradient ∇f(x), one can use Optim.only_fg! to compute the gradient and the function value within the same function, avoiding repeated computations, like this:

function fg!(F, G, x)
  # do common computations here
  # ...
  if G !== nothing
    # code to compute gradient here
    # writing the result to the vector G
    # G .= ...
  end
  if F !== nothing
    # value = ... code to compute objective function
    return value
  end
end

Optim.optimize(Optim.only_fg!(fg!), [0., 0.], Optim.BFGS())
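For concreteness, a minimal filled-in version of this pattern (using the Rosenbrock function purely as an illustration) could be:

using Optim

# Rosenbrock function; the value and the gradient share the
# intermediate terms a = 1 - x[1] and b = x[2] - x[1]^2.
function rosenbrock_fg!(F, G, x)
    a = 1.0 - x[1]
    b = x[2] - x[1]^2
    if G !== nothing
        G[1] = -2.0 * a - 400.0 * x[1] * b
        G[2] = 200.0 * b
    end
    if F !== nothing
        return a^2 + 100.0 * b^2
    end
    return nothing
end

Optim.optimize(Optim.only_fg!(rosenbrock_fg!), [0.0, 0.0], Optim.BFGS())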

See the only_fg! section of the Optim.jl documentation for more info.

Now, when solving a system of nonlinear equations f(x) = 0 using NonlinearSolve.jl, if we want to supply the Jacobian, we specify the problem as:

function f(u, p)
    # computations here
    return fval
end

function df(u, p)
    # computations here
    return J   # the Jacobian matrix of f at u
end

fn = NonlinearFunction(f, jac = df)
prob = NonlinearProblem(fn, u0, p)
sol = solve(prob, NewtonRaphson(; concrete_jac = true))
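For concreteness, a minimal runnable instance of this setup (the system u .^ 2 .- p = 0 and its diagonal Jacobian are just stand-ins) could be:

using NonlinearSolve, LinearAlgebra

f(u, p) = u .^ 2 .- p                  # toy residual: solve u.^2 = p
df(u, p) = Matrix(Diagonal(2 .* u))    # its analytical Jacobian

fn = NonlinearFunction(f; jac = df)
prob = NonlinearProblem(fn, [1.0, 2.0], 2.0)
sol = solve(prob, NewtonRaphson(; concrete_jac = true))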

Now, there are settings (just as in the Optim case) where f and df share common costly computations, so computing the Jacobian in df(u, p) ends up repeating work already done in f(u, p). Is there an equivalent of Optim.only_fg! that applies to NonlinearSolve.jl's NonlinearFunction?

Thanks in advance!

I ran into a similar problem using Optimization.jl and its OptimizationFunction interface.

I solved it with memoization. In essence, I defined a cache struct that I pass via the parameter argument to the NonlinearFunction instance. When df is called, both f and df are computed and the results are stored; when f is subsequently called with the same arguments (checked for equality), the cached answer is returned immediately.

I’m on my phone, but a rough sketch of the idea is below; I can elaborate with a fuller example later.
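A minimal sketch of this memoization pattern, assuming out-of-place f/df, a hypothetical FJCache type, and the toy system from above, might look like:

using NonlinearSolve, LinearAlgebra

# Hypothetical cache type: stores the last input and both results.
mutable struct FJCache
    u::Vector{Float64}      # input the cache is valid for
    fval::Vector{Float64}   # cached residual f(u)
    jval::Matrix{Float64}   # cached Jacobian df(u)
    valid::Bool
end

# The shared, costly work happens once here, filling the cache.
function compute!(cache::FJCache, u, p)
    # ... common costly computations would go here ...
    cache.fval = u .^ 2 .- p                 # toy residual
    cache.jval = Matrix(Diagonal(2 .* u))    # toy Jacobian
    cache.u = copy(u)
    cache.valid = true
    return nothing
end

# Both entry points hit the cache; only a cache miss triggers compute!.
fresh(cache, u) = cache.valid && cache.u == u

function f_cached(u, p)
    fresh(p.cache, u) || compute!(p.cache, u, p.realp)
    return p.cache.fval
end

function df_cached(u, p)
    fresh(p.cache, u) || compute!(p.cache, u, p.realp)
    return p.cache.jval
end

# Bundle the actual parameters and the cache into `p`.
u0 = [1.0, 2.0]
params = (realp = 2.0, cache = FJCache(zero(u0), zero(u0), zeros(2, 2), false))

fn = NonlinearFunction(f_cached; jac = df_cached)
prob = NonlinearProblem(fn, u0, params)
sol = solve(prob, NewtonRaphson(; concrete_jac = true))

The equality check on u matters because the solver may request the residual and the Jacobian at the same point in either order; a stale cache would silently return wrong values.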