`solve(prob,Optim.BFGS())` requires `using OptimizationOptimJL`

I am using the following two lines and OptimizationOptimJL is installed.

using OptimizationOptimJL;
optimized_sol_nn2 = Optimization.solve(optprob, optimized_sol_nn.minimizer, Optim.BFGS(initial_stepnorm=0.001), maxiters = 1000, allow_f_increases = true)
#optimized_sol_nn2 = DiffEqFlux.sciml_train(p -> cost_adjoint_nn(p, 0.08), optimized_sol_nn.minimizer, BFGS(initial_stepnorm=0.001), maxiters = 1000, allow_f_increases = true)

However, I am getting the following error:

Optimization algorithm not found. Either the chosen algorithm is not a valid solver choice for the OptimizationProblem, or the Optimization solver library is not loaded. Make sure that you have loaded an appropriate Optimization.jl solver library, for example, solve(prob,Optim.BFGS()) requires using OptimizationOptimJL and solve(prob,Adam()) requires using OptimizationOptimisers. For more information, see the Optimization.jl documentation: https://docs.sciml.ai/Optimization/stable/. Chosen Optimizer:

I checked the documentation, and I am using the correct package, so I did not understand why Optim.BFGS(initial_stepnorm=0.001) was giving this error.

What’s optimized_sol_nn.minimizer? The algorithm is supposed to be the second argument, and I don’t know what that second thing in your call is supposed to be.

If you check the tutorials, you’ll see the solver is always the second argument, not the third. So it’s interpreting whatever you put as the second argument as the solver, and telling you it does not make sense.


You are right, I found my mistake.
I changed the code as follows:

optprob2 = remake(optprob,u0 = optimized_sol_nn.u)
using OptimizationOptimJL;
optimized_sol_nn2 = Optimization.solve(optprob2, Optim.BFGS(initial_stepnorm=0.001), maxiters = 1000, allow_f_increases = false)
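For anyone hitting the same error, here is a minimal self-contained sketch of the two-stage pattern from this thread on a toy problem. The Rosenbrock function, the NelderMead first stage, and the starting values are all placeholders standing in for the original problem; only the structure (remake with the first solution, then BFGS as the second positional argument of solve) is the point.

```julia
using Optimization, OptimizationOptimJL
import ForwardDiff  # needed so AutoForwardDiff() can build gradients for BFGS

# Standard Rosenbrock test function; minimum at (1, 1).
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2

optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
optprob = OptimizationProblem(optf, [0.0, 0.0], [1.0, 100.0])

# First stage: a coarse solve (stand-in for the first optimizer in the thread).
sol1 = solve(optprob, Optim.NelderMead())

# Second stage: restart from the first result via `remake`, then refine with
# BFGS. Note the algorithm is the *second* positional argument of `solve`;
# everything else goes in as keyword arguments.
optprob2 = remake(optprob; u0 = sol1.u)
sol2 = solve(optprob2, Optim.BFGS(initial_stepnorm = 0.001);
             maxiters = 1000, allow_f_increases = false)
```

Passing the warm-start point through remake (rather than as an extra positional argument to solve) is what keeps the solver in the slot Optimization.jl expects.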