Parameter estimation: DefaultOptimizationCache not defined (DiffEqFlux)

Hi,

I’m trying to learn the basics of parameter estimation with DiffEqFlux by adapting the first example from https://docs.juliahub.com/DiffEqFlux/BdO4p/1.9.0/ to my own problem. However, I keep getting the following error: UndefVarError: DefaultOptimizationCache not defined. It occurs even when I copy and paste the example verbatim. The optimization seems to run fine until the last iteration, when the error appears. The example is a couple of years old, but it’s useful for my purposes. I’d greatly appreciate it if anyone could tell me what’s going on here and how to fix it. Thanks.

For convenience, here is the code from the example:

using DifferentialEquations, Flux, Optim, DiffEqFlux
using Plots # needed for plot(); the docs example assumes it is loaded

function lotka_volterra(du, u, p, t)
    x, y = u
    α, β, δ, γ = p
    du[1] = dx = α*x - β*x*y
    du[2] = dy = -δ*y + γ*x*y
end

u0 = [1.0, 1.0]
tspan = (0.0, 10.0)
p = [1.5, 1.0, 3.0, 1.0]
prob = ODEProblem(lotka_volterra, u0, tspan, p)
sol = solve(prob, Tsit5())

plot(sol)

function predict_adjoint(p) # Our 1-layer neural network
    Array(concrete_solve(prob, Tsit5(), u0, p, saveat = 0.0:0.1:10.0))
end

function loss_adjoint(p)
    prediction = predict_adjoint(p)
    loss = sum(abs2, x - 1 for x in prediction)
    loss, prediction
end

cb = function (p, l, pred) # callback function to observe training
    display(l)
    # using `remake` to re-create our `prob` with current parameters `p`
    display(plot(solve(remake(prob, p = p), Tsit5(), saveat = 0.0:0.1:10.0), ylim = (0, 6)))
    return false # returning false continues the optimization; returning true halts it
end

# Display the ODE with the initial parameter values.
cb(p, loss_adjoint(p)...)

res = DiffEqFlux.sciml_train(loss_adjoint, p, BFGS(initial_stepnorm = 0.0001), cb = cb)

plot(solve(remake(prob, p = res.minimizer), Tsit5(), saveat = 0.0:0.1:10.0), ylim = (0, 6))

And the stacktrace:
ERROR: UndefVarError: DefaultOptimizationCache not defined
Stacktrace:
 [1] ___solve(prob::OptimizationProblem{true, OptimizationFunction{true, Optimization.AutoZygote, DiffEqFlux.var"#121#128"{typeof(loss_adjoint)}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Vector{Float64}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, opt::BFGS{LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Nothing, Float64, Flat}, data::Base.Iterators.Cycle{Tuple{Optimization.NullData}}; callback::Function, maxiters::Nothing, maxtime::Nothing, abstol::Nothing, reltol::Nothing, progress::Bool, kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
   @ OptimizationOptimJL C:\Users\alexa\.julia\packages\OptimizationOptimJL\WqQOV\src\OptimizationOptimJL.jl:168
 [2] #__solve#2
   @ C:\Users\alexa\.julia\packages\OptimizationOptimJL\WqQOV\src\OptimizationOptimJL.jl:67 [inlined]
 [3] #solve#486
   @ C:\Users\alexa\.julia\packages\SciMLBase\kTnku\src\solve.jl:89 [inlined]
 [4] sciml_train(::typeof(loss_adjoint), ::Vector{Float64}, ::BFGS{LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}, Nothing, Float64, Flat}, ::Nothing; lower_bounds::Nothing, upper_bounds::Nothing, cb::Function, callback::Function, maxiters::Nothing, kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
   @ DiffEqFlux C:\Users\alexa\.julia\packages\DiffEqFlux\2IJEZ\src\train.jl:45
 [5] top-level scope
   @ c:\Users\alexa\Documents\Research\Chapter 2\Production_fit.jl:191

Which version of DiffEqFlux are you using? The docs you linked are for version 1.9.0, while the current release is 2.0.0 (with quite a few minor versions in between; the release just before 2.0 was 1.54.0…).
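
If you’re not sure, the package manager will tell you; a quick check using only the standard Pkg API:

using Pkg
Pkg.status("DiffEqFlux")   # prints the installed version of DiffEqFlux
Pkg.status("SciMLBase")    # and of SciMLBase, where OptimizationProblem is defined

(Typing st DiffEqFlux at the pkg> prompt does the same.)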

I’m using v1.53.0

So you should also use the docs for that version, which don’t have the example you posted. Maybe one of the tutorials in the current docs covers what you are after?

https://docs.sciml.ai/DiffEqFlux/stable/examples/neural_ode/

I just tried the example from your link, but I get the same DefaultOptimizationCache not defined error when I run result_neuralode = Optimization.solve(...). The callback plots show that the fitting process is working just fine; it just never gets as far as storing the solution in result_neuralode. There must be something wrong on my end, but I’m not sure what it could be. I’ve updated all the packages, restarted Julia, etc., to no avail.
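
To be concrete about what I’m running, this is roughly the shape of the failing call; a minimal sketch using the Lotka–Volterra problem from my first post in place of the tutorial’s neural ODE (the loss function here is my own stand-in, and it assumes DiffEqFlux is still loaded so that Zygote knows how to differentiate through solve):

using Optimization, OptimizationOptimJL, Zygote

# Stand-in loss over the ODE parameters; the tutorial trains a neural ODE,
# but the Optimization.solve pattern is the same.
function loss(p, _)
    pred = Array(solve(remake(prob, p = p), Tsit5(), saveat = 0.1))
    sum(abs2, pred .- 1)
end

optf = OptimizationFunction(loss, Optimization.AutoZygote())
optprob = OptimizationProblem(optf, p)

# This is the step that throws DefaultOptimizationCache not defined for me.
result_neuralode = Optimization.solve(optprob, BFGS(initial_stepnorm = 0.0001);
                                      callback = (p, l) -> (println(l); false))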

Update: I updated SciMLBase and it seems to have resolved the issue.
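
For anyone who lands here with the same error, the fix was just the standard Pkg update (nothing project-specific):

using Pkg
Pkg.update("SciMLBase")   # pulls in the newest SciMLBase the resolver allows
# restart Julia afterwards so the updated package is actually loaded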
