I am getting the following warning (which I did not get before):
┌ Warning: Reverse-Mode AD VJP choices all failed. Falling back to numerical VJPs
What generates this warning? Does it mean that the source code cannot be reverse-mode differentiated? How can this be debugged when using a UODE? Thanks.
As a result, the code has slowed to a crawl. Unfortunately, the code is too complex to show here, and if I simplify it, the warning will likely go away. A simple example that demonstrates when the warning happens might help me.
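For instance, is something like the following the kind of thing that triggers it? This is purely my own guess at a reproducer (not code from my project): the right-hand side writes into a plain `Float64` buffer closed over from global scope, which I would expect to defeat the ReverseDiff/Tracker/Enzyme tracers. Whether it emits exactly this warning presumably depends on package versions:

```julia
using OrdinaryDiffEq, SciMLSensitivity, Zygote

const cache = zeros(2)  # buffer closed over from global scope

function rhs!(du, u, p, t)
    cache .= p .* u   # writing tracked values into a plain Float64 array
    du .= cache       # should break ReverseDiff/Tracker, and the global
end                   # mutation should also trip up Enzyme

u0, p = [1.0, 2.0], [0.5, -0.3]
prob = ODEProblem(rhs!, u0, (0.0, 1.0), p)
loss(p) = sum(abs2, Array(solve(prob, Tsit5(); p = p, saveat = 0.1)))

# With no sensealg given, SciMLSensitivity auto-selects the adjoint and tests the
# reverse-mode VJP backends; if all of them fail, it warns and falls back to
# (slow) numerical VJPs.
Zygote.gradient(loss, p)
```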
For reference, the warning is emitted from the following location (there is no traceback into my code):
┌ Warning: FastChain is being deprecated in favor of Lux.jl. Lux.jl uses functions with explicit parameters f(u,p) like FastChain, but is fully featured and documented machine learning library. See the Lux.jl documentation for more details.
└ @ DiffEqFlux ~/.julia/packages/DiffEqFlux/jHIee/src/fast_layers.jl:9
┌ Warning: Reverse-Mode AD VJP choices all failed. Falling back to numerical VJPs
└ @ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/6YVpi/src/concrete_solve.jl:115
callback_static, iter: 1, loss: 67.55578186035156
┌ Warning: Reverse-Mode AD VJP choices all failed. Falling back to numerical VJPs
└ @ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/6YVpi/src/concrete_solve.jl:115
callback_static, iter: 2, loss: 44.50825299438902
┌ Warning: Reverse-Mode AD VJP choices all failed. Falling back to numerical VJPs
└ @ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/6YVpi/src/concrete_solve.jl:115
The warning is likely triggered by the following code section (I doubt this will help you, but just in case …):
```julia
for k = range(start_at, length(protocols), step=1)
    # Loss and callback closures over the first k protocols (the trajectories 1:8)
    loss_fn(θ) = loss_univ([θ; p_system], protocols[1:k], tspans[1:k], σ0, σ12_all, k, dct)
    cb_fun(θ, loss) = callback(θ, loss, protocols[1:k], tspans[1:k], σ0, σ12_all, k, iter)

    adtype = Optimization.AutoZygote()
    optf = Optimization.OptimizationFunction((x, p) -> loss_fn(x), adtype)
    optprob = Optimization.OptimizationProblem(optf, θi)
    parameter_res = Optimization.solve(optprob, Optimisers.AdamW(); callback=cb_fun,
                                       sensealg=ReverseDiffVJP(true),
                                       allow_f_increases=false, maxiters=dct[:maxiters])
    θi = parameter_res.u  # warm-start the next k from the current optimum
    push!(out_files, "tbnn_k=" * string(k))
    @save out_files[end] θi
end
```
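One more thing I noticed while preparing this post: I pass `sensealg = ReverseDiffVJP(true)` to `Optimization.solve`, but as far as I understand, `ReverseDiffVJP` is a VJP choice rather than a sensealg, and it belongs inside an adjoint passed to the ODE `solve` call inside `loss_univ` (the kwarg on `Optimization.solve` may simply be ignored). Forcing it there should also turn the silent fallback into a hard error with a usable stack trace. A sketch, with `model!`, `u0`, and the loss as placeholder stand-ins for whatever `loss_univ` actually does:

```julia
using OrdinaryDiffEq, SciMLSensitivity, Zygote

# Placeholder stand-ins for the ODE inside loss_univ.
model!(du, u, p, t) = (du .= p .* u)
u0, tspan = [1.0, 2.0], (0.0, 1.0)

function loss(p)
    prob = ODEProblem(model!, u0, tspan, p)
    # Forcing the VJP choice at the ODE solve, instead of letting SciMLSensitivity
    # auto-select, makes a failing ReverseDiff pass throw with a stack trace into
    # the offending right-hand side instead of silently falling back.
    sol = solve(prob, Tsit5();
                sensealg = InterpolatingAdjoint(autojacvec = ReverseDiffVJP(true)),
                saveat = 0.1)
    sum(abs2, Array(sol))
end

Zygote.gradient(loss, [0.5, -0.3])
```

Is that the right way to pin down which operation in the UODE right-hand side is breaking the reverse-mode tracers?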
Any help is appreciated.