Hello,

I am trying to train a UODE with an extended version of the Lotka-Volterra equations (6 equations, i.e. three predator-prey pairs) in SciML. However, I keep getting the following warning and MethodError:
┌ Warning: EnzymeVJP tried and failed in the automated AD choice algorithm with the following error. (To turn off this printing, add `verbose = false` to the `solve` call)
└ @ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/2kCYE/src/concrete_solve.jl:21
MethodError: no method matching asprogress(::Base.CoreLogging.LogLevel, ::String, ::Module, ::Symbol, ::Symbol, ::String, ::Int64)
The applicable method may be too new: running in world age 33721, while current world is 33929.
Closest candidates are:
asprogress(::Any, ::Any, ::Any, ::Any, ::Any, ::Any, ::Any; progress, kwargs...) (method too new to be called from this world context.)
@ ProgressLogging ~/.julia/packages/ProgressLogging/6KXlp/src/ProgressLogging.jl:156
asprogress(::Any, ::ProgressLogging.Progress, ::Any...; _...) (method too new to be called from this world context.)
@ ProgressLogging ~/.julia/packages/ProgressLogging/6KXlp/src/ProgressLogging.jl:155
asprogress(::Any, ::ProgressLogging.ProgressString, ::Any...; _...) (method too new to be called from this world context.)
@ ProgressLogging ~/.julia/packages/ProgressLogging/6KXlp/src/ProgressLogging.jl:200
The code I am using is similar to the one available in the tutorials:
using Pkg;
Pkg.activate(".");
Pkg.instantiate()
# SciML Tools
using OrdinaryDiffEq, ModelingToolkit, DataDrivenDiffEq, SciMLSensitivity, DataDrivenSparse
using Optimization, OptimizationOptimisers, OptimizationOptimJL
# Standard Libraries
using LinearAlgebra, Statistics
# External Libraries
using ComponentArrays, Lux, Zygote, Plots, StableRNGs, HDF5, DataFrames
gr()
# Set a random seed for reproducible behaviour
rng = StableRNG(1111);
function MY_ODE!(du, u, p, t)
    # Three Lotka-Volterra pairs: (u1,u2), (u3,u4), (u5,u6),
    # with 4 parameters per pair (12 in total)
    c1, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11, c12 = p
    du[1] = c1 * u[1] - c2 * u[2] * u[1]
    du[2] = c3 * u[1] * u[2] - c4 * u[2]
    du[3] = c5 * u[3] - c6 * u[4] * u[3]
    du[4] = c7 * u[3] * u[4] - c8 * u[4]
    du[5] = c9 * u[5] - c10 * u[6] * u[5]
    du[6] = c11 * u[5] * u[6] - c12 * u[6]
end
# Define the experimental parameters
nPoints = 300;
tspan = (0.0, 40.0);
u0 = 5.0f0 * rand(rng, 6)
p_ = (1.3, 0.9, 0.8, 1.8,   # pair 1
      1.5, 1.1, 1.0, 2.0,   # pair 2
      1.1, 0.7, 0.6, 1.6);  # pair 3 -- 12 parameters, 4 per pair
t_true = LinRange(tspan[1], tspan[2], nPoints)
prob = ODEProblem(MY_ODE!, u0, tspan, p_);
solution = solve(prob, Vern7(), abstol = 1e-12, reltol = 1e-12, saveat = t_true);
plt = plot(solution, idxs = (0, 1))  # idxs replaces the deprecated vars keyword
plot!(plt, solution, idxs = (0, 2))
plot!(plt, solution, idxs = (0, 3))
plot!(plt, solution, idxs = (0, 4))
plot!(plt, solution, idxs = (0, 5))
plot!(plt, solution, idxs = (0, 6))
xlabel!(plt, "t[s]")
# Grab solution
X_true = Array(solution)
# Define the NN
nHidden = 24
# Multilayer feed-forward network
U = Lux.Chain(
    Lux.Dense(6, nHidden, Lux.tanh),
    Lux.Dense(nHidden, nHidden, Lux.tanh),
    Lux.Dense(nHidden, 6)
)
# Get the initial parameters and state variables of the model
p, st = Lux.setup(rng, U)
# Define the hybrid model
function ude_dynamics!(du, u, p, t, p_true)
    û = U(u, p, st)[1]  # network prediction of the interaction terms
    c1, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11, c12 = p_true
    du[1] = c1 * u[1] - c2 * û[1]
    du[2] = c3 * û[2] - c4 * u[2]
    du[3] = c5 * u[3] - c6 * û[3]
    du[4] = c7 * û[4] - c8 * u[4]
    du[5] = c9 * u[5] - c10 * û[5]
    du[6] = c11 * û[6] - c12 * u[6]
end
# Closure with the known parameters
nn_dynamics!(du, u, p, t) = ude_dynamics!(du, u, p, t, p_)
# Define the problem
prob_nn = ODEProblem(nn_dynamics!, u0, tspan, p)
function predict(θ, X = X_true[:, 1], T = t_true)
    _prob = remake(prob_nn, u0 = X, tspan = (T[1], T[end]), p = θ)
    Array(solve(_prob, TRBDF2(), saveat = T,
                abstol = 1e-3, reltol = 1e-3))
end
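From reading the SciMLSensitivity docs, I believe I can pin the sensitivity algorithm explicitly so that the automated choice never tries EnzymeVJP, along these lines (untested on my side; the InterpolatingAdjoint/ReverseDiffVJP combination is just my reading of the docs, not something taken from the tutorial):

# Variant of predict with an explicit sensealg, so the automated
# AD choice should never attempt EnzymeVJP (my guess from the docs)
function predict_noenzyme(θ, X = X_true[:, 1], T = t_true)
    _prob = remake(prob_nn, u0 = X, tspan = (T[1], T[end]), p = θ)
    Array(solve(_prob, TRBDF2(), saveat = T, abstol = 1e-3, reltol = 1e-3,
                sensealg = InterpolatingAdjoint(autojacvec = ReverseDiffVJP(true))))
end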
function loss(θ)
    X̂ = predict(θ)
    mean(abs2, X_true .- X̂)
end
losses = Float64[]
callback = function (p, l)
    push!(losses, l)
    if length(losses) % 50 == 0
        println("Current loss after $(length(losses)) iterations: $(losses[end])")
    end
    return false
end
# Training
adtype = Optimization.AutoZygote()
optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
optprob = Optimization.OptimizationProblem(optf, ComponentVector{Float64}(p))
res1 = Optimization.solve(optprob, ADAM(), callback = callback, maxiters = 100)
println("Training loss after $(length(losses)) iterations: $(losses[end])")
To summarize my questions:
1. Can I tell the automated AD choice not to use Enzyme? (Is the explicit sensealg sketch above the intended way?)
2. What does the "running in world age 33721, while current world is 33929" warning mean? I went to package mode and updated the packages with "up", but nothing changed.
3. How can I resolve the MethodError: no method matching asprogress(::Base.CoreLogging.LogLevel, ::String, ::Module, ::Symbol, ::Symbol, ::String, ::Int64)?
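If it helps with diagnosing the world-age issue, I can post my exact environment, e.g. the output of:

using Pkg, InteractiveUtils
Pkg.status()    # exact package versions in the active environment
versioninfo()   # Julia version and OS details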
Best Regards