Issue with UDE repository - LV_Scenario 1

Hi,

When trying to perform the optimisation with Adam:

adtype = Optimization.AutoZygote()
optf = Optimization.OptimizationFunction((x,p)->loss(x), adtype)
optprob = Optimization.OptimizationProblem(optf, ComponentVector{Float64}(p))
res1 = Optimization.solve(optprob, ADAM(0.1), callback=callback, maxiters = 200)

optprob returns the error

ERROR: "No matching function wrapper was found!"

and then highlights optf with the same error. These are the two functions

function predict(θ, X = Xₙ[:,1], T = t)
    _prob = remake(prob_nn, u0 = X, tspan = (T[1], T[end]), p = θ)
    Array(solve(_prob, Vern7(), saveat = T,
                abstol=1e-6, reltol=1e-6,
                sensealg = ForwardDiffSensitivity()
                ))
end

# Simple L2 loss
function loss(θ)
    X̂ = predict(θ)
    sum(abs2, Xₙ .- X̂)
end

that produce the same error about no matching function wrapper.

I am only going through the code line by line, so I have not changed anything apart from running the code locally.

Link to the example is here

Not sure if it is an issue with my Pkgs; sorry I can’t offer anything more helpful! 🙂

Cheers,
Harry

Make the ODEProblem an ODEProblem{true, SciMLBase.FullSpecialize}.
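For context: the default problem construction wraps the in-place ODE function in a function wrapper, and the dual-number types introduced by ForwardDiffSensitivity have no matching wrapper, hence the error; full specialization skips the wrapping. Concretely, using the prob_nn definition from the script below:

prob_nn = ODEProblem{true, SciMLBase.FullSpecialize}(nn_dynamics!, Xₙ[:, 1], tspan, p)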

If you can help make an MWE, I can add auto-unwrapping, which is the real plan here.

Of course, the code below generates the error mentioned above.

using OrdinaryDiffEq
using ModelingToolkit
using DataDrivenDiffEq
using LinearAlgebra, ComponentArrays
using Optimization, OptimizationOptimisers, OptimizationOptimJL # OptimizationOptimisers for ADAM and OptimizationOptimJL for BFGS
using Optim
using DiffEqSensitivity
using Lux
using Plots
gr()
using JLD2, FileIO
using Statistics
# ComponentArrays - array blocks that can be accessed through named components
using Random
rng = Random.default_rng()
Random.seed!(1234)
# Set a random seed for reproducible behaviour

## Data generation
function lotka!(du, u, p, t)
    α, β, γ, δ = p
    du[1] = α*u[1] - β*u[2]*u[1]
    du[2] = γ*u[1]*u[2]  - δ*u[2]
end
# u[1] is the prey: it grows due to food (α) but is killed off through interaction with
# u[2], the predator, which grows by eating the prey and has a linear loss term for natural death or emigration.
# Without the interaction terms we would have exponential growth and exponential decay for prey and predator respectively.
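# In equation form, the system above is
#   du₁/dt = α*u₁ - β*u₁*u₂   (prey)
#   du₂/dt = γ*u₁*u₂ - δ*u₂   (predator)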

# Define the experimental parameter
tspan = (0.0,3.0)
u0 = [0.44249296,4.6280594]
p_ = [1.3, 0.9, 0.8, 1.8]
prob = ODEProblem(lotka!, u0, tspan, p_)
solution = solve(prob, Vern7(), abstol=1e-12, reltol=1e-12, saveat = 0.1)

# Ideal data
X = Array(solution) # contains prey and predator solution
t = solution.t # time vector of the solution
DX = Array(solution(t, Val{1})) # Extract the derivatives of the solution at each time step  

# Add noise in terms of the mean
x̄ = mean(X, dims = 2) # Mean of the prey predator data set
noise_magnitude = 5e-3 # Noise level 
Xₙ = X .+ (noise_magnitude*x̄) .* randn(eltype(X), size(X)) # Shift the prey predator data by this noise

# Plot of Real & Noisy data
plot(solution, alpha = 0.75, color = :black, label = ["True Data" nothing])
scatter!(t, transpose(Xₙ), color = :red, label = ["Noisy Data" nothing])

## Define the network
# Gaussian RBF as activation
rbf(x) = exp.(-(x.^2))

# Multilayer FeedForward
U = Lux.Chain(
    Lux.Dense(2, 5, rbf), Lux.Dense(5, 5, rbf), Lux.Dense(5, 5, rbf), Lux.Dense(5, 2)
)

# Get the initial parameters and state variables of the model
p, st = Lux.setup(rng, U)

# Define the hybrid model
function ude_dynamics!(du, u, p, t, p_true)
    û = U(u, p, st)[1] # Network prediction
    du[1] = p_true[1]*u[1] + û[1]
    du[2] = -p_true[4]*u[2] + û[2]
end

# Closure with the known parameter
nn_dynamics!(du, u, p, t) = ude_dynamics!(du, u, p, t, p_)
# include true parameters for the linear terms 
# Define the problem
prob_nn = ODEProblem(nn_dynamics!, Xₙ[:, 1], tspan, p)
# First column of the noisy data is used as the initial condition

## Function to train the network

# Define a predictor
function predict(θ, X = Xₙ[:,1], T = t)
    _prob = remake(prob_nn, u0 = X, tspan = (T[1], T[end]), p = θ)
    Array(solve(_prob, Vern7(), saveat = T,
                abstol=1e-12, reltol=1e-12,
                sensealg = ForwardDiffSensitivity()
                ))
end

# Simple L2 loss
function loss(θ)
    X̂ = predict(θ)
    sum(abs2, Xₙ .- X̂)
end

# Container to track the losses
losses = Float64[]

callback(θ, args...) = begin
    l = loss(θ) # Equivalent L2 loss
    push!(losses, l)
    if length(losses) % 50 == 0
        println("Current loss after $(length(losses)) iterations: $(losses[end])")
    end
    false
end

## Training

# First train with ADAM for better convergence -> move the parameters into a
# favourable starting position for BFGS
adtype = Optimization.AutoZygote()
optf = Optimization.OptimizationFunction((x,p)->loss(x), adtype)
optprob = Optimization.OptimizationProblem(optf, ComponentVector{Float64}(p))
res1 = Optimization.solve(optprob, ADAM(0.01), callback=callback, maxiters = 200)
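
# For reference, the tutorial then refines the ADAM result with BFGS; a sketch of
# that step (not needed to reproduce the error), using Optim via OptimizationOptimJL:
optprob2 = Optimization.OptimizationProblem(optf, res1.u)
res2 = Optimization.solve(optprob2, Optim.BFGS(initial_stepnorm = 0.01), callback = callback, maxiters = 1000)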

Thanks,
Harry

And you tried the fix prob_nn = ODEProblem{true, SciMLBase.FullSpecialize}(nn_dynamics!, Xₙ[:, 1], tspan, p)?

Can you post the whole error message?

Sorry, I should have clarified: prob_nn = ODEProblem{true, SciMLBase.FullSpecialize}(nn_dynamics!, Xₙ[:, 1], tspan, p) did fix the problem.

Try this PR: More auto-promotion to FullSpecialization by ChrisRackauckas · Pull Request #741 · SciML/SciMLSensitivity.jl; it should fix it.
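To try an unreleased fix like this, one option (assuming the PR has been merged to the repository's default branch) is to install the package straight from GitHub:

using Pkg
Pkg.add(url = "https://github.com/SciML/SciMLSensitivity.jl", rev = "master")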

Yeah, now it works with the PR.

Perfect.

Thanks, patch released.