Use `sciml_train` with Optim optimizers. It has a type fix (see DiffEqFlux.jl/DiffEqFlux.jl at master · SciML/DiffEqFlux.jl on GitHub) with workarounds:
```julia
function sciml_train(loss, θ, opt = OptimizationPolyalgorithms.PolyOpt(), adtype = nothing, args...;
                     lower_bounds = nothing, upper_bounds = nothing, cb = nothing,
                     callback = (args...) -> (false),
                     maxiters = nothing, kwargs...)
    @warn "sciml_train is being deprecated in favor of direct usage of Optimization.jl. Please consult the Optimization.jl documentation for more details. Optimization.jl's PolyOpt solver is the polyalgorithm of sciml_train"
    if adtype === nothing
        if length(θ) < 50
            # Time a ForwardDiff gradient of the loss (Inf if it errors),
            # after one untimed call to exclude compilation
            fdtime = try
                ForwardDiff.gradient(x -> first(loss(x)), θ)
                @elapsed ForwardDiff.gradient(x -> first(loss(x)), θ)
            catch
                Inf
            end
            # The same timing experiment with Zygote
            zytime = try
                Zygote.gradient(x -> first(loss(x)), θ)
                @elapsed Zygote.gradient(x -> first(loss(x)), θ)
            catch
                Inf
            end
            # … truncated in the original post; the remainder selects the
            # faster AD backend and hands everything off to Optimization.jl
```
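As the deprecation warning says, the long-term replacement is calling Optimization.jl directly. Here's a minimal sketch of that pattern, assuming Optimization.jl, OptimizationPolyalgorithms.jl, and Zygote are installed (the loss and dimensions are stand-ins for illustration):

```julia
using Optimization, OptimizationPolyalgorithms, Zygote

loss(θ, p) = sum(abs2, θ)  # stand-in objective for illustration
θ0 = rand(10)

# PolyOpt() is the same polyalgorithm sciml_train used internally
optf = OptimizationFunction(loss, Optimization.AutoZygote())
prob = OptimizationProblem(optf, θ0)
sol = solve(prob, PolyOpt(); maxiters = 1000)
```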
to make this work. We’ve been using it in our physics-informed neural networks, for example in NeuralPDE.jl’s `NNODE`:
```julia
function NNODE(chain, opt, init_params = nothing;
               strategy = nothing,
               autodiff = false, batch = nothing, kwargs...)
    NNODE(chain, opt, init_params, autodiff, batch, strategy, kwargs)
end
```
"""
```julia
ODEPhi(chain::Lux.AbstractExplicitLayer, t, u0, st)
ODEPhi(chain::Flux.Chain, t, u0, nothing)
```
Internal constructor used for representing the ODE solution as a neural
network in a form that respects the initial condition, i.e.
`phi(t) = u0 + t*NN(t)`.
"""
It looks like you’re training some PINNs, so you might want to join the discussion in #diffeq-bridged on the SciML Slack: we’re starting up a project on automated “from symbolic” PINN training, and from your Discourse posts you seem interested.