DiffEqFlux tutorial example not running out-of-the-box

Hi, thanks for all the great work!

I’ve been trying to run this example: Neural Ordinary Differential Equations · DiffEqFlux.jl (I attach it at the bottom for future reference)

So I’ve installed DiffEqFlux at version 1.53.0, then the rest of the script’s dependencies, but the script still won’t run:

  1. `ComponentArray` is not part of the Lux namespace (sorry for the Python lingo) in the versions I have installed. Taking it from ComponentArrays.jl directly seems to work.
  2. I need to add `using OptimizationOptimisers` for the ADAM optimizer to work (see the sketch right after this list).
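
Concretely, these are the changes relative to the script attached below - a minimal sketch, assuming ComponentArrays.jl and OptimizationOptimisers.jl are the right places to take these from:

```julia
# Sketch of the two changes that made the script run for me:
using ComponentArrays          # 1. ComponentArray no longer reachable via Lux
using OptimizationOptimisers   # 2. provides ADAM for Optimization.solve

pinit = ComponentArray(p)      # instead of Lux.ComponentArray(p) in the script below

result_neuralode = Optimization.solve(optprob,
                                      ADAM(0.05),
                                      callback = callback,
                                      maxiters = 300)
```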

These are simple fixes, but am I maybe misunderstanding how version pinning works in Julia?

  • Should this example have worked but versions were pinned in the wrong way, or
  • are these typos in the docs, or
  • is it common in Julia that code moves around between different libraries, or are these just re-exports and the real code lives somewhere else?

(@v1.9) pkg> st
Status `~/.julia/environments/v1.9/Project.toml`
⌃ [aae7a2af] DiffEqFlux v1.53.0
  [0c46a032] DifferentialEquations v7.8.0
  [7073ff75] IJulia v1.24.2
⌅ [b2108857] Lux v0.4.58
⌃ [7f7a1694] Optimization v3.14.0
⌃ [36348300] OptimizationOptimJL v0.1.8
  [91a5bcdd] Plots v1.38.17
  [9a3f8284] Random

using Lux, DiffEqFlux, DifferentialEquations, Optimization, OptimizationOptimJL, Random, Plots

rng = Random.default_rng()
u0 = Float32[2.0; 0.0]
datasize = 30
tspan = (0.0f0, 1.5f0)
tsteps = range(tspan[1], tspan[2], length = datasize)

function trueODEfunc(du, u, p, t)
    true_A = [-0.1 2.0; -2.0 -0.1]
    du .= ((u.^3)'true_A)'
end

prob_trueode = ODEProblem(trueODEfunc, u0, tspan)
ode_data = Array(solve(prob_trueode, Tsit5(), saveat = tsteps))

dudt2 = Lux.Chain(x -> x.^3,
                  Lux.Dense(2, 50, tanh),
                  Lux.Dense(50, 2))
p, st = Lux.setup(rng, dudt2)
prob_neuralode = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps)

function predict_neuralode(p)
  Array(prob_neuralode(u0, p, st)[1])
end

function loss_neuralode(p)
    pred = predict_neuralode(p)
    loss = sum(abs2, ode_data .- pred)
    return loss, pred
end

# Do not plot by default for the documentation
# Users should change doplot=true to see the plots from the callback
callback = function (p, l, pred; doplot = false)
  println(l)
  # plot current prediction against data
  if doplot
    plt = scatter(tsteps, ode_data[1,:], label = "data")
    scatter!(plt, tsteps, pred[1,:], label = "prediction")
    display(plot(plt))
  end
  return false
end

pinit = Lux.ComponentArray(p)  # issue 1: errors here, ComponentArray isn't in Lux in my versions
callback(pinit, loss_neuralode(pinit)...; doplot=true)

# use Optimization.jl to solve the problem
adtype = Optimization.AutoZygote()

optf = Optimization.OptimizationFunction((x, p) -> loss_neuralode(x), adtype)
optprob = Optimization.OptimizationProblem(optf, pinit)

result_neuralode = Optimization.solve(optprob,
                                       ADAM(0.05),  # issue 2: needs OptimizationOptimisers loaded
                                       callback = callback,
                                       maxiters = 300)

optprob2 = remake(optprob,u0 = result_neuralode.u)

result_neuralode2 = Optimization.solve(optprob2,
                                        Optim.BFGS(initial_stepnorm=0.01),
                                        callback=callback,
                                        allow_f_increases = false)

callback(result_neuralode2.u, loss_neuralode(result_neuralode2.u)...; doplot=true)

> Should this example have worked but versions were pinned in the wrong way

The answer is likely yes - note that your environment shows that you’re not on current versions of packages; I get:

  [aae7a2af] DiffEqFlux v2.1.0
  [0c46a032] DifferentialEquations v7.8.0
  [b2108857] Lux v0.5.0
  [7f7a1694] Optimization v3.15.2
  [36348300] OptimizationOptimJL v0.1.9
  [91a5bcdd] Plots v1.38.17

but still see the same error. The docs are built automatically, so they should work as-is if you copy/paste; it looks like something went wrong here with Lux updating what it exports (in this case ComponentArray). Worth @ChrisRackauckas looking at this I guess, but as I’m typing this he’s giving a talk at JuliaCon in another tab of my browser, so he won’t get to it immediately :wink:

> are these typos in the docs

Based on what I said above: unlikely.

> is it common in Julia that code moves around between different libraries, or are these just re-exports and the real code lives somewhere else?

Not sure I understand the question, but it seems to me like what’s happening here is that there were changes in what packages re-export (e.g. Lux uses ComponentArrays internally and might have done `export ComponentArray` before, and stopped doing that at some point). The SciML universe is maybe a bit unusual in that it has loads of these glue packages like OptimizationOptimJL, which I don’t really understand.
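
To illustrate what I mean by a re-export (toy sketch; MyLux is made up, but this is the pattern I suspect Lux used to follow):

```julia
# A toy module that re-exports a name it pulled in from another package.
module MyLux
    using ComponentArrays   # brings ComponentArray into MyLux's namespace
    export ComponentArray   # re-export: `using MyLux` downstream now sees it too
end

using .MyLux
ComponentArray(a = 1.0, b = [2.0, 3.0])  # works without an explicit `using ComponentArrays`
```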

Also, Julia 1.9 brought a new way to specify optional dependencies as “extensions”, so especially in tightly coupled ecosystems like SciML you would have seen a lot of refactoring of dependencies into extensions, which might lead to more code churn than usual, with the possibility for errors like this.
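
For reference, extensions are declared in the package’s Project.toml roughly like below (all names and the UUID here are placeholders, not Lux’s actual ones):

```toml
# Hypothetical Project.toml fragment for a package MyPkg on Julia >= 1.9
[weakdeps]
ComponentArrays = "00000000-0000-0000-0000-000000000000"  # real UUID elided

[extensions]
# ext/MyPkgComponentArraysExt.jl only loads once a user has both
# MyPkg and ComponentArrays in their session
MyPkgComponentArraysExt = "ComponentArrays"

[compat]
ComponentArrays = "0.14"  # compat bounds apply to weak deps too
```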

Hi @nilshg, thanks for the detailed answer!

> The answer is likely yes - note that your environment shows that you’re not on current versions of packages; I get:

The docs do not seem to exist for the latest version (dropdown menu on the bottom left) - the latest I found was for 1.53.0, which is why I used that one. I had initially tried with the latest, but then figured that the transition to 2.1 might have introduced breaking changes, so I rolled back.

> Not sure I understand the question, but it seems to me like what’s happening here is that there were changes in what packages re-export (e.g. Lux uses ComponentArrays internally and might have done `export ComponentArray` before, and stopped doing that at some point). The SciML universe is maybe a bit unusual in that it has loads of these glue packages like OptimizationOptimJL, which I don’t really understand.

In that case I would have thought that DiffEqFlux 1.53.0 should have pinned earlier versions of Lux and OptimizationOptimJL (or wherever ADAM lived before) than the ones I got when installing DiffEqFlux.

Thanks for the heads-up on the Julia 1.9 “extensions”; I’ll read up on that, maybe that’s what’s at fault here.

Just to be clear, I meant versions of dependencies - you have DiffEqFlux@1.53.0 like me, but Lux@0.4.58, while I got 0.5 for that. In any case the problem persists with 0.5, so like I said, @ChrisRackauckas or @avikpal or someone else from the SciML team probably needs to have a look at what’s going on here.

To your point about “pinning” - the entire Julia ecosystem usually follows SemVer, so pinning isn’t generally done, as people rely on compat bounds instead. I think what happened here is that Lux moved ComponentArrays into an extension in this commit (Move CA stuff into an extension · LuxDL/Lux.jl@91619bc · GitHub) and bumped the version from 0.4.37 to 0.4.38, i.e. a patch release. With this, the docs would have to do `using ComponentArrays` explicitly - that’s the idea of an extension: you get to define behaviour in Lux that relies on ComponentArrays being loaded, without having to pull in ComponentArrays for those users who don’t need that bit of functionality, making the overall package more lightweight.
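
Concretely, the consequence looks like this (extension name assumed, I haven’t checked the commit for it):

```julia
using Lux

# After the move, Lux itself no longer carries the name - which is exactly
# the error you hit in the tutorial:
isdefined(Lux, :ComponentArray)   # false on Lux >= 0.4.38

using ComponentArrays             # loading the weak dep activates the extension
Base.get_extension(Lux, :LuxComponentArraysExt) !== nothing  # true (name assumed)

pinit = ComponentArray(p)         # the name comes from ComponentArrays itself
```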

What puzzles me is that the docs are building - normally this stuff should be caught when the docs get generated in CI, as it should error out with the same issue that you stumbled upon, but as I said, maybe Chris or Avik can clarify.

Aah, that makes sense, thank you so much! So apart from the question of how the docs are building, the ComponentArray issue seems to boil down to two things I’d be tempted to call sloppy, at least with my Python background:

  1. changing the Lux API (in the widest sense) in a patch release, and
  2. relying on using `ComponentArray` from Lux, where the code does not actually live.

I’ll look at the OptimizationOptimisers thing along the same lines :slight_smile:

The docs just haven’t built in a long time, and we’re working to fix them as part of the JuliaCon hackathon: Revive docs by ErikQQY · Pull Request #848 · SciML/DiffEqFlux.jl · GitHub