DataDrivenProblem signature not found

@Julius_Martensen, @ChrisRackauckas

I am symbolically fitting a function using DataDrivenProblem, which is part of the DataDrivenDiffEq ecosystem. I am using the documentation for version 0.8.6, since the documentation for versions 1.0 and 1.0.1 is mostly missing (URL: Home · DataDrivenDiffEq.jl).

From the simplest example Getting Started, I come across the line

problem = DirectDataDrivenProblem(X, Y, name = :Test)

However, this function does not exist. Running methods on DataDrivenProblem, I find the following:

# 4 methods for type constructor:
[1] DataDrivenProblem(sol::T; use_interpolation, kwargs...) where T<:Union{DESolution, SciMLSolution} in DataDrivenDiffEq at /Users/erlebach/.julia/packages/DataDrivenDiffEq/Yfvcd/src/problem/type.jl:468
[2] DataDrivenProblem(X::AbstractMatrix; t, DX, Y, U, p, probtype, kwargs...) in DataDrivenDiffEq at /Users/erlebach/.julia/packages/DataDrivenDiffEq/Yfvcd/src/problem/type.jl:150
[3] DataDrivenProblem(probtype, X, t, DX, Y, U::F, p; kwargs...) where F<:Function in DataDrivenDiffEq at /Users/erlebach/.julia/packages/DataDrivenDiffEq/Yfvcd/src/problem/type.jl:140
[4] DataDrivenProblem(probType, X, t, DX, Y, U, p; name, kwargs...) in DataDrivenDiffEq at /Users/erlebach/.julia/packages/DataDrivenDiffEq/Yfvcd/src/problem/type.jl:115

I have only included two modules via

using DataDrivenDiffEq
using DataDrivenSparse

I am using Julia 1.8.2 and DataDrivenDiffEq v1.0.1.
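
As a possible workaround, method [2] above suggests the keyword-based constructor should still be callable; here is a minimal, untested sketch (the random X and Y are placeholders, not my actual data):

using DataDrivenDiffEq

# Placeholder data: 2 measured states and 1 target, 100 samples each
X = rand(2, 100)
Y = rand(1, 100)

# Method [2] from the output above: matrix constructor with keyword arguments
problem = DataDrivenProblem(X; Y = Y, name = :Test)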

Thanks for any help!

Gordon

That’s not the correct documentation. That’s JuliaHub and v0.8.6. These are the real docs:

https://docs.sciml.ai/DataDrivenDiffEq/stable/libs/datadrivendmd/examples/example_01/

Thanks. So now I am trying out some examples from DiffEqFlux.jl at DiffEqFlux.jl: High Level Scientific Machine Learning (SciML) Pre-Built Architectures · DiffEqFlux.jl.
I do not see the Project.toml and package versions under which the examples run listed anywhere; I assumed these would be given on all the library pages.
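
In the absence of a published Project.toml, I created a fresh environment myself; a sketch of what I did (the package list is inferred from the example's using line, not from an official manifest):

using Pkg
Pkg.activate("basic_UODE")            # new project directory
Pkg.add(["Lux", "DiffEqFlux", "DifferentialEquations", "Optimization",
         "OptimizationOptimJL", "Plots"])
Pkg.status()                          # record the resolved versions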

When running the NeuralODE example (see pasted code),

using Lux, DiffEqFlux, DifferentialEquations, Optimization, OptimizationOptimJL, Random, Plots

rng = Random.default_rng()
u0 = Float32[2.0; 0.0]
datasize = 30
tspan = (0.0f0, 1.5f0)
tsteps = range(tspan[1], tspan[2], length = datasize)

# True dynamics used to generate the training data
function trueODEfunc(du, u, p, t)
    true_A = [-0.1 2.0; -2.0 -0.1]
    du .= ((u.^3)'true_A)'
end

# Solve the true ODE and store the trajectory as training data
prob_trueode = ODEProblem(trueODEfunc, u0, tspan)
ode_data = Array(solve(prob_trueode, Tsit5(), saveat = tsteps))

# Neural network standing in for the unknown dynamics, wrapped in a NeuralODE
dudt2 = Lux.Chain(x -> x.^3,
                  Lux.Dense(2, 50, tanh),
                  Lux.Dense(50, 2))
p, st = Lux.setup(rng, dudt2)
prob_neuralode = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps)

function predict_neuralode(p)
  Array(prob_neuralode(u0, p, st)[1])
end

function loss_neuralode(p)
    pred = predict_neuralode(p)
    loss = sum(abs2, ode_data .- pred)
    return loss, pred
end

# Do not plot by default for the documentation.
# Users should set doplot = true to see the plots in the callback.
callback = function (p, l, pred; doplot = false)
  println(l)
  # plot current prediction against data
  if doplot
    plt = scatter(tsteps, ode_data[1,:], label = "data")
    scatter!(plt, tsteps, pred[1,:], label = "prediction")
    display(plot(plt))
  end
  return false
end

# Flatten the Lux parameter NamedTuple into a vector-like ComponentArray
pinit = Lux.ComponentArray(p)
callback(pinit, loss_neuralode(pinit)...; doplot=true)

# use Optimization.jl to solve the problem
adtype = Optimization.AutoZygote()

optf = Optimization.OptimizationFunction((x, p) -> loss_neuralode(x), adtype)
optprob = Optimization.OptimizationProblem(optf, pinit)

result_neuralode = Optimization.solve(optprob,
                                       ADAM(0.05),
                                       callback = callback,
                                       maxiters = 300)

optprob2 = remake(optprob,u0 = result_neuralode.u)

result_neuralode2 = Optimization.solve(optprob2,
                                        Optim.BFGS(initial_stepnorm=0.01),
                                        callback=callback,
                                        allow_f_increases = false)

callback(result_neuralode2.u, loss_neuralode(result_neuralode2.u)...; doplot=true)

I get errors in Visual Studio Code (VSC), but the code seems to run fine on the command line with the --check-bounds option set to yes. Inside Visual Studio Code, however, I get bumped out of the terminal with no stack trace. Very disconcerting. Here is the status in the package manager:

  [2445eb08] DataDrivenDiffEq v1.0.1
  [7fed8a53] DataDrivenSR v0.1.2
  [5b588203] DataDrivenSparse v0.1.1
  [aae7a2af] DiffEqFlux v1.52.0
  [41bf760c] DiffEqSensitivity v6.79.0
  [0c46a032] DifferentialEquations v7.6.0
  [587475ba] Flux v0.13.9
  [28b8d3ca] GR v0.71.1
  [b2108857] Lux v0.4.36
  [961ee093] ModelingToolkit v8.36.0
  [429524aa] Optim v1.7.4
  [3bd65402] Optimisers v0.2.13
  [7f7a1694] Optimization v3.10.0
  [36348300] OptimizationOptimJL v0.1.5
  [1dea7af3] OrdinaryDiffEq v6.35.1
  [91a5bcdd] Plots v1.37.2
  [c3572dad] Sundials v4.11.4
  [e88e6eb3] Zygote v0.6.51
  [37e2e46d] LinearAlgebra
  [9a3f8284] Random

Julia is really great, but the set of packages is getting out of control, and Julia is way too fragile in my opinion. The great acceleration compared to Python gets wiped out by the extensive reloading and rerunning of scripts and code due to various issues (at compile time or run time). These issues are not reproducible.

I created a new environment and ran the above example from VSC. I got kicked out with the following message:

The terminal process "/Applications/Julia-1.7.app/Contents/Resources/julia/bin/julia '-i', '--banner=no',
'--project=/Users/erlebach/src/2022/basic_UODE',
'/Users/erlebach/.vscode/extensions/julialang.language-julia-1.38.2/scripts/terminalserver/terminalserver.jl',
'/var/folders/hn/w6z4rd3n0xng_rc6fqmsttwh0000gn/T/vsc-jl-repl-d1b04747-4d2a-4792-94ed-e44dafe10b4f',
'/var/folders/hn/w6z4rd3n0xng_rc6fqmsttwh0000gn/T/vsc-jl-cr-ba0030fa-d2af-4a96-827b-1030009390b0',
'USE_REVISE=true', 'USE_PLOTPANE=true',
'USE_PROGRESS=true', 'ENABLE_SHELL_INTEGRATION=true',
'DEBUG_MODE=false'" terminated with exit code: 139.


I am at a loss. I am on a MacBook M1 with Ventura. Perhaps tomorrow I will try this on Linux (Ubuntu 22.04). These problems are very unfortunate.

We just haven’t had a release of DiffEqFlux in a bit.

Is it some weird M1 only thing? I don’t think I’ve ever seen that.

There doesn’t seem to be a reported issue on any of the released packages, and all of the issues you have reported over the last few days seemed to stem from finding old documentation that was still hosted. That is unfortunate, and I have contacted @pfitzseb to take those down. Was there anything other than that? And this weird piece that may be due to some potential M1 issue (M1 is only Tier 2 supported)? Do you run into any issues on any non-M1 computers?

I found that using Julia 1.8 solves a lot of issues on M1.


Oh I didn’t see the Julia v1.7 part. Yes indeed, M1 was only Tier 3 supported with Julia v1.7. It’s Tier 2 supported with v1.8. That means “most” things should be okay with M1 on v1.8, but that’s definitely not the case with v1.7. And that’s to be expected: that release came out at around the same time as the silicon first got into the hands of folks, and so there were issues with things like BLAS builds (which remember was an issue that all languages had for the first year of Apple silicon, because they were all using the same BLAS builds). This kind of segfault in particular looks/feels to me like one of those BLAS threading issues that occurred with early releases on M1.
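
If you want to rule that out, one quick experiment is to force BLAS down to a single thread before running the script. A minimal sketch (this only tests the threading hypothesis, it's not a fix):

using LinearAlgebra

@show BLAS.get_config()        # which BLAS backend is actually loaded
@show BLAS.get_num_threads()
BLAS.set_num_threads(1)        # single-threaded BLAS rules out threading races

If the segfault disappears with one BLAS thread, that points strongly at the early-M1 BLAS threading problem.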

I am running Julia 1.8. I will continue on Linux. It probably is a Mac M1 issue. Ideally, I would need an MWE, and that would take too much time. But I will post one if I do come up with something.

Your dedication, Chris, is awesome!

Gordon

This says it was v1.7

That is very interesting. Several times, I did a julia --version or equivalent and it was 1.8.2. Admittedly, I did not do this on every run. But that would explain the weird errors. Nice catch!

Gordon

@ChrisRackauckas

I figured out the problem. When running Julia in the REPL, I was running Julia 1.8.2, but the startup path set in the Visual Studio Code settings pointed to Julia 1.7.x, so a mixup occurred, with non-reproducible consequences. I uninstalled Julia 1.8.2, installed a fresh copy of Julia 1.8.3, and performed some experiments, which I am reporting here as they relate to precompilation and module loading.
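
To catch this kind of mismatch earlier, I now verify the running binary at the top of each VSC session. A simple check (all standard Base/InteractiveUtils calls):

using InteractiveUtils     # provides versioninfo(); loaded by default in the REPL

@show VERSION              # version of the Julia that is actually running
@show Sys.BINDIR           # directory of the running julia binary
versioninfo()              # OS, CPU (the M1 shows up here), thread counts, etc.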

Here is my Project.toml (via Pkg.status):

  [aae7a2af] DiffEqFlux v1.53.0
  [0c46a032] DifferentialEquations v7.6.0
  [b2108857] Lux v0.4.36
  [7f7a1694] Optimization v3.10.0
  [36348300] OptimizationOptimJL v0.1.5
  [91a5bcdd] Plots v1.37.2
  [9a3f8284] Random

The documentation states that precompilation has occurred. I then perform the following module loads:

using Lux
using DiffEqFlux
using DifferentialEquations
using Optimization
using OptimizationOptimJL
using Random
using Plots

For each one, I first record the timestamp and size of the compiled package files in ~/.julia/compiled/v1.8/. I find that for some of the modules the compiled file is not touched, while for others it is touched (the timestamp changes) but the size does not change. For example, for Plots.jl, the size of the compiled file did not change, but the timestamp did. For OptimizationOptimJL, the timestamp did not change at all. Why is that? It is hard to intuit. Thanks.
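
For concreteness, this is roughly how I recorded the cache information before and after each using statement (a sketch; Plots and OptimizationOptimJL are just the two examples mentioned above):

using Dates    # for unix2datetime

compiled = joinpath(homedir(), ".julia", "compiled",
                    "v$(VERSION.major).$(VERSION.minor)")

# Print the name, size, and modification time of each cache file for a package
function cache_info(pkg::AbstractString)
    dir = joinpath(compiled, pkg)
    isdir(dir) || return println("no cache directory for ", pkg)
    for f in readdir(dir; join = true)
        println(rpad(basename(f), 40), lpad(filesize(f), 12), " bytes  ",
                unix2datetime(mtime(f)))
    end
end

cache_info("Plots")
cache_info("OptimizationOptimJL")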

p.s.: I reran my problematic code from last night (using the Optimization.jl routine rather than the sciml_train routine), and the tutorial code ran with no issues on my Mac M1. Thanks for the help!
