Lux.jl demo with Lotka-Volterra UODE

Okay, the deployment just finished: https://docs.sciml.ai/Overview/dev/showcase/missing_physics/

It looks like the full example got good results in the end. And since it runs the tutorial and generates the plots at build time, it should be working now. Walk through it and see if you run into anything. I took the disclaimer off the page since it now seems to work.

I’ll be keeping it on dev until https://docs.sciml.ai/Overview/dev/getting_started/fit_simulation/ and https://docs.sciml.ai/Overview/dev/getting_started/integral_approx/ are complete, but those should be done over the weekend and the new SciML overview page should launch with that.


Hi @ChrisRackauckas ,

Congratulations, the code runs with no changes. I ran the code as a single program, outside of Jupyter. I like the StableRNG. Nice touch!
Recall that I am on an M1 Mac running macOS Ventura, just for reference.

For most of the run I got the same results, sometimes to 6-7 digits, but not for everything.

One discrepancy was the ideal_problem case. After running ideal_res.residuals, I got residuals of 1e-30, whereas your demo gets residuals of 6.1. The other two cases produced the same results to at least one significant digit. This difference is really surprising; it suggests that a slight shift of parameters can make the fit not work well.

When fitting experimental or numerical data, I doubt things will work out most of the time. Presumably there is lots of trial and error.

Thanks for all the help!

Gordon

That’s expected. Different SIMD vector sizes and BLAS implementations cause floating-point differences. The M1 uses a different BLAS for matmuls, IIRC.
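
For example (made-up numbers, not the tutorial's output), the right sanity check is to compare with a relative tolerance rather than expecting digit-for-digit agreement across machines:

    res_local = [1.23456701e-3, 7.89012309e-4]   # residuals from one machine
    res_docs  = [1.23456708e-3, 7.89012301e-4]   # residuals from another build

    isapprox(res_local, res_docs; rtol = 1e-6)   # true: agree to ~6-7 digits
    isapprox(res_local, res_docs; rtol = 1e-12)  # false: tighter than SIMD/BLAS noise allows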

Hi @ChrisRackauckas ,

I have a question related to the demo code. I am trying to modularize the code for more extensive experimentation, so I have created a module under the control of Revise.jl. All good.
In the code, one sees the three lines:

    optf    = Optimization.OptimizationFunction((x,p)->loss(x, dct), adjoint_type)
    optprob = Optimization.OptimizationProblem(optf, ComponentVector{Float64}(p))
    res1    = Optimization.solve(optprob, ADAM(), callback=callback, maxiters = 500)

together with the callback function:

    callback = function (p, l)  # GE added kwargs
        push!(losses, l)
        if length(losses)%50==0
            println("Current loss after $(length(losses)) iterations: $(losses[end])")
        end
        return false
    end

This function has two arguments. How would I have known this if I had written the code on my own? I added an additional argument and the code crashed. I searched Optimization.jl and did not find a reference to callback. I searched DiffEqCallbacks, which also did not help me. Any help is appreciated.

https://docs.sciml.ai/Optimization/stable/search/?q=callback

Callback Functions

The callback function callback is a function which is called after every optimizer step. Its signature is:

callback = (x,other_args) -> false

where other_args are the extra return arguments of the optimization function f. This allows for saving values from the optimization and using them for plotting and display without recalculating. The callback should return a Boolean value, and the default should be false; the optimization stops if it returns true.

I’m surprised the search didn’t catch it better, but it is there.
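
For example, here's a minimal sketch (not from the tutorial; the objective, the names, and the stopping threshold are all made up) of how the extra return values of the objective flow into the callback with the signature quoted above, assuming the same packages the demo already loads:

    using Optimization, OptimizationOptimisers, Zygote

    # Illustrative objective: returns the scalar loss first, plus the prediction
    # as an extra value that the callback receives without recomputing it.
    function loss_with_pred(x, p)
        pred = x .^ 2                    # stand-in for a model prediction
        return sum(abs2, pred .- p), pred
    end

    losses = Float64[]

    callback = function (x, l, pred)
        push!(losses, l)                 # save the loss for later plotting
        println("loss = $l, first prediction entry = $(pred[1])")
        return l < 1e-8                  # returning true stops the optimization
    end

    optf    = Optimization.OptimizationFunction(loss_with_pred, Optimization.AutoZygote())
    optprob = Optimization.OptimizationProblem(optf, rand(3), ones(3))
    res     = Optimization.solve(optprob, ADAM(), callback = callback, maxiters = 200)

Adding an extra argument to the callback crashes because the callback only receives the parameters, the loss, and whatever extra values the objective returns, so there is nothing to bind the extra argument to.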

The new docs have launched.

The example is here: Automatically Discover Missing Physics by Embedding Machine Learning into Differential Equations · Overview of Julia's SciML

Other ones will be added to the examples section of SciMLSensitivity.
