How to analyze sensitivity w.r.t. the initial values?

I want to run sensitivity analysis w.r.t. the initial conditions:

using DifferentialEquations,
    Optimization, OptimizationPolyalgorithms, SciMLSensitivity,
    Zygote

function harm_osc_matrix(t::Real, p::Vector)
    ω, ε = p # ε is not used but anyway
    return [0.0 1.0;
            -ω 0.0]
end

function harm_osc!(du, u, p, t)
    du .= harm_osc_matrix(t, p) * u
end

tspan = (0.0, pi)
tsteps = tspan[begin]:pi/5:tspan[end]
u0 = rand(2)
p = [1.0, 1.0]
h = pi / 5
prob_harm_osc = ODEProblem(harm_osc!, u0, tspan)
target = solve(prob_harm_osc, Euler(), p=p, dt=h).u[end]

function get_iv(target, tspan)
    global p, h
    function loss(x)
        _prob = remake(prob_harm_osc, u0=x[1:2], tspan=tspan, p=p)
        sol = solve(_prob, Euler(), dt=h, p=p)
        loss = sum(abs2, sol.u[end] - target)
        return loss, sol
    end

    adtype = Optimization.AutoZygote()
    optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
    optprob = Optimization.OptimizationProblem(optf, p)
    result_ode = Optimization.solve(optprob, PolyOpt(), maxiters=100)
    return result_ode.u
end
get_iv(target, tspan)

This code raises the error:

ERROR: MethodError: objects of type Float64 are not callable
Maybe you forgot to use an operator such as *, ^, %, / etc. ?
Stacktrace:
 [1] (::var"#10#12")(x::Vector{Float64}, p::SciMLBase.NullParameters)
   @ Main ~/Code/test/HarmonicOscillatorSensitivity.jl:33
 [2] OptimizationFunction
   @ ~/.julia/packages/SciMLBase/8XHkk/src/scimlfunctions.jl:3933 [inlined]
 [3] #2
   @ ~/.julia/packages/OptimizationPolyalgorithms/MOJAF/src/OptimizationPolyalgorithms.jl:14 [inlined]
 [4] __solve(::OptimizationProblem{true, OptimizationFunction{true, AutoZygote, var"#10#12", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Vector{Float64}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, ::PolyOpt; maxiters::Int64, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
   @ OptimizationPolyalgorithms ~/.julia/packages/OptimizationPolyalgorithms/MOJAF/src/OptimizationPolyalgorithms.jl:15
 [5] __solve
   @ ~/.julia/packages/OptimizationPolyalgorithms/MOJAF/src/OptimizationPolyalgorithms.jl:9 [inlined]
 [6] #solve#614
   @ ~/.julia/packages/SciMLBase/8XHkk/src/solve.jl:97 [inlined]
 [7] get_iv(target::Vector{Float64}, tspan::Tuple{Float64, Irrational{:π}})
   @ Main ~/Code/test/HarmonicOscillatorSensitivity.jl:35
 [8] top-level scope
   @ ~/Code/test/HarmonicOscillatorSensitivity.jl:38

However, if I “unwrap” the get_iv function, putting its body into global scope, it works fine.

  1. How can I make get_iv work?
  2. Is there a shorter way to analyze the system sensitivity w.r.t. the IVs?

The problem seems to be that the Float64 variable loss defined inside the function loss is overwriting the function. So at line 34, when the optimization problem is being prepared, loss refers to the value of the loss, not to the function that computes it. Placing the definition of the function loss at the top level instead of inside the function get_iv, marking that value as an explicit local

     local loss
     loss = sum(abs2, sol.u[end] - target)
     return loss, sol

or renaming the Float64 value to something else (for example lss = sum(abs2, sol.u[end] - target)) all restore the behavior you hoped for.
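
For completeness, a minimal sketch of the renaming fix, reusing the definitions from the question (only the inner variable changes):

function loss(x)
    _prob = remake(prob_harm_osc, u0=x[1:2], tspan=tspan, p=p)
    sol = solve(_prob, Euler(), dt=h, p=p)
    l = sum(abs2, sol.u[end] - target)  # `l`, not `loss`: the function binding survives
    return l, sol
end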


The problem seems to be that the Float64 variable loss defined inside the function loss is overwriting the function.

Thanks a lot!

But why is there no such problem for a function defined in the global scope?

function test() 
       test = 3
       return test
end
test() # returns 3

This is a difference between local scope and global scope: Scope of Variables · The Julia Language
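
For contrast, here is a minimal example of my own that reproduces the failure in a local scope, where the assignment reuses the enclosing binding instead of creating a new local:

function outer()
    function inner()
        inner = 1      # assigns to the enclosing local `inner`, overwriting the function
        return inner
    end
    inner()            # first call works and returns 1 ...
    return inner()     # ... but `inner` is now 1, so this throws
end
outer()  # ERROR: MethodError: objects of type Int64 are not callable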


I also wonder whether SciMLSensitivity provides something that would allow for making the code shorter. E.g., can ODEForwardSensitivityProblem be used for optimization?

I’m not sure what your ultimate goal is here, but if what you want is the jacobian of the final integrator state with respect to the initial integrator state, something like this should work.

function get_iv_jacobian(u0)
    tspan = (0.0, pi)
    tsteps = tspan[begin]:(pi / 5):tspan[end]
    p = [1.0, 1.0]
    h = pi / 5
    # Jacobian of the final state with respect to the initial state
    Zygote.jacobian(u0) do u0
        prob_harm_osc = ODEProblem(harm_osc!, u0, tspan)
        solve(prob_harm_osc, Euler(), p = p, dt = h).u[end]
    end
end

julia> u0 = rand(2)
2-element Vector{Float64}:
 0.8109605817521025
 0.11386129909980691

julia> get_iv_jacobian(u0)
([-2.168569032163724 0.7590168182970979; -2.927585850460822 -1.4095522138666259],)

If you have more detail about your actual problem, perhaps I can provide better advice.

If you have more detail about your actual problem, perhaps I can provide better advice.

In general, given the solution of a system (which may be non-linear), I would like to find the corresponding IVs. In this case, of course, I can simply invert the fundamental matrix, but I would like to have a more or less general way to “invert” systems from x back to the corresponding x0.
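
For the linear problem in this thread, that inversion is short; a minimal sketch, assuming harm_osc_matrix, p, h, tspan, and target from above (one Euler step is u ↦ (I + hA)u, so the whole map is a matrix power):

using LinearAlgebra

A = harm_osc_matrix(0.0, p)                      # constant system matrix
n = round(Int, (tspan[end] - tspan[begin]) / h)  # number of Euler steps (5 here)
Φ = I + h * A                                    # one Euler step
u0_recovered = Φ^n \ target                      # “invert” the system: x back to x0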

I think ODEForwardSensitivityProblem only calculates derivatives with respect to the parameter vector, not the initial conditions. I’m pretty sure your initial example is the right approach.
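
For reference, a minimal sketch of what ODEForwardSensitivityProblem does give you, namely sensitivities w.r.t. p, extracted with extract_local_sensitivities (reusing harm_osc!, u0, tspan, p, and h from above; the solver choice just mirrors the thread):

sprob = ODEForwardSensitivityProblem(harm_osc!, u0, tspan, p)
ssol = solve(sprob, Euler(), dt=h)
x, dp = extract_local_sensitivities(ssol)  # dp[i] holds ∂u/∂p[i] along the solution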


I want to try the same approach with a problem of a different dimension, e.g.:

using DifferentialEquations,
    Optimization, OptimizationPolyalgorithms, SciMLSensitivity,
    Zygote, LinearAlgebra

const dim = 4

function matrix_prob(t::Real, p::Vector)
    global dim
    return float(Matrix(I(dim)))
end

function f!(du, u, p, t)
    du .= matrix_prob(t, p) * u
end

tspan = (0.0, pi + 0.0)
h = pi / 5
tsteps = tspan[begin]:h:tspan[end]
u0 = rand(dim)
prob = ODEProblem(f!, u0, tspan)
target = solve(prob, Euler(), p=p, dt=h).u[end]

function get_iv(target, t)
    global p, h, dim
    function loss(u0)
        @show u0
        _prob = ODEProblem(f!, u0, (0.0, t))
        sol = solve(_prob, Euler(), dt=h, p=p)
        l = sum(abs2, sol.u[end] - target)
        return l, sol
    end

    adtype = Optimization.AutoZygote()
    optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
    optprob = Optimization.OptimizationProblem(optf, p)
    result_ode = Optimization.solve(optprob, PolyOpt(), maxiters=100)
    return result_ode.u
end
get_iv(target, tspan[end])

But the optimizer seems to pass a 2-element vector to loss(x) instead of a 4-element one:

u0 = [1.0, 1.0]
ERROR: DimensionMismatch: second dimension of A, 4, does not match length of x, 2

How can I make sure that the optimizer accepts a vector of the correct dimension?

You are passing p instead of u0 as the initial condition.
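
A minimal sketch of the fix (the starting guess rand(dim) is my choice; any dim-length vector works):

optprob = Optimization.OptimizationProblem(optf, rand(dim))  # initial guess for u0, not p
result_ode = Optimization.solve(optprob, PolyOpt(), maxiters=100)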
