IDA is allocating a lot

What if you manually trigger the GC at every iteration, so that garbage does not pile up and each invocation becomes faster?
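Something like this is what I mean (just a sketch; `integrator`, `dt` and `n_steps` are placeholder names, not taken from your code):

```julia
# assumes the SciML integrator interface (step!) is in scope, e.g. via Sundials or OrdinaryDiffEq
for _ in 1:n_steps
    step!(integrator, dt, true)  # advance the simulation by one fixed time step
    GC.gc(false)                 # incremental collection; GC.gc() would force a full one
end
```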

I tried that; with Julia 1.10 and multi-threaded garbage collection it makes things worse…

I am trying to use the integrator interface with the DFBDF solver. Is that supposed to work?

I have the following code:
Source: KiteModels.jl/mwes/mwe_09.jl at main · ufechner7/KiteModels.jl · GitHub

which fails with the following error message:

ERROR: LoadError: MethodError: Cannot `convert` an object of type Float64 to an object of type Vector{Vector{Float64}}

Closest candidates are:
  convert(::Type{Array{T, N}}, ::StaticArraysCore.SizedArray{S, T, N, N, Array{T, N}}) where {S, T, N}
   @ StaticArrays ~/.julia/packages/StaticArrays/EHHaF/src/SizedArray.jl:88
  convert(::Type{Array{T, N}}, ::StaticArraysCore.SizedArray{S, T, N, M, TData} where {M, TData<:AbstractArray{T, M}}) where {T, S, N}
   @ StaticArrays ~/.julia/packages/StaticArrays/EHHaF/src/SizedArray.jl:82
  convert(::Type{T}, ::T) where T
   @ Base Base.jl:84
  ...

Stacktrace:
 [1] __init(prob::DAEProblem{…}, alg::DFBDF{…}, timeseries_init::Float64, ts_init::Tuple{}, ks_init::Tuple{}, recompile::Type{…}; saveat::Tuple{}, tstops::Tuple{}, d_discontinuities::Tuple{}, save_idxs::Nothing, save_everystep::Bool, save_on::Bool, save_start::Bool, save_end::Nothing, callback::Nothing, dense::Bool, calck::Bool, dt::Float64, dtmin::Float64, dtmax::Float64, force_dtmin::Bool, adaptive::Bool, gamma::Rational{…}, abstol::Nothing, reltol::Float64, qmin::Rational{…}, qmax::Int64, qsteady_min::Int64, qsteady_max::Int64, beta1::Nothing, beta2::Nothing, qoldinit::Rational{…}, controller::Nothing, fullnormalize::Bool, failfactor::Int64, maxiters::Int64, internalnorm::typeof(DiffEqBase.ODE_DEFAULT_NORM), internalopnorm::typeof(LinearAlgebra.opnorm), isoutofdomain::typeof(DiffEqBase.ODE_DEFAULT_ISOUTOFDOMAIN), unstable_check::typeof(DiffEqBase.ODE_DEFAULT_UNSTABLE_CHECK), verbose::Bool, timeseries_errors::Bool, dense_errors::Bool, advance_to_tstop::Bool, stop_at_next_tstop::Bool, initialize_save::Bool, progress::Bool, progress_steps::Int64, progress_name::String, progress_message::typeof(DiffEqBase.ODE_DEFAULT_PROG_MESSAGE), progress_id::Symbol, userdata::Nothing, allow_extrapolation::Bool, initialize_integrator::Bool, alias_u0::Bool, alias_du0::Bool, initializealg::OrdinaryDiffEq.DefaultInit, kwargs::@Kwargs{})
   @ OrdinaryDiffEq ~/.julia/packages/OrdinaryDiffEq/ZbQoo/src/solve.jl:268
 [2] init_call(::DAEProblem{…}, ::DFBDF{…}, ::Vararg{…}; merge_callbacks::Bool, kwargshandle::Nothing, kwargs::@Kwargs{…})
   @ DiffEqBase ~/.julia/packages/DiffEqBase/O8cUq/src/solve.jl:530
 [3] init_up(::DAEProblem{…}, ::Nothing, ::Vector{…}, ::Nothing, ::DFBDF{…}, ::Vararg{…}; kwargs::@Kwargs{…})
   @ DiffEqBase ~/.julia/packages/DiffEqBase/O8cUq/src/solve.jl:562
 [4] init(::DAEProblem{…}, ::DFBDF{…}, ::Vararg{…}; sensealg::Nothing, u0::Nothing, p::Nothing, kwargs::@Kwargs{…})
   @ DiffEqBase ~/.julia/packages/DiffEqBase/O8cUq/src/solve.jl:544
 [5] init()
   @ Main ~/repos/KiteModels.jl/mwes/mwe_09.jl:62
 [6] top-level scope
   @ ~/repos/KiteModels.jl/mwes/mwe_09.jl:76
 [7] include(fname::String)
   @ Base.MainInclude ./client.jl:489
 [8] top-level scope
   @ REPL[1]:1
in expression starting at /home/ufechner/repos/KiteModels.jl/mwes/mwe_09.jl:76
Some type information was truncated. Use `show(err)` to see complete types.

This line fails:

  integrator = OrdinaryDiffEq.init(prob, solver, abstol, reltol=0.001)

Any idea?

In general, for DAEs, IDA is the only solver that actually works (e.g. callback support).

Well, then we need to fix this. IDA cannot be the answer; we cannot make it allocation-free.

I fixed this by writing:

reltol = 0.001 * ones(length(y0))

Is this a bug? Shouldn't it be possible to pass a scalar instead of an array as reltol?
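For reference, the full call then looks roughly like this (a sketch; passing abstol as a keyword here is my assumption, only the vector-valued reltol is the actual change):

```julia
reltol = 0.001 * ones(length(y0))   # one tolerance entry per state variable
integrator = OrdinaryDiffEq.init(prob, solver; abstol=abstol, reltol=reltol)
```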

And I achieved a new minimum for the number of allocations:

Allocated 575 bytes per iteration!

Current status:

| Solver | Allocations [bytes] | Time [ms] |
|--------|--------------------:|----------:|
| IDA    | 938 | 0.423 |
| DFBDF  | 575 | 0.620 |
| DABDF2 | 598 | 0.652 |

Timing on a Linux laptop running on battery, using a Ryzen 7 7840U CPU.

I am using the integrator interface, therefore I do not need callback support…
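For context, the driving loop looks roughly like this (a sketch with placeholder names; `@allocated` is just one way to check the per-step allocations):

```julia
dt = 0.05                                # 50 ms time steps, as in the test case below
for _ in 1:round(Int, 460 / dt)          # 460 s of simulated flight
    bytes = @allocated step!(integrator, dt, true)  # advance exactly one time step
    # read integrator.u here, apply control inputs, accumulate `bytes`, ...
end
```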

Now using the DFBDF solver… It works so much better than IDA!

It uses about 2.4 times less memory than IDA and is much faster AND much more accurate!

Test case: run a flight simulation for 460 s using 50 ms time steps. The time in ms below is the time needed to find the solution for one time step. The relative tolerance was set to 0.0005 for both solvers. The state vector had 66 elements; a Ryzen 9 7950X CPU was used.

| Solver | Memory usage [GB] | Time [ms] (avg / max) |
|--------|------------------:|----------------------:|
| DFBDF  | 11.3 | 0.78 / 8.1 |
| IDA    | 27.0 | 3.55 / 13.8 |

In difficult situations the IDA solver was unstable, resulting in an additional error of about 1.8% in the tether force; the DFBDF solver is perfectly stable.

I added the parameter `save_everystep=false` to the `init()` function and now get even fewer allocations:

| Solver | Allocations | Time [ms] |
|----------------|----:|------:|
| IDA            | 525 | 0.165 |
| DImplicitEuler | 1   | 0.225 |
| DFBDF          | 87  | 0.261 |
| DABDF2         | 1   | 0.246 |

I also use `autodiff=false`; a sketch of the full setup is shown below.

Timing on a Linux desktop with a Ryzen 9 7950X CPU.
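Putting the pieces together, the solver setup now looks roughly like this (a sketch; apart from `autodiff=false`, `save_everystep=false` and the vector-valued reltol mentioned earlier, the details are assumptions, not the exact code from the MWE):

```julia
using OrdinaryDiffEq

solver = DFBDF(autodiff=false)           # finite differences instead of automatic differentiation
reltol = 0.001 * ones(length(y0))        # vector-valued relative tolerance (see above)
integrator = OrdinaryDiffEq.init(prob, solver;
                                 reltol=reltol,
                                 save_everystep=false)  # do not store the full solution history
```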