DomainError in TemporalGPs

I’m trying to fit a GP with TemporalGPs. Sometimes I get a DomainError when it attempts to take the sqrt of a negative number. When that happens I usually guess new initial parameters and re-run, but sometimes no choice of initial parameters gives me a fit even after several attempts. Is it possible to circumvent this problem? For example, rather than taking the sqrt of a negative number, could it return a bad likelihood so that Optim would avoid that region? I tried putting try/catch in nlml, but Zygote doesn’t support try/catch, so that didn’t work. The only other thing I can think of is a grid search over an initial parameter space, wrapping the call to optimize in try/catch; a rough sketch of that fallback is below, but maybe there is a better way.
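For concreteness, here’s roughly what I mean by that fallback (untested; it uses the nlml, flatten, and unpack pieces from the example below):

function fit_with_restarts(inits)
    for θ₀ in inits
        θ_flat, _ = flatten(θ₀)  # same parameter structure, so unpack still applies
        try
            return Optim.optimize(nlml, θ -> gradient(nlml, θ)[1], θ_flat,
                                  BFGS(); inplace=false)
        catch e
            e isa DomainError || rethrow()  # only swallow the sqrt failure
        end
    end
    error("no initial parameters produced a successful fit")
end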

What is a good solution to this issue? Here is a minimal example of what I’m running:

# Packages assumed below: GP and kernels from AbstractGPs/KernelFunctions,
# to_sde/SArrayStorage from TemporalGPs, positive/flatten/value from
# ParameterHandling, plus Optim and Zygote.
using AbstractGPs, KernelFunctions, TemporalGPs
using ParameterHandling, Optim, Zygote

function kern(params)
    # Scaled Matern-3/2 kernel plus a constant offset.
    matern = params.s_kernel^2 * (Matern32Kernel() ∘ ScaleTransform(params.λ))
    matern + ConstantKernel()
end

function build_model(params)
    gp = GP(kern(params))
    # Convert the GP to its state-space (SDE) form for linear-time inference.
    to_sde(gp, SArrayStorage(Float64))
end
nlml = θ_flat -> let
    θ = unpack(θ_flat)
    f = build_model(θ)
    -logpdf(f(df.t, θ.s_noise), df.y)
end

θ₀ = (s_kernel=positive(0.1), λ=positive(1/80), s_noise=positive(2.0))
θ_flat_init, unflatten = flatten(θ₀);
unpack(θ_flat::Vector{Float64}) = value(unflatten(θ_flat))

# check that it works
@show(nlml(θ_flat_init))
results = Optim.optimize(nlml, θ -> gradient(nlml, θ)[1], θ_flat_init, BFGS(),
                         Optim.Options(show_trace=false); inplace=false)
θ_bfgs = unpack(results.minimizer)

ERROR: LoadError: DomainError with -0.7496822688732334:
sqrt will only return a complex result if called with a complex argument. Try sqrt(Complex(x)).
Stacktrace:
  [1] throw_complex_domainerror(f::Symbol, x::Float64)
    @ Base.Math ./math.jl:33
  [2] sqrt
    @ ./math.jl:582 [inlined]
  [3] posterior_and_lml(x::TemporalGPs.Gaussian{StaticArrays.SVector{3, Float64}, StaticArrays.SMatrix{3, 3, Float64, 9}}, f::TemporalGPs.ScalarOutputLGC{LinearAlgebra.Adjoint{Float64, StaticArrays.SVector{3, Float64}}, Float64, Float64}, y::Float64)
    @ TemporalGPs ~/.julia/packages/TemporalGPs/NtnD2/src/models/linear_gaussian_conditionals.jl:292
  [4] step_logpdf
    @ ~/.julia/packages/TemporalGPs/NtnD2/src/models/lgssm.jl:171 [inlined]
  [5] step_logpdf
    @ ~/.julia/packages/TemporalGPs/NtnD2/src/models/lgssm.jl:167 [inlined]
  [6] scan_emit(f::typeof(TemporalGPs.step_logpdf), xs::Base.Iterators.Zip{Tuple{TemporalGPs.LGSSM{TemporalGPs.GaussMarkovModel{TemporalGPs.Forward, Vector{StaticArrays.SMatrix{3, 3, Float64, 9}}, FillArrays.Fill{FillArrays.Zeros{Float64, 1, Tuple{Base.OneTo{Int64}}}, 1, Tuple{Base.OneTo{Int64}}}, Vector{StaticArrays.SMatrix{3, 3, Float64, 9}}, TemporalGPs.Gaussian{StaticArrays.SVector{3, Float64}, StaticArrays.SMatrix{3, 3, Float64, 9}}}, StructArrays.StructVector{TemporalGPs.ScalarOutputLGC{LinearAlgebra.Adjoint{Float64, StaticArrays.SVector{3, Float64}}, Float64, Float64}, NamedTuple{(:A, :a, :Q), Tuple{FillArrays.Fill{LinearAlgebra.Adjoint{Float64, StaticArrays.SVector{3, Float64}}, 1, Tuple{Base.OneTo{Int64}}}, FillArrays.Fill{Float64, 1, Tuple{Base.OneTo{Int64}}}, Vector{Float64}}}, Int64}}, Vector{Float64}}}, state::TemporalGPs.Gaussian{StaticArrays.SVector{3, Float64}, StaticArrays.SMatrix{3, 3, Float64, 9}}, idx::UnitRange{Int64})
    @ TemporalGPs ~/.julia/packages/TemporalGPs/NtnD2/src/util/scan.jl:23
  [7] logpdf(model::TemporalGPs.LGSSM{TemporalGPs.GaussMarkovModel{TemporalGPs.Forward, Vector{StaticArrays.SMatrix{3, 3, Float64, 9}}, FillArrays.Fill{FillArrays.Zeros{Float64, 1, Tuple{Base.OneTo{Int64}}}, 1, Tuple{Base.OneTo{Int64}}}, Vector{StaticArrays.SMatrix{3, 3, Float64, 9}}, TemporalGPs.Gaussian{StaticArrays.SVector{3, Float64}, StaticArrays.SMatrix{3, 3, Float64, 9}}}, StructArrays.StructVector{TemporalGPs.ScalarOutputLGC{LinearAlgebra.Adjoint{Float64, StaticArrays.SVector{3, Float64}}, Float64, Float64}, NamedTuple{(:A, :a, :Q), Tuple{FillArrays.Fill{LinearAlgebra.Adjoint{Float64, StaticArrays.SVector{3, Float64}}, 1, Tuple{Base.OneTo{Int64}}}, FillArrays.Fill{Float64, 1, Tuple{Base.OneTo{Int64}}}, Vector{Float64}}}, Int64}}, y::Vector{Float64})
    @ TemporalGPs ~/.julia/packages/TemporalGPs/NtnD2/src/models/lgssm.jl:164
  [8] _logpdf
    @ ~/.julia/packages/TemporalGPs/NtnD2/src/gp/lti_sde.jl:69 [inlined]
  [9] logpdf(ft::AbstractGPs.FiniteGP{TemporalGPs.LTISDE{AbstractGPs.GP{AbstractGPs.ZeroMean{Float64}, KernelFunctions.KernelSum{Tuple{KernelFunctions.ScaledKernel{KernelFunctions.TransformedKernel{KernelFunctions.Matern32Kernel{Distances.Euclidean}, KernelFunctions.ScaleTransform{Float64}}, Float64}, KernelFunctions.ConstantKernel{Float64}}}}, TemporalGPs.SArrayStorage{Float64}}, Vector{Int64}, LinearAlgebra.Diagonal{Float64, FillArrays.Fill{Float64, 1, Tuple{Base.OneTo{Int64}}}}}, y::Vector{Float64})
    @ TemporalGPs ~/.julia/packages/TemporalGPs/NtnD2/src/gp/lti_sde.jl:62

cc @willtebbutt

My guess would be that what’s going on is that s_noise is getting really close to zero at some point during optimisation, and that this is what’s causing problems: the Kalman-filter recursion takes a sqrt of a variance-like quantity at each step, and with a tiny observation variance, accumulated rounding can presumably push that quantity slightly negative.

Could you try replacing

-logpdf(f(df.t, θ.s_noise), df.y)

with something like

-logpdf(f(df.t, θ.s_noise + 1e-6), df.y)

and see if that makes things more stable? Either that, or maybe λ is getting really small?

Could you print out the parameter values that you’re getting during optimisation, so that we can see what values the logpdf falls over at?

Here is the result:

nlml = θ_flat -> let
    θ = unpack(θ_flat)
    f = build_model(θ)
    nlp = -logpdf(f(df.t, θ.s_noise + 1e-6), df.y)
    @show(θ, nlp)
    nlp
end

θ = (s_kernel = 0.0029999999999999996, λ = 0.002500000000000001, s_noise = 2.0)
nlp = 585086.8932151705
nlml(θ_flat_init) = 585086.8932151705
θ = (s_kernel = 0.0029999999999999996, λ = 0.002500000000000001, s_noise = 2.0)
nlp = 585086.8932151705
θ = (s_kernel = 0.00017187491676687273, λ = 6.875354401209036e-6, s_noise = Inf)
nlp = Inf
θ = (s_kernel = 0.00017187491676687273, λ = 6.875354401209036e-6, s_noise = Inf)
nlp = Inf
θ = (s_kernel = 0.0022538631539050515, λ = 0.001386065098194097, s_noise = Inf)
nlp = Inf
θ = (s_kernel = 0.0022538631539050515, λ = 0.001386065098194097, s_noise = Inf)
nlp = Inf
θ = (s_kernel = 0.002915425008186334, λ = 0.002356808630330971, s_noise = Inf)
nlp = Inf
θ = (s_kernel = 0.002915425008186334, λ = 0.002356808630330971, s_noise = Inf)
nlp = Inf
θ = (s_kernel = 0.002991433250587895, λ = 0.002485297822842195, s_noise = 7.707469540306969e46)
nlp = 1.4857888346874423e7
θ = (s_kernel = 0.002991433250587895, λ = 0.002485297822842195, s_noise = 7.707469540306969e46)
nlp = 1.4857888346874423e7
θ = (s_kernel = 0.0029962089030018562, λ = 0.0024934882434486582, s_noise = 7.953159943308289e20)
nlp = 6.760963258465175e6
θ = (s_kernel = 0.0029962089030018562, λ = 0.0024934882434486582, s_noise = 7.953159943308289e20)
nlp = 6.760963258465175e6
θ = (s_kernel = 0.002998323048108231, λ = 0.0024971185138478458, s_noise = 2.569176499868862e9)
nlp = 3.180636332161012e6
θ = (s_kernel = 0.002998323048108231, λ = 0.0024971185138478458, s_noise = 2.569176499868862e9)
nlp = 3.180636332161012e6
θ = (s_kernel = 0.0029992583638492267, λ = 0.002498725444432232, s_noise = 21319.420863265455)
nlp = 1.5974987959695635e6
θ = (s_kernel = 0.0029992583638492267, λ = 0.002498725444432232, s_noise = 21319.420863265455)
nlp = 1.5974987959695635e6
θ = (s_kernel = 0.002999672006699741, λ = 0.0024994362784044283, s_noise = 120.83790183346422)
nlp = 901498.761666889
θ = (s_kernel = 0.002999672006699741, λ = 0.0024994362784044283, s_noise = 120.83790183346422)
nlp = 901498.761666889
θ = (s_kernel = 0.002999852521185475, λ = 0.0024997465202781675, s_noise = 12.644221682678946)
nlp = 630406.8970879754
θ = (s_kernel = 0.002999852521185475, λ = 0.0024997465202781675, s_noise = 12.644221682678946)
nlp = 630406.8970879754
θ = (s_kernel = 0.0029999225331769257, λ = 0.0024998668519794014, s_noise = 5.26862544353195)
nlp = 565660.6224649793
θ = (s_kernel = 0.0029999225331769257, λ = 0.0024998668519794014, s_noise = 5.26862544353195)
nlp = 565660.6224649793
θ = (s_kernel = 0.0009683002664412094, λ = 0.0002485744024900923, s_noise = 3.9885677632101473)
nlp = 557549.6270934672
θ = (s_kernel = 0.0009683002664412094, λ = 0.0002485744024900923, s_noise = 3.9885677632101473)
nlp = 557549.6270934672
θ = (s_kernel = 1.0524479898285667e-5, λ = 3.9194918478470524e-8, s_noise = 1.3100732423734256)
nlp = 655589.7167694747
θ = (s_kernel = 1.0524479898285667e-5, λ = 3.9194918478470524e-8, s_noise = 1.3100732423734256)
nlp = 655589.7167694747
θ = (s_kernel = 0.0007550313243477856, λ = 0.000149595679975218, s_noise = 3.7516500336104857)
nlp = 556944.7808542177
θ = (s_kernel = 0.0007550313243477856, λ = 0.000149595679975218, s_noise = 3.7516500336104857)
nlp = 556944.7808542177
θ = (s_kernel = 0.0006484035722547316, λ = 0.00010920185763878601, s_noise = 3.5520003996022926)
nlp = 556813.7591785361
θ = (s_kernel = 0.0006484035722547316, λ = 0.00010920185763878601, s_noise = 3.5520003996022926)
nlp = 556813.7591785361
θ = (s_kernel = 0.0006659049587393208, λ = 0.00011538300329035076, s_noise = 3.586143614574056)
nlp = 556807.8247414458
θ = (s_kernel = 0.0006659049587393208, λ = 0.00011538300329035076, s_noise = 3.586143614574056)
nlp = 556807.8247414458
θ = (s_kernel = 0.0006655123427762512, λ = 0.00011524017669529707, s_noise = 3.5853614467302406)
nlp = 556807.8216660907
θ = (s_kernel = 0.0006655123427762512, λ = 0.00011524017669529707, s_noise = 3.5853614467302406)
nlp = 556807.8216660907
θ = (s_kernel = 0.0006655211074440458, λ = 0.00011524336411159136, s_noise = 3.5853789108367193)
nlp = 556807.8216644857
θ = (s_kernel = 0.0006655211074440458, λ = 0.00011524336411159136, s_noise = 3.5853789108367193)
nlp = 556807.8216644857
θ = (s_kernel = 0.000665532781182557, λ = 0.00011524530507893809, s_noise = 3.5853789089746426)
nlp = 556807.8216644851
θ = (s_kernel = 0.000665532781182557, λ = 0.00011524530507893809, s_noise = 3.5853789089746426)
nlp = 556807.8216644851
θ = (s_kernel = 0.0006655794781843441, λ = 0.00011525306927527698, s_noise = 3.5853789015263375)
nlp = 556807.8216644828
θ = (s_kernel = 0.0006655794781843441, λ = 0.00011525306927527698, s_noise = 3.5853789015263375)
nlp = 556807.8216644828
θ = (s_kernel = 0.0006658130123462958, λ = 0.0001152918981049211, s_noise = 3.585378864284809)
nlp = 556807.821664471
θ = (s_kernel = 0.0006658130123462958, λ = 0.0001152918981049211, s_noise = 3.585378864284809)
nlp = 556807.821664471
θ = (s_kernel = 0.0006669819128727933, λ = 0.00011548623858855, s_noise = 3.5853786780771753)
nlp = 556807.8216644118
θ = (s_kernel = 0.0006669819128727933, λ = 0.00011548623858855, s_noise = 3.5853786780771753)
nlp = 556807.8216644118
θ = (s_kernel = 0.0006728572701518284, λ = 0.00011646286652016316, s_noise = 3.585377747039149)
nlp = 556807.8216641153
θ = (s_kernel = 0.0006728572701518284, λ = 0.00011646286652016316, s_noise = 3.585377747039149)
nlp = 556807.8216641153
θ = (s_kernel = 0.0007030195817824803, λ = 0.00012147131345715567, s_noise = 3.5853730918526443)
nlp = 556807.8216626093
θ = (s_kernel = 0.0007030195817824803, λ = 0.00012147131345715567, s_noise = 3.5853730918526443)
nlp = 556807.8216626093
θ = (s_kernel = 0.0008753667295555162, λ = 0.00014993614146153642, s_noise = 3.585349816010784)
nlp = 556807.821652769
θ = (s_kernel = 0.0008753667295555162, λ = 0.00014993614146153642, s_noise = 3.585349816010784)
nlp = 556807.821652769
θ = (s_kernel = 0.002620035926155134, λ = 0.00042962725088938763, s_noise = 3.585233439068028)
nlp = 556807.8206022428
θ = (s_kernel = 0.002620035926155134, λ = 0.00042962725088938763, s_noise = 3.585233439068028)
nlp = 556807.8206022428
ERROR: LoadError: DomainError with -420490.65136657475:
sqrt will only return a complex result if called with a complex argument. Try sqrt(Complex(x)).

Stacktrace:
  [1] throw_complex_domainerror(f::Symbol, x::Float64)
    @ Base.Math ./math.jl:33
  [2] sqrt
    @ ./math.jl:582 [inlined]
  [3] posterior_and_lml(x::TemporalGPs.Gaussian{StaticArrays.SVector{3, Float64}, StaticArrays.SMatrix{3, 3, Float64, 9}}, f::TemporalGPs.ScalarOutputLGC{LinearAlgebra.Adjoint{Float64, StaticArrays.SVector{3, Float64}}, Float64, Float64}, y::Float64)
    @ TemporalGPs ~/.julia/packages/TemporalGPs/NtnD2/src/models/linear_gaussian_conditionals.jl:292
  [4] step_logpdf
    @ ~/.julia/packages/TemporalGPs/NtnD2/src/models/lgssm.jl:171 [inlined]
  [5] step_logpdf
    @ ~/.julia/packages/TemporalGPs/NtnD2/src/models/lgssm.jl:167 [inlined]
  [6] _pullback(#unused#::Zygote.Context, #unused#::typeof(TemporalGPs.scan_emit), f::typeof(TemporalGPs.step_logpdf), xs::Base.Iterators.Zip{Tuple{TemporalGPs.LGSSM{TemporalGPs.GaussMarkovModel{TemporalGPs.Forward, Vector{StaticArrays.SMatrix{3, 3, Float64, 9}}, FillArrays.Fill{FillArrays.Zeros{Float64, 1, Tuple{Base.OneTo{Int64}}}, 1, Tuple{Base.OneTo{Int64}}}, Vector{StaticArrays.SMatrix{3, 3, Float64, 9}}, TemporalGPs.Gaussian{StaticArrays.SVector{3, Float64}, StaticArrays.SMatrix{3, 3, Float64, 9}}}, StructArrays.StructVector{TemporalGPs.ScalarOutputLGC{LinearAlgebra.Adjoint{Float64, StaticArrays.SVector{3, Float64}}, Float64, Float64}, NamedTuple{(:A, :a, :Q), Tuple{FillArrays.Fill{LinearAlgebra.Adjoint{Float64, StaticArrays.SVector{3, Float64}}, 1, Tuple{Base.OneTo{Int64}}}, FillArrays.Fill{Float64, 1, Tuple{Base.OneTo{Int64}}}, Vector{Float64}}}, Int64}}, Vector{Float64}}}, init_state::TemporalGPs.Gaussian{StaticArrays.SVector{3, Float64}, StaticArrays.SMatrix{3, 3, Float64, 9}}, idx::UnitRange{Int64})
    @ TemporalGPs ~/.julia/packages/TemporalGPs/NtnD2/src/util/scan.jl:43
  [7] _pullback
    @ ~/.julia/packages/TemporalGPs/NtnD2/src/models/lgssm.jl:164 [inlined]
  [8] _pullback(::Zygote.Context, ::typeof(Distributions.logpdf), ::TemporalGPs.LGSSM{TemporalGPs.GaussMarkovModel{TemporalGPs.Forward, Vector{StaticArrays.SMatrix{3, 3, Float64, 9}}, FillArrays.Fill{FillArrays.Zeros{Float64, 1, Tuple{Base.OneTo{Int64}}}, 1, Tuple{Base.OneTo{Int64}}}, Vector{StaticArrays.SMatrix{3, 3, Float64, 9}}, TemporalGPs.Gaussian{StaticArrays.SVector{3, Float64}, StaticArrays.SMatrix{3, 3, Float64, 9}}}, StructArrays.StructVector{TemporalGPs.ScalarOutputLGC{LinearAlgebra.Adjoint{Float64, StaticArrays.SVector{3, Float64}}, Float64, Float64}, NamedTuple{(:A, :a, :Q), Tuple{FillArrays.Fill{LinearAlgebra.Adjoint{Float64, StaticArrays.SVector{3, Float64}}, 1, Tuple{Base.OneTo{Int64}}}, FillArrays.Fill{Float64, 1, Tuple{Base.OneTo{Int64}}}, Vector{Float64}}}, Int64}}, ::Vector{Float64})
    @ Zygote ~/.julia/packages/Zygote/zowrf/src/compiler/interface2.jl:0
  [9] _pullback
    @ ~/.julia/packages/TemporalGPs/NtnD2/src/gp/lti_sde.jl:69 [inlined]
 [10] _pullback(::Zygote.Context, ::typeof(TemporalGPs._logpdf), ::AbstractGPs.FiniteGP{TemporalGPs.LTISDE{AbstractGPs.GP{AbstractGPs.ZeroMean{Float64}, KernelFunctions.KernelSum{Tuple{KernelFunctions.ScaledKernel{KernelFunctions.TransformedKernel{KernelFunctions.Matern32Kernel{Distances.Euclidean}, KernelFunctions.ScaleTransform{Float64}}, Float64}, KernelFunctions.ConstantKernel{Float64}}}}, TemporalGPs.SArrayStorage{Float64}}, Vector{Int64}, LinearAlgebra.Diagonal{Float64, FillArrays.Fill{Float64, 1, Tuple{Base.OneTo{Int64}}}}}, ::Vector{Float64})
    @ Zygote ~/.julia/packages/Zygote/zowrf/src/compiler/interface2.jl:0
 [11] _pullback
    @ ~/.julia/packages/TemporalGPs/NtnD2/src/gp/lti_sde.jl:62 [inlined]
 [12] _pullback(::Zygote.Context, ::typeof(Distributions.logpdf), ::AbstractGPs.FiniteGP{TemporalGPs.LTISDE{AbstractGPs.GP{AbstractGPs.ZeroMean{Float64}, KernelFunctions.KernelSum{Tuple{KernelFunctions.ScaledKernel{KernelFunctions.TransformedKernel{KernelFunctions.Matern32Kernel{Distances.Euclidean}, KernelFunctions.ScaleTransform{Float64}}, Float64}, KernelFunctions.ConstantKernel{Float64}}}}, TemporalGPs.SArrayStorage{Float64}}, Vector{Int64}, LinearAlgebra.Diagonal{Float64, FillArrays.Fill{Float64, 1, Tuple{Base.OneTo{Int64}}}}}, ::Vector{Float64})
    @ Zygote ~/.julia/packages/Zygote/zowrf/src/compiler/interface2.jl:0
 [13] _pullback
      ...
 [14] _pullback(ctx::Zygote.Context, f::Main.TimeGP.var"#35#37", args::Vector{Float64})
    @ Zygote ~/.julia/packages/Zygote/zowrf/src/compiler/interface2.jl:0
 [15] _pullback(f::Function, args::Vector{Float64})
    @ Zygote ~/.julia/packages/Zygote/zowrf/src/compiler/interface.jl:34
 [16] pullback(f::Function, args::Vector{Float64})
    @ Zygote ~/.julia/packages/Zygote/zowrf/src/compiler/interface.jl:40
 [17] gradient(f::Function, args::Vector{Float64})
    @ Zygote ~/.julia/packages/Zygote/zowrf/src/compiler/interface.jl:58

Ah, interesting. It looks like the kernel standard deviation is becoming really quite small, but the observation variance is fine. Also, the length scale is becoming really long (λ is tiny), suggesting that the model is basically just learning noise. I could imagine that the numerics might become a bit unstable in this case, but I’m not 100% sure why.
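One quick way to test that hypothesis would be to keep the optimiser out of the degenerate region altogether, e.g. by swapping positive for ParameterHandling’s bounded. An untested sketch, where the bounds are placeholders you’d want to choose for your data:

θ₀ = (
    s_kernel = bounded(0.1, 1e-3, 10.0),  # kernel std, kept away from 0
    λ = bounded(1/80, 1e-4, 1.0),         # inverse length scale
    s_noise = bounded(2.0, 1e-3, 100.0),  # observation noise
)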

Could you provide a plot of a subset of your data so that I can get a better sense of what it looks like? I would like to try and reproduce this locally, so it would be helpful to know what df.t looks like, and to get a vague sense of what df.y looks like.

I’m trying this on two different variables.

Below is a time-series scatterplot with a loess smoother (in red) and a histogram for each variable. These images capture a 10k-row sample of a 2M-row dataset. In the real dataset that I’m fitting, there are about 400-800 rows for each time t.
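For reference, a quick way to check the replicate counts (assuming df is a DataFrame):

using DataFrames
counts = combine(groupby(df, :t), nrow => :n)
extrema(counts.n)  # roughly (400, 800) in the full dataset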

Fairly Gaussian variable

function kern(params)
    matern = params.s_kernel^2 * (Matern32Kernel() ∘ ScaleTransform(params.λ))
    matern + ConstantKernel()
end

function build_model(params)
    gp = GP(kern(params))
    to_sde(gp, SArrayStorage(Float64))
end

nlml = θ_flat -> let
    θ = unpack(θ_flat)
    f = build_model(θ)
    nlp = -logpdf(f(df.t, θ.s_noise + 1e-6), df.xresid)
    @show(θ, nlp)
    nlp
end
θ₀ = (s_kernel=positive(0.05), λ=positive(1/40), s_noise=positive(2.0))
θ_flat_init, unflatten = flatten(θ₀);
unpack(θ_flat::Vector{Float64}) = value(unflatten(θ_flat))


@show(nlml(θ_flat_init))  # test the initial value
results = Optim.optimize(nlml, θ -> gradient(nlml, θ)[1], θ_flat_init, BFGS(),
                         Optim.Options(show_trace=false); inplace=false)
θ_bfgs = unpack(results.minimizer)

This fails on the initial value with DomainError with -159340.87622738612.

Quite non-Gaussian variable

This GP fits fine on small periods of time, e.g. 40 values of t, but not on longer periods like 300 or 3000 values of t.

As you’ll notice, the data don’t look Gaussian. In settings like linear regression, I don’t usually consider Gaussian errors to be very important for successful estimation, but maybe it matters more for GPs? I tried a transformation, xs -> demean(3.0 .^ (xs .+ 3)), to de-skew it, but it didn’t help:

θ₀ = (s_kernel=positive(0.5), λ=positive(1/1000), s_noise=positive(12.0))

gives LoadError: DomainError with -16.701296663676633.

Original:

Transformed:


Ahhhhhhhhhh okay, so you’ve got multiple observations at each time point? It makes more sense to me now that you’re suffering from numerical problems. To accommodate this properly, I really need to add an input type that explicitly allows for multiple observations at each point in time, and dispatches on that type to ensure that updates are performed in a manner that accounts for this structure.

I’ve opened an issue about this here, but I’m not sure when I’ll have time to fix it.
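In the meantime, one interim workaround you could try (an untested sketch, and nothing TemporalGPs-specific) is to collapse the replicates yourself: for n observations y_i ~ N(f(t), σ²) at a single t, the sample mean ȳ is sufficient for f(t), with ȳ ~ N(f(t), σ²/n). I believe the FiniteGP noise argument accepts a vector of per-observation variances, so something along these lines should work:

using Statistics

ts = unique(df.t)                          # assumes df.t is sorted
ys = [mean(df.y[df.t .== t]) for t in ts]  # pseudo-observations
ns = [count(==(t), df.t) for t in ts]      # replicates per time point
# Within-group residual sum of squares; needed below because σ² is learned:
rss = sum(sum(abs2, df.y[df.t .== t] .- ȳ) for (t, ȳ) in zip(ts, ys))

nlml_collapsed = θ_flat -> let
    θ = unpack(θ_flat)
    f = build_model(θ)
    σ² = θ.s_noise + 1e-6
    # Collapsed likelihood, plus the σ²-dependent terms that the collapse
    # drops (without these the s_noise estimate would be biased):
    -logpdf(f(ts, σ² ./ ns), ys) +
        0.5 * (length(df.y) - length(ts)) * log(2π * σ²) + rss / (2σ²)
end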


I’ll follow along there. Cheers.

For the shorter time periods where the model does fit, do I need to worry about whether it’s producing correct results?

Nah, should be fine as long as t_n <= t_{n+1}, i.e. as long as the time inputs are in non-decreasing order.
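So it’s just worth making sure the inputs are sorted by time before fitting, e.g. (assuming df is a DataFrame):

issorted(df.t) || sort!(df, :t)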
