Need help interpreting an error message (DifferentialEquations.jl and IDA)

I’ve written a climate model in Julia, essentially a mix of simple first-order ODEs and algebraic equations, so I have a system of DAEs. I’ve successfully solved it with a straightforward manual finite-difference implementation, and now I want to see how DifferentialEquations.jl performs on the same model. But I’m stuck in the debugging phase: I consistently get this error message:

[IDAS ERROR]  IDACalcIC
  tout1 too close to t0 to attempt initial condition calculation.

The model is too large to post here (it’s spread over 11 files), so for now I’m only asking for help interpreting this error. I’ve checked my initial vectors (u0 and du0, in the notation of the DifferentialEquations.jl manual) and everything looks correct there. The error hinted at a timestepping issue to me, so I also tried running with various values of the dt=... option, but I always get the same error.
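
For reference, the call is shaped roughly like this; everything below is a placeholder sketch, not my actual model:

using DifferentialEquations, Sundials

# Placeholder residual with the same call style I use in the real model
function residual(t, u, du, out)
    out .= du .+ u          # stands in for my mix of ODEs and algebraic equations
end

u0  = ones(3)               # initial state (checked by hand in the real model)
du0 = -ones(3)              # initial derivatives, consistent with du = -u
tspan = (0.0, 100.0)
prob = DAEProblem(residual, u0, du0, tspan)

sol = solve(prob, IDA())              # this is the call that triggers the error for me
sol = solve(prob, IDA(), dt = 1e-6)   # same error when I pass dt explicitly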

Any ideas that can point me in the right direction for finding the problem?

Looks like it’s an error with how we were internally calling IDACalcIC. Can you Pkg.checkout("Sundials"), restart your Julia session, and then try it again? I just put in a fix.


That fixed it, thanks a lot! I never even considered the possibility that the problem wasn’t entirely on my end. I’m glad I decided to post before spending my entire Christmas vacation going through the model line by line… 🙂


Thanks for the report, and sorry for the bug! I’ll tag a release today. You should Pkg.free("Sundials") to get back to releases, and when this merges:

https://github.com/JuliaLang/METADATA.jl/pull/12618

then Pkg.update() will give the fixed version. Happy holidays!
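
In other words, roughly this sequence:

Pkg.checkout("Sundials")   # what you're on now: master, which has the IDACalcIC fix
# ...once the METADATA pull request above is merged...
Pkg.free("Sundials")       # go back to tracking tagged releases
Pkg.update()               # picks up the release containing the fix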

P.S. Make sure you try the different linear solvers. If the system is large, that can speed things up. Take care.
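
For example, something along these lines (the Krylov options for IDA are :GMRES, :BCG, and :TFQMR; check the Sundials.jl docs for the full list):

sol = solve(prob, IDA())                         # default dense linear solver
sol = solve(prob, IDA(linear_solver = :GMRES))   # Krylov solver, often faster for large systems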


Thanks for the tip. I tried BCG and TFQMR but got the following error:

ERROR: MethodError: no method matching IDASptfqmr(::Sundials.Handle{Sundials.IDAMem}, ::UInt32, ::Int64)

I used the syntax solve(problem, IDA(linear_solver=:BCG)). Do I need to set additional options so that the arguments match an existing method signature?

I see those aren’t actually covered by our tests for IDA, only for CVODE. Give me until tomorrow to build a proper test with inconsistent ICs that exercises all of the linear solvers, and I’ll make sure it’s working. My guess is that the events overhaul introduced these bugs, and they weren’t caught because we didn’t have enough IDA tests.

Pkg.update(). The linear solvers for IDA should be fixed (and covered by the tests now). Please feel free to report any further issues.


Sorry to keep nagging, but I guess that’s what you get when you’re way too helpful to your end users. 🙂

The IDA solver works fine for my full problem (which is heavily nonlinear), so now I want to get a feel for the uncertainty of the solution as I vary the tolerances, and also to make a fair comparison with my naive finite-difference implementation. So I tried the Monte Carlo methods described in the manual, but they didn’t work for me. I’m guessing they only work for ODEs and not for DAE problems; is that correct? If so, is there anything else I can do to quantify the uncertainty in my variables?
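
This is roughly what I ran, following the uncertainty quantification page of the manual (exact names from memory):

cb = AdaptiveProbIntsUncertainty(5)      # the ProbInts callback from the manual's example
monte_prob = MonteCarloProblem(prob)     # `prob` is my DAEProblem
sim = solve(monte_prob, IDA(), num_monte = 10, callback = cb)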

This is the error message I get on the solve statement of the Monte Carlo problem:

ERROR: type IDAIntegrator has no field EEst

No worries. This is how I know what questions need answering.

Well, this both works and doesn’t. It works in the sense that I can implement the same algorithm as follows:

using Sundials

# Test DAE: the Robertson problem in fully implicit residual form
function f(t,u,du,out)
  out[1] = - 0.04u[1]              + 1e4*u[2]*u[3] - du[1]
  out[2] = + 0.04u[1] - 3e7*u[2]^2 - 1e4*u[2]*u[3] - du[2]
  out[3] = u[1] + u[2] + u[3] - 1.0   # conservation constraint (algebraic)
end
u₀ = [1.0, 0, 0]
du₀ = [-0.04, 0.04, 0.0]
tspan = (0.0,100.0)
differential_vars = [true,true,false]   # the third equation is algebraic
prob = DAEProblem(f,u₀,du₀,tspan,differential_vars=differential_vars)
sol = solve(prob,IDA())

# Cache holding IDA's local error estimates and the last method order
immutable SundialsAdaptiveProbIntsCache{T}
  EEst_cache::T
  last_order::Vector{Cint}
end
# affect!: perturb the state by noise scaled by IDA's own local error estimate,
# the step size, and the last method order (the ProbInts idea)
function (p::SundialsAdaptiveProbIntsCache)(integrator)
  Sundials.IDAGetEstLocalErrors(integrator.mem,p.EEst_cache)
  Sundials.IDAGetLastOrder(integrator.mem,p.last_order)
  integrator.u .= integrator.u .+ norm(p.EEst_cache)*sqrt((integrator.t-integrator.tprev)^(2*first(p.last_order)))*randn(size(integrator.u))
end

# Build a DiscreteCallback that applies the perturbation after every accepted step
function SundialsProbIntsUncertainty(save=true)
  affect! = SundialsAdaptiveProbIntsCache(similar(prob.u0),Cint[0])
  condition = (t,u,integrator) -> true
  save_positions = (save,false)
  DiscreteCallback(condition,affect!,save_positions=save_positions)
end

sol = solve(prob,IDA(),callback=SundialsProbIntsUncertainty())

That will do the ProbInts uncertainty quantification algorithm (it needs Sundials v0.18.2, so Pkg.update() again; that release also makes IDA event handling robust enough to work with this). However… it’s not that easy. As a multistep method, IDA needs to use previous values. It’s an adaptive-order method, so it starts with implicit Euler and works its way up. Every time there is a discontinuity it drops back to implicit Euler, and this callback introduces a discontinuity at every step. In addition, it changes the state variables, which makes the previous state inconsistent, so an extra consistency verification is needed, and that can introduce floating point error. So what happens is that, with a discontinuity at every step, IDA very naturally becomes a very bad version of implicit Euler. You can run that code and see it, but it’s not very indicative of what the stepping behavior and errors are like when there are no discontinuities.

So the ProbInts idea for uncertainty quantification really just doesn’t work with multistep methods at all (CVODE_BDF included). While that code will calculate something, it’s not necessarily what you want, and it’s impossible for it to get an “unperturbed estimate”. That leaves an open question as to how you could get one; I simply don’t know whether that’s possible, and I don’t know of any literature that addresses it.

To deal with this, I made the uncertainty quantification documentation more specific about which packages it supports. Since it’s simply impossible to get the error estimates out of the Hairer radau methods, this will only ever be useful with the *DiffEq packages. So if you need uncertainty-quantified solutions to a DAE, I would recommend using one of the methods from OrdinaryDiffEq.jl. They require you to rewrite the problem in mass-matrix form, though.
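
For the Robertson example above, the mass-matrix form looks roughly like this (a sketch using the ODEFunction/mass_matrix interface, whose f(du,u,p,t) argument order differs from the Sundials-style code above; the solver choice is just one option):

using OrdinaryDiffEq

# Robertson problem rewritten as M*u' = f(u); the zero row of M marks the algebraic equation
function rober(du, u, p, t)
  du[1] = -0.04u[1] + 1e4*u[2]*u[3]
  du[2] =  0.04u[1] - 3e7*u[2]^2 - 1e4*u[2]*u[3]
  du[3] =  u[1] + u[2] + u[3] - 1.0     # algebraic constraint
end

M = [1.0 0.0 0.0
     0.0 1.0 0.0
     0.0 0.0 0.0]

prob_mm = ODEProblem(ODEFunction(rober, mass_matrix = M), [1.0, 0.0, 0.0], (0.0, 100.0))
sol = solve(prob_mm, Rodas5())   # Rosenbrock integrators expose EEst, so the documented
                                 # ProbInts callbacks can be attached to this solve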

Sorry if that’s not the answer you were looking for, but I actually didn’t realize this would be the case until I got it working (now I’m curious about making an algorithm that could work here…). Let me know if you need anything else.

Thank you very much for the long explanation - I learned a lot just parsing this slowly. I didn’t realize my question would lead you onto the minefields of the research front. The uncertainty estimate would have been a nice bonus, but I can live without it.

One last question. It was a bit painful to collect all the variables and derivatives from the various files of my model into one large u and one large du vector, and then unpack them again to read the results. I remember reading somewhere (I can’t find it now) that you were considering expanding the @ode_def macro into a full DSL similar to JuMP. Is this idea still on the table?

Yes it is. I hope we can get a Google Summer of Code student to help kick off this project. It’s nice and entry-level, could make good use of the Julia type system, and doesn’t require graduate-level math, so I think it makes an awesome student project and I don’t want to “take it over” too soon! For that reason (and because most of my work/research is on the solvers themselves), it’s somewhat sidelined for now, but it’s definitely on the list of things I want.

That’s where I like to be.
