Explicit Jacobian on DAEs?

Hello all,

I am new to this forum.

I am somewhat confused about how to solve DAEs using the DifferentialEquations package. I am trying to solve a large system that I think requires an explicit Jacobian definition.

I defined a minimal problem to learn how to do this from the example in DifferentialEquations, but I seem to be doing something wrong. If I solve my small problem without dealing with Jacobians, I get quick results; when I try to incorporate the Jacobian into the calculation, IDA crashes. Any idea why, or how to fix it? I also read that it is possible to solve DAEs as ODEProblems; any advice on how to do that would be appreciated too.

Here is the code I was trying:


p = [2, 1.2, 3]

function f(res,du,u,p,t)
  res[1] = du[1] - p[1]*u[1] + p[2]*u[1]*u[2]   # differential equation for u[1]
  res[2] = du[2] - p[3]*u[2] - u[1]*u[2]        # differential equation for u[2]
  res[3] = u[2] + u[1] - u[3]                   # algebraic constraint: u[3] = u[1] + u[2]
end

function g(::Type{Val{:jac}},J,du,u,p,gamma,t)
  J[1,1] = gamma - p[1] + p[2]*u[2]
  J[1,2] = p[2]*u[1]
#   J[1,3] = 0
  J[2,1] = -u[2]
  J[2,2] = gamma - p[3] - u[1]
#   J[2,3] = 0
#   J[3,1] = 1
#   J[3,2] = 1
#   J[3,3] = -1
  nothing
end

I tested g three ways: as is, with the comments removed entirely, and with only the J[3,:] lines commented out.

Then I run the DAE:

fg = DAEFunction(f;jac=g)

dIC = [-0.8, -4, 0.0]  # du0 must match the length of u0; the third (algebraic) entry is just an initial guess
IC =[1.0,1.0,2.0]
tspan=(0.0,10.0)
diff_vars = [true,true, false]

# works, but does not take into account the Jacobian defined in g

testProb = DAEProblem(f,dIC,IC,tspan,p,differential_vars=diff_vars)
sol = solve(testProb,IDA(linear_solver=:GMRES))

# does not work
testProb2 = DAEProblem(fg,dIC,IC,tspan,p,differential_vars=diff_vars)
sol = solve(testProb2,IDA(linear_solver=:GMRES))

Here is the error from IDA:

[IDAS ERROR] IDACalcIC The residual routine or the linear setup or solve routine had a recoverable error, but IDACalcIC was unable to recover.


retcode: InitialFailure
Interpolation: 3rd order Hermite
t: 1-element Array{Float64,1}:
 0.0
u: 1-element Array{Array{Float64,1},1}:
 [1.0, 1.0, 2.0]

Any advice is appreciated

A

Hey,
So after getting rid of the dispatch and changing back to the default linear solver, it works:

using Sundials

p = [2, 1.2, 3]

function f(res,du,u,p,t)
  res[1] = du[1] - p[1]*u[1] + p[2]*u[1]*u[2]
  res[2] = du[2] - p[3]*u[2] - u[1]*u[2]
  res[3] = u[2] + u[1] - u[3]
end

# IDA's iteration matrix is J = d(res)/du + gamma * d(res)/d(du),
# which is why gamma shows up on the diagonal of the differential rows.
function g(J,du,u,p,gamma,t)
  J[1,1] = gamma - p[1] + p[2]*u[2]
  J[1,2] = p[2]*u[1]
  J[1,3] = 0
  J[2,1] = -u[2]
  J[2,2] = gamma - p[3] - u[1]
  J[2,3] = 0
  J[3,1] = 1
  J[3,2] = 1
  J[3,3] = -1
  nothing
end

fg = DAEFunction(f;jac=g)

dIC = [-0.8, -4, 0.0]  # du0 length matches u0
IC =[1.0,1.0,2.0]
tspan=(0.0,10.0)
diff_vars = [true,true, false]

testProb2 = DAEProblem(fg,dIC,IC,tspan,p,differential_vars=diff_vars)
sol = solve(testProb2,IDA())

So that’s using your Jacobian; you can check by putting a print statement in there. But now you want to scale to a large problem. Instead of going straight to GMRES, if you’re willing to define your Jacobian, then it’s usually more efficient to define a sparse Jacobian. You do that by setting jac_prototype, passing in a matrix with the right sparsity pattern.
Then you can write the Jacobian entries directly and call a sparse factorization method (:KLU). You can also loop directly through J’s stored nonzeros to speed up the Jacobian updates instead of using Cartesian indexing.
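For concreteness, here is a minimal sketch of that approach on the toy system above (an illustrative sketch, assuming Sundials.jl’s IDA accepts linear_solver = :KLU together with a sparse jac_prototype; the Jacobian function should write only entries stored in the pattern, so the structurally zero J[1,3] and J[2,3] are left out):

using SparseArrays

# Jacobian touching only the structurally nonzero entries. For really big
# systems, iterating via nonzeros(J)/rowvals(J) beats Cartesian indexing.
function g_sparse(J,du,u,p,gamma,t)
  J[1,1] = gamma - p[1] + p[2]*u[2]
  J[1,2] = p[2]*u[1]
  J[2,1] = -u[2]
  J[2,2] = gamma - p[3] - u[1]
  J[3,1] = 1
  J[3,2] = 1
  J[3,3] = -1
  nothing
end

jac_sparsity = sparse([1.0 1.0 0.0; 1.0 1.0 0.0; 1.0 1.0 1.0])  # sparsity pattern
fg_sparse = DAEFunction(f; jac = g_sparse, jac_prototype = jac_sparsity)
sparseProb = DAEProblem(fg_sparse, dIC, IC, tspan, p, differential_vars = diff_vars)
sol = solve(sparseProb, IDA(linear_solver = :KLU))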

If you are willing to do this work, though, sparse factorization with a 5th order Rosenbrock method, writing the DAE in mass matrix form, can be more efficient (it’ll get more miles per factorization!), but of course that can be a pretty substantial rewrite.

Hi Chris -

thanks for the help, it worked like a charm. After I wrote the question I had been playing with writing the system as a sparse matrix to use some of the ODE solvers, but I am still making some mistake. Looking into sparse Jacobians is a great idea, as most of my equations depend on at most 4 to 5 variables, and I don’t mind spending the time if I’ll get something better and learn something in the process!
Are you aware of any examples or tutorials I can use as a guide for sparse Jacobians, or for transforming the DAE into mass matrix form?

I appreciate your help and your time!

Thanks

No, but if you ping me a few times I’ll write one.


That would be great!
I’ve learned a lot from the docs in DifferentialEquations and your tutorials, so I’d be very excited to see your take on DAEs (or other complex problems) that require sparse and fast Jacobian calculations, and how to go about it.

Just out of curiosity, and as a first step towards rewriting my system:
if you were to solve the problem above using a mass matrix, is this what you’d do?


using ODEInterfaceDiffEq  # provides radau() through the common solve interface

p=[2,1.2,3]
IC =[1.0,1.0,2.0]
tspan=(0.0,10.0)


function f2(du,u,p,t)
  du[1] = p[1] * u[1] - p[2] * u[1]*u[2]
  du[2] = p[3] * u[2] -+ u[1]*u[2]
  du[3]=  u[2] + u[1] - u[3]
end

M = [1.0 0.0 0.0; 0.0 1.0 0.0; 0.0 0.0 0.0]

DtoO = ODEFunction(f2;mass_matrix=M)
DtoOprob = ODEProblem(DtoO,IC,tspan,p)
sol = solve(DtoOprob,radau())

Or would you still write the problem in mass matrix form, but using DAEFunction instead of ODEFunction?

The main reason I ask is that the results of this and of directly solving the DAE as above are not the same, but I’m not sure if that is because this is wrong or just a consequence of tolerances and other settings.

Once again thanks!

A

That should be the same as the DAE, at least at lower tolerance. If it doesn’t converge to the same thing, that’s an issue. What about with our Rosenbrock methods? Just making sure it’s not an interfacing issue with radau.

So here is what I did:
I ran your code, the mass matrix form using Rodas5, and the mass matrix form using radau.
I am printing the result at time 10 (the DAE solution has 322 time points, Rodas5 100, and radau 96).


using Sundials

p=[2,1.2,3.0]


function f1(res,du,u,p,t)
  res[1] = du[1] - p[1]*u[1] + p[2]*u[1]*u[2]
  res[2] = du[2] - p[3]*u[2] - u[1]*u[2]
  res[3] = u[2] + u[1] - u[3]
end

function g1(J,du,u,p,gamma,t)
  J[1,1] = gamma - p[1] + p[2]*u[2]
  J[1,2] = p[2]*u[1]
  J[1,3] = 0
  J[2,1] = -u[2]
  J[2,2] = gamma - p[3] - u[1]
  J[2,3] = 0
  J[3,1] = 1
  J[3,2] = 1
  J[3,3] = -1
  nothing
end

fg1 = DAEFunction(f1;jac=g1)

dIC = [-0.8, -4, 0.0]  # du0 length matches u0
IC =[1.0,1.0,2.0]
tspan=(0.0,10.0)
diff_vars = [true,true, false]

testProb2 = DAEProblem(fg1,dIC,IC,tspan,p,differential_vars=diff_vars)
sol = solve(testProb2,IDA())

# My code

function f4(du,u,p,t)
  du[1] = p[1] * u[1] - p[2] - u[1]*u[2]
  du[2] = p[3] * u[2] + u[1]*u[2]
  du[3] = u[3] - (u[2] + u[1])
end
M = [1.0 0.0 0.0; 0.0 1.0 0.0; 0.0 0.0 0.0]

DtoO = ODEFunction(f4;mass_matrix=M)
DtoOprob = ODEProblem(DtoO,IC,tspan,p)
sol1 = solve(DtoOprob,Rodas5())
sol2 = solve(DtoOprob,radau())


sol.t[322], sol[322], sol1.t[100], sol1[100], sol2.t[94], sol2[94]

Results:

(10.0, [-1.96787e-18, 1.78649e13, 1.78649e13], 10.0, [8.40881e-9, 1.48604e13, 1.48604e13], 10.0, [-2.31947e-8, 1.48609e13, 1.48609e13])

Rodas5 and radau are consistent within tolerance, I think, but as you can see the values for x2 and x3 are much larger for the DAE at time 10.

The model diverges, so this really isn’t a good test. You might want to choose parameters which don’t diverge.

Makes sense -
I’ll check with a new set

A

OK, so I changed the parameters to a set that converges, and the DAE and ODE methods still don’t converge to the same results. Here is the code:

using Sundials
using ODEInterfaceDiffEq
using ODEInterface

p=[-2,1.2,-1.0]


function f1(res,du,u,p,t)
  res[1] = du[1] - p[1]*u[1] + p[2]*u[1]*u[2]
  res[2] = du[2] - p[3]*u[2] - u[1]*u[2]
  res[3] = u[2] + u[1] - u[3]
end

function g1(J,du,u,p,gamma,t)
  J[1,1] = gamma - p[1] + p[2]*u[2]
  J[1,2] = p[2]*u[1]
  J[1,3] = 0
  J[2,1] = -u[2]
  J[2,2] = gamma - p[3] - u[1]
  J[2,3] = 0
  J[3,1] = 1
  J[3,2] = 1
  J[3,3] = -1
  nothing
end

fg1 = DAEFunction(f1;jac=g1)

dIC = [0.8, 2, 0.0]  # du0 length matches u0
IC =[1.0,1.0,2.0]
tspan=(0.0,10.0)
diff_vars = [true,true, false]

testProb2 = DAEProblem(fg1,dIC,IC,tspan,p,differential_vars=diff_vars)
sol = solve(testProb2,IDA())

# My code

function f4(du,u,p,t)
  du[1] = p[1] * u[1] - p[2] - u[1]*u[2]
  du[2] = p[3] * u[2] + u[1]*u[2]
  du[3] = u[3] - (u[2] + u[1])
end
M = [1.0 0.0 0.0; 0.0 1.0 0.0; 0.0 0.0 0.0]

DtoO = ODEFunction(f4;mass_matrix=M)
DtoOprob = ODEProblem(DtoO,IC,tspan,p)
sol1 = solve(DtoOprob,Rodas5())
sol2 = solve(DtoOprob,radau())


sol.t[end], sol[end], sol1.t[end], sol1[end], sol2.t[end], sol2[end]

Result at time 10:

(10.0, [-6.75543e-8, 6.33185e-5, 6.32509e-5], 10.0, [-0.6, 2.54563e-7, -0.599999], 10.0, [-0.6, 2.54452e-7, -0.599999])

Plotting:


using Plots
plot(sol, layout=(3,1), label=["DAE X1" "DAE X2" "DAE X3"], lw=3)
plot!(sol1, layout=(3,1), label=["Rodas5 X1" "Rodas5 X2" "Rodas5 X3"], lw=3)
plot!(sol2,linestyle = :dot, layout=(3,1), label=["radau X1" "radau X2" "radau X3"], lw=3)

Here is the plot:

[plot: DAE (IDA), Rodas5, and radau solutions for X1, X2, X3 in a 3x1 layout]

I have checked the Jacobian and the equivalence between the functions f1 and f4, and as far as I can tell everything matches.

Take care

Your equations weren’t the same. Notice yours had du[1] = p[1] * u[1] - p[2] - u[1]*u[2], so p[2] * turned into p[2] -. Also, 0 = u[2] + u[1] - u[3] is not the same equation as 0 = u[3] - (u[2] + u[1]). It should be

function f4(du,u,p,t)
  du[1] = p[1] * u[1] - p[2] * u[1]*u[2]
  du[2] = p[3] * u[2] + u[1]*u[2]
  du[3] = u[2] + u[1] - u[3]
end

With this change the solutions match.
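(A quick way to check agreement, as an illustrative one-liner using the solutions’ interpolants; sol here is the IDA solution and sol1 the Rodas5 one:)

maximum(abs.(sol1(10.0) .- sol(10.0)))  # should be small, on the order of the tolerances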

So sorry!
Can’t believe I did not notice!

It happens, take care.

Thanks!
Two more quick questions about DAEs, ODEs, and callbacks in general in Julia.
Do DAE/ODE functions allow intermediate variables?

For example, one of my changing variables is tumor volume, i.e.:

function PKPD2(out,du,u,p,t)
    # schematic: growth, kill, TV, dTV, dRTum, Diff_drug, D_drug, R_tum
    # stand in for entries of u, du, and p
    out[1] = (growth - kill)*TV - dTV
    out[2] = (3/(4pi))*TV^(1/3) - dRTum
    out[3] = Diff_drug - 6*D_drug/R_tum^2
    #etc
end

Diff_drug will be used in a transport equation from blood to tumor, and as you see R_tum and Diff_drug change as functions of TV… Is the above notation the right way to deal with it, or can I just define them directly, as in:

function PKPD2(out,du,u,p,t)
    R_tum = (3/(4pi))*TV^(1/3)
    Diff_drug = 6*D_drug/R_tum^2
    out[1] = (growth - kill)*TV - dTV
    #etc
end
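(For reference, a minimal sketch of the second style on the toy system from earlier in the thread, assuming the intermediate is an ordinary function of the current states: anything computable from u, p, and t can be a local variable, and only quantities that carry their own derivative need their own equation.)

function f_inline(res, du, u, p, t)
    s = u[1] + u[2]                     # intermediate, recomputed on every call
    res[1] = du[1] - p[1]*u[1] + p[2]*u[1]*u[2]
    res[2] = du[2] - p[3]*u[2] - u[1]*u[2]
    res[3] = s - u[3]                   # algebraic equation written via the intermediate
end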

And for callbacks:
if I have two conditions:

  1. I want some variables to be set to 0, while others continue changing, after an event, for example when TV reaches 0 (in my real case u[16])

  2. I want to change the value of a variable to a weighted average when new drug is added

would you do something like this?


PKPD2_Prob = DAEProblem(PKPD2,dIC,IC,tspan,p,differential_vars=diff_vars)

condition(u,t,integrator) = t in ustrip.([14u"d",28u"d",42u"d"])


function affect!(integrator)
    u = integrator.u   # the affect must go through the integrator's state
    u[1] += ustrip(Dose)
    u[11] = (4*ustrip(Dose) + u[11]*(u[1] + u[2] + (u[3] + u[4] + u[5])*u[16])) /
            (ustrip(Dose) + (u[1] + u[2] + (u[3] + u[4] + u[5])*u[16]))
end

condition1(u,t,integrator) = u[16]
#affect1!(integrator) = terminate!(integrator)

function affect2!(integrator)
    integrator.u[1] += (integrator.u[3] + integrator.u[4] + integrator.u[5])*integrator.u[16]
    integrator.u[3] = 0
    integrator.u[4] = 0
    integrator.u[5] = 0
end

cb = DiscreteCallback(condition,affect!)
cb2 = ContinuousCallback(condition1, affect2!, rootfind = true)
cbset = CallbackSet(cb,cb2)
sol = solve(PKPD2_Prob,IDA(),callback=cbset,tstops=ustrip.([14u"d",28u"d",42u"d"]))

Finally,
I almost forgot: is there a way to constrain the values of variables, for example to keep TV and R_tum >= 0?

I can share the whole code if that would make things clearer

I don’t get the question.

Sorry, I was editing.

Sorry to bother you one more time.
I finally defined my full DAE system (it’s not that large: 21 equations and some 30 parameters), and I calculated the Jacobian manually for the whole thing.
The surprising thing to me is that when I add the Jacobian to the DAEFunction, it has much more trouble converging than when I don’t. I am not using callbacks.
I have checked the Jacobian several times (more successfully than last time, and I don’t see obvious errors).
Is there a reason why manually adding the Jacobian would cause issues?

Thanks,


using Sundials

PKPD = DAEFunction(PKPD3;jac=Jacobi)

cb = DiscreteCallback(condition,affect!)
#cb1 = ContinuousCallback(condition1, affect1!, rootfind = true)
cb1 = DiscreteCallback(condition1, affect1!)
cbset = CallbackSet(cb1,cb)

# PKPD3: system without the hand-written Jacobian
PKPD3_Prob = DAEProblem(PKPD3,dIC,IC,tspan,p,differential_vars=diff_vars)
sol3 = solve(PKPD3_Prob,IDA())  #,callback=cb,tstops=ustrip.([14u"d",28u"d",42u"d"]),reltol=1e-12,abstol=1e-12

# PKPDJ: the same system with the hand-written Jacobian attached
PKPDJ = DAEFunction(PKPD3;jac=Jacobi)
PKPDJ_Prob = DAEProblem(PKPDJ,dIC,IC,tspan,p,differential_vars=diff_vars)
solJ = solve(PKPDJ_Prob,IDA())

The code above quickly solves for sol3, but solJ gives:

[IDAS ERROR] IDASolve At t = 0 and h = 2.42869e-014, the corrector convergence failed repeatedly or with |h| = hmin.

retcode: ConvergenceFailure
Interpolation: 3rd order Hermite
t: 1-element Array{Float64,1}:
 0.0
u: 1-element Array{Array{Float64,1},1}:
 [0.261438, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 … 0.0005, 0.0, 0.0, 0.0, 0.0005, 0.07794, 0.0, 4923.73, 0.544486, 6.18734]

One more thing: looking at the Jacobian, gamma is huge! >3e7

A

PS: I’ll be happy to share the full code with you in case it’s of interest to you. It is a nice PKPD model based on J Pharmacokinet Pharmacodyn 2012, 39:643-659, and I know that’s one of your themes of interest.

I just don’t want to overwhelm the thread, as right now I have about 500 lines.

Take care

Are you sure the Jacobian is correct? Did you check it against autodiff?
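(For reference, here is a minimal sketch of such a check with ForwardDiff, applied to the f1/g1 pair from earlier in the thread; the helper name and test point are illustrative, and it relies on IDA’s convention J = d(res)/du + gamma*d(res)/d(du):)

using ForwardDiff

# Compare a hand-written IDA Jacobian against autodiff at one test point.
function check_jac(f, jac, du, u, p, gamma, t)
    res = similar(u)
    Ju  = ForwardDiff.jacobian((r, v) -> f(r, du, v, p, t), res, u)   # d(res)/du
    Jdu = ForwardDiff.jacobian((r, v) -> f(r, v, u, p, t), res, du)   # d(res)/d(du)
    J_hand = zeros(length(u), length(u))
    jac(J_hand, du, u, p, gamma, t)
    maximum(abs.(Ju .+ gamma .* Jdu .- J_hand))                       # ~0 if jac is right
end

check_jac(f1, g1, [0.8, 2.0, 0.0], [1.0, 1.0, 2.0], [-2, 1.2, -1.0], 0.5, 0.0)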

I’ll do that. I did not know about that package, so I did it all manually, but I had a second person double-check. Do you have a link to the function so I can double-check it?

Thanks

A
