Error evaluating sparse, analytical Jacobian from ModelingToolkit in DAE system

I am having trouble taking a sparse, analytical Jacobian generated by ModelingToolkit and passing it to the Sundials IDA solver. Here is an MWE adapted from a previous question.

Running this segment of code with a pre-defined analytical Jacobian works (though, for some reason, it only succeeds about every other time I run it).

using Sundials
using ODEInterfaceDiffEq
using ODEInterface

p=[-2,1.2,-1.0]


function f!(res, du, u, p, t)
  res[1] = du[1] - p[1]*u[1] + p[2]*u[1]*u[2]
  res[2] = du[2] - p[3]*u[2] - u[1]*u[2]
  res[3] = u[2] + u[1] - u[3]
end

function g!(J,du,u,p,gamma,t)
  J[1,1] = gamma - p[1] + p[2] * u[2]
  J[1,2] = p[2]* u[1]
  J[1,3] = 0
  J[2,1] = - 1 * u[2]
  J[2,2] = gamma - p[3] - u[1]
  J[2,3] = 0
  J[3,1] = 1
  J[3,2] = 1
  J[3,3] = -1
  nothing
end

fg1 = DAEFunction(f!;jac=g!)

dIC = [0.8, 2.0, 0.0]  # du0 needs one entry per state; the third state is algebraic, so its entry is unused
IC =[1.0,1.0,2.0]
tspan=(0.0,10.0)
diff_vars = [true,true, false]

prob = DAEProblem(fg1,dIC,IC,tspan,p,differential_vars=diff_vars)
sol = solve(prob,IDA())

using Plots
plot(sol, layout=(3,1), label=["DAE X1" "DAE X2" "DAE X3"], lw=3)

However, when I try to generate the same Jacobian as a sparse matrix using ModelingToolkit, I get errors:

# generating the sparse Jacobian using ModelingToolkit
using ModelingToolkit, SparseArrays
@variables u[1:3] du[1:3] γ

res = similar(u)
t = 0.0

f!(res, du, u, p, t)

# need to combine d(res)/du + γ.*d(res)/d(du)
J_u  = ModelingToolkit.jacobian(res, u)
J_du = γ.*ModelingToolkit.jacobian(res, du)
J = similar(J_u)

for i in 1:length(J_u)
    J[i] = simplify(J_u[i] + J_du[i])
end

J_func! = eval(build_function(sparse(J), u, γ)[2])
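# (build_function returns a pair of (out-of-place, in-place) functions; [2] selects the in-place version)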

function gnew!(J,du,u,p,gamma,t)
    J_func!(J, u, gamma)
    nothing
end
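# (du, p, and t are unused here: the generated Jacobian depends only on u and γ)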

jac_prototype = sparse([1.0 1 0; 1 1 0; 1 1 1])
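# (the prototype above only marks which Jacobian entries can be structurally nonzero)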

# solving the DAE
fg2 = DAEFunction(f!;jac=gnew!)
probMTK = DAEProblem(fg2,dIC,IC,tspan,p,differential_vars=diff_vars,jac_prototype = jac_prototype)
solMTK = solve(probMTK,IDA())

with the error "type Array has no field nzval". If I use solMTK = solve(probMTK, IDA(linear_solver=:KLU)), I instead get "MethodError: no method matching nonzeros(::Nothing)". What am I doing wrong?

As a side note: I would strongly prefer using Sundials to solve my real system.

The DAEFunction was misspecified: jac_prototype belongs on the DAEFunction, not the DAEProblem, and IDA needs a sparse linear solver (KLU) to make use of it. Working code is:

# generating the sparse Jacobian using ModelingToolkit
using ModelingToolkit, SparseArrays, Sundials
@variables u[1:3] du[1:3] γ

dIC = [0.8, 2.0, 0.0]  # one du0 entry per state; the algebraic state's entry is unused
IC =[1.0,1.0,2.0]
tspan=(0.0,10.0)
diff_vars = [true,true, false]
p=[-2,1.2,-1.0]

function f!(res, du, u, p, t)
  res[1] = du[1] - p[1]*u[1] + p[2]*u[1]*u[2]
  res[2] = du[2] - p[3]*u[2] - u[1]*u[2]
  res[3] = u[2] + u[1] - u[3]
end

res = similar(u)
t = 0.0

f!(res, du, u, p, t)

# need to combine d(res)/du + γ.*d(res)/d(du)
J_u  = ModelingToolkit.jacobian(res, u)
J_du = γ.*ModelingToolkit.jacobian(res, du)
J = similar(J_u)

for i in 1:length(J_u)
    J[i] = simplify(J_u[i] + J_du[i])
end

J_func! = eval(build_function(sparse(J), u, γ)[2])

function gnew!(J,du,u,p,gamma,t)
    J_func!(J, u, gamma)
    nothing
end

jac_prototype = sparse([1.0 1 0; 1 1 0; 1 1 1])

# solving the DAE
fg2 = DAEFunction(f!;jac=gnew!,jac_prototype = jac_prototype)
probMTK = DAEProblem(fg2,dIC,IC,tspan,p,differential_vars=diff_vars)
solMTK = solve(probMTK,IDA(linear_solver=:KLU))

Thank you so much! This works perfectly.

One last thing: I've noticed that using the common interface makes the overall run time of my model a bit slower. I need to run this thing thousands of times in a row, so any speed increase is welcome. I am trying to use the direct Sundials interface, but I can't figure out how to set up the sparse Jacobian for KLU. Using the same functions defined above, this is the farthest I got (I could only figure out how to use a dense linear solver):

function ida_func(t, u, du, res)
    f!(res,du,u,p,t)
    
    return Sundials.IDA_SUCCESS
end
function ida_jac(_, γ, u, _, _, J, _)
    J_func!(J, u, γ)

    return Sundials.IDA_SUCCESS
end

u   = copy(IC)
du  = copy(dIC)
t = [tspan[1]]

mem = Sundials.IDACreate()
Sundials.@checkflag Sundials.IDAInit(mem, @cfunction(
                                            Sundials.idasolfun, Cint,
                                            (Sundials.realtype, Sundials.N_Vector, Sundials.N_Vector, Sundials.N_Vector, Ref{Function})),
                                            tspan[1], u, du)

Sundials.@checkflag Sundials.IDASetId(mem, diff_vars)
Sundials.@checkflag Sundials.IDASetUserData(mem, ida_func)
Sundials.@checkflag Sundials.IDASStolerances(mem, 1e-8, 1e-8)

A  = Sundials.SUNDenseMatrix(length(u),length(u))
LS = Sundials.SUNLinSol_Dense(u, A)

Sundials.@checkflag Sundials.IDADlsSetLinearSolver(mem, LS, A)
    
Sundials.IDACalcIC(mem, Sundials.IDA_YA_YDP_INIT, t[1]+10.0)
Sundials.IDAGetDky(mem, t[1], 0, u)

Sundials.@checkflag Sundials.IDASolve(mem, 1.0e10, t, u, du, Sundials.IDA_ONE_STEP)
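My best guess at the sparse/KLU version of the setup above is the sketch below, but I have not gotten it to work. The names mirror the SUNDIALS C API as exposed by the Sundials.jl wrappers (SUNSparseMatrix, SUNLinSol_KLU, IDASetLinearSolver, IDASetJacFn); I am assuming those wrappers take the same arguments as the C functions, and the Jacobian callback would still have to write into the CSC storage of the SUNMatrix, which is the part I cannot figure out.

# Untested sketch: replace the dense matrix/solver with a sparse matrix and KLU.
# 7 = number of structural nonzeros of the 3x3 Jacobian; 0 = CSC storage (CSC_MAT in the C headers).
A  = Sundials.SUNSparseMatrix(length(u), length(u), 7, 0)
LS = Sundials.SUNLinSol_KLU(u, A)
Sundials.@checkflag Sundials.IDASetLinearSolver(mem, LS, A)
# A sparse Jacobian callback would then be registered with Sundials.IDASetJacFn and
# would need to fill the matrix's CSC buffers (SUNSparseMatrix_IndexPointers /
# SUNSparseMatrix_IndexValues / SUNSparseMatrix_Data) -- this is where I am stuck.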

Can you open an issue with both the common-interface and direct-interface code so we can look at the timing difference? I think @jdlara-berkeley might be interested in helping track down what the difference is. The common interface wrapper is fairly basic, so it shouldn't be hard to figure out what's causing a difference in your case.
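For the issue, something like this would capture the common-interface side (a minimal sketch, assuming BenchmarkTools is loaded; the direct-interface sequence above would just be wrapped in a function, here hypothetically called solve_direct, and timed the same way):

using BenchmarkTools
@btime solve($probMTK, IDA(linear_solver = :KLU));  # common interface timing
# wrap the IDACreate/.../IDASolve calls in a function and time it the same way:
# @btime solve_direct();                             # direct interface timing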


Yeah, I would be interested in finding some performance there. I am working on updating the underlying binaries first.