MATLAB is faster for my large system of ODEs?

Excuse the deliberately provocative title :slight_smile:
I have a very large system of ODEs of the form:

\begin{align} \dot{x} = Ax + Bu \end{align}

A is a sparse 5505×5505 matrix, and B*u is a 5505-element vector. Using ode45 in MATLAB, it solves in about 59 minutes. In Julia I have the problem set up like so:

using DifferentialEquations, Sundials, BenchmarkTools

function sys!(dx, x, p, t)
    dx .= A*x + Bu
    nothing
end

x0 = zeros(5505)
prob = ODEProblem(sys!, x0, (0.0, 5.0))
@btime sol = solve(prob, CVODE_BDF(), save_everystep = false)

I’ve tried CVODE_BDF and QNDF, but I cut them both off after neither solved within an hour. I don’t have a lot of experience with solving large systems, so any tips for speeding up the solve would be much appreciated.

Thanks in advance!

5 Likes

The rough equivalent of ode45 would be Tsit5. You are trying to use implicit solvers, which are going to be a lot slower if your problem isn’t stiff. Also, you probably want Bu to be part of p rather than a global variable, i.e.

function sys!(dx, x, p, t)
    Bu = p[1]
    dx .= A*x + Bu
    nothing
end

x0 = zeros(5505)
prob = ODEProblem(sys!, x0, (0.0,5.0), (Bu,))
@btime sol = solve(prob, Tsit5(), save_everystep = false)
6 Likes

I’ll give that a shot, thanks! Generally speaking, does Tsit5 tend to be any faster than ode45? Are there any other solvers you’d recommend for this case?

Also, the above isn’t really in-place: it allocates two temporary arrays and then copies the result into dx.

I am not that familiar with sparse linear algebra, but look at the mul! function for in-place, non-allocating operations.

4 Likes

Tsit5 is very similar to ode45 (also known as DP5), but is generally ~20% faster due to using more optimized coefficients. There are tons of methods that are potentially better, but knowing which to look into requires knowing more about the properties of A.

3 Likes

Shouldn’t A also be passed within the parameters, rather than being global?

BTW, rather than wait an hour to test a simulation from t = 0 to 5.0, a shorter time span is probably sufficient to get an idea of relative speed.
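
For example, something like this (remake comes with DifferentialEquations.jl and returns a copy of the problem with only the listed fields replaced; the 0.1 span is just an arbitrary short choice):

prob_short = remake(prob, tspan = (0.0, 0.1))   # same problem, shorter time span
@btime solve(prob_short, Tsit5(), save_everystep = false)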

2 Likes
using LinearAlgebra: mul!

function sys!(dx, x, p, t)
    mul!(dx, A, x)        # dx = A*x, computed in place
    mul!(dx, B, u, 1, 1)  # five-argument mul!: dx = B*u*1 + dx*1, i.e. dx += B*u
    nothing
end

This is the non-allocating version (depending a bit on where u is coming from).

If u is constant or piecewise constant, you can solve this differential equation by computing a single matrix exponential, exp([A B; 0 0]*Ts), and iterating the corresponding discrete-time system

\begin{align} x(k+1) = A_d x(k) + B_d u(k) \end{align}
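
For concreteness, here is a minimal sketch of that idea with a small made-up system (nx, nu, Ts and nsteps are illustrative values, not the original problem’s; for the full 5505-state sparse A, the augmented block would likely need to be converted to a dense Matrix before calling exp):

using LinearAlgebra

# Made-up small system, just to illustrate the zero-order-hold idea above.
nx, nu = 4, 1
A  = randn(nx, nx)
B  = randn(nx, nu)
u  = ones(nu)              # constant input
Ts = 0.01                  # sample time
nsteps = 500               # covers t = 0 .. 5.0

# exp([A B; 0 0]*Ts) = [Ad Bd; 0 I], so one matrix exponential yields both
# discrete-time matrices at once.
M  = exp([A B; zeros(nu, nx + nu)] * Ts)
Ad = M[1:nx, 1:nx]
Bd = M[1:nx, nx+1:end]

# Iterate x(k+1) = Ad*x(k) + Bd*u; exact at the sample instants for constant u.
function iterate_lti(Ad, Bd, u, x0, nsteps)
    x = copy(x0)
    for _ in 1:nsteps
        x = Ad*x + Bd*u
    end
    return x
end

xT = iterate_lti(Ad, Bd, u, zeros(nx), nsteps)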
11 Likes

@martru, for convenience of other readers of this forum, you may want to consider reflecting the essence of your problem better in the title. Perhaps changing it to something like “Matlab faster in solving a huge linear ODE?”

2 Likes

Best way to get help :sweat_smile: but I edited the title to make it a bit clearer to future searchers what it’s about

5 Likes

Oh, didn’t see this suggestion before I acted :wink:. I just got this power to edit titles and I might be drunk with it

5 Likes

u is indeed constant. I’ll try implementing this!

It’s implemented here

You could either use this functionality directly, or just find the definition of Ad, Bd there
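
If the functionality being referred to is ControlSystems.jl’s c2d (an assumption on my part), using it directly might look roughly like this, assuming A, B and a sample time Ts are already defined and B is an nx × nu matrix:

using ControlSystems, LinearAlgebra

nx, nu = size(B)                 # state and input dimensions
C = Matrix{Float64}(I, nx, nx)   # full-state "output", only needed to build the model
D = zeros(nx, nu)

# c2d performs the zero-order-hold discretization; a sparse A may need Matrix(A)
sysd = c2d(ss(Matrix(A), Matrix(B), C, D), Ts)
Ad, Bd = sysd.A, sysd.B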

2 Likes

I was just wondering: where does this exp! function come from, and what does it do?

1 Like

This might clear it up.

1 Like

Clear up what?

I believe OP was answering your question about exp! with a reference to using matrix exponentials to solve state-space systems. Perhaps your question was more about the ! than the exp. If so, exp! is the in-place matrix exponential, specific to LinearAlgebra.

EDIT: In the example above, exp! is used in a way I never dared: a matrix is composed inside the in-place function call and the result is then assigned, e.g. M = exp!([a b; c d]). My mindset was always that you have to allocate an array before calling an in-place function, otherwise how do you know what the result is? But by assigning the return value to the LHS you know the result. I know this is trivial, but this pattern wasn’t obvious to me, and it’s great!

M = exp!([A*Ts B*Ts; zeros(nu, nx + nu)])
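
For what it’s worth, a small example of calling it (exp! is unexported, which is presumably why ? doesn’t find it, so it is qualified with the module name here; exp is the documented equivalent that works on a copy):

using LinearAlgebra

A = [0.0 1.0; -1.0 0.0]
M = LinearAlgebra.exp!(copy(A))   # unexported, so qualified; overwrites its argument
M ≈ exp(A)                        # true: exp is the documented, copying version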

3 Likes

Indeed, the ! symbol in exp! was what made me wonder. I am aware of the convention of naming functions that modify their arguments this way; it is just that I was unable to find any description of the function exp!, neither in the online documentation nor through the ? help in the REPL.

1 Like

This essentially sped things up by a factor of 170! Thanks!

6 Likes

That’s very cool :slight_smile:

If you find yourself in need of speeding it up even further, you may be able to perform all matrix operations on the GPU. Fig. 7 in the ControlSystems.jl paper indicates this is worthwhile for state dimensions above about 100. You may also use the trick employed in ControlSystems.lsim: B*u does not need to be recomputed inside the iteration and can instead be computed in a single operation and added to the resulting state by broadcasting.
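
One possible reading of that trick, as a minimal sketch with made-up dimensions (Ad and Bd as above, U holding the input at every step):

using LinearAlgebra

nx, nu, N = 4, 1, 500
Ad = 0.99*Matrix(I, nx, nx) + 0.01*randn(nx, nx)  # placeholder discrete-time matrices
Bd = randn(nx, nu)
U  = ones(nu, N)                                  # input at every step (constant here)

Bdu = Bd*U                       # a single nx × N product instead of N small ones
X   = zeros(nx, N + 1)
for k in 1:N
    X[:, k+1] .= Ad*X[:, k] .+ Bdu[:, k]          # add the precomputed column
end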

5 Likes