Unitful.jl and DifferentialEquations.jl: compatibility?

I’ve been using Julia for a little while, long enough to have successfully solved ODEs before. I’m trying to use DifferentialEquations together with Unitful, which I thought I had read should work, but it doesn’t seem to be as simple as I thought. Here’s an example mostly like what I’m trying to do:

using Unitful
using DifferentialEquations
k1 = 0.1u"1/s"
k2 = 2.0u"K/s"
ode3a = @ode_def begin
    dTb = Tb*k1 + k2
end k1 k2
tspan = (0.0u"s", 6.0u"s")
params = [k1, k2]
prob = ODEProblem(ode3a, [300u"K"], tspan, params)

And the result I get is

InexactError: Int64(1731//50000)


I’ve used a units package in Python that didn’t play nicely with solvers, so inside the RHS function I had to add units at the start and strip units away before passing back to the solver; I can do that here, but hoped to avoid that. Is there an obvious answer I’m missing?
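For reference, here is a rough sketch of that strip-the-units workaround (the function name `rhs!` and the unit conventions are my own, not from the thread): attach units at the top of the RHS, do the dimensionally-checked arithmetic, then strip back to plain `Float64` before handing the result to the solver.

```julia
using Unitful
using DifferentialEquations

k1 = 0.1u"1/s"
k2 = 2.0u"K/s"

# RHS on plain numbers; units are re-attached inside and stripped on the way out.
function rhs!(du, u, p, t)
    k1, k2 = p
    T = u[1] * u"K"              # re-attach units (convention: u is in K)
    dT = T * k1 + k2             # dimension check happens here; result is K/s
    du[1] = ustrip(u"K/s", dT)   # strip back to Float64 for the solver
end

u0 = [ustrip(u"K", 300.0u"K")]   # plain 300.0
tspan = (0.0, 6.0)               # seconds, by convention
prob = ODEProblem(rhs!, u0, tspan, (k1, k2))
sol = solve(prob, Tsit5())
```

The dimension checking still happens on every RHS call, but the solver itself only ever sees `Float64`s.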


Can’t try it right now, but the error indicates that a non-integer gets put into a variable that is defined as an integer.

So, I would check whether changing your initial condition to 300.0u"K" helps.


This works:

julia> prob = ODEProblem(ode3a, [300.0u"K"], tspan, params)
ODEProblem with uType Vector{Quantity{Float64, 𝚯, Unitful.FreeUnits{(K,), 𝚯, nothing}}} and tType Quantity{Float64, 𝐓, Unitful.FreeUnits{(s,), 𝐓, nothing}}. In-place: true
timespan: (0.0 s, 6.0 s)
u0: 1-element Vector{Quantity{Float64, 𝚯, Unitful.FreeUnits{(K,), 𝚯, nothing}}}:
 300.0 K

julia> solve(prob, Tsit5())
retcode: Success
Interpolation: specialized 4th order "free" interpolation
t: 5-element Vector{Quantity{Float64, 𝐓, Unitful.FreeUnits{(s,), 𝐓, nothing}}}:
                0.0 s

Note the 300.0 and the Tsit5. The former is necessary to get the IC in the right format. The latter is using an explicit solver (specifying none seems to select Rodas5). This makes it much easier as there is no autodiff happening, which again adds constraints on the types.

It can still be done, but gets a bit trickier. Potentially easier, if you need an implicit solver, is to set autodiff=false, although this still throws an error:

julia> solve(prob, Rodas5(autodiff=false))
ERROR: DimensionError: K and false are not dimensionally compatible.
  [1] convert(#unused#::Type{Quantity{Float64, 𝚯, Unitful.FreeUnits{(K,), 𝚯, nothing}}}, x::Bool)

Not sure what the problem is there.


Unitful can’t do linear algebra well.


Thanks! I figured it was something like this. I think I assumed that Unitful automatically treated these numbers as floats. An explicit solver is just fine for what I need, at least for now, so this fixes everything I have.

Are there any better packages for doing unit-aware calculation? Probably when performance really matters it’s better to sanitize inputs and do without the units, I’m guessing, but I would love to be able to use something like Unitful more of the time.


Unitful.jl’s approach does not have a runtime cost. It’s the best we have right now.
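A quick way to see the zero-runtime-cost claim for yourself (a small check of my own, not from the thread): the unit lives entirely in the type parameters, so a unitful `Float64` has the same in-memory representation as a plain `Float64`.

```julia
using Unitful

x = 1.0u"m"
sizeof(x) == sizeof(1.0)   # true: still just 8 bytes
isbitstype(typeof(x))      # true: no boxing, no runtime unit tag
```

All the unit bookkeeping is resolved at compile time through dispatch on those type parameters.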

The function signature should be twoBodyNew(t, y, μ, dy) . It’s always ((du),u,p,t) . The docs on that linked page have been fixed.

Why does it need to? I was under the (probably too simplistic) impression that this kind of problem is solved/avoided by multiple dispatch and Julia's "unreasonable effectiveness" of composability: Unitful needs to know about units, and linear algebra packages about linear algebra.

Why does this fall down here? Are a number of operation implementations missing in Unitful, or some such?

Not for Unitful. For Unitful it would need to check units, reinterpret to unitless, apply the linear function, and then reinterpret back to the correct units. So all linear algebra would need an overload. Otherwise every linear algebra call falls back to not using BLAS (slow) and requires generic linear algebra to all be unit-compatible (which it is not).
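For a homogeneous matrix (all entries in the same unit), that strip/compute/re-attach pattern can be sketched by hand (an illustration of the idea, not actual Unitful internals):

```julia
using Unitful, LinearAlgebra

A = [1.0 2.0; 3.0 4.0] * u"m"
B = [5.0 6.0; 7.0 8.0] * u"m"

# Generic (non-BLAS) path: works, every element carries the unit in its type.
C1 = A * B   # entries in m^2

# Manual path: strip to Float64, let BLAS do the work, re-attach the combined unit.
C2 = (ustrip.(u"m", A) * ustrip.(u"m", B)) * u"m^2"

C1 ≈ C2      # true, up to floating-point rounding
```

The hard part is doing this generically: for non-homogeneous matrices each entry can have a different dimension, so the "combined unit" is not a single factor you can pull out.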

Huh, interesting. Thanks for clearing that up.
I wonder how Python's Pint units package achieves that; IIRC, most (all?) of NumPy's operations nowadays work with units transparently, maybe using ufuncs or another mechanism.

Pint is built in a way that’s really slow though. I’ll do a write-up on unit implementations soon to show what needs to be done. I’ve been meaning to get to it for like, 2 years.


The code shown in the comment assumes that a number and its inverse have the same type, an assumption that fails when the unit dimension is encoded in the type domain, which is what Unitful.jl does. Another approach would be to have a type which doesn't embed the unit and unit dimension as type parameters, which is what Chris would prefer.
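Concretely (a small illustration of that type-domain point, not the code from the linked comment):

```julia
using Unitful

# The dimension is a type parameter, so a quantity and its reciprocal
# have different types — breaking any code that assumes inv preserves type.
x = 2.0u"m"
typeof(x)                      # Quantity{Float64, 𝐋, ...}
typeof(inv(x))                 # Quantity{Float64, 𝐋^-1, ...}
typeof(x) == typeof(inv(x))    # false
```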


Pint has a 700-line module to manually wrap NumPy functions. Most of those functions work out-of-the-box for Unitful.jl, in particular the elementwise ones like sin, cos, etc., thanks to the fact that broadcasting in Julia is explicit and you don't need to write a different function to opt into "vectorization".

The only problematic functions for Unitful.jl are the linear algebra ones where the functions defined in the standard library LinearAlgebra make assumptions that don’t play nicely with unit dimensions in the type domain, as explained in the message above. Some simple functions on homogeneous arrays already work though:

julia> using Unitful, LinearAlgebra

julia> M = [1u"m" 2u"m"; 3u"m" 4u"m"]
2×2 Matrix{Quantity{Int64, 𝐋, Unitful.FreeUnits{(m,), 𝐋, nothing}}}:
 1 m  2 m
 3 m  4 m

julia> det(M)
-2.0 m²

julia> tr(M)
5 m

Interesting, thanks for the insight & links!


Trying to use DiffEqOperators and Unitful together throws an error. Do I understand correctly that this is just the current state of things?

Probably, yeah. Open an issue.