Physical units in Julia

A big reason I came to Julia was to have physical units for scientific computations:
g = 4 * nS,
E = 0 * mV,
input_currents = g * (E .- V_mems).
All of the current Python solutions are too slow to use in simulations; they are really only suited to toy calculations. You can strip off the units before passing your data to, say, Numba, and re-add them afterwards, but that's just not worth the effort and code bloat. So: the promised land of Julia!

The de facto standard library in Julia is currently Unitful.jl: GitHub - PainterQubits/Unitful.jl: Physical quantities with arbitrary units. I love how it handles units in the type system, yielding zero runtime overhead (that is, if your unitful arrays are of homogeneous dimension – which they often are). A perfect match with Julia.
Unitful seems to have learned from previous attempts at units in Julia. Such evolution in the ecosystem is lovely to see.

However, the interactive UX is pretty bad (and interactive use is a big reason for wanting physical units): Unitful quantities have massive type signatures, which make the display of compound types unreadable.
(As an aside, compound types in Julia are often quite unreadable already… does anyone know of a package that can display nested types a bit more humanely?)

Even a simple example shows the problem:

julia> [(0:8)]
1-element Vector{UnitRange{Int64}}:
 0:8

julia> using Unitful; using Unitful: mV

julia> [(0:8)*mV]
1-element Vector{StepRange{Quantity{Int64, 𝐋 ^2 𝐌  𝐈 ^-1 𝐓 ^-3, Unitful.FreeUnits{(mV,), 𝐋 ^2 𝐌  𝐈 ^-1 𝐓 ^-3, nothing}}, Quantity{Int64, 𝐋 ^2 𝐌  𝐈 ^-1 𝐓 ^-3, Unitful.FreeUnits{(mV,), 𝐋 ^2 𝐌  𝐈 ^-1 𝐓 ^-3, nothing}}}}:
 (0:8) mV

– or, if you're not convinced, try combining them with ComponentArrays and DifferentialEquations.

For my own work I gave up on Unitful and resorted to defining milli = 1e-3; volt = 1; mV = milli * volt, as I had also done in Python.
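In full, that plain-number approach looks something like this (a minimal sketch using my own naming conventions, not any package's API):

```julia
# Units as plain floats: zero overhead, but no dimension checking.
const milli = 1e-3
const nano  = 1e-9

const volt    = 1.0
const siemens = 1.0

const mV = milli * volt
const nS = nano * siemens

g = 4nS                    # conductance
E = 0mV                    # reversal potential
V_mems = [-65, -70] .* mV  # membrane voltages

input_currents = g * (E .- V_mems)
```

The downside, of course, is that nothing stops you from adding a voltage to a conductance.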

I think the verbosity problem in Unitful cannot be solved without a couple of quite breaking changes. Alternatively, we could have show(::Type{}) print something prettier, but then it would be lying about the actual type of the object, which isn't desirable.
Hence I'm thinking of prototyping yet another units library, with leaner type parameters.

I wanted to get some others’ thoughts on this.

  • @ChrisRackauckas, I remember you having ideas of how a new unit library should work? – Especially with regards to Arrays of units.
    (Unitful already handles this pretty nicely IMO: an Array{Unitful.Quantity{Float64, …}} has the exact same bits representation as the corresponding Array{Float64}.)

  • Is there appetite for a units interface package?
    This would allow different ideas and tradeoffs in unit library design to coexist, while still allowing interop with downstream packages that process Numbers (think e.g. Distributions.Normal(μ = 8 mV, σ = 2 mV): ok. But how do you specify the units of the variate for a LogNormal(μ, σ), whose parameters are unitless? You need a third argument – or maybe an extra type parameter – of type AbstractUnits).
    Note that Base already has oneunit(x). But you generally need more to support units: abstract types for Units and unitful Numbers; functions like units(x) and stripunits(x); and objects like dimensionless.
    If such an interface package is deemed useful, it might live under a new GitHub organization, say JuliaUnits.
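To make that concrete, such an interface package might contain little more than this (purely hypothetical names, sketching the functions mentioned above):

```julia
# Hypothetical units interface (names are illustrative, not an existing API).
abstract type AbstractUnits end
abstract type AbstractQuantity{T} <: Number end

# A sentinel for dimensionless quantities.
struct Dimensionless <: AbstractUnits end
const dimensionless = Dimensionless()

# The core interface every units library would implement:
units(x)              = dimensionless  # fallback: plain Numbers are unit-free
stripunits(x::Number) = x              # fallback: nothing to strip

# Implementations would then add methods like:
#   units(x::Quantity)      -> the quantity's units object
#   stripunits(x::Quantity) -> the underlying plain number
```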

(Tagging some contributors who might have thoughts: @ajkeller34 @sostock @giordano @tim.holy ).


The problem is that putting units in the type domain is basically antithetical to having different units in an array. Unless you have at most 4 distinct units, you'd be beyond union splitting, so you'd have dynamic dispatch and your performance would be bad; you would need a separate way of handling that case. So there are essentially three ways of doing this:

  1. Use the units in the modeling and then drop them at solve time. This can be done with modeling in a place like ModelingToolkit.jl, whose documentation shows how to do this. I would almost always recommend this approach, since you then have a much simpler unitless solve at full performance. That said, you lose some of the niceties like units in the plots.

  2. Use dynamic units. You can make your units be isbits structs with a Symbol and a value so that everything is still static dispatch, but unit checks and conversions are done at runtime. This might inhibit SIMD, but would allow for arbitrary units in arrays. It would no longer be doing conversions at compile time, so it would not be a free abstraction.

  3. Use larger array constructs, like PartitionedArrays, to have different partitions for the units in the array. In theory this would allow Unitful (though maybe some things which rely on a “scalar type” need to be expanded), but would greatly increase the compile times. I generally don’t think this is the right direction.
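Option 2 could look roughly like the following (a minimal sketch of the struct idea; real unit conversion logic is only hinted at in comments):

```julia
# A dynamic-units quantity: the unit lives in a field, not in the type.
# The struct is immutable with concrete field types, so arrays of it are
# concretely typed and dispatch stays static — but unit checks happen at runtime.
struct DynQuantity
    unit::Symbol
    value::Float64
end

function Base.:+(a::DynQuantity, b::DynQuantity)
    # A real library would convert compatible units here instead of erroring.
    a.unit === b.unit || error("unit mismatch: $(a.unit) vs $(b.unit)")
    DynQuantity(a.unit, a.value + b.value)
end

# Arbitrary units can now coexist in one concretely-typed array:
v = [DynQuantity(:GHz, 8.0), DynQuantity(:mm, 8.0), DynQuantity(:kg, 8.0)]
```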


It seems like the problem is that the types are being displayed verbosely. I think the solution would be to change the display of the types, not the semantics of the types.

We can show an abbreviated type with an indication that the full type isn’t being shown.

For example, using /\ to abbreviate types, I can replace

julia> [(0:8)]
1-element Vector{UnitRange{Int64}}:
 0:8

julia> Base.show(io::IO, ::Type{Vector{UnitRange{T}}}) where T = print(io, "/$(first(string(T)))\\")

julia> [(0:8)]
1-element /I\:
 0:8

Though it's probably better to target Base.display(d::REPL.REPLDisplay, ...)
instead of something as low-level as show(io::IO, ...). If you do that, it wouldn't need to be a breaking change to Unitful; it could be a third-party library that customizes the Unitful display for your REPL.



I think one design choice in Unitful that I'd do differently, and that alone would already halve the output, is not to add the dimensions to the type of a quantity: they're already implied by the units.
The ability to dispatch on dimensions sounds nice, but is not used that often, I think.
Edit: You could still dispatch on dimensions. You'd just have to go through the Units, which contain the Dimensions.


This covers 2 of the 3 main use cases of units for me: semantically annotating input parameters (can also be done with just comments), and validation of equations (doesn't trigger often, but when it does, it's a real life saver). The 3rd use case – printing unitful quantities in the REPL, in notebooks, and in plots – is just as, if not more, important IMO.

I do not understand this (due to my limited knowledge of Julia internals). Is this about arrays containing different units, like [8 GHz, 8 mm, 8 kg]?

I don’t think that’s true for everybody. I’d say for me it’s more common that I want to dispatch on a quantity being, say, an energy, rather than having a specific unit. I see little value in that.

Regarding the original point of your thread, it's clear that embedding units/dimensions in the type domain creates problems when it comes to dealing with heterogeneous quantities. But if you remove dimensions/units from the type parameters, you lose the ability to dispatch on them, which is probably one of the most attractive features. Also, if you add more fields to the struct, you also lose the ability to easily reinterpret a Quantity.

I’m not sure there is an obvious solution, it’s a balance of trade-offs.


Oh yes totally, dispatch on physical dimension is way more useful than on specific units. I just didn't know many people used dispatch on anything unit-related at all. (Except on the fact that something is a unitful quantity in itself, as in f(x::AbstractArray{<:Quantity}); that I did often.)

Edit: Now I see, dimension matching is very useful in where clauses, to make sure different arguments / fields are of the same dimension.

Yes, that array is not type stable, and if you do this (with two more units) then you’ll hit a case where the performance is astronomically worse than if you just did runtime unit conversions.


An arguably more intuitive parametrization would be median = exp(mu) and sigma. Then you'd use LogNormal(median = 10u"m", sigma = 1) and everything would just work with units.

This is very useful simply for restriction of the method signature to only accept inputs of the correct dimension. Granted, you could always check this manually in the method body, or perhaps define suitable union types.
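For illustration, Unitful already ships dimension typealiases that make this kind of signature restriction a one-liner (a sketch, assuming Unitful.jl is available; the function name is my own):

```julia
using Unitful

# Dispatch on physical dimension, not on a specific unit:
# this accepts 3u"J", 5u"eV", 2u"kW*hr", … but rejects lengths, masses, etc.
kinetic_velocity(E::Unitful.Energy, m::Unitful.Mass) = sqrt(2E / m)
```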

To address the problem with verbosity in printing types, there is

which is an attempt at mitigating this problem by making the display of nested types more compact.


It should be possible to achieve this by modifying the corresponding show method (or, probably better, defining your own show method) and a plotting recipe.
If you have a solution, a PR to Unitful.jl would surely be welcome!


I feel your pain about printing long types, but it has historically been frowned upon to lie when printing types: once you do that, certain forms of debugging become nearly impossible. Once upon a time, it even routinely caused segfaults, though I think those days are over. Generally, though, try to avoid specializing show(io::IO, ::Type{...}) wherever possible.

I’d also discourage you from creating a new Units library without a really, really good reason. (really, really, really, really…)

The better approach is to show values and just try to avoid showing types. You should be able to fix that by creating appropriate Base.showarg methods.
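For example, Base.showarg is what produces the `Vector{…}` part of an array's summary line, so a display package could hook in there without touching show(::Type). A sketch with a toy wrapper type (standing in for an array of unitful quantities; the type and names are hypothetical):

```julia
# A toy array wrapper; imagine it holding quantities with a shared unit label.
struct Labeled{T} <: AbstractVector{T}
    data::Vector{T}
    label::String
end
Base.size(v::Labeled) = size(v.data)
Base.getindex(v::Labeled, i::Int) = v.data[i]

# Customize only the summary line; typeof() and show(::Type) stay honest.
Base.showarg(io::IO, v::Labeled, toplevel) = print(io, "Labeled ", v.label)
```

With this, `Labeled([1, 2, 3], "mV")` summarizes as `3-element Labeled mV` instead of spelling out the full type.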


Specifically on the topic of arrays with unitful entries: I do think we need a special array package for that. The point is that, e.g., linear algebra only makes sense for certain arrays, namely those where the units form an outer product of unitful vectors, one per axis. You can see that quite easily: with

$$\sum_j A_{ij} x_j$$

you can only add the terms together if the $A_{ij} x_j$ have the same units for all $j$. Consequently, the units of $A_{ij}$ must be given by $u_i v_j$, where $\mathbf{u}$ and $\mathbf{v}$ are unitful vectors.

So really unitful arrays should be handled as dimensionless arrays with axis-vectors of units.
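That idea could be prototyped as something like the following (a hypothetical type sketching only the invariant; real support would need the full AbstractArray interface):

```julia
# A matrix whose units factor as rowunits[i] * colunits[j]:
# a dimensionless payload plus one unit vector per axis.
struct AxisUnitsMatrix{T}
    data::Matrix{T}   # dimensionless numbers
    rowunits::Vector  # u: one unit per row
    colunits::Vector  # v: one unit per column
end

# Entry (i, j) conceptually carries units rowunits[i] * colunits[j].
# Matrix-vector multiplication then reduces to a plain dimensionless
# A.data * x, with units bookkeeping done once per axis rather than
# once per element.
```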