DynamicQuantities.jl v0.7.0: fast and lightweight physical quantities

Happy to share DynamicQuantities.jl v0.7.0!



The package has had some major changes over the last few versions, so I think it deserved a new post.

DynamicQuantities defines a simple statically-typed Quantity type for storing physical units.

Physical dimensions are stored as a value, as opposed to a parametric type, as in Unitful.jl. This is done to allow for calculations where physical dimensions are not known at compile time.

Changes since the last post (described in more detail below):

  • Performance improvements (via better compiler inlining)
  • Units, and unit parsing via the @u_str macro
  • Physical constants, and constant parsing via the Constants.* prefix (also in @u_str)
  • SymbolicDimensions for working with symbolic units and constants
    • The standard Dimensions type eagerly converts to SI base units; SymbolicDimensions avoids this.
    • Symbolic unit/constant parsing via the @us_str macro.
  • QuantityArray <: AbstractArray for efficiently storing arrays of quantities that have the same units
  • Extensions for ScientificTypes.jl, LinearAlgebra.jl, as well as an extension to convert to/from Unitful.jl quantities
  • AbstractQuantity and AbstractDimensions for extending behavior or defining custom spaces of physical dimensions

Thanks to contributions from Gaurav Arya @odow @Oscar_Smith @j-fu, and suggestions/feedback from @non-Jedi @ChrisRackauckas @mcabbott and many others in the last thread and on GitHub!


DynamicQuantities can greatly outperform Unitful when the compiler cannot infer dimensions in a function:

julia> using BenchmarkTools, DynamicQuantities; import Unitful

julia> dyn_uni = 0.2u"m^0.5 * kg * mol^3"
0.2 m¹ᐟ² kg mol³

julia> unitful = convert(Unitful.Quantity, dyn_uni)
0.2 kg m¹ᐟ² mol³

julia> f(x, i) = x ^ i * 0.3;

julia> @btime f($dyn_uni, 1);
  2.750 ns (0 allocations: 0 bytes)

julia> @btime f($unitful, 1);
  2.718 μs (30 allocations: 1.34 KiB)

Note the μ and n: this is a 1000x speedup. Here, the DynamicQuantities quantity object allows the compiler to build a function that is type stable, while the Unitful quantity object, which stores its dimensions in the type, requires type inference at runtime.
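One way to see this difference, as a sketch (assuming `Base.promote_op` mirrors what inference concludes here), is to ask whether the result type of `^` with a runtime integer exponent is concrete:

```julia
using DynamicQuantities
import Unitful

dyn = 0.2u"m"
uni = convert(Unitful.Quantity, dyn)

# DynamicQuantities: one concrete result type, whatever the exponent is.
isconcretetype(Base.promote_op(^, typeof(dyn), Int))  # true

# Unitful: the result dimensions depend on the exponent's *value*,
# so inference cannot narrow the result to a concrete type.
isconcretetype(Base.promote_op(^, typeof(uni), Int))  # false
```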

However, if the dimensions in your function can be inferred by the compiler, then you can get better speeds with Unitful:

julia> g(x) = x ^ 2 * 0.3;

julia> @btime g($dyn_uni);
  1.875 ns (0 allocations: 0 bytes)

julia> @btime g($unitful);
  1.500 ns (0 allocations: 0 bytes)

While both of these are type stable, because Unitful parametrizes the type on the dimensions, functions can specialize to units and the compiler can optimize away units from the code.

Aside: the compiler seems to be pretty good at inlining things, especially with the recent update (thanks to Gaurav Arya for helping get this through!), so this performance gap has shrunk even for statically typed calculations. The calculation above used to be 5x in favor of Unitful. However, this depends on compiler constant propagation, so how big the gap is will depend on the calculation.


At the heart of the package are just two immutable structs:

struct Dimensions{R<:Real} <: AbstractDimensions{R}

struct Quantity{T,D<:AbstractDimensions} <: AbstractQuantity{T,D}

The R type here is typically a rational-like number (the default is an internal FixedRational type – a rational number with fixed denominator which gives much faster operations than Rational).

What’s nice about the abstract interface is that you can just write a custom AbstractDimensions type with whatever physical dimensions you want to use; via @oxinabox’s Tricks.jl, static_fieldnames is used to compile all of the unit propagation methods at first call. So, e.g., struct MyDimensions{R} <: AbstractDimensions{R}; length::R; time::R; end works out of the box! (Quantity(0.3, MyDimensions, length=1, time=-1) would then be 0.3 m/s, and you could calculate away.)
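As a sketch (MyDimensions is a made-up name, and this assumes the default keyword handling that AbstractDimensions subtypes get):

```julia
using DynamicQuantities

# A custom dimension space with only length and time:
struct MyDimensions{R} <: AbstractDimensions{R}
    length::R
    time::R
end

# Unspecified dimensions default to zero:
x = Quantity(0.3, MyDimensions, length=1, time=-1)

# Unit propagation compiles at first call and works out of the box:
x^2  # value 0.09, with length = 2 and time = -2
```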

Generic usage

You can create a Quantity object by using the convenience macro u"...":

julia> x = 0.3u"km/s"
300.0 m s⁻¹

julia> room_temp = 100u"kPa"
100000.0 m⁻¹ kg s⁻²

This supports a wide range of SI base and derived units, with common prefixes.

You can also construct values explicitly with the Quantity type, with a value and keyword arguments for the powers of the physical dimensions (mass, length, time, current, temperature, luminosity, amount):

julia> x = Quantity(300.0, length=1, time=-1)
300.0 m s⁻¹

Elementary calculations with +, -, *, /, sqrt, cbrt, abs are supported, and ^ will use rationalize to get a reasonable power from exponentiation:

julia> y = Quantity(42.0, mass=1)
42.0 kg

julia> x * y
12600.0 m kg s⁻¹

julia> x / y
7.142857142857143 m kg⁻¹ s⁻¹

julia> x ^ 3
2.7e7 m³ s⁻³

julia> x ^ -1
0.0033333333333333335 m⁻¹ s

julia> sqrt(x)
17.320508075688775 m¹ᐟ² s⁻¹ᐟ²

julia> x ^ 1.5
5196.152422706632 m³ᐟ² s⁻³ᐟ²

Each of these values has the same type, which means we don’t need to perform type inference at runtime.
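A quick check of this claim (a small sketch, redefining x as above):

```julia
using DynamicQuantities

x = Quantity(300.0, length=1, time=-1)

# Every result below shares one concrete type with x,
# so no type inference is needed at runtime:
typeof(x) == typeof(x^3) == typeof(sqrt(x)) == typeof(x^-1)  # true
```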

Furthermore, we can do dimensional analysis by detecting DimensionError:

julia> x + 3 * x
1200.0 m s⁻¹

julia> x + y
ERROR: DimensionError: 300.0 m s⁻¹ and 42.0 kg have incompatible dimensions

The dimensions of a Quantity can be accessed either with dimension(quantity) for the entire Dimensions object:

julia> dimension(x)
m s⁻¹

or with umass, ulength, etc., for the various dimensions:

julia> umass(x)
0//1

julia> ulength(x)
1//1
Finally, you can strip units with ustrip:

julia> ustrip(x)
300.0

There are a variety of physical constants accessible via the Constants submodule:

julia> Constants.c
2.99792458e8 m s⁻¹

These can also be used inside the u"..." macro:

julia> u"Constants.c * Hz"
2.99792458e8 m s⁻²

For the full list, see the docs.

Symbolic Units

You can also choose to not eagerly convert to SI base units, instead leaving the units as the user had written them. For example:

julia> q = 100us"cm * kPa"
100.0 cm kPa

julia> q^2
10000.0 cm² kPa²

You can convert to regular SI base units with expand_units:

julia> expand_units(q^2)
1.0e6 kg² s⁻⁴

This also works with constants:

julia> x = us"Constants.c * Hz"
1.0 Hz c

julia> x^2
1.0 Hz² c²

julia> expand_units(x^2)
8.987551787368176e16 m² s⁻⁴

This dimensions type works a bit differently as it stores all the dimensions in a sparse vector (source for the curious). All unit calculations are performed as operations on this sparse vector (this is convenient for user interfaces, and still reasonably fast, but not as fast as the regular immutable Dimensions).
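A consequence worth noting (a small sketch): symbolic quantities built from entirely different units still share one concrete type, since the unit exponents live in the value rather than the type:

```julia
using DynamicQuantities

a = us"cm * kPa"
b = us"Constants.c * Hz"

# Same concrete type despite different units:
typeof(a) == typeof(b)  # true
```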


For working with an array of quantities that have the same dimensions, you can use a QuantityArray:

julia> ar = QuantityArray(rand(3), u"m/s")
3-element QuantityArray(::Vector{Float64}, ::Quantity{Float64, Dimensions{DynamicQuantities.FixedRational{Int32, 25200}}}):
 0.2729202669351497 m s⁻¹
 0.992546340360901 m s⁻¹
 0.16863543422972482 m s⁻¹

This QuantityArray is a subtype <:AbstractArray{Quantity{Float64,Dimensions{...}},1},
meaning that indexing a specific element will return a Quantity:

julia> ar[2]
0.992546340360901 m s⁻¹

julia> ar[2] *= 2
1.985092680721802 m s⁻¹

julia> ar[2] += 0.5u"m/s"
2.485092680721802 m s⁻¹

These operations also perform dimension checks.

This has a custom broadcasting interface which allows the compiler to avoid redundant dimension calculations and dimensional analysis, relative to if you had simply used an array of quantities:

julia> f(v) = v^2 * 1.5;

julia> @btime $f.(xa) setup=(xa = randn(100000) .* u"km/s");
  109.500 μs (2 allocations: 3.81 MiB)

julia> @btime $f.(qa) setup=(xa = randn(100000) .* u"km/s"; qa = QuantityArray(xa));
  50.917 μs (3 allocations: 781.34 KiB)

So we can see the QuantityArray version saves on both time and memory.
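As a small sketch of that broadcasting interface (assuming, as the examples above suggest, that broadcasting preserves the QuantityArray wrapper):

```julia
using DynamicQuantities

qa = QuantityArray(ones(3), u"m/s")
qb = qa .^ 2  # dimensions are computed once for the whole array

qb isa QuantityArray                       # true
dimension(qb[1]) == dimension(u"m^2/s^2")  # true
```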


DynamicQuantities allows you to convert back and forth from Unitful.jl:

julia> import Unitful; import DynamicQuantities

julia> x = 0.5Unitful.u"km/s"
0.5 km s⁻¹

julia> y = convert(DynamicQuantities.Quantity, x)
500.0 m s⁻¹

julia> y2 = y^2 * 0.3
75000.0 m² s⁻²

julia> x2 = convert(Unitful.Quantity, y2)
75000.0 m² s⁻²

julia> x^2*0.3 == x2
true


One other thing to mention, which is actually part of the reason I started working on this package: SymbolicRegression.jl now supports dimensional constraints as part of equation discovery searches! This feature required the creation of this package, as expressions are runtime generated so Unitful.jl could not be used efficiently. Example here: Examples · SymbolicRegression.jl


Thanks for the package! It’ll also be useful outside of the diffeq ecosystem sometimes.

I wonder how compatible its interface is with Unitful? That is, can one take code, replace using Unitful with DynamicQuantities, and be done? Or, is this among future goals?

Also, just a minor comment:
The name clearly conveys what this package does, but the short description doesn’t really differentiate it from Unitful.

fast and lightweight physical quantities

simple statically-typed type for storing physical units

Both are applicable to Unitful as-is. Maybe, highlight dynamic/runtime/untyped nature more?


That is, can one take code, replace using Unitful with DynamicQuantities, and be done?

I think this is mostly there? The same core symbols: @u_str, uparse, and ustrip can be used the same way (mostly; I don’t allow u"(m, s)" for example as it would require type instability), and @u_str creates a Quantity. This Quantity type in both packages forwards all the basic mathematical methods you would expect. You can also dispatch on Quantity{Float64} in both packages to expect quantities with a Float64 value type. But please raise an issue if there are any major incompatibilities I’m missing! I would like it to be compatible aside from the major design differences –

The main difference which we are stuck with is from how they treat dimensions at the type level (if that is what you are asking). So if you are dispatching on dimensions, like

julia> using Unitful

julia> x = 1.5u"m/s"
1.5 m s⁻¹

julia> typeof(x)
Quantity{Float64, 𝐋 𝐓⁻¹, Unitful.FreeUnits{(m, s⁻¹), 𝐋 𝐓⁻¹, nothing}}

and then dispatching on <:Quantity{Float64, 𝐋 𝐓⁻¹}, then you would have to change that code to look at the values of the quantity instead. Because DynamicQuantities.jl does this:

julia> using DynamicQuantities

julia> x = 1.5u"m/s"
1.5 m s⁻¹

julia> typeof(x)
Quantity{Float64, Dimensions{FixedRational{Int32, 25200}}}

This is the same type regardless of the units used. The dimensions are stored in dimension(x) which is a simple immutable struct:

julia> Base.propertynames(dimension(x))
(:length, :mass, :time, :current, :temperature, :luminosity, :amount)

which are all rationals.
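So a dispatch on dimensions like the Unitful example above would instead become a runtime check on values; a hypothetical helper (is_velocity is a made-up name) might look like:

```julia
using DynamicQuantities

# Compare dimensions at runtime instead of dispatching on them:
is_velocity(q) = dimension(q) == dimension(u"m/s")

is_velocity(1.5u"cm/s")  # true
is_velocity(1.5u"kg")    # false
```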

Good point, thanks! Any suggestions for wording it would be much appreciated. Maybe something like DynamicQuantities defines a simple type-consistent Quantity for storing physical units?

This is very nice indeed. I do suspect however that this is not for large-scale simulations? There is a penalty to pay when storing and manipulating a large number of numbers with units?


Thanks. So if you are confident that there are no type instabilities due to failure to infer units during compilation (which can really hurt performance), or even things like multiple units in a single array (which IIRC might promote to Array{Any}?), then yes, Unitful will be faster.

How much faster depends on the calculation and how much inlining the compiler does.

Sometimes the compiler can propagate unit values pretty well by itself, without needing to encode everything in the type (as seen in the above comparison). And the QuantityArray means it might choose to process the units only once for the entire array. For storage costs, this would be a single Dimensions for the whole array — there is not a separate Dimensions stored for each number. So only 32 bytes extra per entire array.

My gut feeling is that if this is a highly tuned and type stable simulation with a few expensive kernels that are called repeatedly, then I might stick with Unitful as it would completely eliminate the cost. But since the API is pretty similar you might as well try and see :wink:

The other pro is you can take full advantage of precompilation, since you don’t need to compile the simulation for all possible units, only once for a generic Quantity.


Not inlining but branch prediction. The switch statements of “if this unit” etc. are actually “free” when running in a loop, because your CPU does speculative execution: it just starts executing one branch under the assumption that it’s correct, and if it is, there’s no cost. When run in a loop, the CPU effectively “learns” which branches to predict (internal to the CPU, some cool stuff), so the branches predict better and the cost of having them is effectively zero.

Given this fact, I think way too many people overestimate the utility of putting this unit information into the type domain. That said, the existence of branches does cause LLVM to forgo SIMD, so if you had a loop that SIMD’d very well, you would pay the price of that being eliminated, because the runtime checks interfere with the assumptions of SIMD.

But that said, for a lot of ODE/PDE models, the main cost is in the LU factorizations, and if that just ignores units, this shouldn’t be too noticeable a hit to the f evaluations, so it shouldn’t show up as a major contributor in “most” profiles. You can definitely find some cases where this matters (very small cases getting very good SIMD), but I think people should really go to this technique first before trying Unitful in almost every case, because Unitful is a lot more work for a performance optimization that only exists in a minority of cases (especially because many cases with Unitful will hit Array{Any} and actually be orders of magnitude slower due to dynamic dispatch).

Moral of the story, DynamicQuantities.jl is really good.


I think the issue is that the additional overhead due to units is often due to running something like unit1 * unit2 = unit3 repeatedly in a loop? This computation isn’t a branch but rather some arithmetic operations that are redundantly performed each iteration.

Using a QuantityArray we could hope for this to be lifted out of the loop since the compiler should in theory be able to prove that unit1 and unit2 are loop invariant, but this only seems to happen in limited cases dependent upon inlining heuristics.

(Spitballing here, but if pure branch computations are free, I wonder if we could optimize unit1 * unit2 to be a pure branch computation in common cases via a lookup table-esque approach…)

The compiler doesn’t lift it out in this case, but branch prediction can make the cost trivial anyways because there is no cost to well-predicted branches, and a simple structure will predict well. Again, what you lose in performance is just SIMD.
