[ANN] UnitfulTensors.jl: Efficient arrays with units

Hello everyone!

I wrote a package, UnitfulTensors.jl, for working efficiently with arrays of physical quantities.

It implements most of the LinearAlgebra functionality, but with physical units (including arrays with mixed units).

Including units adds a moderate, often negligible, runtime overhead. There is also a zero-overhead mode: check the units on a small-scale problem, then turn them off with units_off() and proceed with the large-scale problem.
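
For concreteness, that workflow might look roughly like this (an illustrative sketch; the solve helper and the problem sizes are made up):

using UnitfulTensors

# Toy problem: homogeneous 1/length² matrix, charge-density right-hand side
solve(N) = UnitfulTensor(randn(N, N), 𝐋^-2 * nodims(N, N)) \ UnitfulTensor(randn(N) * u"C/m^3")

solve(10)      # small-scale run: dimensions are checked
units_off()    # switch to the zero-overhead mode
solve(1_000)   # large-scale run with the same code, without unit overhead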

Showcase

using UnitfulTensors, LinearAlgebra

# h-parameters
H = UnitfulTensor([3.7u"kΩ" 1.3e-4
                   120      8.7u"μS"])
Zₛ = 50u"Ω" # Source impedance
Yₗ = 20u"mS" # Load admittance
Hₛ = deepcopy(H)
Hₛ[1, 1] += Zₛ
Zout = inv(Hₛ)[2, 2] # Output impedance
Hₛₗ = H + Diagonal(UnitfulTensor([Zₛ, Yₗ]))
Gₜ = 4 * Zₛ * Yₗ * (Hₛₗ[2, 1] / det(Hₛₗ))^2 # Transducer power gain
Gₜ₂ = 4 * Zₛ * Yₗ * (inv(Hₛₗ)[2, 1])^2 # Equivalent expression
println((Zout, value(Zout), Zout/1u"kΩ"))
println((Gₜ, Gₜ₂))

# output

(220264.31718061675 kg m^2 A^-2 s^-3, 220264.31718061675, 220.26431718061676)
(10.2353526224919, 10.2353526224919)

Performance

Dense 3×3 matrix

using UnitfulTensors, BenchmarkTools
        
N = 3

A = randn(N, N)
b = randn(N)

uA = UnitfulTensor(A, 𝐋 * nodims(N, N))  # N×N matrix, every element has dimension length
ub = UnitfulTensor(b, 𝐌 * nodims(N))     # length-N vector, every element has dimension mass

println("Matrix * vector:")
@btime $A * $b
@btime $uA * $ub
println("Matrix * matrix:")
@btime $A * $A
@btime $uA * $uA
println("Solving a linear system:")
@btime $A \ $b
@btime $uA \ $ub

# output

Matrix * vector:
  119.563 ns (1 allocation: 80 bytes)
  160.231 ns (1 allocation: 80 bytes)
Matrix * matrix:
  82.297 ns (1 allocation: 128 bytes)
  115.884 ns (1 allocation: 128 bytes)
Solving a linear system:
  671.548 ns (3 allocations: 288 bytes)
  861.569 ns (5 allocations: 576 bytes)

Laplacian on a 10×10 grid, dense and sparse

using UnitfulTensors, LinearAlgebra, SparseArrays, BenchmarkTools

Nx = 10
N = Nx^2

Δ1d = Tridiagonal(fill(1., Nx - 1), fill(-2., Nx), fill(1., Nx - 1))
𝟙 = I(Nx)

Δ = kron(Δ1d, 𝟙) + kron(𝟙, Δ1d)
Δs = sparse(Δ)
ρ = randn(N)

uΔ = UnitfulTensor(Δ, 𝐋^-2 * nodims(N, N))
uΔs = UnitfulTensor(Δs, 𝐋^-2 * nodims(N, N))
uρ = UnitfulTensor(ρ * u"C/m^3")

println("Matrix * vector:")
@btime $Δ * $ρ
@btime $uΔ * $uρ
@btime $Δs * $ρ
@btime $uΔs * $uρ
println("Matrix * matrix:")
@btime $Δ * $Δ
@btime $uΔ * $uΔ
@btime $Δs * $Δs
@btime $uΔs * $uΔs
println("Solving a linear system:")
@btime $Δ \ $ρ
@btime $uΔ \ $uρ
@btime $Δs \ $ρ
@btime $uΔs \ $uρ

# output

Matrix * vector:
  1.618 μs (1 allocation: 896 bytes)
  2.038 μs (1 allocation: 896 bytes)
  755.763 ns (1 allocation: 896 bytes)
  1.128 μs (1 allocation: 896 bytes)
Matrix * matrix:
  56.211 μs (2 allocations: 78.17 KiB)
  57.104 μs (2 allocations: 78.17 KiB)
  16.128 μs (6 allocations: 35.50 KiB)
  16.492 μs (6 allocations: 35.50 KiB)
Solving a linear system:
  150.873 μs (4 allocations: 79.92 KiB)
  153.623 μs (6 allocations: 85.67 KiB)
  455.481 μs (125 allocations: 253.85 KiB)
  455.188 μs (127 allocations: 259.60 KiB)

Brief API overview

Feature requests and other suggestions are welcome!

Pinging those who might be interested: @ChrisRackauckas, @briochemc

10 Likes

Very nice! Did you benchmark performance against arrays of Unitful quantities? For large arrays I'd expect to see a significant speedup.

A benchmark against QuantityArrays from DynamicQuantities.jl could be nice as well.

4 Likes

Unitful.jl sometimes works well for arrays with homogeneous units, but as soon as you have heterogeneous arrays, you are doomed.
Even with homogeneous arrays there can be slowdowns by orders of magnitude; see the matrix*matrix benchmark below.
\ doesn’t work with Unitful.jl.

using Unitful, LinearAlgebra, SparseArrays, BenchmarkTools

Nx = 10
N = Nx^2

Δ1d = Tridiagonal(fill(1., Nx - 1), fill(-2., Nx), fill(1., Nx - 1))
𝟙 = I(Nx)

Δ = kron(Δ1d, 𝟙) + kron(𝟙, Δ1d)
Δs = sparse(Δ)
ρ = randn(N)

uΔ = Δ * u"m^-2"
uΔs = Δs * u"m^-2"
uρ = ρ * u"C/m^3"
uΔhet = hcat(uΔ[:, 1:N÷2], uΔ[:, N÷2+1:end] * u"m^2")
uΔshet = hcat(uΔs[:, 1:N÷2], uΔs[:, N÷2+1:end] * u"m^2")
uρhet = [uρ[1:N÷2] * u"m^2"; uρ[N÷2+1:end]]

println("Matrix * vector:")
@btime $Δ * $ρ
@btime $uΔ * $uρ
@btime $Δs * $ρ
@btime $uΔs * $uρ
println("Matrix * matrix:")
@btime $Δ * $Δ
@btime $uΔ * $uΔ
@btime $Δs * $Δs
@btime $uΔs * $uΔs
println("Matrix * vector, heterogeneous units:")
@btime $Δ * $ρ
@btime $uΔhet * $uρhet
@btime $Δs * $ρ
@btime $uΔshet * $uρhet

# output

Matrix * vector:
  1.625 μs (1 allocation: 896 bytes)
  2.599 μs (1 allocation: 896 bytes)
  755.117 ns (1 allocation: 896 bytes)
  754.833 ns (1 allocation: 896 bytes)
Matrix * matrix:
  57.442 μs (2 allocations: 78.17 KiB)
  1.156 ms (6 allocations: 108.94 KiB)
  16.610 μs (6 allocations: 35.50 KiB)
  16.015 μs (6 allocations: 35.50 KiB)
Matrix * vector, heterogeneous units:
  1.622 μs (1 allocation: 896 bytes)
  655.666 μs (20301 allocations: 318.06 KiB)
  756.780 ns (1 allocation: 896 bytes)

MethodError: no method matching zero(::Type{Any})
1 Like

QuantityArrays are nice, except for two minor inconveniences:

  • for homogeneous arrays they don’t work
  • for heterogeneous arrays they don’t exist

using DynamicQuantities

QuantityArray([1u"m";;]) * QuantityArray([1u"m";;])

# output

DimensionError: Quantity{Float64, Dimensions{DynamicQuantities.FixedRational{Int32, 25200}}}[6.9154917529978e-310 m;;] and 1.0 m² have incompatible dimensions

Maybe what you meant was

QuantityArray([1u"m";;]) .* QuantityArray([1u"m";;])

or

QuantityArray([1;;],u"m") .* QuantityArray([1;;],u"m")

which both work fine and return

1×1 QuantityArray(::Matrix{Float64}, ::Quantity{Float64, Dimensions{DynamicQuantities.FixedRational{Int32, 25200}}}):
1.0 m²

Although I agree that the matrix product returns an error even with compatible row and column sizes, which I had overlooked at first; sorry about that. That looks like a bug, doesn’t it?

EDIT: There is actually a PR about this in the works: Linear Algebra extension: `svd`, `inv`, `eigen`, `(\)` for `QuantityArray` by ggebbie · Pull Request #75 · SymbolicML/DynamicQuantities.jl · GitHub

Yep (although I personally might not call it a bug, rather a lack of an implementation :wink:; the difference being that this fails loudly, not that it is incorrect). DQ.QuantityArray is missing some linear algebra, which is indeed what the PR adds. The PR author seems not to have had much time to finish it off, so if anyone is interested, feel free to start a branch and I can help with it.

But I can also just implement Base.:* and other basic operators on QuantityArray now; that seems pretty easy to do. I’m surprised this hasn’t come up earlier! I guess I was just assuming people would use the vectorized interface instead.
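
For instance, a minimal sketch of such a method could look like this (illustrative only, not DynamicQuantities’ actual implementation; it relies on ustrip and dimension working on a QuantityArray and on the QuantityArray(values, dims) constructor):

using DynamicQuantities

# Multiply the stripped value arrays and combine the two uniform dimensions
function Base.:*(A::QuantityArray{<:Any,2}, B::QuantityArray{<:Any,2})
    return QuantityArray(ustrip(A) * ustrip(B), dimension(A) * dimension(B))
end

QuantityArray([1.0;;], u"m") * QuantityArray([2.0;;], u"m")  # 1×1 QuantityArray holding 2.0 m²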

Hm, I’m not sure I follow. This is sort of the entire reason you would use DynamicQuantities in the first place:

julia> using DynamicQuantities

julia> H = [3.7u"kΩ" 1.3e-4
            120      8.7u"μS"]  # Type stable!
2×2 Matrix{Quantity{Float64, Dimensions{FixedRational{Int32, 25200}}}}:
 3700.0 m² kg s⁻³ A⁻²               0.00013
               120.0   8.7e-6 m⁻² kg⁻¹ s³ A²

julia> Zₛ = 50u"Ω" # Source impedance
50.0 m² kg s⁻³ A⁻²

julia> Yₗ = 20u"mS" # Load admittance
0.02 m⁻² kg⁻¹ s³ A²

julia> Hₛ = deepcopy(H)
2×2 Matrix{Quantity{Float64, Dimensions{FixedRational{Int32, 25200}}}}:
 3700.0 m² kg s⁻³ A⁻²               0.00013
               120.0   8.7e-6 m⁻² kg⁻¹ s³ A²

julia> Hₛ[1, 1] += Zₛ
3750.0 m² kg s⁻³ A⁻²

All parts of this are type-stable. You can also keep the original non-SI units by using us"kΩ" instead:

julia> H = [3.7us"kΩ" 1.3e-4
            120      8.7us"μS"]
2×2 Matrix{Quantity{Float64, SymbolicDimensions{FixedRational{Int32, 25200}}}}:
 3.7 kΩ  0.00013
 120.0     8.7 μS

(Only recommended for displaying; not for computation. Use uexpand to move to SI units)
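
For example, something like:

julia> uexpand(3.7us"kΩ")
3700.0 m² kg s⁻³ A⁻²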

I guess the missing parts are the LinearAlgebra stuff like inv which that PR is supposed to add.

Does this just refer to missing parts of the LinearAlgebra interface on QuantityArray? I agree we definitely need it.

2 Likes

I was referring to QuantityArrays specifically, not to Array{<:Quantity}.
Array{<:Quantity} can indeed be heterogeneous, but it also has limited linear algebra functionality and adds quite a lot of overhead even when it works:

using DynamicQuantities, LinearAlgebra, SparseArrays, BenchmarkTools

Nx = 10
N = Nx^2

Δ1d = Tridiagonal(fill(1., Nx - 1), fill(-2., Nx), fill(1., Nx - 1))
𝟙 = I(Nx)

Δ = kron(Δ1d, 𝟙) + kron(𝟙, Δ1d)
Δs = sparse(Δ)
ρ = randn(N)

uΔ = Δ * u"m^(-2)"
uΔs = Δs * u"m^(-2)"
uρ = ρ * u"C/m^3"

println("Matrix * vector:")
@btime $Δ * $ρ
@btime $uΔ * $uρ
# uΔs * uρ doesn't work
println("Matrix * matrix:")
@btime $Δ * $Δ
@btime $uΔ * $uΔ
@btime $Δs * $Δs
@btime $uΔs * $uΔs
# \ doesn't work

# output

Matrix * vector:
  1.606 μs (1 allocation: 896 bytes)
  46.078 μs (1 allocation: 4.06 KiB)
Matrix * matrix:
  55.533 μs (2 allocations: 78.17 KiB)
  4.723 ms (6 allocations: 421.06 KiB)
  15.873 μs (6 allocations: 35.50 KiB)
  28.545 μs (6 allocations: 104.25 KiB)

UnitfulTensors.jl stores just the row and column dimensions of a matrix instead of separate dimensions for each element. This significantly reduces both the runtime and memory overhead.
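
To illustrate the idea with the h-parameter matrix from the announcement (plain Unitful here, just to show the factorization; the particular row and column dimensions are my own choice, not UnitfulTensors’ internal representation):

using Unitful

H = [3.7u"kΩ" 1.3e-4
     120      8.7u"μS"]

rowdims = [u"V", u"A"]        # dimensions attached to the rows
coldims = [u"A^-1", u"V^-1"]  # dimensions attached to the columns

# Every element's dimension is the product of its row and column dimension,
# so 2 + 2 dimensions describe the matrix instead of 2 × 2.
all(dimension(H[i, j]) == dimension(rowdims[i] * coldims[j]) for i in 1:2, j in 1:2)  # true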

UnitfulTensors.jl and DynamicQuantities.jl have somewhat complementary functionalities. Your package supports things like broadcasted operations, fill, hcat, push!, conj, abs, but not much linear algebra. Mine supports linear algebra, but misses general array manipulation, elementwise functions, and some important functions outside the scope of linear algebra.

Sure. It might be easier to implement NoAxisDimensions (representing a dimensionally homogeneous axis without explicitly storing a lot of NoDims) in UnitfulTensors.jl. Then you will get almost zero-overhead homogeneous arrays which support bunchkaufman, condskeel, and lowrankdowndate (even if you don’t need them :wink:).
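
To make the idea concrete, a hypothetical sketch (not actual UnitfulTensors code; it assumes the dimensions type D has a multiplicative identity one(D)):

# An axis whose entries are all dimensionless, stored as just a length
struct NoAxisDimensions{D} <: AbstractVector{D}
    len::Int
end

Base.size(a::NoAxisDimensions) = (a.len,)
Base.getindex(a::NoAxisDimensions{D}, i::Int) where {D} = one(D)  # no per-element storage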

These people have previously participated in discussions about unitful linear algebra or did some relevant work, so I will do a little more targeted advertising:
@goretkin, @tim.holy, @singularitti, @cstjean, @j-fu, @mschauer

1 Like

Just to avoid veering off into this type of discussion, I should mention that I don’t consider DynamicQuantities “my” package. It’s a community package for type-stable units that I happened to contribute a lot to (since I had a strong need for it) and maintain. But anybody who contributes to it can consider themselves part “owner” of it. Actually I’ve thought about moving it to the JuliaPhysics org to share admin privileges too.

Along this line of thinking, maybe you could contribute the LinearAlgebra stuff to DQ? Seems more productive than splitting into separate packages and comparing API completeness. As linked earlier, there is a stalled PR on the DQ repo for a LinearAlgebra interface. Would be hugely appreciated if you could help out!

4 Likes

I’m definitely not into proliferating packages for no reason. But how exactly should UnitfulTensors and DynamicQuantities collaborate?

I see the following options:

  • Merge UnitfulTensors into DynamicQuantities. Doesn’t look like a good idea. There must be a reason LinearAlgebra is outside Base.
  • Adapt all of UnitfulTensors’ functionality to QuantityArrays and put this into DynamicQuantities. This is duplication of code and effort.
  • Adapt some of UnitfulTensors’ functionality to QuantityArrays and put this into DynamicQuantities. Again duplication, just on a smaller scale. Why have incomplete functionality if we can have a separate, fully functional package for unitful linear algebra?
  • Keep UnitfulTensors and DynamicQuantities separate, but interoperable. UnitfulTensors will focus on arrays, DynamicQuantities will focus on scalars. This is the option I prefer.

Here is a quick and dirty example of DynamicQuantities and UnitfulTensors working together:

using DynamicQuantities, LinearAlgebra
import UnitfulTensors: UnitfulTensors, SIDimensions, AbstractUnitfulScalar, AbstractUnitfulTensor, dimexps, dimscale, nodims, ishomogeneous

# Translate between DynamicQuantities' Dimensions and UnitfulTensors' SIDimensions
# (note the different ordering of the exponent fields)
Base.convert(::Type{SIDimensions}, x::Dimensions) = SIDimensions((x.time, x.length, x.mass, x.current, x.temperature, x.amount, x.luminosity))
Base.convert(::Type{Dimensions}, x::SIDimensions) = ((t,l,m,i,θ,n,j) = dimexps(x); Dimensions(l,m,t,i,θ,j,n))
    
# Thin wrappers that plug DynamicQuantities types into the UnitfulTensors interface
struct ScalarWrapper{T, D} <: AbstractUnitfulScalar{T, SIDimensions}
    parent::Quantity{T, D}
end

struct TensorWrapper{T, N, D, Q, V} <: AbstractUnitfulTensor{ScalarWrapper{T, D}, N}
    parent::QuantityArray{T, N, D, Q, V}
end

ScalarWrapper(val, dims) = ScalarWrapper(Quantity(val, convert(Dimensions, dims)))
function TensorWrapper(vals, dims)
    ishomogeneous(dims) || throw(DimensionMismatch("QuantityArray supports only homogeneous dimensions"))
    return TensorWrapper(QuantityArray(vals, convert(Dimensions, dimscale(dims))))
end

UnitfulTensors.value(A::ScalarWrapper) = ustrip(A.parent)
UnitfulTensors.dimensions(A::ScalarWrapper) = convert(SIDimensions, dimension(A.parent))
UnitfulTensors.values(A::TensorWrapper) = ustrip(A.parent)
UnitfulTensors.dimensions(A::TensorWrapper) = convert(SIDimensions, dimension(A.parent)) * nodims(size(A.parent)...)
UnitfulTensors._scalar_type(::Type{<:ScalarWrapper}) = ScalarWrapper
UnitfulTensors._tensor_type(x::TensorWrapper) = TensorWrapper

# Quick-and-dirty generic getindex in terms of values and dimensions (defined on the abstract type)
@inline function Base.getindex(A::AbstractUnitfulTensor, inds...)
    dims = getindex(UnitfulTensors.dimensions(A), inds...)
    vals = getindex(UnitfulTensors.values(A), inds...)
    T = UnitfulTensors.promote_unitful(getindex, vals, dims, A)
    return T(vals, dims)
end

A = TensorWrapper(QuantityArray([1u"m" 0u"m"; 0u"m" 200u"cm"]))
b = TensorWrapper(QuantityArray([3u"s", 4u"s"]))
display((det(A), tr(A), norm(b), b ⋅ b))
display.((A \ b, inv(A), eigvals(A), eigvecs(A)));

# output

(2.0 m^2, 3.0 m, 5.0 s, 25.0 s^2)
2-element TensorWrapper{Float64, 1, Dimensions{Float32}, Quantity{Float64, Dimensions{Float32}}, Vector{Float64}}:
 3.0 s m^-1
 2.0 s m^-1
2×2 TensorWrapper{Float64, 2, Dimensions{Float32}, Quantity{Float64, Dimensions{Float32}}, Matrix{Float64}}:
 1.0 m^-1  0.0 m^-1
 0.0 m^-1  0.5 m^-1
2-element TensorWrapper{Float64, 1, Dimensions{Float32}, Quantity{Float64, Dimensions{Float32}}, Vector{Float64}}:
 1.0 m
 2.0 m
2×2 TensorWrapper{Float64, 2, Dimensions{Float32}, Quantity{Float64, Dimensions{Float32}}, Matrix{Float64}}:
 1.0  0.0
 0.0  1.0