Help with simplified StaticArrays implementation

Problem: StaticArrays.jl is too large of a dependency and has too many extra features.

To simplify my dependencies, I need a simplified variant of SVector, MVector, and SizedVector.

Solution: In AbstractTensors.jl they are replaced with Values, Variables, and FixedVector.

julia> using AbstractTensors # loads 10x faster than StaticArrays

julia> a = Values(1,2,3)
3-element Values{3,Int64} with indices SOneTo(3):
 1
 2
 3

My goal is to have an implementation that is as robust as StaticArrays but with much simpler functionality, limited to AbstractVector instead of AbstractArray and without all the matrix code.

Any pull requests, advice or suggestions for improvements would be welcome.

Grassmann.jl will no longer need to depend on StaticArrays with this simplified implementation.


from zulip:


By the way, that’s a very early implementation that’s far from ready for extensive use. It’s based on the idea that we could probably replace our entire set of dense arrays with something that wraps a linear collection and a tuple of axes. So we can compose the array like this:

struct DArray{T,N,Axs,D} <: AbstractArray{T,N}
    data::D   # vector/tuple/etc
    axes::Axs # e.g., NTuple{N,OneTo{Int}}
end

I’m using ArrayInterface as the common point for the array API, so theoretically it could be lightweight.
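A runnable sketch of that composition (the method definitions here are assumed for illustration, not the actual in-progress code):

```julia
# Sketch: a dense array as a linear collection plus a tuple of axes.
struct DArray{T,N,Axs,D} <: AbstractArray{T,N}
    data::D     # vector/tuple/etc
    axes::Axs   # e.g., NTuple{N,OneTo{Int}}
end

Base.axes(A::DArray) = A.axes
Base.size(A::DArray) = map(length, A.axes)
Base.IndexStyle(::Type{<:DArray}) = IndexLinear()
Base.getindex(A::DArray, i::Int) = A.data[i]   # linear indexing into the data

# A 2×3 array composed from a flat Vector and two OneTo axes:
A = DArray{Int,2,NTuple{2,Base.OneTo{Int}},Vector{Int}}(
    collect(1:6), (Base.OneTo(2), Base.OneTo(3)))
```

With `IndexLinear`, cartesian indexing like `A[2,3]` falls back to the single linear `getindex`, so only one method is needed.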


I’m rewriting PaddedMatrices (may change the name of the library) to be based on ArrayInterface, specifically my pending PR.
Size and strides, for example, are both represented by SDTuple, the hybrid dynamic-static tuple to mix known and unknown information.

Things like views and adjoints will just return another one of these arrays with correspondingly adjusted size, strides, and shifted pointer, rather than a wrapper.
This avoids the need to define many overloads, e.g. f(::PaddedMatrices.StrideArray), f(::Adjoint{T,<:PaddedMatrices.StrideArray}), etc.
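The idea can be sketched with a toy strided type (all names here are hypothetical, not PaddedMatrices’ actual API): the adjoint of a real-valued matrix is just the same type with the size and stride tuples swapped, no wrapper involved.

```julia
# Sketch: a strided matrix whose adjoint is the same type with swapped metadata.
struct MyStrideArray{T} <: AbstractMatrix{T}
    data::Vector{T}
    size::NTuple{2,Int}
    strides::NTuple{2,Int}
    offset::Int
end

Base.size(A::MyStrideArray) = A.size
Base.getindex(A::MyStrideArray, i::Int, j::Int) =
    A.data[A.offset + (i - 1) * A.strides[1] + (j - 1) * A.strides[2] + 1]

# Adjoint (for real eltypes): swap the size and stride tuples.
adj(A::MyStrideArray) = MyStrideArray(A.data, reverse(A.size), reverse(A.strides), A.offset)

A = MyStrideArray(collect(1:6), (2, 3), (1, 2), 0)  # column-major 2×3
B = adj(A)                                          # behaves as the 3×2 transpose
```

Because `B` is again a `MyStrideArray`, a single `f(::MyStrideArray)` method covers both the array and its adjoint.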

I’m inclined to add “offset” just to cover the gauntlet (representing the num-based indexing, 1 for most arrays, 2 if using TwoBasedIndexing, etc).

There’s a conflict between not committing type piracy and keeping things lightweight.
One of the goals of that ArrayInterface PR is to try and reduce what’s needed to implement an array type and get good LoopVectorization support. But perhaps we should focus more specifically on such an array interface.


There’s still a lot I think we could add to ArrayInterface to make generic support for things like static arrays possible. I have a lot of ideas but only so much time to put this stuff together. I should probably take a closer look at PaddedMatrices.

I’m inclined to add “offset” just to cover the gauntlet (representing the num-based indexing, 1 for most arrays, 2 if using TwoBasedIndexing, etc).

I’d be interested in seeing what approach you have for this. I have some pretty well tested stuff for managing offset indexing but I need to clean it up before the inner workings are presentable.

Can you qualify or quantify? What makes it “too large” of a dependency?


I haven’t pushed the rewrite code yet. You’ve probably already seen most of the interesting stuff in ArrayInterface, although not examples of using it. I didn’t want to merge the PR in case I want to change things or think of better representations.

It’d just be another SDTuple, that would default to entirely static 1s for most array types, and entirely dynamic for OffsetArrays. But now it’d have the advantage of supporting hybrids.
But maybe it’d be best to just leave this to wrapper types like OffsetArrays.jl.

A disadvantage of the current SDTuple approach is that it encodes the statically known information as a tuple, e.g. (3,4) rather than a Tuple type, as in Tuple{3,4}.
Tuples are far nicer to work with, and I grew tired of the boilerplate. But Tuple types let you do what StaticArrays does:

const SVector{M,T} = SArray{Tuple{M},T,1,M}
const SMatrix{M,N,T,L} = SArray{Tuple{M,N},T,2,L} 

So we’d just have to have a function like StaticArrays.Size that’ll convert the Tuple type to a tuple value.
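A minimal sketch of such a conversion (the MySArray type and size_tuple helper are hypothetical; the real StaticArrays.Size is more elaborate):

```julia
# Sketch: encode the size as a Tuple type parameter, then recover it as a value.
struct MySArray{S<:Tuple,T,N,L} <: AbstractArray{T,N}
    data::NTuple{L,T}
end

# Convert a Tuple type such as Tuple{3,4} into the tuple value (3, 4).
size_tuple(::Type{S}) where {S<:Tuple} = Tuple(S.parameters)

Base.size(a::MySArray{S}) where {S} = size_tuple(S)
Base.IndexStyle(::Type{<:MySArray}) = IndexLinear()
Base.getindex(a::MySArray, i::Int) = a.data[i]

a = MySArray{Tuple{2,2},Int,2,4}((1, 2, 3, 4))
```

This keeps the pleasant `MySArray{Tuple{M,N},...}` alias style while letting internal code work with an ordinary `(M, N)` tuple.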
This adds boilerplate for the library authors/people working on internals, but I guess it’s worth it, as I suspect users wouldn’t be happy with types like MySVector{(4,)} or writing code like:

const MySVector{M,T,L} = MySArray{M,T,1,L}
function foo(x::MySVector{MT}) where {MT}
    (M,) = MT
    for m in 1:M
        # ...
    end
end

Hopefully methods like sdsize (static-dynamic hybrid size), sdstride (static-dynamic hybrid strides), and known_length on axes make a good foundation for working with possibly static arrays in a generic manner.
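For instance, a known_length-style query (a toy version here; ArrayInterface’s actual definitions differ) returns the length when it is encoded in the type and `nothing` when it is only known at run time:

```julia
# Sketch: query whether a collection's length is statically known from its type.
known_length(::Type{<:NTuple{N,Any}}) where {N} = N   # tuples: length in the type
known_length(::Type{<:Vector}) = nothing              # vectors: run-time only
known_length(x) = known_length(typeof(x))

known_length((1, 2, 3))   # static, recovered from the type
known_length([1, 2, 3])   # nothing; must call length at run time
```

Generic code can then branch on `known_length` to unroll loops for static containers while falling back to dynamic length otherwise.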

Long load time, lots of invalidations making loading other packages slow, on Julia 1.5:

julia> using StaticArrays

julia> @time using Example
  0.656540 seconds (1.33 M allocations: 66.060 MiB, 7.79% gc time)

where Example is a dummy module.

But check out the fruits of Tim Holy’s labor, Julia 1.6:

julia> using StaticArrays

julia> @time using Example
  0.002901 seconds (2.90 k allocations: 185.891 KiB)

@time using StaticArrays itself goes from about 0.59 to 0.48 s for me on Julia 1.5 vs a build of Julia 1.6 that only has some of these improvements.

Anyway, I think type piracy is also a great reason for chakravala to be interested in defining his own types. That is, he’s likely to want to define his own methods on these arrays that may conflict with the existing methods. E.g., he may want to define his own A \ b method without pirating.

This is why I think a statically sized array interface will be great to have (and why Zach_Chrisensen and I’ve been working on it): don’t reinvent the wheels you don’t plan on improving or experimenting with, and go through an interface so other unrelated packages can benefit from that static sizing information.


StaticArrays takes about 1 second to load.

AbstractTensors takes 0.1 seconds to load.

The only feature from StaticArrays I need for the entire Grassmann.jl ecosystem can load in 0.1 seconds.

Grassmann.jl loads 2x faster without StaticArrays.

Why would I want to wait ~4 seconds to load my packages, when I can cut it to 2 seconds by not depending on StaticArrays?

Also, in Grassmann.jl I can solve a linear system faster than StaticArrays does, so it’s not as though I can’t do linear algebra without it.

Therefore, I want Grassmann’s multi-linear algebra to not depend on the large StaticArrays package.

Not quite true, no. While, yes, I do define my own linear algebra methods, I already had my own types for that derived from wrapping SVector. Type piracy was never the issue. The main issue is that SVector is too large of a dependency, because it needs to load all sorts of extra things for matrices.

Note that the size in my static variant is always 1-dimensional; I never support higher-dimensional arrays, because I can represent those as nested vectors.
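The nested-vector representation can be sketched with plain tuples standing in for Values (the matvec helper is hypothetical, not AbstractTensors’ API): a matrix is a row of column vectors, and a matrix-vector product is a linear combination of the columns.

```julia
# Sketch: a 2×2 "matrix" as two nested column vectors, using tuples for Values.
cols = ((1, 2), (3, 4))   # columns of [1 3; 2 4]

# Matrix-vector product as a linear combination of the columns.
matvec(cols, x) = ntuple(i -> sum(cols[j][i] * x[j] for j in eachindex(cols)),
                         length(cols[1]))

matvec(cols, (1, 1))   # == (4, 6), i.e. [1 3; 2 4] * [1, 1]
```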

So in AbstractTensors there is no Size; there is always only a simple length and no other dimensions.

That’s what makes it simplified and different.

As you can see, I am in alignment with this approach, since I believe in constructing only linear indexing initially, with matrices getting constructed from that later on.

I agree on this. I am only doing this as an experiment, and it is work I’d rather not have to do, but it seems to be making a big difference in load times.


The switch deprecating StaticArrays has now been released in Grassmann v0.6. The Grassmann packages no longer depend on the StaticArrays package.

if haskey(ENV,"STATICJL")
    import StaticArrays: SVector, MVector, SizedVector, StaticVector
    const Values,Variables,FixedVector,TupleVector = SVector,MVector,SizedVector,StaticVector
end

However, an environment variable option has been provided in AbstractTensors, should an advanced user decide that they’d prefer to use the types from StaticArrays instead.


You cannot do that if you don’t actually depend on StaticArrays, though:

$ STATICJL=true julia -e 'using AbstractTensors'
ERROR: LoadError: ArgumentError: Package AbstractTensors does not have StaticArrays in its dependencies:

Also, you forgot to remove StaticArrays from the project file, so you still depend on it in that sense, just never load it.

That is intentional: Grassmann depends on StaticArrays for documentation generation, but not for the main Grassmann module. It is also intentional that AbstractTensors does not have StaticArrays in it, but I suppose it could be added if users want that.

You don’t need that, since you explicitly add it when generating the docs. I would recommend you simply add those extra packages to docs/Project.toml and use the recommended approach.
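For reference, that layout keeps documentation-only dependencies in a separate project file, along these lines (a sketch; the UUIDs are the registered ones for these packages):

```toml
# docs/Project.toml — dependencies used only for building the documentation
[deps]
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
```

The docs are then built with `julia --project=docs docs/make.jl`, so the main package’s Project.toml never mentions StaticArrays.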


I just came across a package by @andyferris, maybe this is what you were looking for.


StaticArrays takes 0.6 seconds to load in Julia v1.6, while Grassmann takes 0.75 seconds, which makes Grassmann a good alternative to StaticArrays now.

I’m having a hard time replacing Matrix objects with Grassmann equivalents. I simply do not understand enough about the Grassmann algebra. Any recommended reading to do or maybe a tutorial for some plain old real valued matrices?
I think your package is really cool and I’d like to understand. :smiley:


There are several possibilities for representing matrix-like objects in Grassmann. If you need an element of the orthogonal group, the outermorphisms can be used for that. Otherwise, if it is an actual matrix representation you are after, you can get it by nesting column vectors inside a row vector:

julia> using Grassmann; basis"2"
(⟨××⟩, v, v₁, v₂, v₁₂)

julia> outer(1v1+2v2,3v1+4v2)
(3v₁ + 6v₂)v₁ + (4v₁ + 8v₂)v₂

julia> ans ⋅ (v1+v2)
7v₁ + 14v₂

julia> using StaticArrays

julia> SVector(1,2)*SVector(3,4)'
2×2 SArray{Tuple{2,2},Int64,2,4} with indices SOneTo(2)×SOneTo(2):
 3  4
 6  8

julia> ans * SVector(1,1)
2-element SArray{Tuple{2},Int64,1,2} with indices SOneTo(2):
  7
 14

So the basic API for matrices in Grassmann will be column vectors nested in row vectors. However, this API is not yet finalized and the outer method will probably be changed, so there will be breaking changes and more features added in the future.

There will be an official tutorial announcement explaining this in the future, when I finalize the API.


Cool! Thank you. I’ll try that out, and I’m looking forward to the tutorial!
