[RFC/ANN] ComponentArrays.jl for building composable models without a modeling language

That’s funny, because I actually slightly preferred the name ComponentArray to CArray (CArray sounds like an array being passed into a C call or something), but I ended up going with CArray to follow the convention I’ve seen in other array-type packages (LArray in LabelledArrays.jl, SArray in StaticArrays.jl, etc.). I’ll have to reconsider, though, before I go too much further.

The other thing that pulled me toward CArray is that the name ComponentArray breaks easy tab-completion of ComponentArrays when I want to call an unexported function in the REPL while working on the package. But that’s not really a good reason.


Interesting. Looks like the issue is just with nesting. ArrayPartition broadcast is completely non-allocating and fast when it’s only one level deep:

using RecursiveArrayTools, Test

# One-level ArrayPartition: two Float64 inner arrays
xce0 = ArrayPartition(zeros(2), [0.0])
xcde0 = copy(xce0)

# In-place broadcast, simplest case
function foo(y, x)
    y .= y .+ x
    nothing
end
foo(xcde0, xce0)                       # compile first
@test 0 == @allocated foo(xcde0, xce0)

# Redefine foo with a slightly more involved fused broadcast
function foo(y, x)
    y .= y .+ 2 .* x
    nothing
end
foo(xcde0, xce0)
@test 0 == @allocated foo(xcde0, xce0)

using BenchmarkTools
@btime foo(xcde0, xce0) # 29.548 ns (0 allocations: 0 bytes)

but falls off once more levels of nesting are added. I wonder why. At least this doesn’t affect the implementation of the symplectic integrators, which only use one level.
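
For concreteness, here’s a rough sketch of the nested case (foo_nested! and the variable names are just for illustration); going by the behavior above, I’d expect the allocation count to come back nonzero once the partitions are nested:

using RecursiveArrayTools, BenchmarkTools

# Two levels of nesting: an ArrayPartition containing another ArrayPartition
x_nested = ArrayPartition(ArrayPartition(zeros(2), [0.0]), [0.0])
y_nested = copy(x_nested)

function foo_nested!(y, x)
    y .= y .+ 2 .* x
    nothing
end

foo_nested!(y_nested, x_nested)            # compile first
@allocated foo_nested!(y_nested, x_nested) # nonzero here is the slowdown in question
@btime foo_nested!($y_nested, $x_nested)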


I wonder if it has to do with the different eltypes of the contained arrays; a couple of those in the example are Int64s. I didn’t really think that through when I made the example, because CArrays promote everything to a common type, and one of the things I was testing when I originally wrote that example was its ability to do that. But that probably makes for an unfair comparison.

When I can get to a computer, I’ll try it out with all Float64[...] inner arrays.
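
In case it helps, here’s a sketch of the comparison I have in mind (baz! and the variable names are just placeholders): the same nested shape, once with an Int64 inner array mixed in and once with all-Float64 inner arrays. Comparing the two timings should show whether the eltype mixing, rather than the nesting itself, is to blame:

using RecursiveArrayTools, BenchmarkTools

# Nested partitions: one with an Int64 inner array, one all-Float64 as a control
x_mixed = ArrayPartition(ArrayPartition(zeros(2), [0]), [0.0])   # note the Int64 [0]
x_f64   = ArrayPartition(ArrayPartition(zeros(2), [0.0]), [0.0]) # all Float64
y_mixed, y_f64 = copy(x_mixed), copy(x_f64)

baz!(y, x) = (y .= y .+ 2 .* x; nothing)   # same fused broadcast as before

baz!(y_mixed, x_mixed); baz!(y_f64, x_f64) # compile first
@btime baz!($y_mixed, $x_mixed)
@btime baz!($y_f64, $x_f64)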