[RFC/ANN] ComponentArrays.jl for building composable models without a modeling language

Hey everyone. ComponentArrays.jl is my first package and I’m looking for some feedback and to gauge interest before I think about going down the path of registering it. I wrote this package mostly for myself for my work in control systems engineering where it’s important to be able to swap out components of your model (controllers, actuators, subsystems, etc.) quickly or pull them out and run them as standalone systems for debugging. This is easy to do in modeling languages like Simulink, but I wanted to do this with DifferentialEquations.jl.

MultiScaleArrays.jl serves the same general purpose and has some cool features that ComponentArrays.jl does not, such as the ability to add components on the fly (which, for my models at least, would be a neat way to handle rocket staging and spacecraft separation). The advantage of ComponentArrays.jl is simpler syntax and easy incorporation of existing models that weren’t written with ComponentArrays in mind. Composing models is just regular old function composition.

CArray is the main exported type of ComponentArrays.jl. In their one-dimensional form, CArrays act like arbitrarily nested NamedTuples that can be passed through a solver. I say “solver” and not “differential equation solver” because they work nicely with other types of solvers as well, such as those from Optim.jl. Here is what they look and act like:

julia> c = (a=2, b=[1, 2]);
  
julia> x = CArray(a=1, b=[2, 1, 4], c=c)
CArray{Float64}(a = 1.0, b = [2.0, 1.0, 4.0], c = (a = 2.0, b = [1.0, 2.0]))
  
julia> x.c.a = 400; x
CArray{Float64}(a = 1.0, b = [2.0, 1.0, 4.0], c = (a = 400.0, b = [1.0, 2.0]))
  
julia> x[5]
400.0
  
julia> collect(x)
7-element Array{Float64,1}:
   1.0
   2.0
   1.0
   4.0
 400.0
   1.0
   2.0

julia> typeof(similar(x, Int32)) === typeof(CArray{Int32}(a=1, b=[2, 1, 4], c=c))
true

At the moment, all indexing, whether symbolic or numeric, returns views rather than copies, because that’s what makes the most sense for composable modeling. This might be surprising if you try to slice into them like x[1:4]. Maybe I’ll play around with different rules for that later, but it seems like the right way to handle it for now. I’ll also probably add a slice function and a @slice macro at some point.
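To make the view behavior concrete, here is a minimal sketch using a plain Array and `view` (purely an analogy for the CArray behavior described above, not the package internals): mutating the view mutates the parent.

```julia
# Plain-Array analogue of the view-based indexing described above:
# symbolic indexing like x[:b] hands back a view into the underlying
# data, so writes through the view are visible in the parent array.
x = [1.0, 2.0, 1.0, 4.0]

b = view(x, 2:4)   # analogous to x[:b] for a component spanning 2:4
b[1] = 100.0       # mutate through the view...

x[2] == 100.0      # ...and the parent sees the change
```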

Higher dimensional CArrays can be created too, but it’s a little messy right now. It’s probably best to avoid creating them directly. The nice thing for modeling, though, is that dimension expansion through broadcasted operations can create higher-dimensional CArrays automatically, so Jacobian cache arrays that are created internally with false .* x .* x' will be CArrays with proper axes.

julia> x2 = x .* x'
7×7 CArray{Tuple{Axis{(a = 1, b = 2:4, c = (5:7, (a = 1, b = 2:3)))},Axis{(a = 1, b = 2:4, c = (5:7, (a = 1, b = 2:3)))}},Float64,2,Array{Float64,2}}:
 1.0  2.0  1.0   4.0  2.0  1.0  2.0
 2.0  4.0  2.0   8.0  4.0  2.0  4.0
 1.0  2.0  1.0   4.0  2.0  1.0  2.0
 4.0  8.0  4.0  16.0  8.0  4.0  8.0
 2.0  4.0  2.0   8.0  4.0  2.0  4.0
 1.0  2.0  1.0   4.0  2.0  1.0  2.0
 2.0  4.0  2.0   8.0  4.0  2.0  4.0
 
julia> x2[:c,:c]
3×3 CArray{Tuple{Axis{(a = 1, b = 2:3)},Axis{(a = 1, b = 2:3)}},Float64,2,SubArray{Float64,2,Array{Float64,2},Tuple{UnitRange{Int64},UnitRange{Int64}},false}}:
 4.0  2.0  4.0
 2.0  1.0  2.0
 4.0  2.0  4.0
 
julia> x2[:a,:a]
 1.0
 
julia> x2[:a,:c]
CArray{Float64}(a = 2.0, b = [1.0, 2.0])

julia> x2[:b,:c]
3×3 CArray{Tuple{Axis{NamedTuple()},Axis{(a = 1, b = 2:3)}},Float64,2,SubArray{Float64,2,Array{Float64,2},Tuple{UnitRange{Int64},UnitRange{Int64}},false}}:
 4.0  2.0  4.0
 2.0  1.0  2.0
 8.0  4.0  8.0

As you might have been able to see above, the idea behind CArrays is to store the data as a dense array and abuse the value-as-type ability of Julia’s type system, along with some @generated functions, for indexing. The nice thing about doing it this way is that large homogeneous collections of CArrays (such as those built up during differential equation solves) don’t each have to separately store the information needed to index into them. Or at least I think they don’t, right? It also makes broadcasting, similar, and copy/copyto! much faster than all of the other approaches I tried (I went through about six iterations of this, including keeping the NamedTuple data structure and replacing its elements with views into an array).
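The value-as-type trick can be sketched in a few lines. This is not the package’s actual implementation — `MiniCA` and everything in it are illustrative — but it shows the mechanism: the name-to-index map lives in a type parameter, so a @generated function can resolve each symbolic index at compile time, and instances carry no per-instance indexing metadata beyond the data itself.

```julia
# Minimal sketch (assumed, not ComponentArrays' internals) of value-as-type
# indexing: the axis NamedTuple is a type parameter, so every MiniCA with
# the same layout shares one type and stores only its data.
struct MiniCA{Ax,A<:AbstractVector}
    data::A
end
MiniCA(data::AbstractVector, ax::NamedTuple) = MiniCA{ax,typeof(data)}(data)

@generated function Base.getindex(x::MiniCA{Ax}, ::Val{name}) where {Ax,name}
    idx = Ax[name]                     # looked up once, at compile time
    idx isa Int && return :(getfield(x, :data)[$idx])
    return :(view(getfield(x, :data), $idx))
end

x = MiniCA([1.0, 2.0, 1.0, 4.0], (a = 1, b = 2:4))
x[Val(:a)]   # scalar component
x[Val(:b)]   # view of elements 2:4
```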

Here’s a completely arbitrary and non-physical example of the use case. There are two subsystems, governed by the Lorenz and Lotka-Volterra equations, and a top-level system that handles the coupling between them. Note that the composed system could have its own state variables, or could itself be a component of a higher-level system, but it doesn’t and it isn’t because this is just a simple example.

using ComponentArrays
using DifferentialEquations
using Parameters: @unpack


tspan = (0.0, 20.0)


## Lorenz system
function lorenz!(D, u, (p, f), t)
    @unpack σ, ρ, β = p
    @unpack x, y, z = u
    
    D.x = σ*(y - x)
    D.y = x*(ρ - z) - y - f
    D.z = x*y - β*z
    return nothing
end

lorenz_p = (σ=10.0, ρ=28.0, β=8/3)
lorenz_ic = CArray(x=0.0, y=0.0, z=0.0)
lorenz_prob = ODEProblem(lorenz!, lorenz_ic, tspan, (lorenz_p, 0.0))


## Lotka-Volterra system
function lotka!(D, u, (p, f), t)
    @unpack α, β, γ, δ = p
    @unpack x, y = u
    
    D.x =  α*x - β*x*y + f
    D.y = -γ*y + δ*x*y
    return nothing
end

lotka_p = (α=2/3, β=4/3, γ=1.0, δ=1.0)
lotka_ic = CArray(x=1.0, y=1.0)
lotka_prob = ODEProblem(lotka!, lotka_ic, tspan, (lotka_p, 0.0))


## Composed Lorenz and Lotka-Volterra system
function composed!(D, u, p, t)
    c = p.c  # coupling parameter
    @unpack lorenz, lotka = u
    
    lorenz!(D.lorenz, lorenz, (p.lorenz, c*lotka.x), t)
    lotka!(D.lotka, lotka, (p.lotka, c*lorenz.x), t)
    return nothing
end

comp_p = (lorenz=lorenz_p, lotka=lotka_p, c=0.01)
comp_ic = CArray(lorenz=lorenz_ic, lotka=lotka_ic)
comp_prob = ODEProblem(composed!, comp_ic, tspan, comp_p)


## Solve problem
# We can solve the composed system...
comp_sol = solve(comp_prob)

# ...or we can unit test one of the component systems
lotka_sol = solve(lotka_prob)

There is an example in the examples folder that shows this same problem, but with composed Jacobians as well.

As far as speed goes, there is generally very little overhead penalty to using CArrays over regular Arrays. I’ve been benchmarking as I go along and don’t have anything formal yet, but that will come if there is interest in the package. The one issue is that indexing like x[:a] rather than x.a is a little spotty speed-wise, even though both are handled the same way. Sometimes constant propagation works and sometimes it doesn’t, and I don’t have a good feel for when or why. Since multidimensional CArrays don’t have the x.a option, you can use Vals as indices, like x[Val(:a), Val(:b)]. If anyone has ideas on this, let me know. I might also just be benchmarking incorrectly.
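The reason Val helps can be illustrated with a toy dispatch example (`f` is hypothetical, not part of the package): a Symbol’s value is not part of its type, so specializing on it relies on constant propagation, whereas Val lifts the symbol into the type domain where specialization is guaranteed.

```julia
# Value-domain vs. type-domain symbol dispatch. The Symbol method must
# branch at run time unless the compiler constant-folds the argument;
# the Val methods are selected by dispatch itself.
f(s::Symbol)  = s === :a ? 1 : 2
f(::Val{:a}) = 1
f(::Val)     = 2

f(:a)        # relies on constant propagation to fold the branch
f(Val(:a))   # always resolved at compile time via dispatch
```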


Would it be an oversimplification to say this is a mutable named tuple that has array operations supported on the values? If so, it’s definitely useful!


Yep, that’s pretty much it. Actually, that’s almost exactly what I wrote in the README:

In fact, a good way to think about them is as arbitrarily nested, mutable NamedTuples that can be passed through a solver.


I can think of a lot of use cases for code readability here. Thanks for sharing, contributing and explaining!


Cool, thanks for making this and sharing. One question: how does this compare to RecursiveArrayTools.jl? I’m familiar with ArrayPartition from there, which seems to address similar use cases.


Shameless advertising: you may also want to take a look at ValueShapes; some of the stuff I have in there (esp. NamedTupleShape) is quite similar to what ComponentArrays does. Maybe we could even converge some of the functionality?


@jonniedie, I played a bit with ComponentArrays.jl, and I really like what you’ve done there. If you register it, I’d love to add support for it in ValueShapes.jl. ValueShapes provides a generic way to describe/reify the structure of things - e.g., taking your example above, the shape of

CArray(x=0.0, y=0.0, z=0.0)

would be

vshape = NamedTupleShape(
    x = ScalarShape{Real}(),
    y = ScalarShape{Real}(),
    z = ScalarShape{Real}()
)

ValueShapes offers a duality of views between flat Arrays and structures, so you can do

A = [1.0, 1.0, 1.0]
x = vshape(A)

x is then semantically a 0-dimensional array of NamedTuple (not a plain NamedTuple, to make it writable). For many use cases, this is what I want - but in other use cases, your CArray (semantically still an array similar to A, but with added NamedTuple-like semantics) may be better.

So I’d like to add a way to wrap A in a CArray instead, as an alternative.

I’d offer to include ComponentArrays.jl in ValueShapes.jl, but ComponentArrays.jl is way more lightweight, dependency-wise (ValueShapes supports Distributions, to express structured priors, etc.). So yay for registering ComponentArrays.jl!


Cool. Have you seen GitHub - JuliaArrays/StructArrays.jl: Efficient implementation of struct arrays in Julia?


@lostella ArrayPartitions are another great way of addressing this use case. Here are some key advantages of CArrays over ArrayPartitions:

  1. Having named rather than positional partitions helps avoid passing the wrong partition in large, complicated models. This is especially helpful when working on a team where different people are responsible for different subcomponents. I know u.motor can always be passed to the motor dynamics function regardless of the order in which someone entered the initial conditions. And if they didn’t include a named motor partition, you get a nice, helpful error saying so. With ArrayPartitions, if u.x[1] is the wrong partition because someone entered the initial conditions in the wrong order, the incorrect dynamics can propagate without any errors.
  2. CArrays preserve their indexing structure through broadcasted dimension expansion, so Jacobians are also composable. With ArrayPartitions, the Jacobian cache array is just a plain array, so the ability to break out partitions into sub components is lost. For example:
julia> ca = CArray(a=100, b=[4, 1.3], c=(a=(a=1, b=[1.0, 4.4]), b=[0.4, 2, 1, 45]))
CArray{Float64}(a = 100.0, b = [4.0, 1.3], c = (a = (a = 1.0, b = [1.0, 4.4]), b = [0.4, 2.0, 1.0, 45.0]))

julia> ap = ArrayPartition([100], [4, 1.3], ArrayPartition(ArrayPartition([1], [1, 4.4]), [0.4, 2, 1, 45]))
([100], [4.0, 1.3], [1][1.0, 4.4][0.4, 2.0, 1.0, 45.0])

julia> ca_jac = ca .* ca'
10×10 CArray{Tuple{Axis{(a = 1, b = 2:3, c = (4:10, (a = (1:3, (a = 1, b = 2:3)), b = 4:7)))},Axis{(a = 1, b = 2:3, c = (4:10, (a = (1:3, (a = 1, b = 2:3)), b = 4:7)))}},Float64,2,Array{Float64,2}}:
 10000.0  400.0  130.0   100.0  100.0  440.0   40.0   200.0  100.0  4500.0
   400.0   16.0    5.2     4.0    4.0   17.6    1.6     8.0    4.0   180.0
   130.0    5.2    1.69    1.3    1.3    5.72   0.52    2.6    1.3    58.5
   100.0    4.0    1.3     1.0    1.0    4.4    0.4     2.0    1.0    45.0
   100.0    4.0    1.3     1.0    1.0    4.4    0.4     2.0    1.0    45.0
   440.0   17.6    5.72    4.4    4.4   19.36   1.76    8.8    4.4   198.0
    40.0    1.6    0.52    0.4    0.4    1.76   0.16    0.8    0.4    18.0
   200.0    8.0    2.6     2.0    2.0    8.8    0.8     4.0    2.0    90.0
   100.0    4.0    1.3     1.0    1.0    4.4    0.4     2.0    1.0    45.0
  4500.0  180.0   58.5    45.0   45.0  198.0   18.0    90.0   45.0  2025.0

julia> ap_jac = ap .* ap'
10×10 Array{Float64,2}:
 10000.0  400.0  130.0   100.0  100.0  440.0   40.0   200.0  100.0  4500.0
   400.0   16.0    5.2     4.0    4.0   17.6    1.6     8.0    4.0   180.0
   130.0    5.2    1.69    1.3    1.3    5.72   0.52    2.6    1.3    58.5
   100.0    4.0    1.3     1.0    1.0    4.4    0.4     2.0    1.0    45.0
   100.0    4.0    1.3     1.0    1.0    4.4    0.4     2.0    1.0    45.0
   440.0   17.6    5.72    4.4    4.4   19.36   1.76    8.8    4.4   198.0
    40.0    1.6    0.52    0.4    0.4    1.76   0.16    0.8    0.4    18.0
   200.0    8.0    2.6     2.0    2.0    8.8    0.8     4.0    2.0    90.0
   100.0    4.0    1.3     1.0    1.0    4.4    0.4     2.0    1.0    45.0
  4500.0  180.0   58.5    45.0   45.0  198.0   18.0    90.0   45.0  2025.0

julia> ca_jac[:c,:c]
7×7 CArray{Tuple{Axis{(a = (1:3, (a = 1, b = 2:3)), b = 4:7)},Axis{(a = (1:3, (a = 1, b = 2:3)), b = 4:7)}},Float64,2,SubArray{Float64,2,Array{Float64,2},Tuple{UnitRange{Int64},UnitRange{Int64}},false}}:
  1.0   1.0    4.4    0.4    2.0   1.0    45.0
  1.0   1.0    4.4    0.4    2.0   1.0    45.0
  4.4   4.4   19.36   1.76   8.8   4.4   198.0
  0.4   0.4    1.76   0.16   0.8   0.4    18.0
  2.0   2.0    8.8    0.8    4.0   2.0    90.0
  1.0   1.0    4.4    0.4    2.0   1.0    45.0
 45.0  45.0  198.0   18.0   90.0  45.0  2025.0
  3. Changing a CArray’s eltype with a broadcasted operation like Float32.(x) preserves its full recursive structure, whereas doing the same with an ArrayPartition loses the structure of any nested partitions. This is necessary for use in Optim.jl, which uses this method to fill the array with Dual types for autodiff. For example:
julia> ca32 = Float32.(ca)
CArray{Float32}(a = 100.0f0, b = Float32[4.0, 1.3], c = (a = (a = 1.0f0, b = Float32[1.0, 4.4]), b = Float32[0.4, 2.0, 1.0, 45.0]))

julia> ap32 = Float32.(ap)
(Float32[100.0], Float32[4.0, 1.3], Float32[1.0, 1.0, 4.4, 0.4, 2.0, 1.0, 45.0])

julia> ca32.c
CArray{Float32}(a = (a = 1.0f0, b = Float32[1.0, 4.4]), b = Float32[0.4, 2.0, 1.0, 45.0])

julia> ap32.x[3]
7-element Array{Float32,1}:
  1.0
  1.0
  4.4
  0.4
  2.0
  1.0
 45.0
  4. CArray allows you to have scalar elements if you want them. Note that the first element of ap above has to be wrapped in an array. This is more of a convenience feature than a necessity. It is nice, though, for components that hold subcomponents but also have their own state variables to update.

  5. Speed. CArrays are faster in almost every operation I’ve tried. I should mention, though, that since ArrayPartitions are more eager to turn into plain arrays, they don’t carry any additional overhead after the first operation on them. Some quick comparisons:

julia> using BenchmarkTools

julia> a = collect(ca)  # so we can compare with plain arrays
10-element Array{Float64,1}:
 100.0
   4.0
   1.3
   1.0
   1.0
   4.4
   0.4
   2.0
   1.0
  45.0

julia> a_jac = a .* a'
10×10 Array{Float64,2}:
 10000.0  400.0  130.0   100.0  100.0  440.0   40.0   200.0  100.0  4500.0
   400.0   16.0    5.2     4.0    4.0   17.6    1.6     8.0    4.0   180.0
   130.0    5.2    1.69    1.3    1.3    5.72   0.52    2.6    1.3    58.5
   100.0    4.0    1.3     1.0    1.0    4.4    0.4     2.0    1.0    45.0
   100.0    4.0    1.3     1.0    1.0    4.4    0.4     2.0    1.0    45.0
   440.0   17.6    5.72    4.4    4.4   19.36   1.76    8.8    4.4   198.0
    40.0    1.6    0.52    0.4    0.4    1.76   0.16    0.8    0.4    18.0
   200.0    8.0    2.6     2.0    2.0    8.8    0.8     4.0    2.0    90.0
   100.0    4.0    1.3     1.0    1.0    4.4    0.4     2.0    1.0    45.0
  4500.0  180.0   58.5    45.0   45.0  198.0   18.0    90.0   45.0  2025.0

julia> @btime $a' * $a
  12.524 ns (0 allocations: 0 bytes)
12069.210000000001

julia> @btime $ca' * $ca
  15.229 ns (0 allocations: 0 bytes)
12069.210000000001

julia> @btime $ap' * $ap
  40.299 μs (323 allocations: 6.05 KiB)
12069.210000000001

julia> @btime $a_jac * $a
  91.340 ns (1 allocation: 160 bytes)
10-element Array{Float64,1}:
      1.206921e6
  48276.840000000004
  15689.973000000002
  12069.210000000001
  12069.210000000001
  53104.52400000001
   4827.684
  24138.420000000002
  12069.210000000001
 543114.45

julia> @btime $ca_jac * $ca
  99.559 ns (1 allocation: 160 bytes)
10-element Array{Float64,1}:
      1.206921e6
  48276.840000000004
  15689.972999999998
  12069.210000000001
  12069.210000000001
  53104.52400000001
   4827.684
  24138.420000000002
  12069.210000000001
 543114.45

julia> @btime $ap_jac * $ap
  310.801 μs (2374 allocations: 37.72 KiB)
([1.206921e6], [48276.840000000004, 15689.972999999998], [12069.210000000001][12069.210000000001, 53104.52400000001][4827.684, 24138.420000000002, 12069.210000000001, 543114.45])

julia> @btime $a * $a'
  107.296 ns (2 allocations: 912 bytes)
10×10 Array{Float64,2}:
 10000.0  400.0  130.0   100.0  100.0  440.0   40.0   200.0  100.0  4500.0
   400.0   16.0    5.2     4.0    4.0   17.6    1.6     8.0    4.0   180.0
   130.0    5.2    1.69    1.3    1.3    5.72   0.52    2.6    1.3    58.5
   100.0    4.0    1.3     1.0    1.0    4.4    0.4     2.0    1.0    45.0
   100.0    4.0    1.3     1.0    1.0    4.4    0.4     2.0    1.0    45.0
   440.0   17.6    5.72    4.4    4.4   19.36   1.76    8.8    4.4   198.0
    40.0    1.6    0.52    0.4    0.4    1.76   0.16    0.8    0.4    18.0
   200.0    8.0    2.6     2.0    2.0    8.8    0.8     4.0    2.0    90.0
   100.0    4.0    1.3     1.0    1.0    4.4    0.4     2.0    1.0    45.0
  4500.0  180.0   58.5    45.0   45.0  198.0   18.0    90.0   45.0  2025.0

julia> @btime $ca * $ca'
  327.107 ns (8 allocations: 1.16 KiB)
10×10 Array{Float64,2}:
 10000.0  400.0  130.0   100.0  100.0  440.0   40.0   200.0  100.0  4500.0
   400.0   16.0    5.2     4.0    4.0   17.6    1.6     8.0    4.0   180.0
   130.0    5.2    1.69    1.3    1.3    5.72   0.52    2.6    1.3    58.5
   100.0    4.0    1.3     1.0    1.0    4.4    0.4     2.0    1.0    45.0
   100.0    4.0    1.3     1.0    1.0    4.4    0.4     2.0    1.0    45.0
   440.0   17.6    5.72    4.4    4.4   19.36   1.76    8.8    4.4   198.0
    40.0    1.6    0.52    0.4    0.4    1.76   0.16    0.8    0.4    18.0
   200.0    8.0    2.6     2.0    2.0    8.8    0.8     4.0    2.0    90.0
   100.0    4.0    1.3     1.0    1.0    4.4    0.4     2.0    1.0    45.0
  4500.0  180.0   58.5    45.0   45.0  198.0   18.0    90.0   45.0  2025.0

julia> @btime $ap * $ap'
  509.199 μs (3588 allocations: 60.28 KiB)
10×10 Array{Float64,2}:
 10000.0  400.0  130.0   100.0  100.0  440.0   40.0   200.0  100.0  4500.0
   400.0   16.0    5.2     4.0    4.0   17.6    1.6     8.0    4.0   180.0
   130.0    5.2    1.69    1.3    1.3    5.72   0.52    2.6    1.3    58.5
   100.0    4.0    1.3     1.0    1.0    4.4    0.4     2.0    1.0    45.0
   100.0    4.0    1.3     1.0    1.0    4.4    0.4     2.0    1.0    45.0
   440.0   17.6    5.72    4.4    4.4   19.36   1.76    8.8    4.4   198.0
    40.0    1.6    0.52    0.4    0.4    1.76   0.16    0.8    0.4    18.0
   200.0    8.0    2.6     2.0    2.0    8.8    0.8     4.0    2.0    90.0
   100.0    4.0    1.3     1.0    1.0    4.4    0.4     2.0    1.0    45.0
  4500.0  180.0   58.5    45.0   45.0  198.0   18.0    90.0   45.0  2025.0

julia> @btime similar($a, Float32)
  21.062 ns (1 allocation: 128 bytes)
10-element Array{Float32,1}:
 1.0f-44
 0.0
 7.0f-45
 0.0
 8.0f-45
 0.0
 7.0f-45
 0.0
 7.0f-45
 0.0

julia> @btime similar($ca, Float32)
  26.761 ns (2 allocations: 144 bytes)
CArray{Float32}(a = 1.0f-44, b = Float32[0.0, 7.0f-45], c = (a = (a = 0.0f0, b = Float32[8.0f-45, 0.0]), b = Float32[7.0f-45, 0.0, 7.0f-45, 0.0]))

julia> @btime similar($ap, Float32)
  6.760 μs (14 allocations: 672 bytes)
(Float32[0.0], Float32[8.0f-45, 0.0], Float32[7.0f-45]Float32[0.0, 0.0]Float32[0.0, 0.0, 2.2164372f-27, 0.0])

You can see that there are still a lot of optimizations to be made for CArrays, especially for ca * ca'. Those will be added. Also, right now linear algebra operations convert CArrays to plain arrays on purpose, but it might be nicer for consistency to keep them as CArrays.


@datnamer: Cool. Have you seen GitHub - JuliaArrays/StructArrays.jl: Efficient implementation of struct arrays in Julia

Yes, I use StructArrays a lot (thanks for that great package, @piever).


What about straight broadcast? Broadcast of an ArrayPartition against an Array or Adjoint isn’t specialized, but against two ArrayPartitions it is. So just ap .+ ap2 or something like that should be a lot better because it avoids the linear indexing fallback.

@oschulz ValueShapes looks cool. I’ll have to play around with it to get a feel for where they could work together. Yeah, the goal was definitely to keep ComponentArrays lightweight, dependency-wise.

@ChrisRackauckas Hmm. I’m still getting μs times for broadcasting on two ArrayPartitions.

julia> @btime $ca .+ $ca
  34.712 ns (2 allocations: 176 bytes)
CArray{Float64}(a = 200.0, b = [8.0, 2.6], c = (a = (a = 2.0, b = [2.0, 8.8]), b = [0.8, 4.0, 2.0, 90.0]))

julia> @btime $ap .+ $ap
  21.899 μs (136 allocations: 2.53 KiB)
([200.0], [8.0, 2.6], [2.0, 2.0, 8.8, 0.8, 4.0, 2.0, 90.0])

julia> ap2 = recursivecopy(ap)
([100.0], [4.0, 1.3], [1.0][1.0, 4.4][0.4, 2.0, 1.0, 45.0])

julia> @btime $ap .+ $ap2
  21.900 μs (136 allocations: 2.53 KiB)
([200.0], [8.0, 2.6], [2.0, 2.0, 8.8, 0.8, 4.0, 2.0, 90.0])

This is all in Julia v1.4.0. I also tried v1.3.0 to see if something had changed and got a weird result:

julia> ap2 = recursivecopy(ap)
([100.0], [4.0, 1.3], [1.0][1.0, 4.4][0.4, 2.0, 1.0, 45.0])

julia> @btime $ap .+ $ap
  11.700 μs (38 allocations: 896 bytes)
(, , [2.0][2.0, 8.8])

julia> @btime $ap .+ $ap2
  11.999 μs (38 allocations: 896 bytes)
(, , [2.0][2.0, 8.8])

julia> temp = ap .+ ap2
(, , [2.0][2.0, 8.8])

julia> temp.x[1]
()

EDIT: I’m using RecursiveArrayTools v2.3.1 in both cases.

@datnamer I have. As of right now, CArrays aren’t compatible with StructArrays, and I suspect it’s because I don’t expose the fields of CArrays as visible properties (I didn’t want users to have to worry about name collisions with CArray’s internal fields, so propertynames and getproperty only see the user-facing property names, while getfield sees the internal field names). I’d like the package to be compatible with StructArrays, so I’ll probably do a little work in that area. Thanks for the tip!
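The getproperty/propertynames split described here can be sketched like this (all names are illustrative, not ComponentArrays’ actual internals): user-facing component names go through getproperty and propertynames, while the real struct fields stay reachable only via getfield.

```julia
# Sketch of hiding internal fields behind user-facing property names,
# so component names can never collide with the wrapper's own fields.
struct Wrapped
    data::Vector{Float64}
    names::Vector{Symbol}
end

# Users only ever see the component names...
Base.propertynames(w::Wrapped) = Tuple(getfield(w, :names))

# ...and getproperty routes them to positions in the flat data.
function Base.getproperty(w::Wrapped, s::Symbol)
    i = findfirst(==(s), getfield(w, :names))
    i === nothing && error("no component $s")
    return getfield(w, :data)[i]
end

w = Wrapped([1.0, 2.0], [:a, :b])
w.a                 # user-facing name
getfield(w, :data)  # internal field, still reachable explicitly
```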


Interesting, I wonder if it’s the constructor. What about in-place broadcast?

In-place broadcasting does solve the v1.3.0 weirdness.

julia> @btime $ap2 .= $ap .+ $ap
  8.733 μs (47 allocations: 944 bytes)
([200.0], [8.0, 2.6], [2.0][2.0, 8.8][0.8, 4.0, 2.0, 90.0])

This looks promising. I like that you say this could be used for passing parameters into Optim, but I don’t quite see how that would work. Could you provide a simple example?


This is a pretty trivial example that could just as easily have been accomplished with LabelledArrays.jl. I have some examples from work where I was doing autopilot filter design, but I unfortunately can’t share them. I’ll try to cook up a better example and put it in the package.


using ComponentArrays
using Optim

rosen(u) = (1.0 - u.x)^2 + 100.0 * (u.y - u.x^2)^2

u₀ = CArray(x=0, y=0)
lb = CArray(x=-0.5, y=-0.5)
ub = CArray(x=0.5, y=0.5)

df = TwiceDifferentiable(rosen, u₀; autodiff=:forward)
dfc = TwiceDifferentiableConstraints(lb, ub)

val = optimize(df, dfc, u₀, IPNewton())

But the gist of it was that I wanted to test the difference between optimizing parts of the system separately and optimizing the whole system at once.

From an interface perspective, consider calling your new type ComponentArray rather than CArray. It’s conventional in Julia to name a package after the main type it exports (DataFrames exports DataFrame), so CArray is a bit unexpected and perhaps more likely to conflict.
