Accessing fields within a generated function

Hi,
I would like to have generated functions that work with, and can access properties of, the fields of a type at compile time.

Specifically,
I wrote a lot of “parameter container” types like “Covariance Matrix” and “Positive Vector” that have constraints.
What I want is for someone to be able to build a model, and say

struct MyModelsParameters{T} <: Parameters{T}
    Σ::CovarianceMatrix{3, T}
    Θ::ProbabilityVector{4, T}
    μ::MVector{2, T}
end

and have functions defined on the abstract type “Parameters”:

@generated function Base.size(A::Parameters)
    ...
end
@generated function Base.getindex(A::Parameters, i::Int)
    ...
end
@generated function Base.setindex!(A::Parameters, v::Real, i::Int)
    ...
end

such that for an object of type MyModelsParameters, we have

julia> isa(myparams, MyModelsParameters)
true
julia> size(myparams)
9

That is, I would like to have size add 3+4+2 = 9.
Similarly for get/setindex, I want them to compile to something like

function Base.getindex(A::MyModelsParameters, i::Int)
    if i <= 3
        A.Σ[i]
    elseif i <= 7
        A.Θ[i - 3]
    else
        A.μ[i - 7]
    end
end
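
The setindex! counterpart would be the same chain with writes instead of reads (a sketch using the same offsets; it assumes each field type supports setindex!):

function Base.setindex!(A::MyModelsParameters, v::Real, i::Int)
    if i <= 3
        A.Σ[i] = v
    elseif i <= 7
        A.Θ[i - 3] = v
    else
        A.μ[i - 7] = v
    end
end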

The parameter vector would be indexed many thousands of times, so performance is a major concern. But so is ease and intuitiveness of use; having to specify 3 as the length parameter of a 2x2 covariance matrix is already pushing it.

I confess, I am new to meta-programming and don’t have the firmest grasp on many performance issues, so I am not even sure to what extent an implementation like the above would improve performance versus something more like MultiScaleArrays.

EDIT: Now reading Reflection and introspection
I think this makes things clear. This thread can be closed.
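
In short, the relevant reflection calls seem to be along these lines (a sketch, with S standing for some concrete subtype of Parameters):

fieldnames(S)             # the field names, e.g. :Σ, :Θ, :μ
S.types                   # the field types, e.g. CovarianceMatrix{3, T}
S.types[1].parameters[1]  # the length encoded in the type, e.g. 3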

EDIT:
I could add enough code for a working example, but

@generated function Base.size{T <: Parameters}(A::T)
  p = 0
  for i ∈ T.types
    p += length(i)
  end
  return :($p)
end

is not doing what I thought it was.
It does return the correct answer, but it is 300 times slower than

return_seven(a) = 7
@benchmark size(po)
BenchmarkTools.Trial: 
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     15.604 ns (0.00% GC)
  median time:      15.863 ns (0.00% GC)
  mean time:        16.240 ns (0.00% GC)
  maximum time:     35.409 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     998

@benchmark return_seven(po)
BenchmarkTools.Trial: 
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     0.054 ns (0.00% GC)
  median time:      0.054 ns (0.00% GC)
  mean time:        0.054 ns (0.00% GC)
  maximum time:     0.078 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1000

I had thought the two functions should compile into the same thing.

Also

@code_warntype return_seven(po)
Variables:
  #self#::#return_seven
  a::Any

Body:
  begin 
      return 7
  end::Int64

@code_warntype size(po)
Variables:
  #temp#@_1
  #temp#@_2

Body:
  begin 
      return $(QuoteNode(7))
  end::Int64

Hmm.

You can use getindex(getfield(A,fieldnames(A)[j]),i) to generate this function.

for j in 1:3
  fn = fieldnames(A)[j]                # field name, a Symbol known at compile time
  p = A.types[j].parameters[1]         # length stored in the field's type parameter
  quote
    f = getfield(A, $(QuoteNode(fn)))  # field access resolves at compile time
    @inbounds f[i - $p]
  end
end

Then of course put those together in the control flow statement. I believe that should get you something where the getfield will be type-inferred since fn will be known at compile-time, and i - number will inline the constant. Of course, check the compiled code.
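
Putting those pieces together, one untested way to assemble the whole generated getindex, assuming each field type stores its element count as its first type parameter:

@generated function Base.getindex{T <: Parameters}(A::T, i::Int)
  offset = 0
  branches = Expr[]
  for (fn, ft) in zip(fieldnames(T), T.types)
    len = ft.parameters[1]   # assumed: element count is the first type parameter
    push!(branches, :(i <= $(offset + len) && return getfield(A, $(QuoteNode(fn)))[i - $offset]))
    offset += len
  end
  quote
    $(branches...)
    throw(BoundsError(A, i))
  end
end

Each boundary and offset is spliced in as a literal, so the comparisons and getfield calls should all resolve at compile time.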


Performance can be improved over MultiScaleArrays, since it uses bisection and dynamic arrays internally, but I was surprised at how well MultiScaleArrays actually does:

I think that in many cases, the speedups will come from avoiding indexing altogether, though. For these “types which hold arrays to build an array”, you can specialize broadcasting like:

and then it’s just broadcasts on the component arrays, which can be quite efficient.
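
For illustration (a hypothetical two-field container, not MultiScaleArrays’ actual code), an elementwise update then never touches the flat index:

struct TwoBlocks{T}
    a::Vector{T}
    b::Vector{T}
end

function scale!(p::TwoBlocks, c)
    p.a .*= c   # broadcast directly on each component array
    p.b .*= c
    p
end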

Another thing to look at is just making it a CatView. CatViews.jl has a few tricks to make the iterator fast, and likely does some other performance tricks as well.
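
For example (assuming CatViews’ exported CatView constructor, which lazily concatenates the component vectors):

using CatViews

a = rand(3); b = rand(4)
x = CatView(a, b)   # acts like a length-7 vector backed by a and b
x[5]                # reads b[2] without copying
length(x)           # 7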


I’m getting up in <6 hours, so I’m headed to bed shortly, but –

Thanks for the response ChrisRackauckas!
Your MultiScaleArrays package was the inspiration behind the parameter vectors, and one of the intentions was to pass the parameter object to Optim.

A modification of your indexing solution definitely looks like it will work.

CatView looks great.

About broadcasting—
I am doing some operations where this would be useful.
The parameter object must also be passed through Optim (although Optim doesn’t support this yet) and have autodiff find both first and second derivatives. I still have to dig into the code to see everything that entails.

The entire parameter vector must then be set equal to new values from a grid of points.
That means all the values have to be reassigned many thousands of times.

Maybe there is a much better way to handle that. Perhaps I can instead work on viewing a vector (i.e., one set of coordinates on the grid) as the parameters of whatever shape users specify?
If I try to do something like that, I still want a user to only have to somehow list the parameter objects they are using.
Perhaps they could define a parameter block in the same way and pass it through a function that instantiates a model object containing a pre-allocated vector (although I’m unsure whether there is actually any need for that), while flexibly allowing the smaller parameter objects to project views onto any vector (i.e., those of the grid).
I still have a lot to work out, but that could then let me just pass a regular vector to Optim et al.
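
To make that concrete, here is a rough sketch of the idea with plain Base views (hypothetical names and sizes):

grid = randn(9, 1000)   # 9 parameters × 1000 grid points
θ = view(grid, :, 1)    # one set of coordinates on the grid

# Each parameter block would wrap its own slice of θ, so reassigning the
# grid column updates every block without copying.
Σ_part = view(θ, 1:3)
Θ_part = view(θ, 4:7)
μ_part = view(θ, 8:9)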

I tried to reproduce your QuoteNode problem, but as of Julia 0.6, @code_warntype showed return 7. Could you provide a MWE? In any case, you don’t have to write return :($p); just write return p. Generated functions can output constants. Another factor is that the return type of size(po) is unknown if the type of po is not known, but in a type-stable situation, I assume that you would get the same timing results as with return_seven. Try comparing @benchmark size($po) with @benchmark return_seven($po). The dollar sign means that the value will be spliced in, hence Julia will know which size function to call (this is the type-stable situation).


Reproducible example:

julia> abstract type Parameters{T} <: AbstractArray{T,1} end

julia> abstract type ConstrainedParameters{p,T} <: Parameters{T} end

julia> abstract type ConstrainedVector{p,T} <: ConstrainedParameters{p,T} end

julia> @generated function Base.show{T <: Parameters}(io::IO, ::MIME"text/plain", Θ::T)
         quote
           for j in fieldnames(T)
             println(getfield(Θ, j))
           end
         end
       end

julia> Base.IndexStyle(::Parameters) = IndexLinear()

julia> struct PositiveVector{p, T} <: ConstrainedVector{p, T}
         Θ::Vector{T}
         x::Vector{T}
       end

julia> PositiveVector{T}(x::Vector{T}) = PositiveVector{length(x), T}(log.(x), x)
PositiveVector

julia> Base.length{p,T}(::Type{PositiveVector{p,T}}) = p

julia> Base.getindex(x::ConstrainedVector, i::Int) = x.x[i]

julia> Base.show(io::IO, ::MIME"text/plain", Θ::ConstrainedVector) = print(io, Θ.x)

julia> Base.size(x::ConstrainedVector) = size(x.x)

julia> struct param{T} <: Parameters{T}
         pv1::PositiveVector{3,T}
         pv2::PositiveVector{4,T}
       end

julia> @generated function Base.size{T <: Parameters}(A::T)
         p = 0
         for i ∈ T.types
           p += length(i)
         end
         (p, )
       end

julia> po = param(PositiveVector(rand(3)), PositiveVector(rand(4)))
[0.582707, 0.0199256, 0.439869]
[0.372666, 0.168147, 0.204537, 0.224757]


julia> size(po)
(7,)

julia> @code_warntype size(po)
Variables:
  #temp#@_1
  #temp#@_2

Body:
  begin 
      return $(QuoteNode((7,)))
  end::Tuple{Int64}

Oddly, if we rerun the exact same definition for the size method, we get a different result

julia> @generated function Base.size{T <: Parameters}(A::T)
         p = 0
         for i ∈ T.types
           p += length(i)
         end
         (p, )
       end

julia> @code_warntype size(po)
Variables:
  #self#::Base.#size
  A::Any

Body:
  begin  # line 2:
      return (7,)
  end::Tuple{Int64}

As for running @benchmark size($po), you predicted correctly:

julia> using BenchmarkTools

julia> @benchmark size(po)
BenchmarkTools.Trial: 
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     15.628 ns (0.00% GC)
  median time:      15.837 ns (0.00% GC)
  mean time:        16.879 ns (0.00% GC)
  maximum time:     34.214 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     998

julia> @benchmark size($po)
BenchmarkTools.Trial: 
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     0.053 ns (0.00% GC)
  median time:      0.054 ns (0.00% GC)
  mean time:        0.054 ns (0.00% GC)
  maximum time:     0.118 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1000

julia> return_seven(A) = 7;

julia> @benchmark return_seven(po)
BenchmarkTools.Trial: 
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     0.053 ns (0.00% GC)
  median time:      0.054 ns (0.00% GC)
  mean time:        0.056 ns (0.00% GC)
  maximum time:     1.795 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1000

I think the following helps contextualize the above:

julia> a = randn(7);

julia> @benchmark size(a)
BenchmarkTools.Trial: 
  memory estimate:  16 bytes
  allocs estimate:  1
  --------------
  minimum time:     24.832 ns (0.00% GC)
  median time:      25.506 ns (0.00% GC)
  mean time:        28.013 ns (2.91% GC)
  maximum time:     1.398 μs (91.99% GC)
  --------------
  samples:          10000
  evals/sample:     996

julia> @benchmark size($a)
BenchmarkTools.Trial: 
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     1.944 ns (0.00% GC)
  median time:      2.157 ns (0.00% GC)
  mean time:        2.133 ns (0.00% GC)
  maximum time:     20.403 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     1000

I.e., 15 ns is nothing to worry about.

I still find the QuoteNode appearing and then disappearing strange, suggesting there is a lot going on I don’t understand.

PS: I am a fan of Julia 0.6’s easy copy-and-paste into the REPL.


Glad to hear it! 🙂