How do I call a function in a lookup table while ensuring type stability?

A simple example is:

julia> f() = 1
f (generic function with 1 method)

julia> g() = 2
g (generic function with 1 method)

julia> table = (f=f, g=g)
(f = f, g = g)

julia> get_val(x::Symbol) = table[x]()::Int
get_val (generic function with 1 method)

julia> @code_warntype get_val(:f)
1 ─ %1 = Base.getindex(Main.table, x)::Any
│   %2 = (%1)()::Any
│   %3 = Core.typeassert(%2, Main.Int)::Int64
└──      return %3

The programmer may extend get_val by defining their own function, making sure it returns an Int, and putting it into table.


julia> @ballocated get_val(:f)

julia> @btime get_val(:f)
  42.482 ns (0 allocations: 0 bytes)

julia> @btime f()
  1.332 ns (0 allocations: 0 bytes)

Although I didn’t see any allocation when calling get_val, it’s way slower than calling the functions in table directly. Does that have anything to do with the Any types reported by @code_warntype? And how can I make get_val fast?

Possible ways I could think of are:

  1. Use @eval to generate the function definition of get_val and make it a huge if-else expression based on table. I failed to figure out how to do that (I do have some experience writing Lisp macros, though).
  2. Use FunctionWrappers.jl. I haven’t tried but assume it would work. However I’m curious if we can solve the problem without using external packages.
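For what it’s worth, option 1 can be sketched without writing a macro by hand: build the if/else chain programmatically from the keys of table and splice it in with @eval. This is a sketch using the f/g/table definitions from above; the error branch is illustrative:

```julia
f() = 1
g() = 2
table = (f = f, g = g)

# Build a nested if/elseif expression, one branch per key in `table`,
# then define `get_val` from it at module scope via `@eval`.
let body = :(error("unknown key: ", x))
    for k in reverse(keys(table))
        # Each branch compares the runtime symbol `x` against a literal
        # key and calls the corresponding function spliced in as a literal.
        body = Expr(:if, :(x === $(QuoteNode(k))), :($(table[k])()), body)
    end
    @eval get_val(x::Symbol) = $body::Int
end

get_val(:f)  # 1
get_val(:g)  # 2
```

Because every function call in the generated body is a literal, the compiler can infer each branch, and the whole thing reduces to a chain of symbol comparisons. The downside is that get_val must be regenerated whenever table changes.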


What underlying problem are you trying to solve? i.e. why do you need a lookup table of functions? Maybe there is a more idiomatic (and performant) approach.


An approximate description of my problem is: I have a struct called Field, which wraps an array recording some variables on the grid of a physical field.

Some variables are not recorded but can be computed on the fly from the recorded ones; let’s call them “derived variables”. I want to read their values as if I were indexing an actual array, so I rolled my own array type, which looks something like

struct DerivedArray{T, N} <: AbstractArray{T, N}
    base::Field
    var::Symbol
end

Base.size(x::DerivedArray) = size(x.base.underlying_array)

function Base.getindex(x::DerivedArray{T, N}, I::Vararg{Int, N}) where {T, N}
    get_val(x.base, x.var, I...)
end

function get_val(x::Field, var::Symbol, inds...)
    table[var](x, inds...)
end

You can see the get_val function looks like get_val in the OP.

Some options:

  1. Instead of storing var::Symbol in your DerivedArray, store the actual function that you will use to compute the value. Better yet, use: GitHub - JuliaArrays/MappedArrays.jl: Lazy in-place transformations of arrays

  2. Make var a type parameter V rather than a field, e.g. struct DerivedArray{V, T, N} <: AbstractArray{T, N}, and then dispatch on it: for each operation foo, define Base.getindex(x::DerivedArray{:Foo}, I...) = get_foo(x.base, I...)

  3. Same as (2), but instead of making V a Symbol like :Foo, make it a type. This gives you more flexibility because you can add parameters to the type, do other things besides call it, and so forth.

But in all three cases the key is to define the desired operation as a part of the type of your array, and then dispatch on it. This allows the compiler to specialize, infer the type of the result, and so forth.
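A minimal sketch of option 3, with hypothetical names (DerivedVar, Density, Pressure, and compute are all illustrative, as are the formulas): each derived variable becomes its own type, which can carry parameters and support operations beyond just being called:

```julia
abstract type DerivedVar end

struct Density <: DerivedVar end    # a parameterless derived variable

struct Pressure{T} <: DerivedVar    # a type can carry extra data,
    gamma::T                        # e.g. an adiabatic index
end

# Dispatch on the variable type; placeholder formulas for illustration.
compute(::Density, data, i)  = 2 * data[i]
compute(p::Pressure, data, i) = p.gamma * data[i]

data = [1.0, 2.0, 3.0]
compute(Density(), data, 1)      # 2.0
compute(Pressure(1.4), data, 2)  # 2.8
```

Since the variable is part of the type, the compiler specializes compute for each one, and you can also attach other methods to the same types (pretty-printing, units, validity checks) instead of only calling them.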


Thanks! I took your second approach. I’ve rewritten it like

function Base.getindex(x::EulerDerivedArray{V, T, N}, I::Vararg{Int, M}) where {V, T, N, M}
    get_val(x.base, Val(V), I...)
end

function get_val(x::Field, ::Val{V}, inds...) where V
    table[V](x, inds...)
end

and the benchmark of get_val now matches calling the functions in table directly.
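For reference, here is a self-contained sketch of why this pattern is type stable (standalone, illustrative names; note that table is declared const, so indexing it with the compile-time symbol V is fully inferable):

```julia
f(x, i) = x[i] + 1            # illustrative table entries
g(x, i) = x[i] * 2

const table = (f = f, g = g)  # `const` lets the compiler see the NamedTuple

# V is known at compile time from Val{V}, so `table[V]` resolves to a
# concrete function and the call specializes completely.
get_val(x, ::Val{V}, inds...) where {V} = table[V](x, inds...)

data = [10, 20, 30]
get_val(data, Val(:f), 2)  # 21
get_val(data, Val(:g), 3)  # 60
```

If table were a non-const global, the lookup would again infer as Any, which is also part of what @code_warntype was complaining about in the original get_val.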

I don’t quite understand “other things besides call it” in the 3rd solution, though.