Function inside struct allocates when referenced

Thanks for the concrete example!

Why do you need an array of functions in your use case?

The Box struct is meant to represent a network/array of elements, where each element has an associated “behavior” function. There can be many elements (dozens/hundreds), but there are only a handful of behavior functions - hence the function array. (the actual meaning of the “behavior” function is not relevant for the MWE).

Instead of an array of functions, would it be better to have an array of strings holding function names and then add a “function lookup” function based on that string, e.g.?

struct Box
    fname_array :: Array{String, 1}
end

function lookup(x, name)
    if name == "f1"
        return f1(x)
    elseif name == "f2"
        return f2(x)
    # ... one branch per behavior function
    end
end

# example: call the i-th element function on variable x
lookup(x, box1.fname_array[i])

or maybe there’s a better way altogether.

I don’t think it really matters in your case how you store the information about which function to call. There will always be a dynamic dispatch, since the compiler can never know which functions there will be. Making things more stable in the struct only shifts this to a different place.

One thing you can try to improve things (assuming all behavior functions have the same signature): use FunctionWrappers.jl. I have never used it, so I don’t know how well it will work.
In principle it allows you to put a concrete type on the vector like so

struct Box
    fname_array :: Array{FunctionWrapper{OutputType, Tuple{Input1, Input2}}, 1}  # argument types go in a Tuple
end

What are the behavior functions like? Giving them all the same type would be a solution.

Or you could do something like this to prevent run time dispatch:

struct Box
    behavior_function_code :: Array{UInt8, 1}
end

execute_behavior(code::UInt8, args...) =
    if code == 0x0
        # ... first behavior
    elseif code == 0x1
        # ... second behavior
    end

Nice dispatch. Have you considered using multiple dispatch?

execute_behavior(code::UInt8, args...) = bf(Val(code), args...)
bf(::Val{0x0}, args...) = 0
bf(::Val{0x1}, args...) = 1
bf(::Val{0x2}, args...) = 2

I specifically said right there that the goal is to prevent run time dispatch. Is that not the goal here?

@mkitti 's solution doesn’t use runtime dispatch because Val tells the compiler to consider the parameter value during compilation. See Performance Tips · The Julia Language

You can see below that Julia is able to constant propagate through the multiple dispatch.

julia> f() = execute_behavior(0x1)
f (generic function with 1 method)

julia> f()
1

julia> @code_typed f()
CodeInfo(
1 ─     return 1
) => Int64

julia> @code_warntype f()
MethodInstance for f()
  from f() @ Main REPL[12]:1
1 ─ %1 = Main.execute_behavior(0x01)::Core.Const(1)
└──      return %1

It would probably help to know a bit more about what you precisely want to achieve.
Most likely, FunctionWrappers.jl will do the job.
Personally, when I needed a collection of functions in the past, the correct solution turned out to be a vector of functors (callable structs).
This is imo the best way to approach this when your functions are all similar in some sense but depend on additional data which differs between them. For example

F1(x)  = 2sin(3x)
F2(x)  = 3cos(1x)

can be represented by callable structs which save the amplitude and phase of the sine function.
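For instance, a minimal sketch of this idea (the name `Wave` is made up for illustration) could store amplitude, frequency, and phase, using that `cos(x) == sin(x + π/2)`:

```julia
# Sketch: one functor type covering both F1 and F2 from above
struct Wave
    amplitude::Float64
    frequency::Float64
    phase::Float64
end
(w::Wave)(x) = w.amplitude * sin(w.frequency * x + w.phase)

F1 = Wave(2.0, 3.0, 0.0)   # 2sin(3x)
F2 = Wave(3.0, 1.0, π/2)   # 3cos(x), since sin(x + π/2) == cos(x)
waves = [F1, F2]           # concretely typed Vector{Wave}, so no dynamic dispatch
```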

Of course it might also be that your functions are user defined, in which case functionwrappers are probably the only way to go.


What is this meant to prove? It doesn’t seem like 0x1 would be a constant in the real code.

Moving a value into the type domain involves multiple dispatch, that’s exactly what I was referring to.


There is no runtime dispatch involved in my use of multiple dispatch. Your if statement and my use of multiple dispatch can be compiled down to the same machine code.

In fact, due to simplicity Julia inlined the entire thing even before using LLVM’s JIT.

Yeah, but this is specifically because in your example the run time dispatch gets constant folded away.

Only when the argument is a compile-time constant. Which would be pointless.


Thanks for helping me wrap my mind around this. A couple of questions:

Can you comment on what “giving them all the same type” means?
In my case, all bf’s functions take the same Float input (coordinate x) and return a Float output.

In your example, can code be a string and behavior_function_code a string array?

Finally, it was noted in the comment above:

I don’t think it really matters in your case how you store the information about which function to call. There will always be a dynamic dispatch, since the compiler can never know which functions there will be.

but it seems your approach avoids dynamic dispatch?

By default, a function is an instance of a singleton type, meaning that type only has 1 instance, the function itself. However, that is not necessarily the case; if a function were to contain data from a local scope, then it would not have a singleton type. The non-singleton function factory example nsajko provided is a bit indirect, it’d be clearer to write out the underlying functors with a type that subtypes Function, making them functions. There would still only be 1 method table associated with the functors’ type, but the functors would act differently because of the data they contain. Of course, this still restricts the functors to do almost the same thing, so it might not be applicable to your use case. FunctionWrappers.jl lets the multiple input functions have different method tables and types as a tradeoff for attaching a call signature restriction. (Normally, a function has multiple methods, and each method can be compiled for multiple call signatures; a call is dispatched to a method based on the function’s type as well as the arguments’ types).
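To illustrate that last idea, here is a hypothetical sketch of a functor type that subtypes `Function` and carries data in an explicitly annotated field (the name `Adder` is made up):

```julia
# All Adder instances share one type and one method table,
# but act differently because of the data they contain.
struct Adder <: Function
    n::Int
end
(a::Adder)(x) = x + a.n

add2 = Adder(2)
add3 = Adder(3)
add2(10)   # → 12; add3(10) → 13: same code path, different contained data
```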

To expand on the explicitness, unlike writing out a functor type to contain data in manually annotated fields, closures handle it implicitly so it’s not visible in one place:

julia> f(n, m) = let n = n, m = m
         foo(x) = x+n  # 1st method captures n
         # maybe a lot of other code
         foo() = m     # 2nd method captures m
       end
f (generic function with 1 method)

julia> f(1, 1.0) # type parameters mean the function contains both n and m
(::var"#foo#1"{Int64, Float64}) (generic function with 2 methods)

Instantiating functors also doesn’t have to deal with variable scoping rules; closures are local scopes that technically capture variables, not data at instantiation, and any reassignment prevents the compiler from inferring the variable’s type well, even if the reassignment is unnecessary and accidental.
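A minimal sketch of that capture pitfall (the names here are made up):

```julia
# Reassigning a captured variable, even accidentally, forces it into a box,
# so calls to the closure are no longer well inferred.
function make_adder(n)
    add(x) = x + n   # captures the *variable* n, not the value n had here
    n += 1           # this reassignment boxes n for the closure
    return add
end

f = make_adder(1)
f(10)   # → 12; but @code_warntype f(10) shows the capture is poorly inferred
```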


In Julia there is no connection between the type of a function and possible type signatures. Type signatures belong to methods, not functions (and a function may have arbitrarily many methods attached to it). Benny already expanded on the meaning; however, if you want us to help, you have to give us more information about your functions. It’s also important whether there is just a fixed number of them, or whether users of your package will be able to add more, etc. Are the functions all defined directly in your source code? If possible, just share your source code.

Sure, but that wouldn’t be very efficient. I just used the UInt8 as an example; in practice I’d probably use something enum-like, for clarity. There are a few different user packages that have Enum as a prefix in their name, so you can pick and choose. There’s also some enum-like functionality built into Julia, but I don’t like it.
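For instance, the built-in `@enum` macro could replace the raw `UInt8` codes; a sketch with made-up behavior bodies:

```julia
# Enum-like codes via Julia's built-in @enum instead of raw UInt8
@enum BehaviorCode Behavior1 Behavior2

execute_behavior(code::BehaviorCode, x) =
    code == Behavior1 ? x + 1 :
    code == Behavior2 ? 2x :
    error("unknown behavior code")

execute_behavior(Behavior2, 3)   # → 6
```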



I’ll try to explain more in-depth the whole type-stability story with functions and the different options and what my thought process was to get to this statement:


Suppose we have some agents that can show variable behaviors. Let’s parametrize the behaviors to be able to explore different ways of implementing this. So the Agent looks something like this

struct Agent{T}
    behaviors::T
end

and we have multiple behaviors like:

behavior1(x) = x + 1  # placeholder bodies; the actual computation is irrelevant here
behavior2(x) = 2x
behavior3(x) = x^2

Note each behavior function is its own type:

julia> typeof(behavior1)
typeof(behavior1) (singleton type of function behavior1, subtype of Function)

julia> typeof(behavior1) == typeof(behavior2)
false

A way to test for dispatch:

function testbehaviors(agent)
    for behavior in agent.behaviors
        behavior(1.0)
    end
end
testagents(agents) = foreach(testbehaviors, agents)

Variant 1: Vector{Function}

This is what you originally did. Just put different behaviors in a vector and stick it in the Agent

julia> Agent([behavior1, behavior2])
Agent{Vector{Function}}(Function[behavior1, behavior2])
julia> @btime testbehaviors(Agent([behavior1, behavior2]))
  56.770 ns (3 allocations: 96 bytes)

This gives allocations, because Julia needs to do a dynamic dispatch: Vector{Function} is an abstractly typed container, meaning Julia cannot infer which function to call from its type.

But the upside here is that every Agent constructed this way has the same type:

julia> typeof(Agent([behavior2, behavior3])) == typeof(Agent([behavior1, behavior2]))
true

Variant 2: use Tuple instead of Vector

This puts the information which behaviors an Agent has into the type-domain, so Julia knows which functions to call at compile time:

julia> Agent((behavior1, behavior2))
Agent{Tuple{typeof(behavior1), typeof(behavior2)}}((behavior1, behavior2))
julia> @btime testbehaviors(Agent((behavior1, behavior2)))
  8.600 ns (0 allocations: 0 bytes)

However this has the downside, that now Agents with different behaviors have different types:

julia> typeof(Agent((behavior1, behavior2)))
Agent{Tuple{typeof(behavior1), typeof(behavior2)}}
julia> typeof(Agent((behavior3, behavior1)))
Agent{Tuple{typeof(behavior3), typeof(behavior1)}}

So if you were to stick these different Agents into a Vector, then your type-stability is gone again and you have just moved the problem to a different place. This is part of what my comment alluded to.

julia> agents = [Agent((behavior1, behavior2)), Agent((behavior3, behavior1)), Agent((behavior2, behavior3))];
julia> @btime testbehaviors.($agents)
  58.179 ns (0 allocations: 0 bytes)

Interestingly, this dynamic dispatch does not cause allocations, and I am not sure why. However, it is still slower than when Julia knows all the types:

julia> @btime testagents($(Tuple(agents)))
  40.242 ns (0 allocations: 0 bytes)

Variant 3: Use some external lookup like Dict

The next idea is to just store some indices inside the Agents and perform a lookup in some external structure, e.g.

julia> function testbehaviors_lookup(lookup, agent)
           for idx in agent.behaviors
               lookup[idx](1.0)
           end
       end

julia> testagents_lookup(lookup, agents) = foreach(a -> testbehaviors_lookup(lookup, a), agents)

julia> agents_lookup = [Agent([1,2]), Agent([2,3]), Agent([3,1])] # concretely typed container!
3-element Vector{Agent{Vector{Int64}}}: ...

julia> lookupdict = Dict(1=>behavior1, 2=>behavior2, 3=>behavior3);
# is a Dict{Int, Function}, so still type-unstable, plus the (relatively) slow lookup of a Dict
julia> lookupvector = [behavior1,behavior2,behavior3];
# still type-unstable, but faster lookup than a Dict
julia> lookuptuple = (behavior1,behavior2,behavior3);
# type-stable for indices known at compile time, but still type-unstable in a dynamic context

julia> @btime testagents_lookup($lookupdict, $agents_lookup)
  103.832 ns (6 allocations: 96 bytes)
julia> @btime testagents_lookup($lookupvector, $agents_lookup)
  90.372 ns (6 allocations: 96 bytes)
julia> @btime testagents_lookup($lookuptuple, $agents_lookup)
  47.434 ns (0 allocations: 0 bytes)

Essentially this just puts the type-instability elsewhere again. This is the second thing, that lead me to my comment.

Actually, the last one could be considered sort-of type-stable: since the tuple is very short, Julia can perform union-splitting, which essentially turns the lookup into static dispatch. One can verify this with @code_warntype testbehaviors_lookup(lookuptuple, agents_lookup[1]), which shows no red types and just highlights the union of the 3 behavior functions (compared to the evil, red annotation you get for @code_warntype testbehaviors_lookup(lookupvector, agents_lookup[1])).

Variant 4: Static dispatch

This was proposed by @nsajko . Essentially, instead of some dynamic structure like Vector or Tuple, we could just write out every case. This makes the dispatch explicit and static, so nothing is left at runtime but an (unavoidable) branch. Note that this is essentially what Julia did for us automatically in the Tuple case above.
Also note that the idea of @Salmon is closely related to this static-dispatch idea.

This approach is fully type-stable (as long as the behavior functions have the same return type) but is not flexible.
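For concreteness, a minimal sketch of this variant might look as follows (the behavior bodies are placeholders, redefined here so the snippet is self-contained):

```julia
# Variant 4 sketch: every case written out by hand, so the dispatch is
# explicit and static; only an (unavoidable) integer branch remains at runtime.
behavior1(x) = x + 1   # placeholder bodies
behavior2(x) = 2x
behavior3(x) = x^2

function run_behavior(i::Integer, x)
    if i == 1
        behavior1(x)
    elseif i == 2
        behavior2(x)
    elseif i == 3
        behavior3(x)
    else
        error("unknown behavior $i")
    end
end

run_behavior(2, 3.0)   # → 6.0
```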

Variant 5: Use FunctionWrapper

If all functions have the same signature, then ideally we could communicate this to Julia, and then Julia could treat the functions as something like a function pointer which it blindly follows. This functionality is not built in but is provided by FunctionWrappers.jl.
So we can now do (need to slightly restructure the type):

julia> import FunctionWrappers: FunctionWrapper

julia> struct Agent2
           behaviors::Vector{FunctionWrapper{Float64, Tuple{Float64}}}
       end

julia> agents_fwrap = [Agent2([behavior1,behavior2]), Agent2([behavior3,behavior1]), Agent2([behavior2,behavior3])];

julia> @btime testagents($agents_fwrap)
  31.057 ns (0 allocations: 0 bytes)

This approach seems the fastest and is also the most flexible but relies on an external dependency that uses Julia internals, which can be considered a downside.


FunctionWrappers is not that blind; it safeguards the types. The call signature restriction parametrizes the wrapper with the intended return type and argument types, and the wrapper constructor takes a function as an argument.

  1. Taking the function as an argument is almost self-explanatory, we are trying to group function calls together in a language where functions don’t have fixed input types and return types. So, a concrete wrapper type constructor must take any function as an input.

  2. Fixing the return type provides type stability of the functions’ output, which is the bare minimum of categorizing functions in statically typed languages. Since there’s no guarantee of type stability of 1 method, let alone several functions, a convert step of the return value is added after calling the wrapped function. This accommodates type-unstable methods, but a convert with an uninferred input would need a runtime dispatch so it’s best to stick to type-stable methods if you’re going this far to avoid runtime dispatches and allocations.

  3. Fixing the argument types provides the rest of the necessary information to dispatch and compile a function call given the wrapped function, and the wrapper stores the compiled code to skip over the typical dispatch mechanism. A convert step of the input arguments is also added before calling the wrapped function. (Hypothetically, function calls could be grouped by return type alone so the argument types could be a field instead of parameters of the wrapper type, but it seems a little crazy to micromanage that for every element of an array of call signatures only to probably input the same argument types every iteration.)

So a FunctionWrapper incurs a runtime cost to make sure what goes in and out of the wrapped compiled code is type-safe, though several converts are usually cheaper than a runtime dispatch. EDIT: those converts might be no-ops if the compiler infers the inputs already are the target type. Also the stored compiled code is not updated when the original method is edited, so use reinit_wrapper on existing instances or replace them entirely.
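Conceptually (this is a sketch of the behavior described above, not FunctionWrappers’ actual implementation), calling a `FunctionWrapper{Ret, Tuple{Arg}}` behaves roughly like:

```julia
# Convert the input, call the stored function, convert the output.
# The converts are no-ops when the values already have the target types.
function call_wrapped(f, ::Type{Ret}, ::Type{Arg}, x) where {Ret,Arg}
    y = f(convert(Arg, x))    # input convert before the call
    return convert(Ret, y)    # output convert guarantees the declared return type
end

call_wrapped(sqrt, Float64, Float64, 4)   # → 2.0
```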

I’ve never used it, but there is a package FunctionWranglers.jl that doesn’t incur this cost. Rather than wrapping a function one by one, it takes in entire arrays of functions and uses recursively inlined generated functions to unroll loops over them in a set of higher order functions. Think the if-statements checking the function in Variant 4, but the if-statements check the index so the same function at different indices still get their own branch. I’ve never reached for this because the map-like function requires the same input values for each call, indexing the wrangled array isn’t constant time (again, massive unrolled loop), and you can forget about any array mutations after it gets wrangled into recursive immutable struct instances. The repository hasn’t been touched for a couple years but I’m guessing it works because the package has no dependencies and I haven’t spotted any internal or unstable features in the short source file.


This is actually a great summary. I have thought before that it would probably be a great idea to summarize all approaches into a blog post on the Julia Forem, this is a good start for something like this.

By the way this code also counts the allocation of the array in the constructor (although it does not make much of a difference here)

Perhaps I misunderstand but are you saying that a FunctionWrapper can be faster than a functor? The way I see it, a functor should be the most performant as it can be fully inlined without need for pointers to functions.


The variants writeup did not consider functors because functors are restricted to the same method table of their 1 type, which the setup rejects: “Note each behavior function is its own type”. You are right to prefer functors over any of those variants if you can.


Alternatively, functors could be seen as a variant of what I wrote under “static dispatch”. Consider

struct MyBehavior; index::Int; end

function (behavior::MyBehavior)(x)
    if behavior.index == 1
        behavior1(x)
    elseif behavior.index == 2
        behavior2(x)
    end
end

So the functors are, in my view, a bit of a generalization of static dispatch, because you can pass some additional parameters along, but in essence they have the same limitations as static dispatch.

@Benny thanks for clarifying FunctionWrappers.jl some more! I actually never used it myself either :smile:

Glad you like it! It was actually part of my motivation to put this somewhere where it could be helpful for more people, so the Forem seems like a good idea :slight_smile:
I’d like to expand it a bit more with explanations (and fix the slight error you found). Tbh, I ran a bit out of time this morning and had to cut things a bit short and couldn’t reread the whole thing again :sweat_smile:

So please everyone give feedback on how to improve this to make it a good resource for everyone :slight_smile:


If you’re interested in more feedback here are some thoughts :).

  • I think it really is worth making the example of functors more explicit. Your example mentions them as a special case of completely different functions where one performs a runtime check, but there are also many other cases where people might naively think of using vectors of functions, e.g. polynomials, interpolations, rotation transformations, etc., for which functors are perfectly suited. Personally, I think the first question one should ask oneself before using a collection of functions is whether there is any way one could use a functor instead.
  • there is a package called FunctionWrappersWrappers.jl. I have no idea what it is used for, but if you do, perhaps it is also useful to mention?

Anyway, it’s awesome that you’re doing this; I have seen questions about vectors of functions appear quite a few times :slight_smile: and not so many resources about the solutions.