# Iterating over singleton structs

Hello everyone,

My question is related to dynamic dispatch. I will formulate it precisely at the end of my post.
I have searched the forum and the docs, but I could not find the answer. I apologize in advance if this topic duplicates an existing one.

I have the following code:

```julia
struct A1 end
struct A2 end
struct B1 end
struct B2 end

myfunction(prop1::A1, prop2::B1) = 1.0
myfunction(prop1::A1, prop2::B2) = 2.0
myfunction(prop1::A2, prop2::B1) = 3.0
myfunction(prop1::A2, prop2::B2) = 4.0
```

In my real calculation, the `myfunction` methods perform more complicated computations and call other functions that dispatch on the singleton instances A1(), A2(), B1(), and B2().
I want to iterate over the singleton structs to get the final result:

```julia
function mycalculation()
    result = 0.0
    for i in [A1() A2()]
        for j in [B1() B2()]
            result += myfunction(i, j)
        end
    end
    return result
end
```

I have 3 questions:

1. Does the code above generate dynamic dispatch or is it problematic for a different reason?
2. I don't really understand what dynamic dispatch is. I have searched the documentation, but I could not find the relevant section. How can I predict when it will happen?
3. If the code above generates dynamic dispatch, how could I avoid it?

Olivier

Dynamic dispatch is when the compiler cannot determine, at compile time, which concrete method needs to be called, because it does not know the types of the arguments in advance (usually because they cannot be inferred).
The code is also problematic for another reason: why use matrices instead of vectors? `[a b]` creates a 1×2 matrix, while `[a, b]` creates a 2-element vector. If you look at the generated code, it ends up calling `hcat` to build the matrices. For a fixed list that you then iterate over, a tuple would probably work better, i.e. `(a, b)`.
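To make the difference concrete, here is a small sketch using the singleton structs from the original post (the exact element type of the matrix could in principle vary by Julia version, but the point stands: the tuple keeps the concrete types):

```julia
struct A1 end
struct A2 end

m = [A1() A2()]        # space-separated literal: calls hcat, builds a 1x2 matrix
t = (A1(), A2())       # tuple: each element keeps its own concrete type

println(typeof(m))     # the matrix's element type is the join of A1 and A2 (Any)
println(typeof(t))     # Tuple{A1, A2} -- fully known to the compiler
```

Because the tuple's type records every element's concrete type, the compiler can know exactly which `myfunction` method each iteration will call; with the `Matrix{Any}`, it cannot.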
If you call this function in a fixed pattern, with not too many types, you could simply unroll it manually, i.e.

```julia
mycalc(a1, a2, b1, b2) = myfunction(a1, b1) + myfunction(a1, b2) + myfunction(a2, b1) + myfunction(a2, b2)
mycalculation() = mycalc(A1(), A2(), B1(), B2())
```

On v0.7 master, the compiler is actually able to compile everything away, leaving just the loading of a constant:

```julia
@code_native(mycalculation())
	.section	__TEXT,__text,regular,pure_instructions
; Function mycalculation {
; Location: REPL[3]:1
	movabsq	$4621434792, %rax       ## imm = 0x113757FA8
	vmovsd	(%rax), %xmm0           ## xmm0 = mem[0],zero
	retq
	nop
;}
```

The improvement is really impressive.

I'm not sure, however, that I understand your definition of dynamic dispatch:

If the compiler cannot determine which concrete method to call, wouldn't that raise a "method error"?

Many thanks again

No, the compiler cannot statically know which method to call, but the method can still be looked up at run time, based on the run-time types of the arguments.
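A minimal sketch of that run-time lookup (the `describe` function and its methods here are made up purely for illustration):

```julia
describe(x::Int) = "an integer"
describe(x::String) = "a string"

xs = Any[1, "hi"]        # element type Any: the compiler can't know each element's type
for x in xs
    # The matching method is looked up at run time from the value's
    # concrete type -- dynamic dispatch, but no MethodError, because
    # a matching method does exist for each run-time type.
    println(describe(x))
end
```

A `MethodError` would only be raised if, at run time, no method matched the concrete types, e.g. `describe(1.5)` here.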

Sorry for the delay,
Just to be sure I understand correctly, I'm trying to put together an example that has dynamic dispatch.

```julia
mysquare(x) = x * x
v = [1, 2.3, "hello"]

function mainfunction(v)
    for i in 1:length(v)
        println(mysquare(v[i]))
    end
end
```

Would you say the above example generates dynamic dispatch?
What I find hard to understand is the distinction between dynamic dispatch and the fact that a new version of the function has to be compiled when I change the vector `v`. Are they the same thing?
I apologize if my questions are very naive, but I have been struggling with this for a while.

Many thanks again

Dynamic dispatch means that the choice of which specific method to call is made at run time. Compilation of functions happens no matter what, and I would say the two are not that tightly related.

If Julia can pick out which method needs to be called at compile time, because the types of all the arguments being dispatched on are known then, it can call that method directly, or even inline it.
Note: you can have a function like `mysquare` that can be used with any arguments. When such a function is called and no method has yet been compiled for that combination of argument types, Julia will, at run time, compile a method specialized to those particular types. This is what allows generic code to achieve performance as good as hand-optimized C code, in my experience. The compiled method is also cached, so the (substantial) overhead of compilation usually happens no more than once per type combination. (There are many implementation details, and changes are coming to interpret rather than compile in cases where that makes sense, such as top-level evaluation when including a file.)
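One way to observe the compile-then-cache behavior described above for yourself (exact timings will of course vary by machine and Julia version):

```julia
mysquare(x) = x * x

@time mysquare(3)      # first call with an Int: the time includes compilation
@time mysquare(4)      # same argument type: reuses the cached compiled method
@time mysquare(3.0)    # new argument type (Float64): a fresh specialization is compiled
```

The first call for each new argument type pays the compilation cost once; subsequent calls with that type only pay for the actual computation.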

Many thanks to both of you for the explanations!
This is much clearer now.