Julia has to call different methods of compute at each iteration. That will happen if the type of p1 or the type of p2 changes from iteration to iteration. That should be avoided.
If you do this, you don't need to explicitly tell Julia that the vectors might have different element types in your function signature. Julia will automatically generate specialized versions of compute_interaction for those vectors, and will probably inline your functions! If not, well, you end up paying the cost of a function call, but that's not incredibly expensive:
function compute_interaction(particles_i::Vector, particles_j::Vector)
    for p_i in particles_i, p_j in particles_j
        p_i.force += compute_force(p_i, p_j)
    end
end
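As a concrete sketch of what I mean by specialization (the particle type and compute_force below are hypothetical, just for illustration): when both arguments are concretely typed Vector-s, Julia compiles a specialized method instance for those element types, even though the signature only says ::Vector.

```julia
# Hypothetical types, only to illustrate specialization.
abstract type AbstractParticle end

mutable struct ConcreteParticle1 <: AbstractParticle
    position::Float64
    force::Float64
end

# Toy inverse-square "force", just so the loop has something to do.
compute_force(p_i, p_j) = 1.0 / (p_i.position - p_j.position)^2

function compute_interaction(particles_i::Vector, particles_j::Vector)
    for p_i in particles_i, p_j in particles_j
        p_i.force += compute_force(p_i, p_j)
    end
end

# Both arguments are Vector{ConcreteParticle1}, so Julia specializes
# compute_interaction for that concrete element type at this call site.
ps_i = [ConcreteParticle1(1.0, 0.0), ConcreteParticle1(2.0, 0.0)]
ps_j = [ConcreteParticle1(5.0, 0.0)]
compute_interaction(ps_i, ps_j)
```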
Though I question the practicality of having a force accumulator in the Particle object itself because it might get messy to track.
EDIT: Maybe I should withdraw this previous statement. There is something somewhat natural about this representation. It might actually work out, if done well.
I will point out that I highly suggest you at least annotate particles_i::Vector & particles_j::Vector, making explicit that they are Vector-s - unlike what @lmiq suggested:
This definition would try to trap all two-argument calls to compute_interaction (but your implementation clearly expects Vector-s).
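To make the difference tangible (f_untyped and f_typed are toy names of mine, not from the thread): an unannotated two-argument method matches any pair of arguments, so a wrong call only fails somewhere inside the body, while the ::Vector version rejects it up front with a MethodError.

```julia
# Hypothetical toy functions to illustrate the difference.
f_untyped(xs, ys) = length(xs) + length(ys)                 # matches ANY two arguments
f_typed(xs::Vector, ys::Vector) = length(xs) + length(ys)   # only accepts Vector-s

f_untyped("ab", "c")    # accepted, perhaps unintentionally
f_typed([1, 2], [3])    # accepted
# f_typed("ab", "c")    # MethodError: no method matching f_typed(::String, ::String)
```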
More reading: Function Barriers
I think this is what Julia developers call function barriers. I have read discussions about them, but I can't find any of them at this time.
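For what it's worth, here is a minimal sketch of the pattern as I understand it (the names outer and kernel are mine): the outer function holds type-unstable data, and the inner kernel is the "barrier" - dynamic dispatch happens once at its call site, after which the kernel body runs fully specialized.

```julia
# The barrier: once called, this compiles specialized for the
# concrete element type T, so the loop is type-stable.
function kernel(x::Vector{T}) where {T<:Real}
    s = zero(T)
    for v in x
        s += v * v
    end
    return s
end

# Type-unstable outer function: Vector{Any} hides the element types.
function outer(data::Vector{Any})
    xs = Float64.(data)   # resolve the uncertainty once...
    return kernel(xs)     # ...then cross the barrier into fast code
end
```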
Not exactly. What matters is whether or not Julia can tell what's in each cell of those vectors. This is when I try to pretend I am the Julia compiler (or whatever we call the component that deals with this logic):
When Julia can't anticipate
If particles_i is a Vector{AbstractParticle}, and particles_j is a Vector{ConcreteParticle1} then, for each loop iteration:
Julia doesn't know what type of particle will be in particles_i[i].
Even if Julia does know that particles_j[j] is of type ConcreteParticle1.
…So for each iteration, Julia has to do something like:
if isa(particles_i[i], ConcreteParticle1)
    compute_force(particles_i[i]::ConcreteParticle1, particles_j[j]::ConcreteParticle1)
elseif isa(particles_i[i], ConcreteParticle2)
    compute_force(particles_i[i]::ConcreteParticle2, particles_j[j]::ConcreteParticle1)
elseif isa(particles_i[i], ConcreteParticle3)
    ...
end
And this is where you lose performance. Compute pipelines are broken when the compiler can't predict what data it will encounter (or which calls it will need to perform).
When Julia CAN anticipate
On the other hand, if Julia KNOWS that particles_i is a Vector{ConcreteParticle2}, and particles_j is a Vector{ConcreteParticle1}, then it knows exactly which version of the code to execute:
Julia does know that particles_i[i] is of type ConcreteParticle2.
AND Julia does know that particles_j[j] is of type ConcreteParticle1.
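A quick way to check which situation you are in (the particle types here are again hypothetical): look at the element type of the container, because that is all Julia gets to reason about when compiling the loop.

```julia
abstract type AbstractParticle end
struct ConcreteParticle1 <: AbstractParticle
    mass::Float64
end
struct ConcreteParticle2 <: AbstractParticle
    mass::Float64
end

# Abstract element type: Julia can't anticipate what each cell holds.
mixed = AbstractParticle[ConcreteParticle1(1.0), ConcreteParticle2(2.0)]

# Concrete element type: Julia knows exactly which method to call.
uniform = [ConcreteParticle2(1.0), ConcreteParticle2(2.0)]

eltype(mixed)                    # AbstractParticle
eltype(uniform)                  # ConcreteParticle2
isconcretetype(eltype(mixed))    # false
isconcretetype(eltype(uniform))  # true
```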
Very true. I have a mental block about AbstractVector, partly because I habitually avoid it. I think the reason I avoid it so much is that it keeps me from writing:
I am totally with you. I don't like that either, and in functions that had to accept views I often ended up removing the annotations entirely.
I just stumbled onto this recently. I don't fully appreciate why this is under the "machine learning" umbrella, but this looks somewhat similar to what you are describing.