The simplest answer is that, naively, higher level dynamic languages need to ask meta-questions about everything in order to figure out what to do. That is, given a function like:
```julia
function mysum(array)
    r = zero(eltype(array))
    for elt in array
        r += elt
    end
    return r
end
```
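For concreteness, this works on any collection whose element type supports `zero` and `+` (the definition is repeated here so the snippet runs standalone):

```julia
function mysum(array)  # as defined above; repeated so this snippet is self-contained
    r = zero(eltype(array))
    for elt in array
        r += elt
    end
    return r
end

mysum([1.0, 2.0, 3.0])  # → 6.0
mysum([1, 2, 3])        # → 6
mysum(1:100)            # → 5050 (ranges work too, since eltype(1:100) is Int)
```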
A traditional high-level language doesn't know what `elt` will be, and it might not always be the same thing. So in every single iteration the language needs to ask:

- what type is `elt`?
- what type is `r`?
- what method of `+` should it call to sum them together?
Those meta-questions take real CPU operations and time to answer. That's time you're not spending on your algorithm, and those questions get in the way of the very powerful multiplicative performance gains from SIMD. Now the interesting thing is that since Julia is a dynamic language, you can actually get dynamic-language-like performance quite easily! In the example above, you just need to use `Any[]` arrays. Or the `@nospecialize` macro. Or disregard all the points in the performance tips chapter.
Traditional JITs can work around the problem by tracing the execution, noticing that `elt` is almost always a `Float64`, compiling a specialized version of that code segment, and swapping over to it… but you still need an escape hatch in case something crazy changes (like someone `eval`ing something into your local scope).
Julia's JIT is more akin to a just-barely-ahead-of-time compiler, (almost) always specializing on the arguments it got to ensure that the generated machine code can avoid those meta-questions… and from the get-go Julia disabled some dynamic-language features (like `eval` in local scope) to ensure that's possible more often.
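That specialization is observable: each concrete argument type gets its own compiled method instance, and type inference can prove a concrete return type for each. A minimal sketch, using the `mysum` from above (repeated so the snippet runs standalone):

```julia
function mysum(array)  # same definition as above, repeated for self-containment
    r = zero(eltype(array))
    for elt in array
        r += elt
    end
    return r
end

mysum([1, 2, 3])        # triggers compilation of mysum(::Vector{Int64})
mysum([1.0, 2.0, 3.0])  # triggers a separate mysum(::Vector{Float64})

# Inference answers the "meta-questions" once, at compile time:
Base.return_types(mysum, (Vector{Int},))      # → [Int64]
Base.return_types(mysum, (Vector{Float64},))  # → [Float64]
```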