Personal experience: in Julia, if a language feature is spelled as an `@` macro instead of a keyword, it usually means the core developers haven't made a final decision on that feature (perhaps because its semantics are unsound). Even long-standing macros like `@fastmath` and `@inbounds` could still be removed (see recent discussions on GitHub).
This is only part of the problem.
Firstly, no matter how hard you work on improving the performance of dynamic dispatch, you still need to introduce virtual functions as an alternative to `FunctionWrappers.jl`, so why bother patching the current implementation of dynamic dispatch? Virtual functions are a more controllable way to structure people's code.
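For context, here is a minimal sketch of what that pattern looks like today with `FunctionWrappers.jl` (the names `Callback` and `total_of` are invented for illustration): a wrapper pins one call signature, so calls go through a fixed, type-stable entry point instead of full dynamic dispatch.

```julia
using FunctionWrappers: FunctionWrapper

# Pin the signature (Float64,) -> Float64; any conforming callable fits.
const Callback = FunctionWrapper{Float64,Tuple{Float64}}

callbacks = Callback[Callback(sin), Callback(x -> 2x)]

function total_of(cbs, x)
    s = 0.0
    for cb in cbs
        s += cb(x)   # fixed calling convention, no per-call method lookup
    end
    return s
end

total_of(callbacks, 1.0)
```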
Secondly, the bigger problem I want to emphasize here is that the performance of dynamic Julia is just too bad:
- Type inference is too slow and annoys everyone.
- Even if you use a compilation cache, you still need to compile at least once. Julia's installation time has already become much longer, and since people upgrade their packages regularly, it's not hard to see that this will become a new problem (TTFI: Time To First Installation).
- Dynamic dispatch is slow compared to Python. I don't know why; a simple hash-table lookup shouldn't be this slow. Julia already applies a lot of optimizations to dynamic dispatch, like method instance caches and call-stack address hashing.
I want to talk about the third point. Many Julia programs are not written with pure interpretation in mind: they rely on heavy specialization and generate many wrapper functions, which hurts interpretation performance greatly. You also need to hash the input types and perform a table lookup on every dynamic call. In contrast, OOP languages have speculative JITs, so most of the time the expensive property lookup is avoided. Either way, Julia's pure interpreter mode is currently just slow (maybe someone should look into it and improve it).
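To make the dispatch overhead concrete, here is a hypothetical micro-benchmark sketch (names like `addall` are made up, not from any of the discussions above): the same loop over a concretely typed vector, where the call sites specialize, versus a `Vector{Any}`, where every `+` goes through runtime dispatch.

```julia
using BenchmarkTools

function addall(xs)
    s = 0
    for x in xs
        s += x          # with eltype Any, each `+` is a dynamic dispatch
    end
    return s
end

xs_typed   = collect(1:1000)         # Vector{Int}: the loop specializes
xs_dynamic = collect(Any, 1:1000)    # Vector{Any}: runtime lookup per element

@btime addall($xs_typed)
@btime addall($xs_dynamic)
```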
Maybe we can do this for Julia: a powerful multi-tier JIT that matches or outperforms the Python interpreter's performance. It could hide compilation latency by first running code in interpreter mode and then compiling the function in the background.
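To illustrate the hand-off (this is a toy sketch of the idea, not a proposal for how a real tiered runtime would be built; `work`, `data`, and `done` are invented names), one could interpret the first calls via JuliaInterpreter.jl while `precompile` runs on another thread:

```julia
using JuliaInterpreter

work(xs) = sum(x^2 for x in xs)   # some hot function
data = rand(10_000)

done = Threads.Atomic{Bool}(false)
t = Threads.@spawn begin
    precompile(work, (Vector{Float64},))   # "tier 2": compile in the background
    done[] = true
end

# "tier 1": interpreted execution while compilation is in flight
while !done[]
    JuliaInterpreter.@interpret work(data)
end
wait(t)
work(data)   # subsequent calls hit the compiled method instance
```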
But besides the technical difficulties, why bother with this approach? Yes, it sounds really fancy, but no one can say for sure that it must work, because traditional JIT techniques don't quite apply here. It still took Python's developers several years to implement their JIT. And in Julia people are greedy: there are a lot of unfinished features and only a limited supply of developers (kind reminder: we have multithreading, the debugger, an improved type inferencer, effects, cloud precompilation, and more). That would require multiple years.
It also has dramatic effects on other parts of the language's design. For example, code analyzers can't handle dynamic code well. If dynamic code spreads everywhere in every library, the usefulness of the checker drops sharply. Even in Python, people nowadays write more type annotations rather than fewer, so the linter can give more suggestions.
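The Julia analogue: static analyzers like JET.jl can only check what the types pin down. A small sketch (the names `Point`, `gety`, and `gety_dyn` are invented for illustration):

```julia
using JET

struct Point
    x::Int
end

gety(p::Point) = p.y + 1        # bug: `Point` has no field `y`
@report_call gety(Point(1))     # the analyzer flags the bad field access

gety_dyn(p) = p.y + 1           # same bug, but the argument is untyped
report_call(gety_dyn, (Any,))   # with `Any` there is little to check
```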
And what do you get? Dynamic Julia code runs faster? Is that an important problem? Maybe. But Julia has many weird features; even if you find a solution to this problem, you lose the chance to work on other problems and introduce new ones along the way.