Julia compiles code to native code, but compilation takes time; at the opposite end you can run Julia as an interpreter with
julia --compile=min, but that is very slow at runtime (you only save the compilation step).
Python DOES compile to bytecode, and I was thinking: do we need such a middle ground (in some cases, even for a limited subset of Julia)? Then the question is which bytecode to choose. It just occurred to me that Julia already includes another language, FemtoLisp (used for the parser, though it’s on the way out for that role).
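To make the comparison concrete: in Python the compile-to-bytecode step is visible with the standard dis module. This is just an illustration of what "a bytecode middle ground" looks like in practice, nothing Julia-specific:

```python
import dis

def add_one(x):
    return x + 1

# Compilation already happened at function definition; the code object
# carries the bytecode as a plain bytes string.
print(type(add_one.__code__.co_code))

# Disassemble into the human-readable mnemonic form (LOAD_FAST, etc.).
dis.dis(add_one)
```

Python never generates native code here; the bytecode is what the CPython VM interprets, which is why it starts fast but runs slower than compiled code.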
From the femtolisp README: “the fastest lisp interpreter I could in under 1000 lines of C […] it is fast, ranking among the fastest non-native-compiled Scheme implementations. It achieves this level of speed even though many primitives (e.g.
for-each) are written in the language instead of C. femtolisp uses a bytecode compiler and VM, with the compiler written in femtolisp. Bytecode is first-class, can be printed and read, and is “human readable” (the representation is a string of normal low-ASCII characters).”
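To illustrate what "bytecode as a printable low-ASCII string" means, here is a toy sketch (my own invention, not femtolisp's actual opcode format): a stack machine whose program is an ordinary string, where digits push themselves and '+' / '*' operate on the stack.

```python
def run(bytecode: str) -> int:
    """Interpret a toy ASCII bytecode string on a small stack machine."""
    stack = []
    for op in bytecode:
        if op.isdigit():
            stack.append(int(op))      # push a small constant
        elif op == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack.pop()

# The "program" is printable and readable, in the spirit of femtolisp's
# low-ASCII bytecode: (2 + 3) * 4
print(run("23+4*"))  # → 20
```

Because the program is just a string, it can be printed, read back in, and embedded in ordinary data, which is the "first-class bytecode" property the README describes.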
The compiler team is already thin and stretched; let’s not derail ourselves into directions that benefit virtually nobody (“just use Python?”).
I’m not asking anyone on the core team to do it (nor asking if it’s already possible), just whether it would be viable. And if done, would it be wanted as an integrated part of Julia?
You’re right that it’s already possible to compile Julia to Python, and that would be one way. I’m thinking of something you could annotate packages with; compile=min is already possible there, and I’m not sure the Python [VM] would be wanted there.
If you use Python you need the Python runtime, which is larger.
compiling to femtolisp wouldn’t make much sense. the better way to get this type of tradeoff would be to have an interpreter over Julia bytecode, with a simplified type inference and inliner that gives up more often but is faster.
There is no “Julia bytecode”, right? You mean it could be defined? I’m not sure what’s optimal for Julia (though I have given VMs a lot of thought); why not FemtoLisp’s (or Python’s, or Graal’s, or whatever is available)?
so the answer is no, that’s not how Julia currently works at all; we don’t compile to femtolisp first. not sure if this is technically super accurate, but femtolisp parses Julia source code into expressions, which are then handled by the Julia compiler. what femtolisp sees is Julia source code, and we don’t stay in its domain for very long.
Julia has an IR format that is basically a bytecode.
The inliner is part of the (optimization) problem. Optimization is costly, inlining especially, and currently it’s only all-on or all-off, but some middle ground is likely a good idea. I would be OK with no inlining and just bytecode, since a lot of code isn’t speed-critical.
Do you know if compile=min interprets that IR I forgot about (or source code directly?), and why it’s so slow at runtime (much slower than Python’s)? I’m not sure it was ever meant for interpretation. Also, that route requires LLVM, and I would like interpretation without pulling in that huge dependency.
Thanks, I wasn’t aware of (or had forgotten) that, which isn’t part of Julia. I recall the debugger being very slow, because of this I guess; any idea why, or what do you have in mind with better heuristics? Like just using that on some code, not hot code/loops?
the main reason the debugger is slow is that when you are debugging, it’s very hard to perform optimizations without missing breakpoints.
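A rough analogy in Python (an assumption on my part, not how Julia's debugger works internally): honoring breakpoints means the interpreter must stop at every line, because any line might hold one. You can measure that per-line overhead with sys.settrace:

```python
import sys
import time

def work(n):
    total = 0
    for i in range(n):
        total += i
    return total

def tracer(frame, event, arg):
    # A do-nothing per-line hook, standing in for a debugger's
    # "is there a breakpoint here?" check.
    return tracer

n = 200_000

t0 = time.perf_counter()
work(n)
plain = time.perf_counter() - t0

sys.settrace(tracer)
t0 = time.perf_counter()
work(n)
traced = time.perf_counter() - t0
sys.settrace(None)

print(f"untraced: {plain:.4f}s, traced: {traced:.4f}s")
```

Even an empty tracer typically slows the loop down many times over, which illustrates the tradeoff: the optimizer can't skip or fuse lines, because each one must remain a place the debugger can stop.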