Benchmark for latest julia?

There’s pretty much a speed limit. But the issue is what kinds of code can get there. Numba and Cython architectures don’t compose. I’ve already shown that if you stick a Numba function into odeint (a wrapper over a Fortran function) you still get something slow, because of the Python in the middle and the function call cost:

http://juliadiffeq.org/2018/04/30/Jupyter.html

Julia code all composes, even across packages, because functions/packages/etc. aren’t compiled separately but together when possible. Numba and Cython are like compiling everything as separate shared libraries, while Julia is like statically compiling everything together. This means that Numba and Cython can put out much better microbenchmarks than they can deliver in real-world solutions, as seen in that example above.
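
To make the composition point concrete, here’s a minimal sketch (the names `sumsq` and `Modular` are hypothetical, just for illustration): a generic function written without any knowledge of a custom type still gets a specialized, natively compiled method for it, as if both “packages” had been built together.

```julia
# A generic "package A" function: no type annotations, so Julia
# specializes it for whatever number type it's called with.
sumsq(xs) = sum(x -> x * x, xs)

# "Package B" defines its own number type, knowing nothing about sumsq.
struct Modular <: Number
    val::Int
end
Base.:*(a::Modular, b::Modular) = Modular(mod(a.val * b.val, 7))
Base.:+(a::Modular, b::Modular) = Modular(mod(a.val + b.val, 7))

# On the first call with Vector{Modular}, Julia compiles a specialized
# native method of sumsq for this type -- no interpreter in the middle.
sumsq([Modular(2), Modular(3)])  # Modular(mod(4 + 9, 7)) == Modular(6)
```

The key design point: `sumsq` stays generic, and the compiler does the cross-“package” specialization at the call site, which is exactly what a bolted-on compiler operating on one isolated function can’t do.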

LuaJIT is a tracing JIT and, like Julia, compiles really late. Julia and LuaJIT are pretty much head to head. The Julia benchmarks show Julia slightly ahead of LuaJIT, and these use newer versions than the ones posted on the SciLua page:

These are using SciLua as well, so same benchmarks and newer version of Julia. And in another test, you can see LuaJIT and Julia pretty much head-to-head in C FFI.

Funnily enough, Julia and LuaJIT beat C itself. This is because of runtime magic that makes shared library calls compile more like calls into statically linked libraries (further reinforcing how Julia keeps an advantage over bolted-on solutions which compile things in isolation):

https://nullprogram.com/blog/2018/05/27/
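
You can see this in Julia’s `ccall`: the symbol is resolved and the call is lowered to a direct machine-code call, so the per-call overhead looks like a statically linked C call rather than a generic FFI trampoline. A minimal sketch (the wrapper name `c_strlen` is made up for the example):

```julia
# Call strlen from the process's libc. ccall lowers to a direct native
# call -- no boxing, no dynamic dispatch layer on the hot path.
c_strlen(s::String) = ccall(:strlen, Csize_t, (Cstring,), s)

c_strlen("hello")  # 0x0000000000000005 (i.e. 5)
```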

LuaJIT is fine, Julia is fine. Maybe under different circumstances LuaJIT would’ve done well. But what Julia really has is genericity. Julia has multiple dispatch and generic functions that let everything compose. All of these benchmarks test integers and floating point numbers, because it’s not trivial to run the same benchmarks with dual numbers, numbers with uncertainty, or numbers with units. In Julia, swapping those in is trivial, since Julia doesn’t treat operations on floating point numbers as special. I think this is a bigger feature than most people realize, and I think I’ll only be able to convince people of that by showing them all of the ways to use it, so I’ve got work to do.
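
Here’s what that swap looks like in a toy example (the `Dual` type and `f` below are hypothetical stand-ins for what packages like ForwardDiff provide): a function written with plain numbers in mind runs unchanged on a dual number, and multiple dispatch compiles a fast specialized method for it.

```julia
# A toy forward-mode dual number: val carries the value, der the derivative.
struct Dual <: Number
    val::Float64
    der::Float64
end
Base.:+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
Base.:*(a::Dual, b::Dual) = Dual(a.val * b.val, a.val * b.der + a.der * b.val)
# Let plain reals mix with Duals via Julia's promotion machinery.
Base.convert(::Type{Dual}, x::Real) = Dual(float(x), 0.0)
Base.promote_rule(::Type{Dual}, ::Type{<:Real}) = Dual

# A generic function written for ordinary numbers:
f(x) = x * x + 3x

f(2.0)             # 10.0
# Unchanged, on a Dual: also computes the derivative 2x + 3 = 7 at x = 2.
f(Dual(2.0, 1.0))  # Dual(10.0, 7.0)
```

Nothing in `f` mentions `Dual`; the same source compiles to a Float64 method and a Dual method, which is why generic Julia code picks up AD, uncertainty, and units “for free.”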

But yes, there’s a speed limit; in some cases other strategies can hit it, but Julia consistently gets there and is easy to optimize. Oh, and the creator of LuaJIT left the project. Julia is in a pretty good spot for many, many reasons beyond the surface level.