Thanks for the write-up.
A.
I noticed this link here (likely added after the posting date?), in case others missed it: Web appendix for the Python, R, Julia and Matlab language comparison in 2022 on Jon Danielsson's ModelsandRisk.org
p.s.
It is possible to significantly speed up the calculations by taking advantage of SIMD. That is easily accomplished in Julia with @turbo. We have not explored how to do the same in the other 4 languages, but are open to suggestions.
[code with @turbo]
Using @turbo speeds this Julia code up 2.3 times, but that is not a fair comparison to the other languages.
So the Julia code shown in the graphs, 1.54x slower than C (and slower than Python+Numba), is actually not the fastest version, since the @turbo variant is 33% faster than C? I think it is actually fair to Julia to show the best case; is it really unfair to the other languages, at least if nothing comparable is available there (or is much more complicated to implement)?
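For readers who have not seen @turbo: it comes from the LoopVectorization.jl package. Since the appendix code is not reproduced above, here is only a minimal sketch of the pattern (a toy sum-of-squares loop of my own, not the benchmark code):

using LoopVectorization

function sumsq_turbo(x::Vector{Float64})
    s = 0.0
    @turbo for i in eachindex(x)  # loop is SIMD-vectorized by LoopVectorization.jl
        s += x[i] * x[i]
    end
    return s
end

sumsq_turbo(rand(10^6))

The same function without the macro is the natural baseline to time it against.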
B.
When using Julia, we use two versions, standard and without bonds checking, @inbounds.
You have a typo there ("bonds" should be "bounds"), but I would clarify further:
When using Julia, we use two versions of the code: standard, and code applying the @inbounds macro, i.e. disabling bounds checking.
It felt like you were saying you used non-standard Julia [tools], but using @inbounds is very much standard practice, especially in packages. Note, though, that you can disable bounds checking globally (and that is the fair comparison to C, which has no bounds checks) with:
$ julia --check-bounds=no
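For context, applying @inbounds is just an annotation on a loop or block; a toy sketch of my own (not the paper's code):

function mysum_inbounds(x::Vector{Float64})
    s = 0.0
    @inbounds for i in 1:length(x)  # bounds checks are skipped inside this loop
        s += x[i]
    end
    return s
end

Running the plain, unannotated version under julia --check-bounds=no has the same effect without touching the source.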
I actually think you can fix your code so the compiler elides the bounds checks on its own, without @inbounds or the global option, and maybe merging the loops back together is possible (was splitting them really a needed part of the speed-up trick?).
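As a sketch of what I mean (again a toy loop, not the benchmark code): iterating with eachindex typically lets the compiler prove the accesses are in bounds and drop the checks on its own:

function mysum_eachindex(x::Vector{Float64})
    s = 0.0
    for i in eachindex(x)  # compiler can usually elide the bounds checks here
        s += x[i]
    end
    return s
end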
C.
On reading compressed CSV files, and your use of CSV.jl: it was the fastest package across all the languages (when subtracting first-use latency).
I'm not sure which has the fastest or easiest API; there are others to consider:
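For example (my own sketch with a hypothetical file name, not benchmarked against the appendix code), CSV.jl itself can read from a decompressing stream provided by CodecZlib.jl:

using CSV, DataFrames, CodecZlib

# Hypothetical gzip-compressed file; the stream is decompressed on the fly.
df = open("data.csv.gz") do io
    CSV.read(GzipDecompressorStream(io), DataFrame)
end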
D.
All four languages easily support GPU programming.
A bit surprising; I thought the others were not as good for GPU use, and at some point at least Python was not good at all.
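For comparison, the Julia side is at least quite compact at the array level; a minimal CUDA.jl sketch (assumes an NVIDIA GPU, not from the paper):

using CUDA

x = CUDA.rand(Float32, 10^6)  # array allocated on the GPU
y = 2f0 .* x .+ 1f0           # fused broadcast runs element-wise on the GPU
total = sum(y)                # GPU reduction, scalar returned to the CPU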