Interesting arXiv preprint… TL;DR: (Julia crushes NumPy…)
It looks as if they failed to read the performance-tips section of the documentation: lots of benchmarking outside of functions, and lots of global variables.
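To illustrate the two issues (a minimal sketch; the array and function names here are placeholders, not taken from their code):

```julia
using BenchmarkTools

x = rand(10_000)          # non-const global variable

# Slow pattern: the loop reads the untyped global `x` on every iteration,
# so the compiler cannot specialize the code.
function slow_global()
    s = 0.0
    for xi in x
        s += xi^2
    end
    return s
end

# Fast pattern: pass the data as an argument so its type is known
# at compile time and the loop compiles to tight machine code.
function fast_arg(v)
    s = 0.0
    for vi in v
        s += vi^2
    end
    return s
end

@btime slow_global()
@btime fast_arg($x)       # `$` interpolates the global into the benchmark
```

The same loop body, just wrapped in a function taking its data as an argument, is typically orders of magnitude faster.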
As in other cases, I think the best approach (if someone wants to do it) is to improve their code, share ideas here, and finally invite the authors to read the thread, all without invading any of their spaces.
It’s interesting, though, how much faster Julia is than NumPy even without “advanced” Julia optimizations…
This is interesting, and they are honest that they are not as familiar with Julia. I have been going through their code and making some improvements in this branch. Already the benchmark for the heat test is much faster when timed using BenchmarkTools.jl.
I have only touched one of their files, but I am working on improving the others.
Interesting take. Bear in mind the “easiness” axis in their paper (it looks like we here tend to focus on performance).
I expect implementing the common tips will increase both the performance … and the perceived difficulty (the latter to a much lesser extent, especially if it’s just about wrapping code in a function, adding $ when using @btime, etc.). Maybe it’s simple for most of the people on this Discourse, but it requires a refined understanding of what’s happening behind the curtain. Hence the expected increase in difficulty (easy to implement, but requires knowledge).
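Concretely, the $ trick with @btime is just this (a small sketch; `A` is a placeholder array):

```julia
using BenchmarkTools

A = rand(1000)

# Without `$`, `A` is accessed as a global inside the benchmark,
# so dynamic-dispatch overhead is mixed into the measurement.
@btime sum(A)

# With `$`, the value is interpolated into the benchmark expression,
# so only the cost of `sum` itself is timed.
@btime sum($A)
```

Easy to type, but knowing *why* the $ matters is exactly the kind of background knowledge that raises the perceived difficulty.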
In that sense, the “Julia” point of the benchmark is not wrong, but it should read “Julia (naive)” and be complemented by a “Julia (performance tips)” point.
Curiously, the Chapel implementation is entirely inside functions, probably because the language requires it. A more or less direct Julia translation of that code might well be faster in Julia too.
If Chapel is so fast and easy, why is it not more famous? They said that Python is easier than Julia.