I certainly see that this would be the case with Python and R, since they were designed as interpreted languages and give a would-be compiler few clues as to what it's actually expected to do. I guess my question is, don't most of the performance points being made in this thread apply equally to Java or Scala?
A year ago, when I would tell people I use Julia, the first question they'd ask me was "Why not use Python?" and the second was "Why not use Scala?" Of course I'm sold on Julia because I like the language design much better than Scala's, but I'm wondering if the performance advantages are anything more than incidental. (Interestingly, Scala use seems to have fallen off a cliff and it seems to be kept alive mainly by Spark lately.)
Like Go and Rust, those are also statically typed languages. Everyone knows that you can get good performance from any decent static language. But static languages aren’t well suited to interactive usage/exploration. That’s why the usual points of comparison for Julia are other dynamic languages like Matlab, Python, and R.
For what it’s worth, https://github.com/ParRes/Kernels has ports of interesting HPC kernels to Julia, Python, Rust, Octave/Matlab, C++ (+many threading models), Fortran 2008 (+multiple parallel models), C89-ish (+many parallel models, both shared and distributed), and Chapel (in a branch at the moment).
I found that Julia was significantly better than Python for wavefront parallelism, because, unlike independent data parallelism, NumPy has no intrinsic for it (that I know of). All other performance comparisons are left as an exercise for the reader.
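To make the point concrete, here is a minimal sketch of the kind of wavefront recurrence involved, modeled on the PRK "p2p" dependence pattern (the function name and boundary setup are mine, not the repo's exact code). Each cell depends on its west, north, and northwest neighbors, so there's no whole-array NumPy operation that expresses it, but a plain Julia loop compiles to fast native code:

```julia
# Wavefront (pipeline) sweep: each cell depends on already-computed
# neighbors, so the update cannot be expressed as an independent
# element-wise array operation.
function wavefront!(A::Matrix{Float64})
    m, n = size(A)
    for j in 2:n, i in 2:m
        A[i, j] = A[i, j-1] + A[i-1, j] - A[i-1, j-1]
    end
    return A
end

A = zeros(1000, 1000)
A[:, 1] .= 0:999   # illustrative boundary values down the first column
A[1, :] .= 0:999   # and across the first row
wavefront!(A)
```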
Most of the shared-memory ports of the PRKs took me a few hours, so you should have no trouble implementing ports in any new languages you want to compare to Julia (if you do, please submit a pull request).
I believe that nowadays there is no reason to talk about such tremendous speedups without taking software and hardware details into account. On the other hand, one of the Julia authors' stated goals, to be only 10-20% SLOWER than ahead-of-time compiled workhorses on most common numerical calculations in practice, seems QUITE FEASIBLE.
I implemented a pure-Julia arbitrary-dimensional version of the Genz-Malik algorithm at:
It should be fully functional, and a big advantage of being pure Julia is that it is type-generic (e.g. your integrand can return Float32, Complex{BigFloat}, Matrix{Float64}, etcetera, and it will all work).
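To illustrate what type-generic means here, a minimal sketch (illustrative only, not the package's actual code) of a pure-Julia numerical routine whose result type simply follows whatever the integrand returns:

```julia
# Type-generic composite trapezoidal rule: no concrete types are
# hard-coded, so the arithmetic promotes to whatever f returns.
function trapezoid(f, a, b; n::Int = 100)
    h = (b - a) / n
    s = (f(a) + f(b)) / 2
    for i in 1:n-1
        s += f(a + i * h)
    end
    return h * s
end

trapezoid(x -> sin(x), 0f0, Float32(pi))             # Float32 result
trapezoid(x -> exp(im * big(x)), big(0.0), big(1.0)) # Complex{BigFloat} result
trapezoid(x -> [x 0; 0 x^2], 0.0, 1.0)               # Matrix{Float64} result
```

The same principle is what lets a pure-Julia integration routine handle exotic element types for free: the compiler specializes the one generic method for each concrete type it's called with.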