Basic Comparison of Python, Julia, Matlab, IDL and Java (2018 Edition)

I know it’s an older comparison, but the results look pretty good. I wonder how Julia v1.0+ compares.

https://modelingguru.nasa.gov/docs/DOC-2676

The timing setup is not very clear, but I am under the impression that it includes compilation time.

If that is the case, I am not sure I would consider this comparison useful; running a single Fibonacci calculation per process may not be a common use case.
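For what it’s worth, here is a minimal Julia sketch of how compilation can be separated from run time, assuming the report uses the usual doubly recursive formulation (this is not the report’s code): the first timed call includes JIT compilation, a repeated call or BenchmarkTools does not.

```julia
# Naive doubly recursive Fibonacci (assumed formulation, not the report's code)
fib(n) = n < 2 ? n : fib(n - 1) + fib(n - 2)

@time fib(35)   # first call: includes JIT compilation time
@time fib(35)   # second call: run time only

# BenchmarkTools runs the call many times and reports a robust estimate,
# so a one-off compilation cost never dominates the number
using BenchmarkTools
@btime fib(35)
```

If each measurement in the report was a fresh process, the smaller n cases would mostly be measuring compilation and startup rather than the algorithm.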

The report seems to be a little bit messy. For example, Problem 1: Fibonacci Number lists:

| Language | Option | n=25  | n=35  | n=45  |
|----------|--------|-------|-------|-------|
| R        |        | 0.034 | 0.034 | 0.034 |

The same holds for the recursive version, which shows 0.008 for every n.
Why is that?
The source code does not contain the R code! :thinking:
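For context (again not the report’s code, just the standard doubly recursive formulation): the amount of work grows exponentially with n, so identical timings for n=25 and n=45 are hard to explain without some form of caching. The call counts follow the closed form 2·fib(n+1) − 1, which a quick Julia snippet can evaluate:

```julia
# Iterative Fibonacci, used only to evaluate the closed-form call count
function fib_iter(n)
    a, b = 0, 1
    for _ in 1:n
        a, b = b, a + b
    end
    return a
end

# A naive doubly recursive fib(n) makes 2*fib(n+1) - 1 calls in total
for n in (25, 35, 45)
    println("naive fib($n) makes ", 2 * fib_iter(n + 1) - 1, " calls")
end
# n = 25 -> roughly 2.4e5 calls, n = 45 -> roughly 3.7e9 calls,
# i.e. about four orders of magnitude more work
```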

It would be so cool if there were a package that:

  1. checks which languages are locally available,
  2. runs a set of tests in each of those languages,
  3. submits the results, along with the machine’s hardware stats, to some central repo, and
  4. compiles aggregate statistics over all the uploaded results.

Since this is something all the languages involved would be equally interested in, a joint effort might make the most sense?
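I don’t know of a package that does all of this, but step 1 at least is cheap. A hypothetical sketch in Julia (the command names in the Dict are just guesses for illustration; `Sys.which` looks each runtime up on PATH, and the `Sys` constants cover part of the hardware stats for step 3):

```julia
# Hypothetical step 1: detect which language runtimes are installed
# (command names are assumptions, adjust for your system)
runtimes = Dict(
    "julia"  => "julia",
    "python" => "python3",
    "R"      => "Rscript",
    "java"   => "java",
    "matlab" => "matlab",
)

available = Dict(lang => Sys.which(cmd) for (lang, cmd) in runtimes)
for (lang, path) in sort(collect(available))
    println(rpad(lang, 8), path === nothing ? "not found" : path)
end

# Hypothetical slice of step 3: hardware stats to submit alongside the timings
hwstats = (
    cpu     = Sys.CPU_NAME,
    threads = Sys.CPU_THREADS,
    mem_gb  = round(Sys.total_memory() / 2^30; digits = 1),
)
println(hwstats)
```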

How would this be different from, e.g., … ?

No idea, didn’t know about that. Cool!

Really good points. Thank you.

I am guessing that R is using a pre-computed table.

Probably, but using this in a benchmark? Questionable! :wink:
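Just to illustrate the guess (I have no idea what R’s code actually does): with a memo table, fib(25) and fib(45) both cost essentially a dictionary lookup after the table is filled, which would explain the flat timings.

```julia
# Memoized Fibonacci: each value is computed once, then looked up
const memo = Dict{Int,Int}(0 => 0, 1 => 1)

function fib_memo(n)
    get!(memo, n) do
        fib_memo(n - 1) + fib_memo(n - 2)
    end
end

fib_memo(45)        # first call fills the table (and triggers compilation)
@time fib_memo(25)  # now just a lookup
@time fib_memo(45)  # same: already in the table
```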