Benchmark for latest Julia?

Please add Excel to the benchmarks :joy:

btw, the Microbenchmarks package mentions languages such as Rust, Scala, and Stata, yet these results don't seem to be shown anywhere (and I wasn't able to find them?)

Exactly what I wrote previously.

I also thought about importing his ideas.
Yet @StefanKarpinski wrote that this can't be done as LuaJIT and Julia are inherently different.

We can still cross fingers :-).

MATLAB in academia is about €500 for a single license, I think, for a first-time buy, plus about 20% annually in upgrade costs. Toolboxes are about €300 for a single license. OK, this is what it used to be some years ago.

Student licenses are around €50–60, I think.

I never said he should be dragged into the Julia community to redo the ideas he had in the past.
Somebody that smart, I'm sure, could have a whole bunch of new ideas relevant to Julia once he got his feet wet
(and he might love a chance to start on something new. I know I felt like a kid in a candy store when I discovered Julia: I didn't want to just recreate what I'd done in the past, but to explore new things I hadn't been able to before, because I had been held back by all of the legacy decisions made before my time and by the tools (C/C++/asm) I had available at the time).

1 Like

Well,

I meant that too.
Brilliant people like the Julia creators and Mike Pall are welcome in any project.

1 Like

Yes, it was more Stefan's comment that I was referring to: just because somebody has been around for a while and has some expertise outside of university and Julia doesn't mean that they can't come up with new tricks! :grinning:
(Tim Holy, for example, is no spring chicken, but he's done a ton for Julia, IMO!)

The Rust microbenchmarks are running again, thanks to some great work by Enet4 (GitHub handle). If I remember correctly, Rust is competitive with C, Julia, and SciLua on the microbenchmarks. But I'm waiting for julia-0.7.0 to drop before submitting the next set of benchmark updates to julialang.org.

Microbenchmark results for Scala and Stata are missing because I haven't gotten these languages running on my reference machine, a Linux box running openSUSE Leap 15.

3 Likes

@John_Gibson,

Could we add more simple microbenchmarks?
Something like the bisection method for finding a root of a simple function, or solving a simple problem with dynamic programming?

I can write them in MATLAB.
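
For what it's worth, a minimal sketch of what such a bisection microbenchmark could look like in Julia (the test function, bracket, and tolerance here are arbitrary illustrative choices, not part of any proposal):

function bisect(f, a, b; tol=1e-12)
    # assumes a < b and that f(a), f(b) have opposite signs,
    # so the interval [a, b] brackets a root
    fa = f(a)
    while b - a > tol
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0
            b = m            # root lies in [a, m]
        else
            a, fa = m, fm    # root lies in [m, b]
        end
    end
    return (a + b) / 2
end

bisect(x -> x^2 - 2, 1.0, 2.0)   # ≈ sqrt(2)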

I think a nice overall microbenchmark of language performance is the non-recursive Heap's algorithm, which generates all permutations of a given length. The non-allocating version (to avoid measuring the performance of malloc) in Julia:

function heaps(N)
    # Non-recursive Heap's algorithm: visits every permutation of 1:N
    # in place and returns how many permutations were generated.
    elts = collect(1:N)
    c = ones(Int, N)    # per-position swap counters (1-based)
    n = 1
    count = 1           # the initial ordering is the first permutation
    while n <= length(elts)
        if c[n] < n
            # swap position n with position 1 (n odd) or position c[n] (n even)
            k = ifelse(isodd(n), 1, c[n])
            elts[k], elts[n] = elts[n], elts[k]
            c[n] += 1
            n = 1
            count += 1
        else
            c[n] = 1
            n += 1
        end
    end
    return count
end
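
As a quick sanity check (my addition, not from the original post), the returned count should equal N!:

@assert heaps(10) == factorial(10)   # 10! = 3_628_800 permutations visited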

All good suggestions. All need implementations in many languages to be useful cross-language benchmarks.

Here you have it, in C:

#include <stdio.h>
#include <stdlib.h>

void swap(int *p, int a, int b){
   int tmp = p[a];
   p[a] = p[b];
   p[b] = tmp;
}

/* debugging helper: print the current permutation */
void pprint(int l, int *p){
   int i;
   for (i = 0; i < l; i++){
      printf("%d ", p[i]);
   }
   printf("\n");
}

long heaps(int N){
   long count = 0;
   int n;

   int *p = malloc(sizeof(int) * N);   /* the permutation, 0-based */
   int *c = calloc(N, sizeof(int));    /* per-position swap counters */

   for (n = 0; n < N; n++){
      p[n] = n + 1;
   }

   /* accumulate p[1] over every permutation, so the compiler
      cannot optimize the loop away */
   count += p[1];

   for (n = 0; n < N;){
      if (c[n] < n){
         /* swap position n with position 0 (n even) or c[n] (n odd) */
         swap(p, (n % 2 ? c[n] : 0), n);
         count += p[1];
         c[n]++;
         n = 0;
      } else {
         c[n] = 0;
         n++;
      }
   }
   free(p);
   free(c);
   return count;
}

There are 10 other languages left. And don't post it here, just PR it to https://github.com/JuliaLang/Microbenchmarks

3 Likes

Can you explain the y-axis of this benchmarking chart? Which values are being used to plot the graph?

That is time, on a log scale, compared to C, which is taken as unit time.

The y-axis is the execution time of each algorithm in each language, normalized so that the execution time of each algorithm in C is 1.0.

This used to be described somewhere on the benchmarks page, but it appears we dropped it during a reorganization of the website for 1.0. Thanks, I'll make sure it gets back in.
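
In other words, the plotted value is just a ratio; a sketch of the arithmetic in Julia (the timings here are made-up numbers, not real benchmark results):

# hypothetical raw timings in seconds for one benchmark
times = Dict("C" => 0.8, "Julia" => 0.9, "Python" => 15.0)
normalized = Dict(lang => t / times["C"] for (lang, t) in times)
# normalized["C"] == 1.0; the chart plots these ratios on a log-scale y-axis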

1 Like

I am familiar with Stata for Linux, though I have not used your distro. What is the error message, if any? Have you contacted Stata Tech Support? Statalist? Are you trying to install the latest version, 15, or an older version?

TBH, the chart is very hard to read: it uses barely distinguishable colors that overlap in such a way that one sees at most 6 circles out of 8 for several languages, like Julia, LuaJIT, and Fortran. The chart could have been much more readable if different markers were used. Here is a very rough example which shows a more readable version of the benchmark (the original SVG is nicer, but I can't upload it here):

The label for Lua should be changed to LuaJIT. It would also be interesting (at least to me) to see the results for plain Lua (the interpreter, and the standard for the language; LuaJIT is not 100% compatible, from what I've heard).
I also think it's more useful to order the languages by (at least approximately) relative performance, to make it easier to compare the results of C++, Rust, Julia, LuaJIT, and Go, for example, where the results are very close. A sketch of what I mean is below.
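
Something like this, for instance, in Julia (the language names and numbers are hypothetical placeholders, not the real benchmark data):

using Statistics

# hypothetical normalized times (C == 1.0) for three benchmarks per language
times = Dict(
    "C"      => [1.0, 1.0, 1.0],
    "Julia"  => [0.9, 1.2, 1.8],
    "Rust"   => [1.0, 1.1, 1.6],
    "Python" => [15.0, 40.0, 9.0],
)

geomean(v) = exp(mean(log.(v)))   # geometric mean of the ratios

# languages ordered fastest-first; plot the chart's columns in this order
order = sort(collect(keys(times)), by = lang -> geomean(times[lang]))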

2 Likes

It's not written in stone, and the code to generate it is open source. If you think it can be improved, please do so!

4 Likes