Do rules for choosing good problems for benchmarking Julia exist?

A few times I have read discussions here where someone tried to check Julia's speed but picked a problem that was ill-suited for that. Two examples. Some time ago, one person posted an example in which a for loop computed the sum of the arithmetic series 1 + 2 + … + 1,000,000 (or up to some other big number), unaware that the compiler replaces this loop with the closed form 1,000,000 · (1,000,000 + 1) / 2 = 500,000,500,000. To be honest, until that point I also didn't know that this is a property of a decent compiler.
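
For the record, here is a minimal sketch of the kind of loop LLVM folds away; inspecting the generated code with @code_llvm (from the InteractiveUtils standard library) shows that no loop is left:

using InteractiveUtils

function sumto(n)
    s = 0
    for i in 1:n
        s += i               # LLVM's scalar evolution rewrites this as a closed form
    end
    return s
end

@code_llvm sumto(1_000_000)  # the printed IR contains a formula, not a loop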

Recently, another person was benchmarking matrix multiplication by running this code:

function foo1()
    temp = Array[]                       # abstractly typed container
    for i = 1:1000
        push!(temp, randn(400,400)*randn(400,1000))
    end
end

function foo2()
    for i = 1:1000
        randn(400,400)*randn(400,1000)   # result discarded each iteration
    end
end

As was pointed out, in reality this checks the speed of the randn(...) function, and it has a few additional problems besides.
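
For what it's worth, here is a minimal sketch of how one might isolate the multiplication itself, using the BenchmarkTools package: generate the inputs once, outside the timed code, and interpolate globals with $ so that only the multiplication is measured.

using BenchmarkTools, LinearAlgebra

A = randn(400, 400)                    # inputs created once, not per trial
B = randn(400, 1000)
C = Matrix{Float64}(undef, 400, 1000)

@btime mul!($C, $A, $B)                # mul! writes into the preallocated C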

Since some people struggle with this, is it possible to give some rules on how to find good problems for testing Julia's speed? Even if it's not, we could write down a dozen such tests, so maybe collecting them for newcomers would be beneficial? I don't know, and I will be glad to hear your opinions.

The documentation already has a good section on “Performance Tips”, but it is about how to write fast code, independent of the problem. That is of course one of the most important rules for a good benchmark: write the code according to “Performance Tips”. It is not about how to recognize that an example is ill-suited for a speed test. And of course there is the great example of summing 10,000,000 random Float64's in the “Julia is fast” part of the official tutorial, which I use time and again as an illustration of Julia's speed.
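
For concreteness, the core of that tutorial benchmark is only a few lines (again assuming the BenchmarkTools package):

using BenchmarkTools

x = rand(10^7)     # ten million random Float64's
@btime sum($x)     # times the built-in sum on its own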

2 Likes

I think quite often, real problems (tasks or challenges) are well suited for benchmarking. So, just take what you are currently working on (maybe simplify it, or carve out a few parts).

4 Likes

I would suggest that the whole concept of “checking” Julia is not a meaningful exercise.

Julia is fast. You can easily get within a factor of 2 of C with little effort once you know the language. This is by construction, since it compiles down to LLVM. That's not where the magic is.

Users “checking” Julia are effectively checking their understanding of programming Julia. This leads to surprises when they attribute differences in speed to “the language”.

Instead, new users should be evaluating Julia with the following question in mind: once I learn the language, will it be convenient for me to write fast code? How much effort will generic constructs and composability save me in the long run? But answering that is a project that takes about a week even for an experienced programmer, and it is much more difficult to quantify.

9 Likes

I think what people care about in the end is the performance of the concrete, standard or custom analyses one would do on a daily basis when doing data exploration, scientific computing, etc.

I made a small package last year that tried to benchmark some of these use cases, but it doesn’t cover much.

1 Like

I agree, but I want to point out a few additional things.

If Julia were like C or C++, a language whose speed is well known and unquestioned, I would not open such a discussion. But, judging from my experience, it is not: people must test its speed to believe it. This is especially true in my country, Poland, which is unreasonably technologically conservative in some areas. And because a very small difference in Julia code can produce a factor of 30 in speed, newcomers can try a few programs written in a bad way, see that they are not very fast, say “this high-performance talk is just BS”, and go somewhere else.
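
To illustrate, the classic source of such a factor is the untyped-global pitfall from “Performance Tips”; a minimal sketch, assuming BenchmarkTools:

using BenchmarkTools

x = rand(1000)

function sum_global()       # slow: x is an untyped global, so every
    s = 0.0                 # access to it is dynamically dispatched
    for a in x
        s += a
    end
    return s
end

function sum_arg(v)         # fast: the same loop, but v has a concrete
    s = 0.0                 # type inside the function
    for a in v
        s += a
    end
    return s
end

@btime sum_global()
@btime sum_arg($x)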

The fact that people still post topics on this forum such as “Julia is slower than Python”, or analyses of ill-suited tests, proves to me that there is still much work to do in making people aware of this problem.

If Julia becomes the standard, well-recognized language for scientific computing (I hope this happens soon, but I'm not a prophet), newcomers will not be put off by the not-so-great performance of their first program. But until then, the Julia community has a lot of work to do.

This is at least how I see it, and I can be wrong.

3 Likes

The problem is that they need to learn something to be able to “test” Julia.

I fail to see the problem with that. I can walk into the hardware store, buy the finest jigsaw and start using it without learning the basics, chop my fingers off, and blame the jigsaw. That will be my problem, and not someone else’s.

“Getting people on board” is a common fallacy of language evangelism. There is a benefit to having a larger community of contributors, but people who are superficial enough to make judgements like this may be a net liability anyway. The Julia language community is now large enough to be self-sustaining and produce high-quality packages on a natural, organic growth path.

To quote the second paragraph of the manual,

Because Julia’s compiler is different from the interpreters used for languages like Python or R, you may find that Julia’s performance is unintuitive at first. If you find that something is slow, we highly recommend reading through the Performance Tips section before trying anything else. Once you understand how Julia works, it’s easy to write code that’s nearly as fast as C.

I am not sure what else could be done. This may be one of those “you can lead a horse to water” problems; diminishing returns to effort are kicking in.

Members of the Julia community have a lot of cool things to work on at this point, so I am not sure that harping on this will be a priority. In fact, I am somewhat surprised that one can still trigger experts into what is effectively a free consulting/code optimization session just by making zero investment in learning the language and then claiming that “Julia is slow”. I am kind of wistfully looking forward to the day when “so go use Python” will be the standard answer to “Python is faster than Julia”.

7 Likes

I would like to contribute my opinion as an HPC engineer here.
Quite often I see microbenchmarks discussed here: tight benchmarks of loops or individual functions.
I agree with @tamaspass that real world application benchmarks are more relevant.
(OK - optimisation of real codes comes down to finding those crucial loops or functions and optimising them)

The classic HPC cluster benchmark is HPL, which is a synthetic benchmark
https://www.netlib.org/benchmark/hpl/
The HPC community agrees this is not a good predictor of real-world performance, but it is what is used for Top500 comparisons, and it is generally a good method for stress/heat testing a system and weeding out faulty components.
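
As an aside, Julia's standard library ships a very rough single-node cousin of this idea: LinearAlgebra.peakflops times a dense Float64 matrix multiply and reports FLOP/s.

using LinearAlgebra

LinearAlgebra.peakflops(4096)   # GEMM-based FLOP/s estimate for one node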

Another popular benchmark suite is the NAS Parallel Benchmarks

I guess no one is going to recode benchmarks like these in Julia.
I wonder, though: should there be a set of Julia codes which can be run to benchmark a clustered system?
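
A minimal sketch of what a per-node sanity check could look like, using only the Distributed standard library (the worker count below is a placeholder for however the cluster is provisioned):

using Distributed
addprocs(4)                     # placeholder: one worker per node/core as appropriate

@everywhere using LinearAlgebra

# run the dense-matmul FLOP/s measurement on every worker
flops = [remotecall_fetch(LinearAlgebra.peakflops, w, 4096) for w in workers()]
println("per-worker GFLOP/s: ", round.(flops ./ 1e9; digits = 1))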

1 Like

I really like the benchmark used in this talk:

@bkamins uses an Asian option Monte Carlo pricer for a comparison across multiple languages. It is close to a micro-benchmark, but still a practically relevant use case.
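
For reference, the core of such a pricer fits in a couple dozen lines. The version below is a generic sketch of an arithmetic-average Asian call under geometric Brownian motion, not the exact code from the talk:

using Random

function asian_call(S0, K, r, σ, T, nsteps, npaths; rng = Random.default_rng())
    dt    = T / nsteps
    drift = (r - σ^2 / 2) * dt
    vol   = σ * sqrt(dt)
    total = 0.0
    for _ in 1:npaths
        S, avg = S0, 0.0
        for _ in 1:nsteps                     # simulate one GBM path
            S   *= exp(drift + vol * randn(rng))
            avg += S
        end
        total += max(avg / nsteps - K, 0.0)   # arithmetic-average call payoff
    end
    return exp(-r * T) * total / npaths       # discounted Monte Carlo estimate
end

asian_call(100.0, 100.0, 0.05, 0.2, 1.0, 250, 100_000)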

1 Like

Yes, it is mind-boggling that you can paste “Julia is slow” here and you, Steven G. Johnson, etc., will respond in no time and for free. You are awesome; that is the only explanation I can find.

1 Like

Thanks for your kind words, but recently I have actually been staying away from the “Julia is slow because I didn't read the manual” topics (especially the ones which don't strike the right tone), for the reasons explained above.

1 Like