The Julia Challenge


#1

I distilled the problems that I think Julia solves most elegantly into a challenge:

https://nextjournal.com/sdanisch/the-julia-challenge

I’m genuinely curious whether other languages can express those problems as efficiently and elegantly as Julia!
I think this will be a great learning experience for all of us. It will be pretty interesting to see how some of our favorite features can be expressed in other languages :wink:

I hope it spreads far and wide and attracts some interesting solutions! (Please consider upvoting on Hacker News.) :slight_smile:


#2

Site doesn’t work for me.


#3

Still doesn’t work for me either. It’s been broken for quite a long time and I’ve notified the Nextjournal people, but I don’t know what the cause is. But… thanks for posting. Now we know it’s not a local fluke, and it’s not regional (unless you’re in SoCal right now).


#4

That’s indeed weird - hope it gets fixed soon -.-


#5

Link worked fine for me.


#6

It works for me as well.


#7

It works now that I’ve stopped tethering internet from my phone.


#8

@sdanisch
Maybe just delete section 1, “Rules”, or at least the 3rd point, about “Helpers”.

It seems to be confusing people, and doesn’t add much.

The same goes for most of the other things that can be taken as rules, like code size.

If people post code that more or less works, it will be clear to anyone reading it how good it is by their own criteria of judgement.

This is only interesting (as a challenge) if we have people in other languages actually attempting it, and posting their attempts.
Without that, this is just a cool demo, written in a weird way.


#9

The code

  1. relies on quite a bit of Base functionality (e.g. CartesianIndex),

  2. uses some constructs, e.g. @propagate_inbounds, that would be opaque to an outsider (even a “power user” of Julia may never use these); they are a distraction.

So I am not sure it achieves its purpose when the audience is not Julia users.

Also, I would be cautious about asking others to replicate something that took many iterations to arrive at its present form in Julia itself. I am sure that other languages, especially Rust, could demonstrate simple yet powerful features that would be nontrivial to replicate.


#10

@oxinabox, I did delete that section just now; lots of people got really hung up on it ^^

@Tamas_Papp

So I am not sure it achieves its purpose when the audience is not Julia users.

My thinking was that people actually trying to complete the challenge won’t really need to look at the Julia implementation but rather at the spec - after all, the Julia code is just a reference.

Also, I would be cautious

I’m not sure why I should be cautious; I’d love to see that!


#11

It looks like there was an accepted PR in Nim, but it doesn’t show up in the timings on Nextjournal? Are you planning to update the text to discuss it? Did it meet all your requirements?


#12

Yeah, I plan to write an updated article! It will take a while, since I currently have quite a few other things on my plate :wink: I’m not 100% sure whether it met all the requirements, but it is looking pretty good! I’ll need to analyze it some more!


#13

Ping after another month has passed.

In the meantime we could look at this:

which seems to support @Yifan_Liu’s idea: “My 2 cents is that Julia should push to get better ASAP, since C++ is getting easier […]”.

But it’s not only C++ that is getting easier (quote from the link above):

This issue is still open. Is that why we are still waiting for updates here?

BTW, one could ask whether we just lost (the first round of) this challenge!


#14

12x faster than the Julia reference

Well, it’s not such a serious bug, considering that C++ has the same problem as soon as you use a function that isn’t a compiler intrinsic - which isn’t the case for Julia, since sin is implemented in pure Julia nowadays… And it’s pretty easy to fix in this particular case anyway, so it’s no performance problem for real code.

I’m also not sure if a solution that took much longer to write, contains around 10x more boilerplate, and needs every function you want to broadcast to be defined ahead of time can be called the winner of the challenge :wink:

It’s no doubt impressive and fun to see that you can push C++17 this far, but I think what this mostly demonstrates is that if you need to use C++ it’s not the end of the world; it’s still much more awkward than Julia, though. I also haven’t tested all the cases yet, so I’m not 100% sure whether all cases actually work like in Julia (the main reason I haven’t written an update yet)!


#15

There is also a 115-line solution using the xtensor library which is (IMHO) much more readable than your 101-line (including the benchmark) Julia code.

Could you explain what you mean by “ahead of time”, please? Do you mean before compilation? Because I think Julia is no different here.

In real life you just won’t be happy to buy a car whose engine you need to fix often.

And one more quote:
To win the performance part, your implementation needs to be at least as fast as Julia’s base broadcast implementation for arbitrary dimensions and argument combinations.

I think that a good challenge cannot change its criteria. It was you who chose the race track! You who chose the sin function! And your implementation is going 10 miles per hour while the other is going 120 miles per hour. That is a pretty big difference, isn’t it?

I am looking forward to C++20, where concepts could radically improve readability. And we can hope that metaclasses will be there too. (Maybe we need to sell Julia with similar future fruits! For example: the possibility to create independent .so libraries in 2019/2020?)

People will judge this. My humble opinion is that the xtensor solution looks nicer :stuck_out_tongue:

To be honest, I was not able to compile the xtensor solution! :wink: But it is not my challenge, so I did not give it too much time. :stuck_out_tongue:


#16

There is also a 115-line solution using the xtensor library which is (IMHO) much more readable

Well, that’s funny, since the xtensor submission isn’t actually solving the challenge, because it doesn’t implement the broadcast itself - and it’s still longer than the Julia version, which includes the whole implementation :smiley:

Could you explain what you mean by “ahead of time”, please?

If I’m not mistaken, before you compile the binary and ship it, you need to fix the set of functions you can broadcast. That’s not true for Julia.

I think that a good challenge cannot change its criteria. It was you who chose the race track!

Ok sure, you can say I lost the challenge, since I designed it badly - but saying Julia lost the challenge would be pretty far-fetched, considering that C++ has the same problem, and it actually took Wolf a bit of time to find this bug and make his version faster than the Julia one :stuck_out_tongue: Also, you could likely just define llvm_sin via llvmcall and be done with it :wink:
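For the record, here is a minimal sketch of what such an llvm_sin could look like; the name and the exact form are illustrative (nothing from the challenge repo), assuming Base.llvmcall accepts the (declarations, body) tuple form. It forwards straight to LLVM’s @llvm.sin.f64 intrinsic, which the compiler can recognize as pure and hoist out of a broadcast loop:

```julia
# Sketch (assumption): map sin directly onto the LLVM sin intrinsic.
# In llvmcall bodies, %0 is the first argument and %1 is the implicit
# entry-block label, so the first unnamed instruction result is %2.
function llvm_sin(x::Float64)
    Base.llvmcall(
        ("declare double @llvm.sin.f64(double)",
         """
         %2 = call double @llvm.sin.f64(double %0)
         ret double %2
         """),
        Float64, Tuple{Float64}, x)
end
```

With something like this, llvm_sin.(1.0) inside a broadcast should get hoisted just like the intrinsic-based C++ version.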


#17

Since it’s not quite clear to me: is it that bug which mainly causes the performance difference? Can the gap be closed, or nearly so?


#18

Can the gap be closed, or nearly so?

the problem is:

large_array .+ sin.(1.0)

The above instructs the broadcast machinery to call sin(1.0) once per element of large_array. If the compiler can’t figure out that sin is pure, it needs to call sin(1.0) over and over again. Neither Julia nor gcc manages to infer this for a sin written in pure Julia/C++, while both do if you use the sin compiler intrinsic.
So the “fix” is to write:

large_array .+ sin(1.0)

And you’re done… or use llvm_sin.(1.0) if you really want to keep the dot there :wink:

The performance is then exactly the same as the C++ solution.
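For completeness, the two spellings side by side (the array size here is arbitrary; both compute identical values, only the number of sin calls differs):

```julia
large_array = rand(1_000)

# Slow spelling: sin.(1.0) is fused into the broadcast kernel, so sin(1.0)
# may be re-evaluated for every element if the compiler can't prove purity.
slow(A) = A .+ sin.(1.0)

# Fast spelling: sin(1.0) is evaluated once, before the broadcast runs.
fast(A) = A .+ sin(1.0)

slow(large_array) == fast(large_array)   # same values either way
```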


#19

As I understand it, a template solution means that the library is source code. So yes - you create a function and compile the code to run it. I still think it is the same in Julia: you create a function and compile it to run.

Are you sure? From the documentation: “xtensor provides - an extensible expression system enabling lazy broadcasting” (source: https://xtensor.readthedocs.io/en/latest/). As I understand it, that is the main purpose of this library!

See also this comment in the solution:

        xt::xtensor<double, 2> a = xt::random::rand<double>({1000, 1000});
        xt::xtensor<double, 1> b = xt::random::rand<double>({1000});
        double c = 1.0;

        // Un-evaluated broadcasting expression
        auto expr = a + b - std::sin(c);
        auto res = xt::xtensor<double, 2>::from_shape({1000, 1000});
        xt::noalias(res) = expr;        // Evaluate the expression.

And see how simple, readable (and quick) it is. What do you mean, this is not broadcasting? Or maybe a better question: what is your definition of “broadcasting”? I see the expr expression “distributed” over matrix a, vector b and constant c. What is missing?
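As an aside, Julia’s own broadcasting has the same lazy layer under the hood; you normally never spell it out, but a rough sketch of the xtensor snippet in terms of Base.broadcasted and Base.materialize! (the functions that dot syntax lowers to) would be:

```julia
a = rand(1000, 1000)
b = rand(1000)   # note: a Julia vector broadcasts along the first dimension
c = 1.0

# Un-evaluated broadcast expression, analogous to xtensor's `expr`
expr = Base.broadcasted(-, Base.broadcasted(+, a, b), sin(c))

res = similar(a)
Base.materialize!(res, expr)   # evaluate the expression into res

res ≈ a .+ b .- sin(c)         # same result as the eager dot spelling
```

The dot syntax `a .+ b .- sin(c)` builds exactly this kind of lazy `Broadcasted` tree before materializing it.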

I don’t say that! Maybe I am being a little more provocative here than you deserve.

And I think it was just bad luck, which could happen to anyone (especially if one is overconfident :stuck_out_tongue: ).

Here I am surprised. Wolf is using the std::sin function from the standard library! That is quite a standard solution, right? What did he use before?

:slight_smile: The xtensor solution has the benchmark and the solution in one file, so you need to compare them fairly. Try counting bytes: the xtensor solution is smaller (3272 < 3434)! :stuck_out_tongue_winking_eye:

But this is a little childish (we both know it, right?), because we are counting comments and empty lines, and the solutions differ slightly.

What is really interesting (especially for people who need the kind of calculation simulated by your challenge) is that there is a nice C++ library which can simply and quickly broadcast user-defined functions.


#20

Which one? Maybe the xtensor solution is slower?