Funny Benchmark with Julia (no longer) at the bottom

But that was done by someone working on the Go implementation, not by the owner.

Also - if that is merged, the Julia time will effectively be one compilation plus two runs of the same problem.

Here:

3 Likes

I’m fine with just removing that runtime dispatch and using UInt32, honestly… we’re already slower, so we might as well at least make the source code cleaner.
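For context, here is a minimal sketch of the kind of change being discussed (the names and sizes are hypothetical, not the benchmark’s actual code): instead of choosing the counter width at runtime, just fix it at UInt32 so everything downstream stays concretely typed.

```julia
n_posts = 10_000   # placeholder size, purely for illustration

# Before (schematically): the counter width is picked at runtime, so the type of
# `counts` is not known to the compiler and downstream calls dispatch dynamically.
# T = n_posts < typemax(UInt8) ? UInt8 : n_posts < typemax(UInt16) ? UInt16 : UInt32
# counts = zeros(T, n_posts)

# After: one concrete element type; a bit more memory, but simpler code and
# fully inferred call sites.
counts = zeros(UInt32, n_posts)
```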

1 Like

I don’t want to be rude, but the issue raised in that PR is fueled by blunt ignorance about Julia.

I don’t expect Go developers to be knowledgeable about Julia - but if they jump in and change the code, then they had better be :slight_smile:

I don’t think we should change the source code because someone who does not understand the language has some issues.

2 Likes

Reconciliation took place:

2 Likes

Benchmarks comparing languages should have two phases:

  1. The problem is released and people have 1 hour to provide a solution, using any tool. This is the first benchmark.

  2. Let people do whatever they want and see how fast languages can get with unlimited time and effort. This is the second benchmark.

A language’s true utility will probably be somewhere in the middle.

4 Likes

Thanks, again this scales worse than Go, the same as in the non-concurrent case, so maybe not surprising. But why does Julia scale worse, or rather why does Go do so well?

This is actually a cool idea: you could just plot time of submission after release against benchmark speed. Then you could see approximately how much time you need to spend on performance in each language. But it’s of course quite confounded by the skill of the developers, their motivation to do well in the benchmark, how many people contribute, etc.

5 Likes

It would be totally unfair to the other languages: it is well known that Julia users can spend a lot of time optimizing others’ code while they are importing packages.

3 Likes

Let’s define POWV(L), AKA “pack of wolves”-ness, as the function that associates with a language L the ability/ferocity/speed of its community to optimize to the bone every piece of algorithm put in front of it.

The new metric proposed above gives a good estimate of the POWV function for each language.

Whether large values of POWV(L) can be said to be positive markers for the language L remains to be discussed.

My manager said to me this morning, looking over my shoulder at my Discourse window, that he was wondering about Julia’s productivity: “It’s clearly a hyper-productive language, but it doesn’t seem to help focus attention on paid tasks”.

16 Likes

It’s not our problem that other languages are not cool :sunglasses:

3 Likes

I fixed up my method above, and it should be faster than the MVector approach, even without the MVector.

Let’s see if it would suffice: Improve Speed (Julia) by SyxP · Pull Request #227 · jinyus/related_post_gen · GitHub

Edit: It was not faster on their machine :frowning:
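For readers unfamiliar with what “the MVector approach” refers to, here is a minimal sketch (assumed for illustration, not the PR’s actual code) of keeping a running top-5 in a fixed-size, non-allocating buffer from StaticArrays:

```julia
using StaticArrays

# Insert (score, id) into buffers kept sorted by descending score.
# MVectors are mutable but fixed-size, so the buffers never allocate.
function top5_insert!(scores::MVector{5,Int}, ids::MVector{5,Int}, score, id)
    score <= scores[end] && return nothing    # not better than the current 5th place
    pos = 5
    while pos > 1 && scores[pos-1] < score    # shift weaker entries down
        scores[pos] = scores[pos-1]
        ids[pos]    = ids[pos-1]
        pos -= 1
    end
    scores[pos] = score
    ids[pos]    = id
    return nothing
end

scores = MVector(0, 0, 0, 0, 0)
ids    = MVector(0, 0, 0, 0, 0)
top5_insert!(scores, ids, 3, 42)
```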

4 Likes

Impostor syndrome crisis (in a good way) after looking at this.

Thank you, @Lilith.

Also, Julia + StaticArrays is ~2.95 s. But the owner does not allow the use of StaticArrays: I don’t think bad faith is involved - maybe just some lack of understanding of Julia’s ecosystem.

10 Likes

I’d say that while some communities optimize like a pack of wolves, the Julia community optimizes like a pack of lions. Each expert contributes to their own aspect of the optimization until Julia is hellishly optimized. Sometimes this optimization comes from a library where other “lions” have already optimized it. And sometimes we contribute to each other without even talking!

1 Like

The exclusion of StaticArrays is unfortunate because its use really is idiomatic in writing Julia code.

I’d be curious what kind of benchmark times are possible in code that is still largely or wholly idiomatic (for some reasonable definition), i.e. code that uses the standard toolbox of packages and otherwise avoids strange optimization hacks.
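To make concrete what “idiomatic” use of StaticArrays looks like (a generic illustration, not the benchmark code): small, fixed-size collections become non-allocating values while the call sites still read like ordinary array code.

```julia
using StaticArrays

v = SVector(1, 2, 3, 4, 5)   # immutable, fixed-size, typically stack-allocated
m = MVector(0, 0, 0, 0, 0)   # mutable, still fixed-size and non-allocating
m[1] = 42
sum(v) + sum(m)              # the ordinary array API applies throughout
```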

1 Like

Not sure how I feel about the arms race; I still want StaticArrays back, because it seems reasonable to the contributors from other languages - everyone except the repo owner.

3 Likes

It’s good publicity; it shows Julia’s strengths.

Not really; I can think of at least two ways this might not be ideal: onlookers might think this is some superiority complex of our community (going so far as to make a specialized package just to be at the top).

Or, alternatively, Go or Rust also implement this and we end up slower again. In that case everyone is writing so much (at first glance) incomprehensible code that there are no bonus points for Julia.

It’s not a very stable position to be in; I don’t like arms races, I guess.

1 Like

On the other hand, I think @Lilith’s example highlights one of the core strengths of Julia: achieving absolute performance while still having high-level code. Yes, the library might be involved and hard to understand, but the code solving the problem is not more complicated than in the ‘normal’ Julia version. So from the perspective of a user this is pretty much the optimum, I’d say! (And if the superiority complex of the community means you get ultra-fast libraries for every problem just by posting a benchmark to their Discourse, then this is even more of a bonus :laughing: )

5 Likes

Sorry, you can’t look at that SuperDataStructure package and tell me it’s a generally useful package that we should put into the General registry and that people would use.

Because of that, you can’t say the “user” code is simple; someone could just put the entire Related.jl into a package so that the user code becomes nothing but main("posts.json") – that’s not a useful way of looking at this, I feel.

There are pros and cons to every kind of publicity.
Languages are tools; I am using Julia because I find it solves my problems with the least amount of friction. Since most of these languages use LLVM as a backend, you will get more or less the same performance once you optimize them with the same algorithms. Something similar can be achieved with Rust or Zig or… the point is at what level of abstraction those languages achieve this. I am looking forward to seeing how much complexity the same optimization techniques bring in those languages.

1 Like