But that was done by someone working on the Go implementation, not by the owner.
Also - if that is merged, Julia’s time will effectively be: 1 compilation + 2 runs of the same problem
Here:
I’m fine with just removing that runtime dispatch and using UInt32
honestly… we’re already slower, so we might as well at least make the source code cleaner
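For context, a minimal sketch of the kind of change being discussed, assuming the runtime dispatch in question comes from picking a counter element type at run time based on the number of posts (the names below are hypothetical, not the actual repo code):

```julia
# Hypothetical illustration only, not the code in related_post_gen.
# Before: the eltype of the tag-count buffer is chosen at run time,
# so everything downstream dispatches on a type the compiler can't see:
#   T = nposts < typemax(UInt8) ? UInt8 : UInt32
#   counts = zeros(T, nposts)

# After: one concrete type everywhere, no runtime dispatch.
tag_counts(nposts::Integer) = zeros(UInt32, nposts)
```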
I don’t want to be rude, but the issue raised in that PR is fueled by blunt ignorance about Julia.
I don’t insist that Go developers should be knowledgeable about Julia - but if they jump in and change the code, then they’d better be
I don’t think we should change the source code because someone who does not understand the language has some issues.
Reconciliation took place:
Benchmarks comparing languages should have two phases:
The problem is released and people have 1 hour to provide a solution, using any tool. This is the first benchmark.
Let people do whatever they want and see how fast languages can get with unlimited time and effort. This is the second benchmark.
The language’s true utility will probably be somewhere in the middle.
Thanks. Again, this scales worse than Go, same as for the non-concurrent version, so maybe not surprising. But why does Julia scale worse, or rather why does Go do so well?
This is actually a cool idea: you could just plot time of submission after release against benchmark speed. Then you could see approximately how much time you need to spend on performance in each language. But it’s of course quite confounded by the skill of the developers, motivation to do well in the benchmark, how many people contribute, etc.
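A rough sketch of what that plot could look like with Plots.jl; the submission times and benchmark results below are made-up placeholders, purely to show the shape of the idea:

```julia
using Plots

# Placeholder data: hours between problem release and each merged PR,
# and the benchmark time of that submission. Both vectors are invented
# for illustration, not real measurements.
hours_after_release = [1.0, 3.0, 10.0, 30.0, 100.0]
benchmark_seconds   = [12.0, 8.0, 6.0, 4.5, 4.0]

scatter(hours_after_release, benchmark_seconds;
        xscale = :log10,
        xlabel = "time of submission after release (hours)",
        ylabel = "benchmark time (seconds)",
        label  = "one language's submissions")
```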
It would be totally unfair to other languages: it is well known that Julia users can spend a lot of time optimizing others’ code while their packages are still importing
Let’s define POWV(L), AKA “pack of wolves”-ness, as the function that associates with a language L the ability/ferocity/speed of its community to optimize to the bone every piece of algorithm that is put in front of it.
The new metric proposed above gives a good estimate of the POWV function for each language.
Whether large values of POWV(L) can be said to be positive markers for the language L remains to be discussed.
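One way to make that half-serious definition concrete (my own formalization, not something proposed above): read POWV(L) off the submission-time plot as the rate at which the community drives the benchmark time down,

$$
\mathrm{POWV}(L) \;\approx\; -\,\frac{\Delta T_{\text{bench}}(L)}{\Delta t_{\text{submission}}},
$$

so a steeper early drop means a hungrier pack.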
My manager said to me this morning, looking over my shoulder at my Discourse window, that he was wondering about Julia’s productivity: “It’s clearly a hyper-productive language, but it doesn’t seem to help focus attention on paid tasks”.
It’s not our problem that other languages are not as cool
I fixed up my method above, and it should be faster than the MVector approach, even without the MVector.
Let’s see if it would suffice: Improve Speed (Julia) by SyxP · Pull Request #227 · jinyus/related_post_gen · GitHub
Edit: It was not faster on their machine
Impostor syndrome crisis (in a good way) after looking at this.
Thank you, @Lilith.
Also, Julia + StaticArrays is ~2.95 s. But the owner does not allow the usage of StaticArrays. I don’t think bad faith is involved - maybe just some lack of understanding of Julia’s ecosystem.
I’d say that while some communities optimize like a pack of wolves, the Julia community optimizes like a pack of lions. Each expert contributes to their aspect of optimization until Julia is hellishly optimized. Sometimes this optimization comes from a library that other “lions” already optimized. And sometimes we contribute to each other without even talking!
The exclusion of StaticArrays is unfortunate because its use really is idiomatic in writing Julia code.
I’d be curious what kind of benchmark times are possible in code that is still largely or wholly idiomatic (for some reasonable definition), i.e. code that uses the standard toolbox of packages and otherwise avoids strange optimization hacks.
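As a hedged illustration of why StaticArrays feels idiomatic here (my own sketch, not any of the actual submissions): the hot loop of this benchmark keeps a top-5 of related posts per post, which maps naturally onto a fixed-size, stack-allocated buffer.

```julia
using StaticArrays

# Sketch: keep the indices of the 5 highest-scoring posts for one post.
# The buffers have a fixed, compile-time-known length, so StaticArrays
# keeps them off the heap inside the loop.
function top5_related(scores::AbstractVector{<:Integer})
    best_scores = zero(MVector{5,Int})
    best_idxs   = zero(MVector{5,Int})
    for (i, s) in enumerate(scores)
        s > best_scores[5] || continue
        # insert while keeping the buffer sorted in descending order
        pos = 5
        while pos > 1 && best_scores[pos - 1] < s
            best_scores[pos] = best_scores[pos - 1]
            best_idxs[pos]   = best_idxs[pos - 1]
            pos -= 1
        end
        best_scores[pos] = s
        best_idxs[pos]   = i
    end
    return SVector(Tuple(best_idxs))
end
```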
Not sure how I feel about an arms race; I still want StaticArrays back because it seems reasonable to the contributors of other languages, everyone except the repo owner
It’s good publicity; it shows Julia’s strengths.
Not really; I can think of at least two ways this might not be ideal: onlookers might think this is some superiority complex of our community (going so far as to write a specialized package just to be at the top).
Or Go or Rust might also implement this and we would be slower again; in that case everyone is writing so much (at first glance) incomprehensible code that there are no bonus points for Julia.
It’s not a very stable position to be in; I don’t like an arms race, I guess
On the other hand I think @Lilith’s example highlights one of the core strengths of Julia: achieving absolute performance while still having high-level code. Yes, the library might be involved and hard to understand, but the code solving the problem is not more complicated than in the ‘normal’ Julia version. So from the perspective of a user this is pretty much the optimum, I’d say! (And if the superiority complex of the community means you get ultra-fast libraries for every problem just by posting a benchmark to its Discourse, then that is even more of a bonus.)
Sorry, you can’t look at that SuperDataStructure package and tell me it’s a generally useful package that we should put into the General Registry and that people would use.
By that logic, you can’t say the “user” code is simple: someone could just put the entire Related.jl into a package and the user code would be nothing but main("posts.json")
– that’s not a useful way of looking at this, I feel
There are pros and cons in every kind of publicity.
Languages are tools. I am using Julia because I find it solves my problems with the least amount of friction. Since most of these languages use LLVM as a backend, you will get more or less the same performance when you optimize them with the same algorithms. Similar things can be achieved with Rust or Zig or… the point is at what level of abstraction those languages achieve this. I am looking forward to seeing how much complexity the same optimization techniques require in those languages.