https://jochenschroeder.com/blog/articles/DSP_with_Python2/
Just posting here in case anyone’s interested in having a look.
Loops over columns instead of rows. Unnecessary copies. The same mistake in every single one of these silly blogs.
Julia […] was developed with the aim of being a replacement for matlab […]
Really?
I have not spent the same amount of time optimizing the other methods as I did for the cython code. I believe that it is generally still a fair comparison, because all the other methods promote that they essentially achieve the speed gains with very little additional effort.
I think he has a point here.
I said this earlier: Julia puts too much emphasis on the performance point. This makes people believe that Julia is just fast, which isn't true. What is true: Julia can be fast, but you have to know quite a few things, or at least you need to understand the performance tips.
Sometimes you have to benchmark trivial little code snippets, sometimes just a single line of code, to decide which of the possible solutions you want to use (DataFrames is especially notorious for this).
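For instance (purely illustrative, not from the blog), such a micro-benchmark of two equivalent one-liners might look like this:

```julia
using BenchmarkTools

x = rand(10_000)

# Two candidates for the same result; only a quick benchmark tells you which one to keep.
@btime sum(abs2, $x)      # no temporary array
@btime sum($x .^ 2)       # allocates a temporary array before summing
```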
The result is disappointment.
Did people actually read the whole blog post? The post is from Nov 2020, but has a Jan 2021 update that addresses everything you mentioned (someone pointed out these novice mistakes to him). The 4-5x slowdown is from after these changes. He finishes:
As you can see the changes to the Julia code resulted in a 3x speed up. It is now ~60x faster than the numpy code. Still a factor of 4-5 away from cython and pythran and it would be interesting to see where that difference is coming from (I'm sure I'm doing something non-optimal and if you know what, please let me know). I was probably a bit harsh on Julia in my previous conclusion, this just goes to show that one needs to know what one is doing if you want to get the most out of a language.
Julia puts too much emphasis on the performance point
Who? I think this may come across in some places, but that’s usually due to a miscommunication.
@Keno wrote this nice little thread earlier:
As such, if we advertised performance to C/C++ users, they’d probably end up disappointed.
https://twitter.com/KenoFischer/status/1491945141506039817
Anyone working extensively with Julia knows that Julia doesn’t just magically give you optimal performance for any piece of code.
The actual point about performance in Julia is that there is nothing in the core design that prohibits you from reaching C performance.
There can still be lots of performance pitfalls along the way. No package or Julia Base code is guaranteed to be written in an optimal way, and not every pattern compiles down to the fastest assembly there is.
But, if you invest the time, you can get optimal performance in most cases, without the language getting in your way.
I noticed that he asserts that the first two dimensions of the array have exactly two elements (one per polarization), yet he is not using StaticArrays. There are so many things that could be going wrong here.
```python
assert wxy.shape[0] == pols
assert wxy.shape[1] == pols
```
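Purely as a hypothetical illustration (this is not the blog's code): if the 2x2 tap block really is fixed-size, a static matrix encodes that size in the type instead of in run-time asserts.

```julia
using StaticArrays

# Hypothetical: a fixed 2x2 complex tap block as an SMatrix. Its size is part of the type,
# so the run-time shape asserts become unnecessary and the compiler can unroll over it.
wxy = @SMatrix [1.0+0.0im  0.0+0.0im;
                0.0+0.0im  1.0+0.0im]

size(wxy) == (2, 2)   # true, and known at compile time
```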
Good questions… but they can't be answered so easily. Not being able to answer them isn't proof that the claim is wrong.
Wherever it comes from, it is now out in the world: you want speed? You want it easy? You are not smart enough to use and create optimized C? Use Julia!
Do you think this isn’t the case?
Asking who it was or where it comes from isn't the way to solve this issue. And I think it is an issue.
There is much more to Julia as a reason to use it. Performance isn't the most important aspect, as most use cases don't need the last microseconds of maximum performance.
Being able to create fast code is great and very nice to have. But Julia has this already; it needs to focus on other things. It needs to broaden its usability, and therefore it needs to enter domains other than data analysis and math. If you only focus on the domains where you are already successful, in order to be more successful or to be the only one left, you end up being a king in a small world. After these years of data analysis, Julia needs to move on.
In the most general way, Julia shows the power of multiple dispatch, but only to a small community of data analysts. This can only be the start; the next step has to be taken. But something is holding it back.
No, I didn't bother to read the whole post after seeing those simple errors. There are now 3+ posts like this every week. So yes, this blog poster may know a good deal of Python and its exotic offspring, but if their "performance comparison" doesn't even display basic knowledge of Julia, then it really isn't worth more of my time, because it isn't a comparison at all.
Also, to @oheil,
This should be obvious to people with any bit of common sense. C and Fortran are also fast, no one is questioning that statement, yet they have their own set of "performance tips" that programmers need to adhere to in order to achieve that performance. Heck, the blog post was the greatest example of this. Python is fast if you follow its performance tips too: you just need to use Pythran!
You are right that there is plenty of work to do to make top performance more accessible and more achievable "for free", but it will always require some effort on the programmer's part to get there. Nobody ever said it will automagically be fast.
No, that's not what I want. It isn't possible to get it "for free".
What I want is to be happy with the performance we have (because it is outstanding, and if you want more, do your homework) and now let Julia move on to other domains.
I think that is the case, and I think that statement is true. Which does not mean that you can do it without knowing anything.
I'm not sure if that is actually true for Fortran. It is hard to get lousy performance in Fortran. First-timers will very likely obtain faster code in Fortran than in Julia, and the tradeoffs are of a different order.
Concerning the code of the blog post:
Something like sum( x .* exp.(y) ) would be better if it didn't allocate a new array. I don't think that is obvious from the performance tips. Quickly recognizing that pattern and avoiding it comes with some practice.
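For example (a minimal sketch with made-up data, not the blog's actual expression), a generator avoids the temporary array:

```julia
x = rand(1_000)
y = rand(1_000)

s_alloc = sum(x .* exp.(y))                              # materialises x .* exp.(y) first
s_lazy  = sum(xi * exp(yi) for (xi, yi) in zip(x, y))    # same result, no intermediate array
```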
It should perhaps be advertised more to newcomers that you can add @views to the whole function, and then slices won't allocate.
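A minimal sketch of what that looks like (hypothetical function, not from the blog):

```julia
# @views applied to the whole function turns every slice like A[:, j] into a view,
# so no copies are allocated inside the comprehension.
@views function column_energies(A)
    return [sum(abs2, A[:, j]) for j in axes(A, 2)]
end

column_energies(randn(ComplexF64, 4, 3))
```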
These and the row/column order of the loops are the only things I readily see to improve in that code. If those things don't make the code C-like in speed, we are in the realm of more specific optimizations (loop vectorization, etc.).
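For completeness, a small sketch of the memory-friendly loop order (Julia is column-major, so the first index belongs in the innermost loop):

```julia
function total(A)
    s = zero(eltype(A))
    for j in axes(A, 2)        # outer loop over columns
        for i in axes(A, 1)    # inner loop over rows: contiguous memory access
            s += A[i, j]
        end
    end
    return s
end

total(randn(1000, 1000))
```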
Perhaps for you it isn’t the most important aspect, but for a lot of people, performance (without loss of usability) is literally the reason to use Julia. Without it, Julia would be a mere curiosity to many many of its current users.
If you only focus on the domains where you are already successful, in order to be more successful or to be the only one left, you end up being a king in a small world.
On the contrary, being king in a small world is exactly the way to be: keeping to the analogy, consolidate your kingdom before venturing out into the unknown and spreading yourself too thin.
It isn't possible to get it "for free".
Well, not free - but at a reasonable price.
As I indicated in a previous post, SBCL Common Lisp, another dynamic language with type inference, has performance hints when you compile the code.
You could say:
Etc.
I’m trying to argue for Julia in our engineering business. From what I’ve seen of Julia, it is so easy to hand performance away. Too easy, in fact.
I think the Matlab focus early on was very clear. A lot of what folks like me pushed on during the evolution from 0.0 to 1.0 was to separate Julia more from Matlab’s influence.
So the Matlab claim is actually true? I can't believe it, but I never had to use Matlab, so I just didn't see it. And I started with Julia at 0.3 (or a bit before), so it may already have been too late to recognize this.
So the Matlab influence was because Matlab was widely used at MIT? (I still can't believe it.)
I think it depends on what the exact claim is. It's surely true that Julia was meant to compete with Matlab and that it took more inspiration from Matlab than from other competitors at the start; I think Stefan's love of Ruby is the only other striking influence.
The original Julia founder group is very small; most of the reasoning that should matter is from Alan Edelman, Jeff, Viral and Stefan. Alan, Jeff and Viral had very clear knowledge of Matlab; they'd had a company closely linked to Matlab before Julia.
Does anyone know how to generate a more or less similar input set for the code in the blog, in Julia?
Working on it.
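In the meantime, here is a rough sketch of what such an input might look like, under the assumption that the blog's data is a dual-polarization QPSK signal stored as a 2 x N complex array (the exact modulation, oversampling and impairments in the blog may well differ):

```julia
using Random

# Hypothetical input generator: 2 polarizations x N samples of unit-power QPSK symbols
# plus a small amount of complex Gaussian noise.
function make_input(N; noise = 0.01, rng = Random.default_rng())
    symbols = ComplexF64[1+im, 1-im, -1+im, -1-im] ./ sqrt(2)
    E = rand(rng, symbols, 2, N)
    E .+= noise .* (randn(rng, 2, N) .+ im .* randn(rng, 2, N))
    return E
end

E = make_input(100_000)
```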
There are at least two more problems: 1) apply_filter is not type stable because Xest is initialized as a complex integer, and 2) in cma, the view into E should have the colon first; otherwise the view into memory has a stride > 1, so E should be transposed.
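Two isolated sketches of those fixes (illustrative names and sizes, not the actual blog code):

```julia
# 1) Type stability: initialise the accumulator as a complex float, not Complex{Int}.
Xest = zero(ComplexF64)              # rather than Xest = 0 + 0im

# 2) Contiguous views: Julia is column-major, so a view with the colon first has stride 1.
#    Storing E transposed relative to the Python layout makes the hot slice contiguous.
E = randn(ComplexF64, 64, 10_000)    # hypothetical layout: taps along dim 1, samples along dim 2
chunk = @view E[:, 17]               # contiguous, stride-1 view
```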
I have two further suspicions as well: 1) += does not update the passed array in place inside a function, so the update of wxy in the function cma is missing the dot, i.e., in Julia it should be .+= instead. 2) Does the apply_filter function vectorise? In structure, it looks very similar to the standard example sum function used in @simd tutorials. On that note, does a @simd annotation of a loop propagate into its called functions?
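A small sketch of the += suspicion (hypothetical functions, not the blog's cma):

```julia
# `wxy += delta` rebinds the local variable to a freshly allocated array, so the caller's
# array never changes; `wxy .+= delta` updates the passed array in place.
function update_by_rebinding(wxy, delta)
    wxy += delta
    return nothing
end

function update_inplace!(wxy, delta)
    wxy .+= delta
    return nothing
end

w = zeros(2, 2); d = ones(2, 2)
update_by_rebinding(w, d)   # w is still all zeros
update_inplace!(w, d)       # w is now all ones
```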