In the section Access arrays in memory order, along columns, the final sentence reads:
(Of course, if you just do
f.(x) then it is as fast as
fdot(x) in this example, but in many contexts it is more convenient to just sprinkle some dots in your expressions rather than defining a separate function for each vectorized operation.)
I find it a bit confusing. Given the content of the section, it seems to me that the above sentence wants to highlight that
- using the dot notation is faster in the given example, but in general it will not be the case,
- even though in general the dot notation is not the most performant approach, in several contexts it is more convenient.
Based on my guess, I proposed a PR, which was rejected on the grounds that my version deviates from the intended message. However, I still don’t understand what the intended message actually is.
The last comment on the PR says that the dot notation shouldn’t be slower, but the quoted sentence which I tried to rephrase, IMHO, suggests the contrary.
It means there is a trade-off between ease of use and speed.
a) If you are in dire need of performance, then you define dotted functions like fdot manually.
b) If performance isn’t that important, then it’s way more intuitive to just append a dot to the function name and have it do the same task as your custom function would.
Yeah, I see the confusion. There are actually three
f calls floating around. The highlighted sentence talks only about the two that are explicitly broadcast, i.e.
fdot(x) instead of just plain
f(x). I think the point is that “adding dots” allows for performance improvements because it helps eliminate intermediate allocations.
I think the general point might be that you can pretty much strive to write code that works for scalars (or just one object) and then
fun.(x) will allow you to easily and efficiently broadcast it over vectors, arrays, and in general just things that allow broadcasting.
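To make the three calls concrete, here is a minimal sketch; the definitions mirror the manual’s example as I recall it, but treat the exact names and coefficients as assumptions:

```julia
# "Vectorized" function: works on arrays only because of the internal dots,
# but each dotted operation allocates its own temporary array.
f(x) = 3x.^2 + 4x + 7x.^3

# Explicitly fused version: @. dots every operation, so the whole
# expression compiles to a single loop with one output allocation.
fdot(x) = @. 3x^2 + 4x + 7x^3

x = rand(10^6)
y1 = f(x)     # several intermediate arrays (x.^2, x.^3, the partial sums, ...)
y2 = fdot(x)  # one fused pass over x
y3 = f.(x)    # broadcasting f elementwise: f runs on scalars, so this also fuses
y1 ≈ y2 ≈ y3  # all three compute the same values
```

The key point is that `f.(x)` applies `f` to one element at a time, so inside each call the dotted operations act on scalars and no intermediate arrays are created; that is why it matches `fdot(x)` in speed.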
Thanks @anon37204545, I agree with your interpretation. I proposed to change the quoted sentence to
Note also that, while
f.(x) is as fast as
fdot(x) in the above example, in general defining a separate function will lead to improved performance. Nevertheless, in many contexts it is more convenient to just sprinkle some dots in your expressions rather than defining a separate function for each vectorized operation.
@tbeason has it right here — the “defining a separate function” is talking about defining
f itself (not
fdot), but boy can I see how this is confusing. It’s defining the functions as a way of demonstrating the performance of:
3x.^2 + 4x + 7x.^3 vs.
@. 3x^2 + 4x + 7x^3… but then since it has
f already defined, it notes that it could simply be
f.(x). The clarification is simply saying that you needn’t define an
f just to get this performance if you happen to have a polynomial expression like one of the above in the body of a more complicated function.
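That reading can be sketched directly, with no helper function at all (the variable names here are just for illustration):

```julia
x = rand(10^6)

# Naive vectorized expression: each dotted operation allocates
# its own temporary array before the sums are formed.
y_naive = 3x.^2 + 4x + 7x.^3

# Same expression under @. : every operation is dotted and fused
# into one loop with a single output allocation, no f or fdot needed.
y_fused = @. 3x^2 + 4x + 7x^3

y_naive ≈ y_fused  # identical results, different allocation behavior
```

So the fused form is available wherever the polynomial expression appears inline, e.g. in the body of a larger function, without defining a separate `f` first.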
So it’s in dire need of a change, but the clarification should go the other way.
I should be clear: thank you for opening the PR! This is how we find things like this and make them better.
I am not sure about this. I think the section conveys the message that the dot notation is convenient, and that broadcasting in general will be as fast as any alternative, and frequently faster than the naive approach of chaining vectorized functions, because it avoids temporary allocations. This is the whole point of the example.