Difference between the dot operator and the plain operator?

What is the difference between these two expressions?
5 + 3 -----> 8
5 .+ 3 -----> 8
Is there any benefit to putting . before an operator?

The . is of no use for simple scalar numbers; it is (fusing) broadcasting, i.e. it applies the function/operator element-wise.

julia> [2,4] + 4
ERROR: MethodError: no method matching +(::Array{Int64,1}, ::Int64)
For element-wise addition, use broadcasting with dot syntax: array .+ scalar
[...]

julia> [2,4] .+ 4
2-element Array{Int64,1}:
 6
 8
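The "fusing" part means that a chain of dotted calls is compiled into a single element-wise loop with no intermediate arrays. A minimal sketch (plain Base Julia; the @. macro is just shorthand that adds the dots for you):

```julia
x = [1.0, 2.0, 3.0]
y = similar(x)

# All the dots fuse into one loop: no temporary array for 2 .* x
y .= 2 .* x .+ 1

# @. adds a dot to every call and assignment in the expression
@. y = 2x + 1   # same as the line above
```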

and it works 3 ways:

julia> [2, 4] .+ 4
2-element Vector{Int64}:
 6
 8

julia> 4 .+ [2, 4]
2-element Vector{Int64}:
 6
 8

julia> [2, 4] .+ [1, 2] # note these are of the same length
2-element Vector{Int64}:
 3
 6
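Broadcasting also extends mismatched shapes along singleton dimensions, not just scalars against arrays; for example (not from the examples above), a column vector plus a row vector broadcasts out to a matrix:

```julia
# [1, 2] is a 2-element column, [10 20] is a 1×2 row;
# broadcasting expands both to 2×2
[1, 2] .+ [10 20]
# 2×2 Matrix{Int64}:
#  11  21
#  12  22
```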

I don’t mean to hijack the post, but a related question came up in my coding yesterday. Is int1 / int2 .* vec1 any faster than int1 .* vec1 ./ int2? The latter looks more like the book equation I am trying to represent, but the former seemingly requires fewer operations.


It is worth noting that 3 .+ 6 works, as do 3 .+ [1, 2] and [1, 2] .+ 3, because numbers are zero-dimensional containers of their own values. Other data structures will not behave the same way without workarounds, and this behavior of numbers can lead to subtle bugs:

for x in 5
    # Will not error; it just loops once, with x bound to the number five.
    # The user wanted 1:5 but forgot the first part.
end
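The "zero-dimensional container" behavior can be probed directly in the REPL; a few illustrative checks (plain Base Julia, not from the post above):

```julia
# Numbers support container-like operations on themselves
@assert length(5) == 1
@assert size(5) == ()    # zero-dimensional
@assert 5[1] == 5        # indexing a number returns the number itself
@assert first(5) == 5    # iterating a number yields it exactly once
```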

Does the compiler optimize to the first method before running or is the operation just cheap enough that it doesn’t matter?

I was also wondering if
int1 ./ int2 .* vec1
could in some cases be slower than
int1 / int2 .* vec1
if the compiler decided to do the operations from right to left for some reason? The standard operator could maybe force the operations to be done in the right order? Anyone know how these expressions are parsed under the hood?

The compiler will not decide to do these operations from right to left: / and .* sit at the same precedence level and associate left to right, so the grouping is fixed by the parser.
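You can confirm the grouping without running anything by inspecting the parse tree; a quick sketch using Meta.parse with hypothetical names a, b, v:

```julia
# The parser groups left to right: a / b .* v is (a / b) .* v
ex = Meta.parse("a / b .* v")
@assert ex == :((a / b) .* v)
```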


My reading of the order of operations is that the first one divides int1 by int2, then multiplies the result by each element of vec1. The second would multiply each element of vec1 by int1, then divide each element of the result by int2, so the first should be faster:

julia> using BenchmarkTools

julia> v = rand(Int, 100_000)

julia> @btime 5 / 3 .* $v
  112.892 μs (2 allocations: 781.33 KiB)

julia> @btime 5 .* $v ./ 3
  280.847 μs (2 allocations: 781.33 KiB)

Additionally, the results are not exactly the same:

julia> v = rand(Float64, 10)
10-element Array{Float64,1}:
 0.6822666767203107
 0.8621218010031695
 0.08488887993041017
 0.024156214950365573
 0.6498916306553097
 0.06862022671341128
 0.33611709696877967
 0.29794115597733617
 0.023073589261145555
 0.4549859329862016

julia> (5.0 / 3.0 .* v) .- (5.0 .* v ./ 3.0)
10-element Array{Float64,1}:
 0.0
 0.0
 0.0
 6.938893903907228e-18
 0.0
 0.0
 0.0
 0.0
 0.0
 0.0

I tend to use parentheses for this (to ensure performance and reduce needless headscratching):

(int1 / int2) .* vec1

Thanks for the concrete performance metrics. (I get confused about the proper way to do those.) I am surprised that the results differ.

Okay, that probably is the best approach. I suppose my main question was, “Is it worthwhile to rearrange an equation for performance, or does Julia fix it for you at compile time?” @pixel27’s timing results prove that rearranging does make a difference.

Yes, and @pixel27, nicely done.
If I understood you correctly, the proper thing to do was to use parentheses to make clear what you wanted to apply to what.

Starting off in Floats keeps the differences minor. If the conversion from Ints to Floats happens at different times, you get much bigger differences…

julia> v = rand(Int, 10)
10-element Array{Int64,1}:
 -2203814620830076632
 -2877705152597182017
  6734421012006017801
 -1108821341357642966
  2738129021123542017
  5056953375491690984
  -283354281385024843
  1980335854149908150
  7441289658945453492
  8017928522033622324

julia> (5 / 3 .* v) .- (5 .* v ./ 3)
10-element Array{Float64,1}:
   -6.148914691236518e18
   -6.148914691236517e18
    1.2297829382473036e19
 -256.0
    6.148914691236518e18
    6.148914691236517e18
  -64.0
    6.148914691236518e18
    1.2297829382473034e19
    1.2297829382473034e19
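The huge discrepancies come from Int64 overflow: in 5 .* v ./ 3, the product 5 * x is computed in Int64 arithmetic and wraps around before anything is converted to Float64, whereas 5 / 3 .* v promotes to Float64 immediately. A sketch using one of the values above:

```julia
x = 6734421012006017801   # one of the Int64 elements of v above

5 * x        # Int64 multiply wraps around to a negative number
5.0 / 3 * x  # Float64 from the start: no overflow, only rounding
```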

Order of operations matters because of the limited precision of floating point. Values get truncated to fit the size. Pretend you can represent decimal values to one place after the decimal and you evaluate the expression 1/3 * 3 and also 1 * 3 / 3. Because * and / are evaluated left to right, in the first, 1/3 would be evaluated first and represented as 0.3, then multiplying by 3 gives 0.9. In the second 1 * 3 gives 3.0, and dividing that by 3 gives 1.0.
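The same effect is easy to reproduce in real Float64 arithmetic, where the grouping changes the rounded result (a small illustration, not from the post above):

```julia
# Same mathematical sum, different rounding depending on grouping
a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)   # 0.6
a == b                  # false
```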
