Using a lazy vector instead of a function

Hi everyone,
recently I stumbled across a nice benchmark comparing different languages and had a look at the Julia implementation:

Now in general, I understand why there is a use case for lazy arrays, but is this really one?
As I understand it, this code should essentially be lowered to the following:

function f(rounds)
    pi = 1.0

    @simd for i in 2:(rounds + 2)
        x = (-1)^iseven(i)
        pi += x / (2 * i - 1)
    end

    return pi * 4
end

This vector was introduced in a commit on purpose so there is probably a reason for it.
Is @simd doing more magic if it sees a vector instead?
I would be interested if there can be a performance gain by using a lazy array instead of essentially calling a function.

I do not know about lazy stuff, but I am pretty sure that @simd or @turbo would largely prefer the loop written as

function f2(rounds)
    pi = 1.0

    ub = floor(Int, rounds / 2) + 1

    @simd for i in 1:ub
        pi += -2 / (16i * i - 1)
    end

    if iseven(rounds)
        pi -= 1 / (4ub + 3)
    end

    return pi * 4
end

Here I took out the factor of 4 and dealt with the parity of rounds separately. Fewer operations inside the loop is always better: here there is only one addition, two multiplications, and one division, while in your two versions there was also value fetching and/or powers.

If you are allowed to break the API a little, then I would add one to ub and remove the if block completely.
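For what it's worth, the pairing and the even-rounds correction can be checked numerically. This is a quick sketch in Python (my own check, not part of the benchmark), with the unpaired last term handled explicitly:

```python
from math import isclose

def f(rounds):
    # Direct alternating series: 4 * (1 - 1/3 + 1/5 - ...),
    # with terms i = 2 .. rounds + 2 as in the original loop.
    total = 1.0
    for i in range(2, rounds + 3):
        total += (-1.0 if i % 2 == 0 else 1.0) / (2 * i - 1)
    return total * 4

def f2(rounds):
    # Paired form: terms i = 2k and i = 2k + 1 combine to
    # -1/(4k - 1) + 1/(4k + 1) = -2 / (16 k^2 - 1).
    total = 1.0
    for k in range(1, (rounds + 1) // 2 + 1):
        total += -2.0 / (16 * k * k - 1)
    if rounds % 2 == 0:
        # Even rounds: the last term i = rounds + 2 has no partner.
        total += -1.0 / (2 * rounds + 3)
    return total * 4

assert all(isclose(f(r), f2(r), rel_tol=1e-12) for r in range(1, 50))
```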

Edit: I might have the signs wrong, I’m on my phone :slight_smile:

There is no difference, in fact the version here came before the one with the lazy array. The only question was whether computing x inside the loop was legal under the implicit rules of the game; IIRC no other solutions did this, all updated x = -1.0 * x in the loop, but the R solution did pre-compute an array of x values. This Julia version was 2x faster than C etc, because it allows for SIMD.

Afterwards, many others were updated to compute x inside the loop, catching up with Julia again. The Julia version could be updated too, to exactly this code.

And some were updated to re-order the sum. In
this PR @giordano proposes just f(n) = sum(i -> 4/(4i-2n-3), 1:n).

So far the prize for the most devilish answer (not mine!) goes to this even faster version:

function f(rounds)
    рi = 1.0
    @simd for i in 2:(rounds + 2)
        x = (-1)^iseven(i)
        рi += x / (2 * i - 1)
    end
    рi *= 4
    return float(pi)
end

@lrnv this seems to give the wrong answer, f(10^6) - pi ≈ 1.367f0. And is also slower.


Yes, I saw, my bad. I replaced it with a correct answer, which is indeed much slower.

For the curious, this is the benchmark:


Fantastic, not only does it run faster than all others, but it also has O(1) complexity and is by far more accurate than the others even for a small number of rounds. Truly, a good algorithm can make all the difference :stuck_out_tongue: .

Jokes aside, after trying things out a bit, I think a faster way is not to rely on iseven:

function f(rounds)
    pi = 1.0

    @simd for i in 2:2:(rounds + 2)
        pi += -1.0 / (2 * i - 1)
        pi += 1.0 / (2 * i + 1)
    end

    return pi * 4
end

But I presume this is against the rules, since I am doing some reordering and don't end up with the (numerically) same result. Anyway, it was fun :slight_smile:


On my machine, the runtime is exactly the same. Note also that @simd IS reordering stuff anyway.

It isn’t like there are well-defined rules, and the maintainer of the repo has been pretty open to manual loop unrolling so far.

Interesting. I’ve only tried it on my laptop (for which SIMD doesn’t seem to be so great, admittedly) and here it’s quite a bit faster.

This might be sorta not fair, but combining two divisions into one cuts time in half.

function f2(rounds)
    p = 1.0
    @simd for i in 2:2:(rounds + 2)
        p += -2.0 / (4 * i^2 - 1)
    end
    return p * 4
end

Is it possible for Julia to understand this transformation natively?
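Whether or not the compiler can be taught this rewrite, the contraction itself is exact. A quick sanity check with exact rational arithmetic (a Python sketch, just to confirm the algebra):

```python
from fractions import Fraction

# For even i, the two consecutive Leibniz terms at denominators
# 2i - 1 and 2i + 1 combine exactly:
#   -1/(2i - 1) + 1/(2i + 1) == -2 / (4 i^2 - 1)
for i in range(2, 200, 2):
    lhs = -Fraction(1, 2 * i - 1) + Fraction(1, 2 * i + 1)
    rhs = Fraction(-2, 4 * i * i - 1)
    assert lhs == rhs
```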


With one fewer multiplication per iteration:

function f3(rounds)
    p = 2.0
    @simd for i in 2:2:(rounds + 2)
        p += 1 / (0.25 - i * i)
    end
    return 2p
end

Although it’s negligible compared to the division cost indeed.

EDIT: @simd seems to be on by default, so that:

f4(n) = 4-2*sum(i -> 1/(i*i-0.25), 2:2:(n+2))

is actually enough…


This is pretty beautiful, especially when you compare this nice one-liner to the equally fast AVX C++ code, which looks quite cumbersome :slight_smile:
It would be quite cool if this were the Julia code in the benchmark, I think. While you can optimize the hell out of your code if you need performance, it is especially cool that most of the time you already hit the mark by simply writing the code you might think of first.

Yeah, except telling people that “this is the code that we think of first” is clearly a lie in this case :wink:

Also there might be a little correctness problem. Contracting two terms for efficiency needs a correction on even or odd inputs (but it is an O(1) correction).


julia> [ f.(1:10) f4.(1:10)]
10×2 Matrix{Float64}:
 3.46667  3.46667
 2.89524  3.33968
 3.33968  3.33968
 2.97605  3.28374
 3.28374  3.28374
 3.01707  3.25237
 3.25237  3.25237
 3.04184  3.23232
 3.23232  3.23232
 3.0584   3.2184

And thus the value is the same when n is even, but is the next partial sum when n is odd, so we are in fact more precise without the correction. It is not that we omit one iteration; we do one more.

If combining 2 terms leads to a big saving… why not keep going?

using Underscores

g(n,t=1) = @_ 4*sum(ifelse(isodd(_),-1,1)*(1/(2*_*t+1)), 0:(n+1))
g2(n,t=1) = 4*(@_ sum((2/(4*(_*t+1)^2-1)), 0:2:(n+1)))
g4(n,t=1) = 4*(sum(x -> (b = (x+2)*t; a = 4*b^2; 4*((a+3)/((a+3)^2-(8*b)^2))), 0:4:(n+1)))
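As a side note, g4 is just the pairwise contraction applied once more, to g2's terms this time: with b = x + 2 and a = 4b^2, two adjacent g2 terms merge into the (a + 3) expression. The algebra can be confirmed exactly (a Python sketch with rationals, my own verification):

```python
from fractions import Fraction

# g2's adjacent terms sit at denominators 4(b - 1)^2 - 1 and 4(b + 1)^2 - 1,
# where b = x + 2 runs over 2, 6, 10, ...; with a = 4 b^2 they combine as
#   2/(4(b-1)^2 - 1) + 2/(4(b+1)^2 - 1) == 4 (a + 3) / ((a + 3)^2 - (8 b)^2)
for b in range(2, 100, 4):
    a = 4 * b * b
    lhs = Fraction(2, 4 * (b - 1) ** 2 - 1) + Fraction(2, 4 * (b + 1) ** 2 - 1)
    rhs = Fraction(4 * (a + 3), (a + 3) ** 2 - (8 * b) ** 2)
    assert lhs == rhs
```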

we have:

julia> g(6)

julia> g(6) ≈ g2(6) ≈ g4(6)

But the real Julia fun begins with the second parameter:

julia> @_ map(g(_,1//1),1:6)
6-element Vector{Rational{Int64}}:

julia> @_ map(g2(_,1//1),1:6)
6-element Vector{Rational{Int64}}:

and also:

julia> using Symbolics

julia> @variables k
1-element Vector{Num}:

julia> g(4,k)
4.0 + -4 / (1 + 2k) + 4 / (1 + 4k) + -4 / (1 + 6k) + 4 / (1 + 8k) + -4 / (1 + 10k)

julia> g2(6,k)
2.6666666666666665 + 8 / (4((1 + 2k)^2) - 1) + 8 / (4((1 + 4k)^2) - 1) + 8 / (4((1 + 6k)^2) - 1)

julia> g4(6,k)
2.895238095238095 + (48 + 64((2 + 4k)^2)) / ((3 + 4((2 + 4k)^2))^2 - ((16 + 32k)^2))

and g4 is faster:

julia> begin
       @btime g(1000)
       @btime g2(1000)
       @btime g4(1000)
       end
  1.287 μs (0 allocations: 0 bytes)
  989.600 ns (0 allocations: 0 bytes)
  765.291 ns (0 allocations: 0 bytes)

Again, removing a few multiplications gives a ~20% speedup (g5), and removing the t gives ~20% more (g6):

using Underscores

g(n,t=1) = @_ 4*sum(ifelse(isodd(_),-1,1)*(1/(2*_*t+1)), 0:(n+1))
g2(n,t=1) = 4*(@_ sum((2/(4*(_*t+1)^2-1)), 0:2:(n+1)))
g4(n,t=1) = 4*(sum(x -> (b = (x+2)*t; a = 4*b^2; 4*((a+3)/((a+3)^2-(8*b)^2))), 0:4:(n+1)))
g5(n,t=1) = 16*(sum(x -> (b = 4((x+2)*t)^2; a = b+3; a/(a^2-16b)), 0:4:(n+1)))

g6(n) = 16*(sum(x -> (b = 4(x+2)^2; a = b+3; a/(a^2-16b)), 0:4:(n+1)))

@assert g(6) ≈ g2(6) ≈ g4(6) ≈ g5(6) ≈ g6(6)

using BenchmarkTools
@btime g(10000)
@btime g2(10000)
@btime g4(10000)
@btime g5(10000)
@btime g6(10000)

outputs:

  4.429 μs (0 allocations: 0 bytes)
  2.444 μs (0 allocations: 0 bytes)
  1.930 μs (0 allocations: 0 bytes)
  1.690 μs (0 allocations: 0 bytes)
  1.360 μs (0 allocations: 0 bytes)

These are fast but lose accuracy at large n:

julia> g2(10^6) - pi  # similar to original

julia> g6(10^6) - pi  # similar for g4, g5. Much worse at 10^9

The reason for the accuracy loss is Int64 overflow (there is no numerical-methods reason for it).
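To illustrate: in g6, each term computes a = 4(x + 2)^2 + 3 and then a^2, and a^2 exceeds typemax(Int64) once x reaches a few tens of thousands, so at n = 10^6 most iterations wrap around. A rough Python sketch (Python integers are arbitrary-precision, so Julia's wrapping Int64 arithmetic is emulated by hand):

```python
def as_int64(v):
    # Emulate Julia's wrapping Int64 arithmetic (two's complement).
    v &= (1 << 64) - 1
    return v - (1 << 64) if v >= (1 << 63) else v

x = 100_000                  # well inside the 0:4:(n + 1) range for n = 10^6
a = 4 * (x + 2) ** 2 + 3     # the a = b + 3 from g6, computed exactly
print(a * a > 2**63 - 1)     # True: the exact square exceeds typemax(Int64)
print(as_int64(a * a) < 0)   # True: what Julia gets has even wrapped to negative
```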

And with:

g6f(n) = 16*(sum(x -> (b = 4(x+2.0)^2; a = b+3; a/(a^2-16b)), 0:4:(n+1)))

(note +2.0 to get Float64)

julia> g6f(10^6) - pi

julia> g(10^6) - pi

and it is faster in Float64,

julia> @btime g6f(10000)
  3.881 μs (0 allocations: 0 bytes)

julia> @btime g6(10000)
  6.365 μs (0 allocations: 0 bytes)

That must be it. For me the fix slows it down again, unfortunately (M1 Mac):

julia> @btime f_in(10^6) - pi  # top message
  335.333 μs (0 allocations: 0 bytes)

julia> @btime f4(10^6) - pi  # from above, g2 similar
  230.916 μs (0 allocations: 0 bytes)

julia> @btime g6(10^6) - pi  # with overflow
  134.291 μs (0 allocations: 0 bytes)

julia> @btime g6f(10^6) - pi  # version with x+2.0
  194.208 μs (0 allocations: 0 bytes)

I believe the StepRange 0:4:(n+1) may cause issues for static compilation (due to error cases, zero step?).

Edit: solving the StepRange also speeds this up for me:

julia> @btime g7(10^6) - pi   # version from below
  185.083 μs (0 allocations: 0 bytes)

julia> g8(n) = 16*(sum(x -> (b = float(8x-4)^2; a = b+3; a/(a^2-16b)), 1:n÷4));

julia> @btime g8(10^6) - pi  # with UnitRange not StepRange
  129.416 μs (0 allocations: 0 bytes)