# Differences in collecting algebraic terms or not

Why is there a difference between

```julia
function sum_of_squares(n::Int)
    div(n * (n + 1) * (2n + 1), 6)
end

function square_of_sum(n::Int)
    div(n * (n + 1), 2)^2
end

function difference(n::Int)
    square_of_sum(n) - sum_of_squares(n)
end
```

And

```julia
function difference2(n::Int)
    (1.5n^4 + n^3 - 1.5n^2 - n) / 6
end
```

When I run

```julia
difference(2)
difference2(2)

const num = 10_000
@benchmark difference(num)
@benchmark difference2(num)
```

It’s not a massive gap, but I don’t understand the reason for it. Perhaps one of you could help me out.

Not directly related to performance, but it seems that the two functions don’t compute the same thing, I get

```julia
difference2(10_000) - difference(10_000) # = 3333.5
```

I guess you have some typo somewhere?

2 Likes

Oops, you’re right, that’s a bit embarrassing. It’s supposed to be `-n` in the `difference2` function.

I still don’t get the same benchmark outputs, though. Edited now.

1 Like

`difference2` uses floating-point coefficients, so the answer is converted to a floating-point number, whereas `difference` uses integer coefficients everywhere and is calculated as an exact integer.
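To make this concrete, here is a minimal check (re-using the definitions from the original post) showing that the two functions return different types even when the values agree:

```julia
# Definitions copied from the thread
sum_of_squares(n::Int) = div(n * (n + 1) * (2n + 1), 6)
square_of_sum(n::Int) = div(n * (n + 1), 2)^2
difference(n::Int) = square_of_sum(n) - sum_of_squares(n)
difference2(n::Int) = (1.5n^4 + n^3 - 1.5n^2 - n) / 6

# Same value, different types: Int vs Float64
println(typeof(difference(10_000)))   # Int64 (on 64-bit systems)
println(typeof(difference2(10_000)))  # Float64
```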

3 Likes

As a side remark, I also noticed that the timings in your benchmark are quite small:

```julia
julia> @btime difference(10_000)
  0.026 ns (0 allocations: 0 bytes)
2500166641665000

julia> @btime difference2(10_000)
  1.496 ns (0 allocations: 0 bytes)
2.500166641665e15
```

As I recently learned from the excellent notes by Chris Rackauckas, this might mean that the compiler is doing constant propagation, basically replacing the function call with its result and not doing the computation at all.
Using the same trick as in the “Note on benchmarking” section of those notes, we can see that this is indeed the case, as far as I understand (true for both functions):

```julia
julia> cheat() = difference(10_000)
cheat (generic function with 1 method)

julia> @code_llvm cheat()

;  @ REPL[8]:1 within `cheat'
define i64 @julia_cheat_12467() {
top:
  ret i64 2500166641665000
}
```

Just to further exemplify this (again I think this is what is happening, I’m not an expert):

```julia
julia> x = 100;

julia> @btime difference(x);
  6.789 ns (0 allocations: 0 bytes)

julia> @btime difference2(x);
  8.567 ns (0 allocations: 0 bytes)

julia> @btime difference(100);
  0.026 ns (0 allocations: 0 bytes)
```

But as was already written while I was composing this answer, it seems that the extra time difference is due to the floats vs. ints in the two functions.

2 Likes

Floating-point calculations on integers, and on integers multiplied by powers of 2 (e.g. `1.5 = 3/2`), are actually exact until you exceed the 53 bits of significand precision that `Float64` provides (52 bits stored explicitly, plus one implicit bit), at which point the accuracy degrades gracefully (making small errors in the least-significant bits). That happens around `n = 7403` for this calculation.
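The 53-bit limit is easy to see directly: integers are represented exactly in `Float64` up to 2^53, and above that the spacing between representable values exceeds 1:

```julia
# In [2^52, 2^53) the spacing between Float64 values is 1,
# so 2^52 + 1 is still represented exactly
println(2.0^52 + 1 == 2.0^52)  # false: still exact

# At 2^53 the spacing becomes 2, so 2^53 + 1 rounds back down to 2^53
println(2.0^53 + 1 == 2.0^53)  # true: one ulp of error
```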

In contrast, working with `Int` means that you will get catastrophic failure once you overflow 64 bits, which happens around `n == 80000` for this calculation:

```julia
julia> square_of_sum(80000)
-8206488072109551616

julia> difference(80000)
-8206658741976231616
```

Broadening your signatures from `::Int` to `::Integer`, so that they accept any integer type, we can still get the exact answer (at considerably more expense) with `BigInt`:
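For concreteness, the broadened methods might look like this (a sketch, re-using the names from the thread):

```julia
# Accept any Integer subtype (Int, Int128, BigInt, ...) instead of only Int
sum_of_squares(n::Integer) = div(n * (n + 1) * (2n + 1), 6)
square_of_sum(n::Integer) = div(n * (n + 1), 2)^2
difference(n::Integer) = square_of_sum(n) - sum_of_squares(n)

# BigInt arithmetic never overflows, so the result is exact
println(difference(big(80000)))  # 10240085331733320000
```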

```julia
julia> difference(big(80000))
10240085331733320000
```

but doing this calculation in floating-point with `difference(n::AbstractFloat) = (1.5n^4 + n^3 - 1.5n^2 - n) / 6` gives almost the right answer (to ≈16 significant digits) much more quickly:

```julia
julia> difference(80000.0)
1.024008533173332e19

julia> (difference(80000.0) - difference(big(80000))) / difference(big(80000))
-3.124973959038622594888003211964321226280055103023663464806989420192726958281252e-17
```

5 Likes