Broadly speaking, compilers do reorganize code and remove redundancies (LLVM handles most of this for both Julia and Numba), but they avoid transformations that would noticeably change results. For a simpler term-cancelling example:
julia> f(a::Number, b::Number) = a + b - a
f (generic function with 1 method)
Elementary arithmetic over the common number sets taught us that a + b - a cancels to b unconditionally. So what actually happens in Julia, or rather in the LLVM optimizations?
julia> @code_llvm f(1, 2)
; Function Signature: f(Int64, Int64)
; @ REPL[38]:1 within `f`
; Function Attrs: uwtable
define i64 @julia_f_9606(i64 signext %"a::Int64", i64 signext %"b::Int64") #0 {
top:
; ┌ @ int.jl:86 within `-`
ret i64 %"b::Int64"
; └
}
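The integer cancellation is safe for LLVM to perform because Int64 arithmetic wraps modulo 2^64, so a + b - a lands back on b exactly even when the intermediate sum overflows. A quick sketch (reusing the f defined above; the specific values are just for illustration):

```julia
f(a::Number, b::Number) = a + b - a

# Int64 addition wraps modulo 2^64, so the overflow in a + b is
# undone exactly by the subtraction: cancellation never changes results.
a = typemax(Int64)
b = 2
println(a + b)         # overflows and wraps to a negative number
println(f(a, b) == b)  # true: the wraparound cancels exactly
```

This is why LLVM can emit the single ret unconditionally for integers: modular arithmetic makes the identity hold for every input, overflow included.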
Clear evidence of automatic cancelling. Let’s try another primitive type:
julia> @code_llvm f(1.0, 2.0)
; Function Signature: f(Float64, Float64)
; @ REPL[70]:1 within `f`
; Function Attrs: uwtable
define double @julia_f_9688(double %"a::Float64", double %"b::Float64") #0 {
top:
; ┌ @ float.jl:491 within `+`
%0 = fadd double %"a::Float64", %"b::Float64"
; └
; ┌ @ float.jl:492 within `-`
%1 = fsub double %0, %"a::Float64"
ret double %1
; └
}
No cancelling there. Floating-point addition is not associative because of rounding, so simplifying the expression would actually change results. Even when the change would be better for us, the compiler is too objective to know that:
julia> 1e16 + 1.0 - 1e16 # uh oh, floating point precision is too low for +1.0
0.0
So in general, the compiler alone cannot optimize arbitrary algorithms to the extreme, because only we can decide whether an optimization's change in results is acceptable, even for something as simple as cancelling terms.
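Julia does give us a way to express that decision: the @fastmath macro marks the enclosed expression as fair game for value-changing floating-point rewrites, after which LLVM is typically willing to cancel the terms. A minimal sketch of opting in (the function name g is arbitrary):

```julia
# @fastmath relaxes strict IEEE evaluation order for this expression,
# allowing LLVM to reassociate and cancel a + b - a down to b.
g(a, b) = @fastmath a + b - a

g(1e16, 1.0)       # 1.0 if LLVM cancels the terms; 0.0 under strict IEEE order
1e16 + 1.0 - 1e16  # 0.0: the 1.0 is lost to rounding at this magnitude
```

Inspecting @code_llvm g(1.0, 2.0) should then show a single ret like the integer version, though whether the rewrite fires depends on the optimization level. The trade-off is exactly the point above: we, not the compiler, vouch that the changed results are acceptable.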