An assert statement makes my code run faster

For the case below I get a faster result when I include an assert statement in my code. Below are both functions, along with 5 timed runs of each case (excluding the first run, which includes compilation time).

function with_assert(f::Integer)
    total = 0
    for i in 1:f
        @assert total >= 0 "This should not happen"
        total += total >= 0
    end
    return total
end

function with_nothing(f::Integer)
    total = 0
    for i in 1:f
        total += total >= 0
    end
    return total
end

number = 1000000
@time with_assert(number)
# 0.000235 seconds (5 allocations: 176 bytes)
# 0.000234 seconds (5 allocations: 176 bytes)
# 0.000234 seconds (5 allocations: 176 bytes)
# 0.000234 seconds (5 allocations: 176 bytes)
# 0.000484 seconds (5 allocations: 176 bytes)

@time with_nothing(number)
# 0.000722 seconds (5 allocations: 176 bytes)
# 0.000724 seconds (5 allocations: 176 bytes)
# 0.000708 seconds (5 allocations: 176 bytes)
# 0.000735 seconds (5 allocations: 176 bytes)
# 0.000699 seconds (5 allocations: 176 bytes)
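
For more stable measurements than repeated @time calls, something like BenchmarkTools could be used instead; a minimal sketch, assuming the BenchmarkTools.jl package is installed:

using BenchmarkTools

number = 1000000
@btime with_assert($number)   # interpolating with $ keeps global-variable access out of the measurement
@btime with_nothing($number)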

This is a contrived example, but just out of interest, is there any reason this could be happening? I would expect the assert case to be marginally slower, but instead it is about 3 times faster.

If you look at the native code for with_assert, you’ll notice that there aren’t any instructions related to the assertion. The compiler figured out that since the initial value of total is 0 and only non-negative values are added to it (with no possibility of overflow), the exception-throwing code can be removed entirely. In fact, the compiler figured out that total >= 0 is always true, so the immediate value 1 is added on every loop iteration.

In generating the code for with_nothing, the compiler was not able to figure this last bit out. So by adding the assertion you basically gave the compiler a hint that allowed it to generate more efficient code in this particular case.
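
You can check this yourself; a quick sketch (exact output depends on Julia version and CPU):

using InteractiveUtils  # available automatically in the REPL; needed explicitly in scripts

# The AssertionError branch should be absent, and the loop body should just
# add the constant 1, since total >= 0 was proven to always hold.
@code_llvm with_assert(1000000)
@code_native with_assert(1000000)

# For comparison, with_nothing still computes the comparison inside the loop.
@code_llvm with_nothing(1000000)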

Ideally, this shouldn’t be necessary. It might be interesting to investigate why this optimization doesn’t happen without the assertion ‘hint’. Even more ideally, the compiler would recognize that both of these are just inefficient ways of computing max(f, 0).
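
That is, something along the lines of the following (a hypothetical direct version, not from the original post):

# What both loops effectively compute for any integer input
direct(f::Integer) = max(f, zero(f))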


I am not an expert on compiler technology, and I am always amazed at what LLVM can optimize these days. But I am wondering whether it is reasonable to expect the compiler to become an algebra/proof engine and to treat every mathematically valid transformation it missed as something to improve upon.

My understanding is that each of these would increase compilation time.

For example, the return value of

function f(a::Integer, b::Integer, c::Integer, n::Integer)
    @assert a > 0 && b > 0 && c > 0
    @assert n > 2
    # By Fermat's Last Theorem, no positive integers satisfy this for n > 2
    a^n + b^n == c^n
end

can be optimized away, since it is always false (Wiles 1995). Is it worth it though?


Nor am I (pretty far from it).

Yeah, it’s a trade-off of course. But there is precedent for similar transformations. For example, with

function f(n)
    ret = 0
    for i in 1:n
        ret += i
    end
    return ret
end

LLVM essentially replaces the loop with the formula n * (n + 1) ÷ 2 (runtime is constant). For the Fermat example, it wouldn’t make sense for the compiler to prove it starting from axioms, but someone could go and write an LLVM pass that recognizes this pattern and applies the theorem. However, it’s probably not very likely that this optimization would be applicable often enough to warrant spending time on it.
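
This one is easy to check; a quick sketch, assuming f as defined above:

using InteractiveUtils

# The optimized IR should contain no loop, just the closed-form arithmetic.
@code_llvm f(1000000)

# After a warm-up call to compile, the runtime stays roughly flat as n grows:
f(10)
@time f(10^6)
@time f(10^9)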


Perhaps I was not clear: what I am concerned about here is the extra compilation time (and, to a certain extent, the added code complexity in the compiler).

No, that was clear to me.
