I have a package failing tests (with julia 0.6 and nightly on Travis and AppVeyor) due to the fact that Pkg.test runs julia with the --check-bounds=yes flag. If I manually run the tests from the command line with --check-bounds=no (or without the flag), they pass.

Basically, I compute a quantity, CC, in a for loop, and outside the loop I compute SS = 1 - CC (SS and CC are squared sines and cosines); I then compute another quantity by dividing a number by SS. With the --check-bounds=no flag, CC is close to 1 but not exactly 1, so SS != 0. With the --check-bounds=yes flag, CC rounds to exactly 1.0, so SS == 0, and I get an infinity when I divide by SS.

An easy workaround is to compute SS the same way I compute CC, but the SS = 1 - CC trick saves me some time. I’d like to know why bounds checking should make CC round to exactly 1.0; I thought it only affected the safety of array accesses, not the actual result of a computation.

With bounds checking enabled, the input to the compiler is different, so floating-point operations may be done in a different order, and intermediate results may differ by roundoff. In your example this could happen while summing the terms of CC, or earlier, when the function inputs are computed.
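A minimal illustration (not the original code, values chosen purely to make the effect obvious) of how reordering alone changes a floating-point sum:

```julia
# Adding the same terms in different orders loses different bits:
# the left fold absorbs the small partial sum into the huge term,
# while the right fold cancels the huge terms first.
xs = [0.1, 0.2, 0.3, 1e16, -1e16]

left  = foldl(+, xs)   # strict left-to-right accumulation
right = foldr(+, xs)   # right-to-left accumulation

(left, right)          # the two orders disagree
```

The same mechanism, on a much smaller scale, is enough to move CC across the boundary between “almost 1” and “rounds to exactly 1”.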

I recommend revising your implementation so that division by zero or roundoff-dominated values is avoided, since (a) infinity does not seem to be a valid result of your algorithm for finite inputs, and (b) the roundoff-dominated case will give plausible-looking but unreliable results.
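One way to do that, sketched here in current Julia syntax with variable names assumed from the description above: accumulate SS directly alongside CC instead of deriving it as 1 - CC, so it cannot be destroyed by cancellation.

```julia
# Accumulate CC and SS from the same data rather than SS = 1 - CC.
# sincos returns (sin, cos) of its argument in one call.
function cc_ss(x, w, ω)
    CC = 0.0
    SS = 0.0
    for i in eachindex(x)
        s, c = sincos(ω * x[i])
        CC += w[i] * c^2
        SS += w[i] * s^2
    end
    return CC, SS
end

x = range(0, 10, length = 50)
w = 1 ./ range(0.5, 1.5, length = 50) .^ 2
w = w ./ sum(w)
CC, SS = cc_ss(x, w, 2pi * 2.45)
# SS is now zero only if every sin(ω * x[i]) is zero, not
# whenever CC happens to round to exactly 1.0.
```

This costs one extra multiply-add per iteration, but the division by SS is then safe whenever the data itself is nondegenerate.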

Ok, so bounds checking does affect results, good to know. Thank you.

I just managed to produce an MWE that reproduces the issue:

x = collect(linspace(0, 10))
w = 1 ./ collect(linspace(0.5, 1.5)) .^ 2
w ./= sum(w)
ω = 2pi * 2.45
CC = 0.0
for i in eachindex(x)
    ωt = ω * x[i]
    W = w[i]
    c = cos(ωt)
    CC += W * c * c
end
CC

Without bounds checking, CC is 1.0000000000000002; with bounds checking, it is 1.0.

You mean @test a ≈ b? I know it and already use it extensively in my tests; the point is that I get infinities only when running the tests, and infinities are not approximately equal to anything. It took me a while to understand that the problem was bounds checking, because I was getting different results in the REPL and under Pkg.test.

Oh, I see. Then yes, I strongly second @Ralph_Smith’s suggestion of a “numerical analyst’s fix”. The algorithm shouldn’t be that sensitive, and there are plenty of tricks one can employ to fix it.

When bounds checking is explicitly enabled, the @simd macro stops working, and this macro is used in the sum function. Summing with and without SIMD can give different answers, so perhaps that is what makes the small difference.
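For reference, a sketch of the kind of @simd reduction involved (a simplification for illustration, not the actual Base.sum implementation):

```julia
# With --check-bounds=yes the @inbounds hint is ignored, which can
# prevent the loop from vectorizing, so the terms are accumulated
# in a different order than in the SIMD version.
function simd_sum(v::AbstractVector{Float64})
    s = 0.0
    @simd for i in eachindex(v)
        @inbounds s += v[i]
    end
    return s
end

v = collect(0.1:0.1:100.0)
simd_sum(v)   # may differ from foldl(+, v) in the last few bits
```

@simd gives the compiler permission to reassociate the additions (typically into several parallel partial sums), which is exactly the reordering freedom discussed above.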

This is a nasty corner case for the interaction of bounds checking and SIMD optimization. It would be nice if we could make the @simd annotation guarantee a certain consistent reordering of operations, independent of other optimizations, but that’s probably not practical, especially given that @simd can result in different orderings on different architectures.

I don’t know; I think this is fine. If your algorithm can go from a good result to infinity just by reordering a few operations, it’s good that a test points that out, since this instability could easily affect users! If an algorithm is this sensitive and only barely works in the one case it needs to, then it really should run with every check on and with compiler optimizations that can change accuracy turned off. Since floating point will always have these edge cases, I think of the ability to do this as a feature of Julia.