Package test failing due to bounds checking enabled

I have a package failing tests (with julia 0.6 and nightly on Travis and AppVeyor) due to the fact that Pkg.test runs julia with the --check-bounds=yes flag. If I manually run the tests from the command line with --check-bounds=no (or with no such flag) the tests pass.

Basically, I compute a quantity, CC, in a for loop, and outside the loop I compute SS = 1 - CC (SS and CC are weighted sums of squared sines and cosines) and then compute another quantity which is a number divided by SS. With the --check-bounds=no flag CC is something close to 1 but not exactly 1, so SS != 0. Instead, with the --check-bounds=yes flag CC rounds to exactly 1.0, so SS == 0 and then I get an infinity when I divide by SS.

An easy workaround is to compute SS in the same way I compute CC, but the SS = 1 - CC trick saves me some time. I’d like to know why enabling bounds checking should make CC round to exactly 1.0: I thought it would affect only the safety of array accesses, not the actual result of the computation.

For the record, the for loop in the code is wrapped in the @inbounds macro, but removing it doesn’t change anything, irrespective of the value of the --check-bounds flag. If someone wants to look at the code, it’s at https://github.com/giordano/LombScargle.jl/blob/d5fd402426cd0b07d0be991886f6b0d35fa03e07/src/gls.jl#L122

With bounds checking enabled, the input to the compiler is different, so floating-point operations may be done in a different order, and intermediate results may differ by roundoff. In your example this could happen during the summation of the terms for CC, or earlier, when the function inputs are computed.
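Floating-point addition is not associative, so merely reassociating a sum changes the last bits of the result. A toy example, unrelated to your package:

(0.1 + 0.2) + 0.3  # 0.6000000000000001
0.1 + (0.2 + 0.3)  # 0.6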

I recommend revising your implementation so that division by zero or by roundoff-dominated values is avoided, since (a) infinity does not seem to be a valid result of your algorithm for finite inputs, and (b) the roundoff-dominated case will give plausible-looking but unreliable results.

Ok, so bounds checking does affect results, good to know. Thank you.

I just managed to produce an MWE reproducing the issue:

x = collect(linspace(0, 10))               # sample times
w = 1 ./ collect(linspace(0.5, 1.5)) .^ 2  # weights
w ./= sum(w)                               # normalize the weights to sum to 1
ω = 2pi * 2.45                             # angular frequency
CC = 0.0
for i in eachindex(x)
    ωt  = ω*x[i]
    W   = w[i]
    c   = cos(ωt)
    CC += W*c*c                            # weighted sum of squared cosines
end
CC

Without bounds checking CC is 1.0000000000000002, with bounds checking it’s 1.0.
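The two values differ by a single unit in the last place, which, since the weights sum to 1, is exactly the margin between SS == 0 and SS != 0:

eps(1.0)        # 2.220446049250313e-16
nextfloat(1.0)  # 1.0000000000000002, the value obtained without bounds checking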

I simply implemented the algorithm in http://dx.doi.org/10.1051/0004-6361:200811296, see equation (20); the denominators CC and SS are the weighted sums of squared cosines and sines.
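In the paper’s notation, the power computed at each frequency (matching the line of my code quoted below) is

p(ω) = (YC_τ²/CC_τ + YS_τ²/SS_τ) / YY

so a roundoff-induced zero in either CC_τ or SS_τ yields an infinite power.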


See the stuff on approximate testing:

http://docs.julialang.org/en/release-0.4/stdlib/test/#test-framework

You likely want to use that for floating-point checks.
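For example, on 0.6 (a minimal sketch using the values from this thread):

using Base.Test
@test 1.0000000000000002 ≈ 1.0  # passes: ≈ (isapprox) allows a small relative tolerance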

I’m not suggesting a real change of algorithm, just a numerical analyst’s fix like going from

P[n] = (abs2(YC_τ)/CC_τ + abs2(YS_τ)/SS_τ)/YY

to something like

ratc = ifelse(abs(CC_τ) < 2*length(x)*eps(), 0.0, abs2(YC_τ) / CC_τ)
rats = ifelse(abs(SS_τ) < 2*length(x)*eps(), 0.0, abs2(YS_τ) / SS_τ)
P[n] = (ratc + rats)/YY

(based on a quick reading of how you’ve weighted the coefficients, and leaving the type-adjustments to you).

You mean @test a ≈ b? I know about it and already use it extensively for testing; the point is that I get infinities only when running the tests, and they’re not quite close to anything :wink: It took me a while to understand that the problem was bounds checking, since I was getting different results between the REPL and Pkg.test.

Oh I see. Then yeah, I second @Ralph_Smith’s suggestion of a “numerical analyst’s fix”. The algorithm shouldn’t be that sensitive, and there are standard tricks one can employ to fix it.

Thanks for the tip. I eventually went with something simpler:

# nil is a zero of the appropriate floating-point type, defined elsewhere in my code
frac_C = ifelse(iszero(CC_τ), nil, abs2(YC_τ) / CC_τ)
frac_S = ifelse(CC_τ == 1,    nil, abs2(YS_τ) / (1 - CC_τ))
P[n] = (frac_C + frac_S)/YY

because the problem arises only when CC_τ is exactly 1 (or 0).

When bounds checking is explicitly enabled, the @simd macro stops working, and this macro is used in the sum function. Summing with or without SIMD will give different answers, so perhaps that is what makes the small difference.
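A sketch of the effect (hypothetical helper names; whether the last bits actually differ depends on the architecture):

function sum_serial(x)
    s = zero(eltype(x))
    for i in eachindex(x)
        s += x[i]            # strict left-to-right accumulation
    end
    return s
end

function sum_simd(x)
    s = zero(eltype(x))
    @simd for i in eachindex(x)
        @inbounds s += x[i]  # @simd lets the compiler reassociate this reduction
    end
    return s
end

x = rand(10^6)
sum_serial(x) == sum_simd(x)  # may be false: the partial sums round differently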


Uhm, I don’t use sum or an explicit @simd in the loop. Anyway, thanks for the information.

There is a call to sum above the loop though.

Ah, you mean in the definition of w, right.

This is a nasty corner case for the interaction of bounds checking and SIMD optimization. It would be nice if we could make the @simd annotation guarantee a certain consistent reordering of operations, independent of other optimizations, but that’s probably not practical, especially given that @simd can result in different orderings on different architectures.

I don’t know, I think this is fine. If your algorithm can switch from a good result to infinity just by reordering a few operations, it’s good that a test points that out, since this instability could easily affect users! If an algorithm is this sensitive and only barely works for the one case it needs to, then it really should run with every check on and with compiler optimizations that can change accuracy turned off. Since floating point will always have these edge cases, I think of the ability to do this as a feature of Julia.