One of the recurring roadblocks in improving IntervalArithmetic.jl is the lack of support for changing the floating-point rounding mode (R.I.P. `setrounding`).
LLVM supports changing rounding modes (see also: Setting Floating Point Rounding Mode through LLVM intrinsics · Issue #48812 · JuliaLang/julia · GitHub), but, as I understand it, Julia’s optimizer may still violate a user-selected rounding mode.
So changing the rounding mode is currently not really an option for validated numerics and computer-assisted proofs.
At the moment, our strategy is a mix of RoundingEmulator + CRlibm + MPFR. Sadly, the more modern alternative CORE-MATH relies on `fesetround` for rounding up or down, and is thus incompatible with Julia.
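For reference, one leg of that mix still works out of the box: `setrounding` remains supported for `BigFloat`, because MPFR passes the rounding mode as an explicit argument to each operation rather than touching global FPU state. A minimal sketch using only Base:

```julia
# Directed rounding with MPFR-backed BigFloat: no global rounding mode
# is changed, so the compiler caveats above do not apply here.
lo = setrounding(BigFloat, RoundDown) do
    BigFloat(1) / BigFloat(3)
end
hi = setrounding(BigFloat, RoundUp) do
    BigFloat(1) / BigFloat(3)
end

# 1/3 is not exactly representable, so the two results bracket it.
@assert lo < 1//3 < hi
```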
Now,
- Can the Julia compiler’s behavior regarding directed rounding modes be made predictable?
For instance, there is one piece of code in IntervalArithmetic that changes the rounding mode to implement a fast interval matrix multiplication algorithm; the logic of the function is:

```julia
function foo(r::RoundingMode)
    old = getroundingmode()  # pseudocode: query the current rounding mode
    setrounding(r)           # pseudocode: set the requested mode
    try
        # `ccall` to the library OpenBLASConsistentFPCSR_jll
    finally
        setrounding(old)     # restore the old mode, even on error
    end
end
```
Tests seem to indicate that it works as expected, but what guarantees do we really have that Julia respects the rounding mode and does not silently override it during optimization?
- Is it possible to have some sort of mechanism that prevents the compiler from assuming round-to-nearest within a given function?
For instance, some sort of macro `Base.@consistent_fpcsr` that can preface a function, like `foo` above. In a very different context, there is `Base.@assume_effects` that somehow talks to the compiler…
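For comparison, `Base.@assume_effects` (available since Julia 1.8) lets the programmer hand the compiler *extra* assumptions about a function; the hypothetical `Base.@consistent_fpcsr` would do the opposite, retracting the default round-to-nearest assumption. A minimal sketch of the existing macro, adapted from the Julia documentation:

```julia
# `Base.@assume_effects` asserts effects the compiler cannot prove on its
# own; here, that the loop terminates, which unlocks constant-folding.
Base.@assume_effects :terminates_locally function fact(n::Int)
    n < 0 && error("n must be non-negative")
    r = 1
    while n > 1
        r *= n
        n -= 1
    end
    return r
end

fact(5)  # 120
```

A `@consistent_fpcsr`-style annotation would presumably need a new effect bit in the compiler rather than reusing the existing ones, since none of the current effects describe the floating-point control/status register.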