One of the recurring roadblocks in improving IntervalArithmetic.jl is the lack of support for changing the floating-point rounding mode (R.I.P. setrounding).
LLVM supports changing rounding modes (see also: Setting Floating Point Rounding Mode through LLVM intrinsics · Issue #48812 · JuliaLang/julia · GitHub), but, as I understand it, Julia’s optimizer may still violate a user-selected rounding mode.
So changing the rounding mode is currently not a viable option for validated numerics and computer-assisted proofs.
At the moment, our strategy is a mix of RoundingEmulator + CRlibm + MPFR. Sadly, the more modern alternative CORE-MATH relies on fesetround for rounding up or down, and is thus incompatible with Julia.
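For context, the RoundingEmulator-style approach sidesteps the FP control register entirely: it computes the rounding error of a round-to-nearest operation with an error-free transformation and nudges the result by one ulp when needed. A minimal sketch (hypothetical helper names; overflow, NaN, and ±Inf edge cases are deliberately ignored):

```julia
# Sketch of the RoundingEmulator-style idea: directed rounding implemented
# on top of round-to-nearest arithmetic via an error-free transformation
# (Knuth's 2Sum). No FP control register access is needed.
function add_round_up(a::Float64, b::Float64)
    s = a + b                 # round-to-nearest sum
    # 2Sum: recover the exact rounding error of `s`
    a1 = s - b
    b1 = s - a1
    err = (a - a1) + (b - b1)
    return err > 0 ? nextfloat(s) : s
end

function add_round_down(a::Float64, b::Float64)
    s = a + b
    a1 = s - b
    b1 = s - a1
    err = (a - a1) + (b - b1)
    return err < 0 ? prevfloat(s) : s
end
```

The two helpers bracket the exact sum from below and above without ever changing the rounding mode, which is why this route stays compatible with the compiler's round-to-nearest assumption.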
Now, can the Julia compiler's behavior regarding directed rounding modes be made predictable?
For instance, there is one piece of code in IntervalArithmetic that changes the rounding mode to implement a fast interval matrix multiplication algorithm; the logic of the function is:
function foo(r::RoundingMode)
    old = getroundingmode()   # save the current rounding mode
    setrounding(r)
    try
        # `ccall` into the library OpenBLASConsistentFPCSR_jll
    finally
        setrounding(old)      # restore the previous mode, even if the call throws
    end
end
Tests seem to indicate that it works as expected, but what guarantees do we really have that Julia respects the rounding mode and does not silently override it during optimization?
Is it possible to have some sort of mechanism that prevents the compiler from assuming round-to-nearest within a given function?
For instance, some sort of macro Base.@consistent_fpcsr that could preface a function like foo above. In a very different context, there is Base.@assume_effects, which somehow talks to the compiler…
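For reference, Base.@assume_effects (available since Julia 1.8) lets the author assert effect properties that the compiler is then allowed to rely on. A toy example unrelated to rounding, just to show the shape of the mechanism:

```julia
# Toy example of Base.@assume_effects: the :foldable setting asserts the
# method is consistent, effect-free, and terminating, which permits the
# compiler to constant-fold calls with known arguments.
Base.@assume_effects :foldable function pow2(n::Int)
    r = 1
    for _ in 1:max(n, 0)
        r *= 2
    end
    return r
end
```

A hypothetical Base.@consistent_fpcsr would go in the opposite direction: instead of promising extra purity, it would widen the assumed effects so the compiler stops assuming round-to-nearest inside the annotated function.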
I don’t think we want to model the rounding mode as implicit dynamic state. Instead, there should be explicit selection, either at the type or operation level to opt into different rounding modes. For ISAs that do not have instruction-level rounding mode control, there’ll need to be a compiler pass to explicitly schedule rounding mode transitions, which doesn’t exist, but that’s a much saner design model than implicit global state.
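For what it's worth, the BigFloat/MPFR path in Base already offers scoped (though still dynamic) rounding selection via the do-block form of setrounding, which could serve as a precedent for explicit opt-in:

```julia
# Scoped rounding for BigFloat via MPFR: the previous mode is restored
# when the do-block exits, so the change never leaks as ambient state.
lo = setrounding(BigFloat, RoundDown) do
    BigFloat(1) / BigFloat(3)
end
hi = setrounding(BigFloat, RoundUp) do
    BigFloat(1) / BigFloat(3)
end
@assert lo < 1 // 3 < hi  # a genuine enclosure of 1/3
```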
For example, could the compiler reorder
setrounding(r) # calls `fesetround` under the hood
x + y
such that x + y is performed before the setrounding call?
My intuition is that this would not happen for ccall; but I could not find anything explicitly written about this.
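For concreteness, the pattern under discussion boils down to something like this sketch (the FE_* values below are the x86-64/glibc ones and are platform-specific; this is illustrative, not the actual IntervalArithmetic code):

```julia
# Minimal sketch of toggling the FP rounding mode through the C library.
# WARNING: the constant values are platform-specific (x86-64/glibc shown).
const FE_TONEAREST = 0x0000
const FE_UPWARD    = 0x0800

fegetround() = ccall(:fegetround, Cint, ())
fesetround(mode) = ccall(:fesetround, Cint, (Cint,), mode)  # returns 0 on success

old = fegetround()
fesetround(FE_UPWARD)
# ... `ccall` into code that should run with upward rounding ...
fesetround(old)
```

Everything here hinges on the two fesetround calls not being moved relative to the ccall sitting between them.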
Static semantics for rounding modes sound really great; the linked discussion is insightful.
If this were implemented in LLVM, would it be straightforward to port the feature to Julia?
The suggested function-prefix macro Base.@consistent_fpcsr was also an attempt to make the FP environment more of an explicit effect.
I do not know how Base.@assume_effects works internally, but couldn't Base.@consistent_fpcsr also communicate some information about the function to the compiler?
Namely, that the function modifies the FP control register, together with a guarantee that nothing spooky occurs (since it feels like magic from my point of view). In my example, where the function essentially just calls fesetround and a ccall, you only want to make sure that the specified execution order is respected.
Though maybe implementing such a macro is just asking for static semantics?