Removing bounds checking for HPC - `--check-bounds=unsafe`?

An imo much more appropriate approach would be to use the interpreter at compile time:

  1. If the computation of g(1) takes a very long time, like approximately forever, then we want to be able to abort (say, recursive non-memoized Fibonacci of 100); that is a side effect.
  2. Side effects! We can just run g(1) in the interpreter until we see a side effect, then abort and decide that this cannot be speculatively run.
  3. UB. UB is a side effect. Suppose the computation of g(1) contains an incorrect @inbounds or funny pointer arithmetic etc. that corrupts the runtime or pops a shell. The user asked for a shell to be popped and we shall do so, but not speculatively! (A sketch of the kind of code this is about follows right after this list.)
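
For concreteness, a minimal sketch of the kind of code point 3 worries about, reusing g from above and adding a hypothetical call site h (both made up for illustration):

# g(1) contains an incorrect @inbounds, so actually evaluating it is undefined
# behavior. The compiler must not run g(1) speculatively just because the
# branch might never be taken at runtime.
function g(i)
    a = [1, 2, 3]
    @inbounds a[i + 10]   # index 11 for i = 1: out of bounds, UB once the check is elided
end

h(x) = x ? g(1) : 0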

What you’re describing is roughly how Julia worked prior to 1.9. The big downside of an approach like this is that it’s really slow (100-1000x slower than using the compiler). Julia 1.9 introduced the “effect system”, which tracks all of the possible types of effects (consistency, non-termination, UB, side-effect freedom, and a couple of extras). This makes running code at compile time (when safe to do so) much faster, as well as allowing for some other optimizations (like dead code elimination) which you can’t perform simply via interpretation.
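
For anyone who wants to poke at this, the inferred effects can be inspected with the internal, unexported Base.infer_effects helper (the exact set of flags and their printing vary between Julia versions); it reports things like consistency, effect-freedom, nothrow and termination for a given call signature:

# Internal, unexported compiler helper: show the inferred effect flags for a
# call signature (printing differs across Julia versions).
Base.infer_effects(sin, (Float64,))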

This does seem questionable – consider

julia> fib(n) = n < 0 ? 1 : fib(n-1) + fib(n-2)
julia> fub(x) = x ? fib(5) : 0

We want that speculatively executed!

julia> fib(n) = n < 0 ? 1 : fib(n-1) + fib(n-2)
julia> fub(x) = x ? fib(1000) : 0

No way do we want to execute that speculatively; that wouldn’t terminate until heat death!

For speculative execution, we really want to specialize the effects on constants. We don’t care about the effects of fib(n::Int) (which arguably correctly infers as maybe-not-terminating); we only care about the effects of fib(5) (and we only care about that until we spot the first side-effect / inconsistency!).
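
A toy sketch of that idea, with made-up names and a user-level step counter standing in for what would really be a budget inside the compiler's own interpreter:

# Toy illustration of "specialize on the constant, give up at the first
# problem": evaluate the concrete call under an explicit step budget.
function fib_budgeted(n, budget)
    budget[] <= 0 && error("budget exhausted")
    budget[] -= 1
    n < 0 ? 1 : fib_budgeted(n - 1, budget) + fib_budgeted(n - 2, budget)
end

function speculate_fib(n; budget = 10_000)
    try
        Some(fib_budgeted(n, Ref(budget)))   # cheap enough: safe to constant-fold
    catch
        nothing                              # budget exhausted (or it threw): compile the call as-is
    end
end

# speculate_fib(5)    -> Some(...)  (fold the constant)
# speculate_fib(1000) -> nothing    (leave the call alone)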


That is a very valid point: It can be fine to have code containing bounds-check violations that are valid, intended, caught and handled, a la

julia> function foo(a)
           try
               return a[100]
           catch
               0
           end
       end
foo (generic function with 1 method)

julia> foo([])
0
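
To spell out why removing the check breaks this pattern, here is an illustrative variant (not from the thread): once the bounds check is elided, whether via @inbounds or globally via --check-bounds=no, no BoundsError is ever thrown, so the catch branch is dead and the out-of-bounds read is undefined behavior.

# Same pattern with the check removed: the catch branch can never fire and the
# out-of-bounds access is UB instead of a handled error.
function foo_unsafe(a)
    try
        return @inbounds a[100]
    catch
        0    # unreachable once the bounds check is elided
    end
end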

Similar to how the global --fast-math had to be removed.

If I understand the digression on speculative execution correctly, the issue is that Core.Compiler itself has, due to internal implementation details, become code that gets miscompiled under a naive --check-bounds=no regime.

So the change is: before, only a subset/dialect of Julia ran correctly under --check-bounds=no (albeit with changed semantics), and this subset has now become empty?

For that, I would consider a system that allows specific modules to declare that their bounds checks must not be removed even under --check-bounds=no. To make the subset/dialect of Julia that is --check-bounds=no-compatible nonempty, it would be sufficient to hard-code Core.Compiler? (But I guess a compile-time option in some header file would be more convenient, or even an @insist_on_boundschecks module ... end macro.)
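
Purely as a sketch of the surface syntax such a macro could offer (nothing here exists today; the macro below is a no-op placeholder, since the real thing would need compiler support):

# Hypothetical feature, illustration only: a no-op placeholder showing how a
# module could declare that its bounds checks must survive --check-bounds=no.
macro insist_on_boundschecks(ex)
    esc(ex)   # a real implementation would flag the module for the compiler
end

@insist_on_boundschecks module CompilerCriticalCode

safe_get(a, i) = a[i]   # this bounds check would be guaranteed to stay

end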

As long as --check-bounds=yes remains part of the language, the obvious consequence of removing --check-bounds=no is that everybody is incentivized to add @inbounds everywhere, and document that their package should be run under --check-bounds=yes for testing.
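
The pattern that incentive produces would look roughly like this (my own illustrative example; --check-bounds=yes ignores @inbounds declarations, so the checks come back during testing):

# Library code sprinkles @inbounds so normal runs skip the checks; the package
# README then asks users to run the test suite with `julia --check-bounds=yes`,
# which ignores @inbounds and restores the checks.
function dot_trunc(a, b)
    s = zero(eltype(a))
    @inbounds for i in 1:length(a)   # silently assumes length(b) >= length(a)
        s += a[i] * b[i]
    end
    return s
end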

This is stupid!