The main (only) issue with it is that it destroys the omission of null checks.
Could you explain / tell me whether I got this right?
As far as I understood, null checks are emitted for pointer arrays and for accesses to pointer fields, and are omitted (1) in contexts where LLVM can kill the branch directly (e.g. I write before I read, and LLVM understands the aliasing well enough to see that the null branch is unreachable), and (2) for types that are guaranteed to have no uninitialized fields, as determined by static analysis of inner constructors.
Now, if we store a pointer-containing immutable inline, then arrays are initialized all-zero (memset-zero is faster than looping over the individual offsets). Therefore, static analysis of inner constructors can no longer rule out that the pointer fields are null.
That, in turn, makes every access to pointer-fields in immutables slower.
Did I understand the problem right?
Naively, I would guess that the optimal solution would be to skip all this checking business entirely and instead kindly ask the kernel to hand us the page fault, so that we can raise an appropriate exception in Julia?
I see two problems with this latter approach: (1) The OS needs to play along, and (2) we need to be capable of properly unwinding the stack.
Do you know which of these problems are real or insurmountable? Or are they solved already?
One could also just eat the null-pointer dereference and die, unless running a debug build? (It's not like a null-pointer dereference is exploitable in this context or could lead to silent data corruption; the user code has already crashed.)
Edit: libsigsegv looks good for such a thing. Also, why do we emit positivity checks before square roots? Shouldn't this also be done by installing an FPU fault handler? A predicted branch is friggin expensive compared to the zero overhead of using fault handlers.