Buffer-stealing reinterpret/reshape

The main problem with reinterpreting/reshaping arrays is that we end up with two arrays referring to the same memory region. This complicates aliasing analysis, is not always respected by deepcopy (I don’t know how serialization tools deal with it), and sometimes makes code hard to understand. It also causes all the trouble with TBAA / ReinterpretArray.

In many cases, the user does not actually need two aliasing arrays, but rather a kind of convert: a new array of the requested type and shape that points to the old buffer. The new array should take ownership of the buffer, while the old array is resized to zero and its data pointer is nulled.

Such a buffer-stealing reinterpret!/reshape! should cause no trouble with TBAA at all, and would hopefully allow lots of code to run without any aliasing arrays.

Code operating on arrays could steal the buffer, do its work, and give the buffer back afterwards. That way, it would only allocate a temporary jl_array struct, which is pretty small.

> while the old array is resized to zero and its data-pointer nulled

An issue is that there might be copies of that old array around which are not affected by this.

An array is effectively a

```julia
mutable struct jl_array{T,N}
```

That is, it is always held by reference to the `jl_array` struct. We can simply mutate the target, which is what happens when you `resize!` an array, and all references to the array will see the resize.

Of course there is a problem if someone is holding a view to the array and we null it. Access to the view will then cause a segfault close to null (a null-pointer dereference). This is almost surely non-exploitable.

This is currently true as well; see https://github.com/JuliaLang/julia/issues/25743: when we hold a view and somebody resizes the underlying array, access to the view gives us a nice read/write heap overflow that is almost surely exploitable for code execution. So this new feature would definitely not make the situation worse.

edit: Of course one could turn the issue into an out-of-bounds exception by re-introducing bounds checks for views, but that was rejected for performance reasons (and rightly so, I think).

This can still give unpredictable results due to alignment, and it is impossible for non-1-d arrays.

So you are saying that the `rewrap!` must fail if the data pointer is insufficiently aligned for the new type? That’s OK and something array.c can check. If the source from which we want to steal the buffer does not own it, we can also just fail. Likewise, if the old array was 1-d with a nonzero offset and the new array is not 1-d, we probably must fail, because we cannot accommodate an offset in the jl_array struct (and we need it to reconstruct the pointer for realloc/free, right?).

Why is this impossible for a non-1-d array? Simply overwrite the data-pointer, length, and size fields with zeros (also offset and capacity in the case of 1-d arrays).

It should fail if the data type alignment is wrong.

That’s illegal, and it’s assumed everywhere.

That’s a pity, thanks though :frowning:

Could you give me an example where this is currently assumed for larger than 1d arrays, but is not assumed for 1d arrays? Just so that I can see where the problems lie.

For example, LLVM appears not to assume that `size(M, 2)` stays constant for a `Matrix` across a `ccall`, which matters for LICM and bounds-check elimination (as far as I can tell, it always reloads the size if there was a `ccall` in between).

It’s not a problem. It’s the design. See `arraytype_constshape`.

If it does so in a way that generates suboptimal code, that should be fixed. Or it could be doing that simply because it isn’t worth keeping the value in a register when it knows the value is available in memory anyway.

```julia
julia> function f(a, i, j)
           v1 = a[i, j]
           ccall(:jl_breakpoint, Nothing, (Any,), a)
           v2 = a[i, j]
           return v1 + v2
       end
f (generic function with 1 method)

julia> @code_llvm f(zeros(2, 2), 1, 1)
```

```llvm
; Function f
; Location: REPL[2]:2
define double @julia_f_36027(%jl_value_t addrspace(10)* nonnull align 16 dereferenceable(40), i64, i64) {
top:
; Function getindex; {
; Location: array.jl:732
  %3 = add i64 %1, -1
  %4 = addrspacecast %jl_value_t addrspace(10)* %0 to %jl_value_t addrspace(11)*
  %5 = bitcast %jl_value_t addrspace(11)* %4 to %jl_value_t addrspace(10)* addrspace(11)*
  %6 = getelementptr inbounds %jl_value_t addrspace(10)*, %jl_value_t addrspace(10)* addrspace(11)* %5, i64 3
  %7 = bitcast %jl_value_t addrspace(10)* addrspace(11)* %6 to i64 addrspace(11)*
  %8 = load i64, i64 addrspace(11)* %7, align 8
  %9 = icmp ult i64 %3, %8
  br i1 %9, label %ib, label %oob

ib:                                               ; preds = %top
  %10 = add i64 %2, -1
  %11 = getelementptr inbounds %jl_value_t addrspace(10)*, %jl_value_t addrspace(10)* addrspace(11)* %5, i64 4
  %12 = bitcast %jl_value_t addrspace(10)* addrspace(11)* %11 to i64 addrspace(11)*
  %13 = load i64, i64 addrspace(11)* %12, align 8
  %14 = icmp ult i64 %10, %13
  br i1 %14, label %idxend3, label %oob

oob:                                              ; preds = %ib, %top
  %15 = alloca [2 x i64], align 8
  %.sub = getelementptr inbounds [2 x i64], [2 x i64]* %15, i64 0, i64 0
  store i64 %1, i64* %.sub, align 8
  %16 = getelementptr inbounds [2 x i64], [2 x i64]* %15, i64 0, i64 1
  store i64 %2, i64* %16, align 8
  %17 = addrspacecast %jl_value_t addrspace(10)* %0 to %jl_value_t addrspace(12)*
  call void @jl_bounds_error_ints(%jl_value_t addrspace(12)* %17, i64* nonnull %.sub, i64 2)
  unreachable

idxend3:                                          ; preds = %ib
  %18 = mul i64 %8, %10
  %19 = add i64 %3, %18
  %20 = bitcast %jl_value_t addrspace(11)* %4 to double addrspace(13)* addrspace(11)*
  %21 = load double addrspace(13)*, double addrspace(13)* addrspace(11)* %20, align 8
  %22 = getelementptr inbounds double, double addrspace(13)* %21, i64 %19
  %23 = load double, double addrspace(13)* %22, align 8
;}
; Location: REPL[2]:3
  call void inttoptr (i64 139974466181328 to void (%jl_value_t addrspace(10)*)*)(%jl_value_t addrspace(10)* nonnull %0)
; Location: REPL[2]:4
; Function getindex; {
; Location: array.jl:732
  %24 = load double, double addrspace(13)* %22, align 8
;}
; Location: REPL[2]:5
; Function +; {
; Location: float.jl:395
  %25 = fadd double %23, %24
;}
  ret double %25
}
```
Note that the second bounds check (as well as the size-dependent index and pointer calculation) is gone.

A pity that this cannot work, then. It would have been a nice solution to the aliasing conundrum, but I guess the value is pretty low when restricted to 1-d arrays only.