That's really surprising to me. What did I miss? Is it ‘perfectly fine’ just in this very special case (why?), or generally? My understanding so far:
Calling Julia's pointer function is, from a programmer's perspective, obtaining a reference, but to the compiler and runtime it is just computing some value from the current object state, without any restriction on subsequent object state changes. The compiler and runtime do not ‘know’ about pointer usage.
@preserve instructs the compiler to ensure that memory belonging to the object(s) to preserve is not garbage collected during the execution of the expression guarded by @preserve.
In the occursin example, I cannot see what prevents the compiler or runtime from letting a GC run happen just between p = pointer(buf.data, buf.ptr) and the following @preserve code block. If this happens, and we have one of those (to me still nebulous) situations where buf's memory will be garbage collected, we have the case to avoid: p is computed, then buf is garbage collected, and p points to an invalid block of memory when the @preserve code block starts executing.
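To make the concern concrete, here is a minimal sketch of that shape (has_byte is a hypothetical helper, not the actual Base code): the pointer is computed before the GC.@preserve block that dereferences it.
function has_byte(buf::IOBuffer, byte::UInt8)
    p = pointer(buf.data, buf.ptr)   # pointer computed here ...
    GC.@preserve buf begin           # ... but buf is only preserved from here on
        n = bytesavailable(buf)
        ccall(:memchr, Ptr{UInt8}, (Ptr{UInt8}, Cint, Csize_t), p, byte, n) != C_NULL
    end
end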
At first glance, that looks extremely unlikely to happen in a scenario where GC runs asynchronously, but be aware of code reorderings which might put a lot of work between the computation of p and the @preserve block. And we are talking about guarantees and (always) correct code. I am reminded of the “double checked locking is broken” discussion in the Java context several years ago, see e.g. https://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html
It makes sure the object is valid as far as the user can tell during the execution of the block. It’ll do whatever it takes to make sure that is the case.
That breaks the guarantee that the object is valid in that block, so it will not happen. No matter what transformation you can come up with, it will not change this answer. The @preserve block is not a function call; it is syntax that the compiler is aware of. It does NOT give the object any special status at run time. As a user, you should not even think about “when does GC happen”. If the compiler determines that a GC could happen before the @preserve block is reached, it’ll make sure the GC won’t make the object invalid, which may or may not include making sure the object is rooted.
In fact, the reason you don’t need to preserve it when you are creating the pointer and using it in the same place is the same as the reason you don’t need to preserve anything when you are not using pointers or finalizers. The compiler will make sure the use is valid one way or another; it really doesn’t matter how it does it as far as the user is concerned.
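For example (a minimal sketch of the case described in the previous sentence; first_codeunit is a made-up name), creating the pointer and consuming it right away, with no explicit GC.@preserve:
# relies on the guarantee described above: the compiler keeps the use valid
first_codeunit(s::String) = unsafe_load(pointer(s))
first_codeunit("hello")   # 0x68, i.e. 'h'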
Correct. That’s the only thing I’m talking about. In fact, that’s why I really don’t like to talk about the implementation details, including any “reordering”, “GC”, “asynchronous”, compiler optimizations, etc. None of them help you decide what is safe and what’s not. The only thing you need is the guarantee (that the object is valid one way or another in that block).
Following your example, this is basically the same as saying the C++ memory model (and I assume Java’s too) is a user-level model, a model for the whole stack. It doesn’t matter what hardware memory model it runs on and it doesn’t matter what compiler transformation is happening; the only guarantee it gives is that the final result follows the memory model. The user should only code against this model and not anything related to the hardware or the compiler. Anything else, similar to trying to understand how to interact with the GC from the implementation, would be inaccurate and will easily come back to bite you sooner or later.
Conversely, I do like to talk about the concrete ways that things can go wrong: I think it’s helpful for people to understand not just how things are in the abstract, but also why things are that way. It’s the same way I like my mathematical definitions and proofs: accompanied by a bunch of examples to motivate why someone thought up the abstraction in the first place.
Speaking of how things are in the abstract, has anybody tried to write down an abstract machine model for Julia? I guess not, but presumably we’re quite similar to abstract machines for existing languages. Could we roughly document the Julia abstract machine by analogy to one which already exists? Which one?
The presence of @preserve buf some_code means “Compiler, I am telling you that some_code counts as a use of buf, even though you might not be able to see it.”
Let’s look at the following code:
# (1)
x = foo()
bar(x)
# (2)
x = foo()
GC.@preserve x bar()
Would you worry about x being invalidated before the call to bar in (1)? Can you explain why the compiler knows not to do this, and exactly what mechanism is used? How does the mechanism change when x is a bits type vs not, and when bar is fully inlined vs not? What about when x is a bits type which is too large to store on the program stack and must be allocated on the heap? Personally I don’t know detailed answers to all of these questions but the compiler has to deal with them.
You can consider example (2) to be basically similar: by putting GC.@preserve x there, you are telling the compiler to use its “normal mechanisms” to make sure x is available during the call to bar(), regardless of there being no other hint in the source code that this is necessary.
Sure, but I assume you won’t replace your proof with examples, would you? That’s exactly what I’m saying.
I’ll say that I’ve only seen people going with examples here, based on specific implementation details. Literally no one else, regarding this issue, was using the fundamental model given to the programmer and that is what I see as replacing the proof with examples.
And I’d say it’s even more appropriate in the context of my last reply since this is basically the advice I got when learning about the C++ memory model. And that advice had been immensely useful…
I’m fine with examples; I even use a lot of them to see if the rule makes sense/is implementable. However, if you want to figure out if something is valid, you must use the basic rule (conversely, if you want to show that something is invalid, one counter example is enough, though the counter example itself may not always be valid).
Just to be clear, if you see x = foo() and bar() where bar() may use x, don’t put the GC.@preserve there. That’s almost always the wrong place. If bar() doesn’t have any unsafe code (or the finalizer case above), GC.@preserve is never needed no matter how hard you try to hide the value. If bar() does have unsafe code, the GC.@preserve will always go inside of it.
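A sketch of that placement (sum_bytes is a hypothetical helper, purely for illustration): the GC.@preserve sits inside the function that contains the unsafe code, so no caller ever needs one.
function sum_bytes(v::Vector{UInt8})
    s = UInt(0)
    GC.@preserve v begin             # the preserve wraps the unsafe loads, inside the function
        p = pointer(v)
        for i in eachindex(v)
            s += unsafe_load(p, i)
        end
    end
    return s
end
sum_bytes(UInt8[1, 2, 3])   # 6 (as a UInt); the call site needs no GC.@preserve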
That is not to say you can never hide things from the compiler. You can do that fairly easily. However, the compiler will still always do the right thing even if you managed to confuse it. That’s basically what escape analysis is for…
No, but the example does not resemble the structure I was talking about. The structure is
#1
p = foo(x)
bar(p)
#2
p = foo(x)
GC.@preserve x bar(p)
#3
GC.@preserve x bar(foo(x))
If foo(x) computes some property p of x which depends on (Julia compiler&runtime) implementation details, and GC.@preserve is an official API to ensure that that property p does not change within the preserved code block, and the language semantics do not define how and when foo(x) will change outside a GC.@preserve block, then only #3 is safe and correct code.
Or to put it differently:
anyJuliaCodeAssigningX()
p1 = foo(x)
anyJuliaCodeNotChangingX()
p2 = foo(x)
moreJuliaCodeNotChangingX()
GC.@preserve x begin
    p3 = foo(x)
    furtherJuliaCodeNotChangingX()
    p4 = foo(x)
end
If foo(x) uses only normal, “safe” Julia code, I expect Julia semantics to guarantee that p1, p2, p3, p4 are all equal.
My understanding so far is: GC.@preserve gives a guarantee that p3 and p4 are equal for some “unsafe” properties, in particular pointer(x) or pointer(x.someFieldOfX), and there is no strict guarantee that p1 or p2 are equal to p3 or p4 for pointer-related properties.
As a practical rule for a Julia programmer using pointer methods, I would recommend: “to guarantee validity of a pointer, put pointer computation and pointer use always in a GC.@preserve code block, don’t try to optimize away GC.@preserve using implementation-specific knowhow, which might not apply to future Julia releases”.
No, that is NOT what the macro does. Again, the macro makes sure that x is valid in there, not that, if it was valid before, it will remain valid.
That’s wrong. It is guaranteed that if you compare p1, p2, p3, p4 inside the preserved block, they’ll all be equal. If you do it outside, then there’s no such guarantee from the preserved block, though since you are not using the pointer the addresses may still compare equal.
No, that’s wrong. You can do that, but there’s no such need and you should not recommend or require it.
Could that be a “rule of thumb” for a Julia programmer using pointers?
“to be safe, enclose dereferencing use of a pointer into memory of x in a GC.@preserve x block”.
You said clearly that pointer computation outside (before) the GC.@preserve block is safe. This is a BIG difference between Julia and other languages, e.g. Java, whose GC relocates live memory blocks to compact the managed Java heap.
Sorry for another example; it is not meant as efficient code, it solely demonstrates the decoupled computation and dereferencing of a pointer variable and my understanding of “memory belongs to object pd”.
Are both uses of GC.@preserve in the following code snippet correct and necessary?
mutable struct PreservedDemo
    s::String      # private field, only changed by the constructor and setstring!
    p::Ptr{UInt8}  # private field, only changed by the constructor and setstring!
    PreservedDemo(s::String) = new(s, pointer(s))
end

# pointer computation decoupled from pointer dereferencing
function setstring!(pd::PreservedDemo, s::String)
    pd.s = s
    pd.p = pointer(s)
end

# protect dereferencing of pd.p, which points to memory belonging (indirectly) to pd
function codeunit1(pd::PreservedDemo, i::Integer)
    # bounds checks omitted to keep the example short
    GC.@preserve pd unsafe_load(pd.p + (i-1))
end

# protect dereferencing of pd.p, which points to memory belonging to pd.s
function codeunit2(pd::PreservedDemo, i::Integer)
    # bounds checks omitted to keep the example short
    s = pd.s
    GC.@preserve s unsafe_load(pd.p + (i-1))
end

# Test code
q = PreservedDemo("hello")
codeunit1(q, 2) # hex 'e'
codeunit1(q, 4) # hex 'l'
Well… I wouldn’t say “other languages”, as there are certainly other GCs out there that are more similar to the Julia one. Julia’s GC is indeed non-moving, and making it moving would likely be a major change, unless conservative pinning is reliable.
Ah, I can see why you’d be confused by the semantics of @preserve if you’re expecting the GC to move objects around. In Julia we don’t have any explicit concept of pinning because objects are not moved.
Yes. Julia’s GC doesn’t move objects. So the only thing that’s required for a pointer to be valid is that the object is not garbage collected. If the GC moved objects then you’d need to pin an object in order to ensure that a pointer to it remains valid.
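A minimal sketch of what that non-moving property means in practice (purely illustrative):
v = zeros(UInt8, 64)
p1 = pointer(v)
GC.gc()            # a full collection does not relocate v ...
p2 = pointer(v)
p1 == p2           # ... so the two addresses compare equal (true)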