It’s actually not really true that it is less efficient: if these are regular loads and stores, LLVM will still compile it down to the same efficient code:
julia> code_llvm((Base.RefValue{Int}, Base.RefValue{Int}, Int); debuginfo=:none) do x, y, z
           y[] = z
           x[] = y[]
           x[]
       end
define i64 @"julia_#14_1001"({}* noundef nonnull align 8 dereferenceable(8) %0, {}* noundef nonnull align 8 dereferenceable(8) %1, i64 signext %2) #0 {
top:
%3 = bitcast {}* %1 to i64*
store i64 %2, i64* %3, align 8
%4 = bitcast {}* %0 to i64*
store i64 %2, i64* %4, align 8
ret i64 %2
}
Notice there are only two store calls, and no load calls at all.
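For comparison, you could also compile the chained form directly. I’d expect essentially the same IR on a recent Julia, since LLVM forwards the stored value instead of re-loading it (the exact names in the output will differ by version):

julia> code_llvm((Base.RefValue{Int}, Base.RefValue{Int}, Int); debuginfo=:none) do x, y, z
           x[] = y[] = z
       end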
If you have a funky type (like if x and y used atomic loads/stores) and the user wrote x[] = y[] = z, then I’d say they probably intended for that to mean
y[] = z
x[] = y[]
x[]
and it would be bad if we sneakily tried to “help” them by turning that into
y[] = z
x[] = z
z
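To see where the two spellings can actually differ observably, here’s a contrived sketch (the Ref types are just for illustration; the difference here comes from conversion on assignment rather than atomics):

julia> y = Ref{Float64}(0.0); x = Ref{Any}(nothing); z = 1;

julia> y[] = z; x[] = y[]; x[]   # re-reading y[] picks up the converted Float64
1.0

julia> y[] = z; x[] = z; x[]     # forwarding z stores the original Int
1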
E.g. here’s the @atomic version of the above code:
julia> mutable struct AtomicRef{T}
           @atomic x::T
       end;
julia> Base.getindex(x::AtomicRef) = @atomic x.x
julia> Base.setindex!(x::AtomicRef, y) = @atomic x.x = y
julia> code_llvm((AtomicRef{Int}, AtomicRef{Int}, Int); debuginfo=:none) do x, y, z
           y[] = z
           x[] = y[]
           x[]
       end
define i64 @"julia_#18_1006"({}* noundef nonnull align 8 dereferenceable(8) %0, {}* noundef nonnull align 8 dereferenceable(8) %1, i64 signext %2) #0 {
top:
%3 = bitcast {}* %1 to i64*
store atomic i64 %2, i64* %3 seq_cst, align 8
%4 = load atomic i64, i64* %3 seq_cst, align 8
%5 = bitcast {}* %0 to i64*
store atomic i64 %4, i64* %5 seq_cst, align 8
%6 = load atomic i64, i64* %5 seq_cst, align 8
ret i64 %6
}
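Note that here the two load atomic instructions survive, so the re-read of y[] actually happens at runtime. And as a quick sanity check that the AtomicRef definitions above behave as expected (output is what I’d expect on a recent Julia, using the default constructor):

julia> r = AtomicRef{Int}(1);

julia> r[] = 2
2

julia> r[]
2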