This has to do with the speed of malloc vs. your system allocator. Note that this benchmark may be misleading, since you might be ending up in a place where you are allocating, running a trivial GC, and then freeing, whereas a more realistic workload wouldn't show this behavior.
It's useful to know about OS dependence, thanks for checking. Interestingly, allocs estimate = 3 in either of my cases, whereas for you it's 1 and 2 for the smaller and larger case.
The view solution is indeed faster than direct allocation for 1000, and almost as fast for 10,000. Also, it's about 7 times faster than resize!, so I'll try it elsewhere in my code.
EDIT: Actually, the resize! speed could be a regression in Julia 1.9.4 → 1.11.1.
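For reference, a minimal sketch of the view-based "truncation" I mean (the variable names are my own):

```julia
x = Vector{Int}(undef, 10_000)

# Instead of shrinking the array with resize!(x, 1000),
# take a zero-copy window over the first 1000 elements:
v = view(x, 1:1000)

@assert length(v) == 1000
@assert parent(v) === x   # no data was copied or freed
```

The trade-off is that the parent array stays fully alive, so this only helps when keeping the backing memory around is acceptable.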
using BenchmarkTools
x = Vector{Int}(undef, 10000)
y = Vector{Int}(undef, 10000)
display(@benchmark view($x, 1:1000))
display(@benchmark resize!($y, 1000))
Thanks, I'll benchmark this with my actual (realistic) code. Do you have some pointers which OS/hardware/GC parameters could be relevant here? Perhaps I could adjust vector sizes programmatically, based on those.
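As a starting point, a few of the metrics I'd look at; these are real `Sys`/`Base` APIs, but which of them actually explains the difference here is an assumption on my part:

```julia
# Inspect machine and GC state before tuning vector sizes programmatically.
println("total RAM:     ", Sys.total_memory(), " bytes")
println("free RAM:      ", Sys.free_memory(), " bytes")
println("CPU:           ", Sys.CPU_NAME)
println("threads:       ", Threads.nthreads())
println("GC live bytes: ", Base.gc_live_bytes())

GC.gc()  # force a full collection so benchmarks start from a clean heap
```

There is also the `--heap-size-hint` command-line option if you want to constrain how aggressively the GC runs.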
This benchmark result is very weird and probably misleading. resize! should also be basically a zero-cost operation. How can it be slower than, well, anything?
Note that this is only the case on 1.10.4; in 1.11.1 I also got 3 allocations for both sizes.
This doesn't seem to be true in 1.11: resize! essentially just calls _deleteend!, which is implemented as
function _deleteend!(a::Vector, delta::Integer)
    delta = Int(delta)
    len = length(a)
    0 <= delta <= len || throw(ArgumentError("_deleteend! requires delta in 0:length(a)"))
    newlen = len - delta
    for i in newlen+1:len
        @inbounds _unsetindex!(a, i)
    end
    setfield!(a, :size, (newlen,))
    return
end
(while in earlier versions it's a ccall). So apart from changing the size attribute, it is also explicitly looping over all "deleted" elements.
The intent, at least, is that the entire loop should disappear for common eltypes. It needs to be there (and was there in C) because you don't want deleted slots in the array keeping data alive; otherwise that would be a horrible way to get nasty memory leaks. For bitstypes it (hopefully) codegens into nothing.
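To make the bitstype point concrete, a small illustration (my own example, not from Base):

```julia
# For an isbits element type there are no object references in the buffer,
# so clearing the "deleted" slots is a no-op the compiler can elide.
@assert isbitstype(Int)
@assert !isbitstype(Any)

# For Vector{Any} the loop matters: after resize!, the dropped slots must be
# unset so the GC no longer sees references to the removed objects.
a = Any["kept", "dropped1", "dropped2"]
resize!(a, 1)
@assert length(a) == 1
@assert a[1] == "kept"
```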
The median time is smaller for the smaller vector, whereas the average is still larger. However, for Julia 1.11.1, the outputs are the same as in my OP, even after several runs.