I’m not sure. There is an intrusive linked list even in Julia core; it seems to be useful for implementing a queue.
I’m just wondering what modification patterns make a custom allocator, on top of the built-in one, advantageous. Of course, using lists as stacks or queues is one such case, but the solution for that case is basically known.
Maybe one case is when it’s known that the structure isn’t going to contain more than N items, and a pool of O(N) pre-allocated objects offers some advantages over the Julia allocator.
Hi Vasily,
let me try to motivate this with your simple List definition
mutable struct Cons{T}
    car::T
    cdr::Union{Nothing, Cons{T}}
end

const List{T} = Union{Nothing, Cons{T}}
an element type
T = Tuple{Int, StaticArrays.SVector{100, Float64}}
and rather artificial tests similar to
l = nil()
for j in 1:n
    l = cons((j, p), l)
end

s = 0
while l !== nothing
    s += l.car[1]
    l = l.cdr
end
you can either use the Julia allocator resulting in
List without allocator 442.167 ms (1000000 allocations: 854.49 MiB)
or a fixed-size allocator (if you know the maximal number of elements; see the sketch after the timings)
with fixed allocator 83.345 ms (0 allocations: 0 bytes)
or a resizable allocator
with resizable allocator 75.376 ms (0 allocations: 0 bytes)
or an allocator keeping an internal free list (if you want to remove and re-add elements to the list)
with fixed free list allocator 98.112 ms (0 allocations: 0 bytes)
with resizable free list allocator 88.990 ms (0 allocations: 0 bytes)
or a corresponding SOA (struct-of-arrays) allocator
with fixed SOA allocator 56.665 ms (0 allocations: 0 bytes)
with resizable SOA allocator 56.830 ms (0 allocations: 0 bytes)
with fixed free list SOA allocator 68.195 ms (0 allocations: 0 bytes)
with resizable free list SOA allocator 73.258 ms (0 allocations: 0 bytes)
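Since the allocator code itself is not shown here, a minimal sketch of the kind of pools I have in mind may make the idea concrete. All names below (Node, FixedPool, GrowablePool, FreeListPool, SOAPool, alloccons!, freecons!) are my own invention, not the API behind the timings above; the common trick is that cells live in vectors owned by the pool and the cdr "pointer" becomes an integer index, with 0 playing the role of nothing.

# Hypothetical sketch, not the code behind the timings above.

struct Node{T}
    car::T        # payload
    cdr::Int      # index of the next cell, 0 == end of list
end

# Fixed-size pool: all cells preallocated up front, handed out in order.
mutable struct FixedPool{T}
    cells::Vector{Node{T}}
    top::Int                 # next unused slot
end
FixedPool{T}(n::Integer) where {T} = FixedPool{T}(Vector{Node{T}}(undef, n), 1)

function alloccons!(pool::FixedPool{T}, x::T, head::Int) where {T}
    i = pool.top
    pool.cells[i] = Node{T}(x, head)
    pool.top = i + 1
    return i                 # index of the new list head
end

# Resizable variant: grow the backing vector on demand instead of fixing n.
mutable struct GrowablePool{T}
    cells::Vector{Node{T}}
end
GrowablePool{T}() where {T} = GrowablePool{T}(Node{T}[])

function alloccons!(pool::GrowablePool{T}, x::T, head::Int) where {T}
    push!(pool.cells, Node{T}(x, head))   # amortised growth of the pool itself
    return length(pool.cells)
end

# Free-list variant: released cells are recycled before fresh slots are used,
# so elements can be removed and re-added without growing the pool.
mutable struct FreeListPool{T}
    cells::Vector{Node{T}}
    free::Vector{Int}        # indices of released cells
    top::Int
end
FreeListPool{T}(n::Integer) where {T} = FreeListPool{T}(Vector{Node{T}}(undef, n), Int[], 1)

function alloccons!(pool::FreeListPool{T}, x::T, head::Int) where {T}
    if isempty(pool.free)
        i = pool.top
        pool.top += 1
    else
        i = pop!(pool.free)
    end
    pool.cells[i] = Node{T}(x, head)
    return i
end
freecons!(pool::FreeListPool, i::Int) = push!(pool.free, i)   # release cell i for reuse

# SOA (struct-of-arrays) variant: car payloads and cdr indices in two parallel
# vectors, which can help cache behaviour when a traversal only touches a small
# part of each large cell.
mutable struct SOAPool{T}
    car::Vector{T}
    cdr::Vector{Int}
    top::Int
end
SOAPool{T}(n::Integer) where {T} = SOAPool{T}(Vector{T}(undef, n), Vector{Int}(undef, n), 1)

function alloccons!(pool::SOAPool{T}, x::T, head::Int) where {T}
    i = pool.top
    pool.car[i] = x
    pool.cdr[i] = head
    pool.top = i + 1
    return i
end

Reusing T, p and n from the setup above, the fixed-pool version of the test then looks roughly as follows (the other pools are used analogously); presumably the pools in the actual timings are created outside the timed region, which is why 0 allocations are reported.

function buildsum(pool::FixedPool{T}, p, n) where {T}
    head = 0
    for j in 1:n
        head = alloccons!(pool, (j, p), head)   # takes a slot from the pool, no heap allocation
    end
    s = 0
    while head != 0
        s += pool.cells[head].car[1]
        head = pool.cells[head].cdr
    end
    return s
end

pool = FixedPool{T}(n)
buildsum(pool, p, n)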
For me these are only abstractions of common preallocation patterns. I see a tradeoff between programming convenience and a bit of performance gain, because you have to rework your data structures.