Is it possible you introduced some type instability while switching to FixedSizeArrays.jl? Perhaps in some branch somewhere you return something like T[], which is not a FixedSizeVector? That could perhaps explain the increase in allocated memory.
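For example, something along these lines (a made-up sketch, not your actual code):

using FixedSizeArrays

# Type-unstable: the empty-input branch returns a Vector{Float64}, every other
# branch returns a FixedSizeVector, so the inferred return type is a Union and
# callers may end up boxing/allocating.
function process(n)
    n == 0 && return Float64[]  # a Vector, not a FixedSizeVector!
    v = FixedSizeVectorDefault{Float64}(undef, n)
    fill!(v, 1.0)
    return v
end

# Type-stable alternative: always return the same concrete type, also for n == 0.
process_stable(n) = fill!(FixedSizeVectorDefault{Float64}(undef, n), 1.0)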
If not that, perhaps it’s the issue pointed out by @foobar_lv2, where object identity would sometimes be preferable to value identity for the deduplicating effect.
was special-cased in the compiler to be (effectively?) immutable.
This isn’t quite true. IIUC the actual “size” field was known by the compiler not to change, but the backing was still mutable (preventing optimization). Also as of 1.11, the compiler dropped this optimization since Array is just a normal Julia object.
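To spell out that distinction with a toy example (plain Base code, nothing specific to FixedSizeArrays):

v = zeros(3)
resize!(v, 5)      # a Vector's length can change after construction...
length(v)          # ...so its size field could never be treated as constant

m = zeros(3, 3)    # a non-Vector Array has no resize!, so its size never
                   # changes after construction...
m[1, 1] = 42.0     # ...but the backing memory is still mutable, which is what
                   # blocked the stronger optimizations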
julia> using FixedSizeArrays
julia> @noinline f(A::AbstractMatrix) = length(A)
f (generic function with 1 method)
julia> g() = f(FixedSizeMatrixDefault{Float64}(undef, 3, 3))
g (generic function with 1 method)
julia> h() = f(Matrix{Float64}(undef, 3, 3))
h (generic function with 1 method)
julia> code_llvm(g)
; Function Signature: g()
; @ REPL[92]:1 within `g`
define i64 @julia_g_9806() local_unnamed_addr #0 {
top:
; @ REPL[92] within `g`
ret i64 9
}
Hence the title of the JuliaCon talk. My personal hope is that at some point we’ll be able to have a fixed-size array in Base, because it’s so useful in numerical applications; of course, the question this brings up is how to handle two different array types in Base.
Cthulhu tells me there was a type instability in my code unrelated to FArray. Once I fixed that, both the number of allocations and the total amount of allocated memory still go up with FArray w.r.t. plain Array =(
I’ve added MutableSmallVector to my benchmarks above.
I think that FixedSizeArrays is a great package, and that the title of this thread sounds exactly right. However, looking at the benchmarks (which do not cover all sizes and element types!), I’m asking myself where, say, FixedSizeVector really shines in the current ecosystem. If you only need a mutable, indexable container, then it’s indeed a lightweight solution.
If you want algebraic operations, then for large vectors the difference to Vector seems negligible. For small vectors whose size is known in advance, MVector is faster (and so is MutableFixedVector). If the size is unknown but has a small upper bound, then MutableSmallVector appears to be the better choice. (MVector and Mutable(Fixed|Small)Vector need isbits elements, but that is usually the case.)
For which sizes and/or element types can FixedSizeVector play out its strengths? Or would it be for higher-dimensional arrays?
In my mind the only real competitor of FixedSizeArray is Base.Array, in its role as a general-purpose container. Specialised containers like MArray are different beasts: they can be useful when you want to do something smart in very specific cases (e.g. dispatching on the size, or when you know you only deal with very specific, small sizes), but they may also come at the cost of longer compilation latency.
The performance difference with Base.Array is probably negligible in microbenchmarks, but the benefit is in enabling compiler optimisations which are simply impossible with the base type: improving effect inference of an inner function (if, for example, it doesn’t throw errors anymore) can have positive cascading effects on a larger program as a whole.
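A toy illustration of what I mean (the function names are made up, and the exact effects you see will depend on your Julia version):

using FixedSizeArrays

# Hypothetical inner function using a fixed-size buffer.
function inner_fixed(n)
    v = FixedSizeVectorDefault{Float64}(undef, n)
    fill!(v, 1.0)
    return sum(v)
end

# Same computation on a plain Vector.
function inner_base(n)
    v = Vector{Float64}(undef, n)
    fill!(v, 1.0)
    return sum(v)
end

# Compare what the compiler can prove about each version; stronger effects on
# the FixedSizeArray version can unlock optimisations in larger callers.
Base.infer_effects(inner_fixed, (Int,))
Base.infer_effects(inner_base, (Int,))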
How does this compare with SizedArrays from StaticArrays.jl? IIRC SizedArrays are also backed by normal memory, and the only notable difference I can see is that the size information is stored as a value instead of a type parameter.
Besides the size being a type parameter versus an instance value, SizedArray wraps any AbstractArray, while FixedSizeArray is a DenseArray that wraps a DenseVector (defaulting to Memory). For example, SizedArray can wrap a StepRange, but a FixedSizeArray can’t (though the constructor may copy the elements of an input AbstractArray into a Memory to wrap). Those two differences alone make the use cases fairly different, and there isn’t a straightforward answer to which is faster: the static size of SizedArray may leverage some optimized StaticArrays code, but it could also wrap an AbstractArray whose getindex takes 2 minutes per call.
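A rough sketch of that structural difference (exact constructor details may vary between package versions):

using StaticArrays, FixedSizeArrays

r = 0.0:0.5:2.0                 # an AbstractVector that is not dense storage

# SizedArray: the length 5 lives in the type; per the comparison above, the
# wrapped parent can be any AbstractArray, such as this range.
s = SizedVector{5}(r)

# FixedSizeArray: the size is an ordinary field and the data must live in a
# DenseVector (Memory by default), so the elements get copied over.
f = FixedSizeVectorDefault{Float64}(undef, length(r))
copyto!(f, r)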
That is a missed optimization! Julia could “simply” specialize code emission so that non-Vector Arrays have a constant MemoryRef and size. If that optimization were still present today, we would be able to work around Vector’s resizeability by using n x 1 matrices. (The load %memoryref_data5 = load ptr, ptr %"x::Array", align 8 in the Matrix case is the redundant one; the contents of the array are not constant and must be reloaded, since we clobbered memory.)
(main useful point here is the code snippet with llvmcall asm memory clobber to test optimizations)
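Since that snippet isn’t quoted here, a minimal sketch of such a clobber barrier (not necessarily the exact code from the post) could look like this:

using InteractiveUtils  # for code_llvm outside the REPL

# Empty inline asm with a "memory" clobber: the compiler must assume all memory
# may have been modified across this call, so any value it wants to reuse
# afterwards must either be reloaded or be provably constant.
@inline clobber() = Base.llvmcall(
    """
    call void asm sideeffect "", "~{memory}"()
    ret void
    """, Cvoid, Tuple{})

# If the load of the array's data pointer shows up again after the clobber in
# the emitted IR, the compiler could not treat it as constant.
probe(A) = (x = A[1]; clobber(); x + A[1])

code_llvm(probe, (Matrix{Float64},))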