Immutable vs Type: reassignment via arrays and performance

According to the docs, operating on immutable types can result in increased performance. On first thought, I assumed this was because the data contained inside immutable objects is known to be unchanging, but upon tinkering a bit, I realized immutable data can be changed when the data lies in an array:

immutable Test
    x1::Float64
    x2::Array{Float64,1}
end

test = Test(1.0,[1.0])
# This is an invalid operation
test.x1 = 2.0
# This is a valid operation
test.x2[1] = 2.0

So my questions are:

  • Is reassignment to an array inside an immutable a bad thing?
  • If performance is better with immutable types and an array structure is always the same, why not always use immutables at the cost of less clean code?

It’s fine.

Because sometimes you want to change the pointer. But yes, immutables as a wrapper for arrays is a very common “trick”. When you do this, you can make them dispatch and broadcast differently. This kind of wrapper type is actually the suggested implementation for @threads broadcasting.
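
For concreteness, here's a minimal sketch of that wrapper trick (names are made up, using the 0.5-era immutable syntax from this thread). The wrapper itself never changes; only the array it points to does, and you get a distinct type to dispatch on:

immutable MyVec
    data::Vector{Float64}
end

# Forward the basic array interface to the wrapped array.
Base.getindex(v::MyVec, i::Int) = v.data[i]
Base.setindex!(v::MyVec, x, i::Int) = (v.data[i] = x)
Base.length(v::MyVec) = length(v.data)

# Dispatch on the wrapper type where you want different behavior.
function double!(v::MyVec)
    for i in 1:length(v)
        v[i] *= 2
    end
    return v
end

v = MyVec([1.0, 2.0, 3.0])
double!(v)   # mutates the wrapped array; the immutable wrapper is untouched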

Immutables are value-types. It’s easiest for me to understand this in contrast to mutable types, which are not. A mutable object is accessed through a pointer, so each field access involves an indirection, and this can ruin optimizations and cause other performance issues (in reality, if it’s not a tight loop, you won’t notice this). Arrays of mutables are arrays of pointers.

Value types have their values “inlined”, so an array of immutables is not an array of pointers; it’s an array of larger pieces of data all laid out in a line. If an array of immutables is concretely typed and you want to grab A[i].b, the compiler knows exactly how many bits to move down the memory to get the value (not just a pointer). So an immutable is almost like a larger primitive type which has a tag in the front to denote what it is and what’s in there.

Arrays are mutable types. So when you put an array in an immutable, the field A.b is a pointer to the array, i.e. one level of indirection. But if A were mutable, there would be two pointers to dereference. Still, one level of indirection is already enough to block many optimizations, which is why you won’t notice too much of a difference unless you’re in a tight loop. And in the case where you have immutables, many times the compiler will compile away the “immutable” entirely and turn it into a “zero-cost abstraction”.
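
To illustrate the inlining point with something checkable (a sketch, hypothetical names): a concretely typed immutable wrapper adds no per-element storage over the value it wraps, and an array of them stores the values directly rather than pointers.

immutable Wrapped
    x::Float64
end

sizeof(Wrapped) == sizeof(Float64)   # true: the wrapper is just the 8 bytes of the Float64
xs = rand(10^6)
ws = map(Wrapped, xs)                # values stored inline, not as an array of pointers
sum(w.x for w in ws)                 # the field access lowers to a plain load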

I might have some of the details a little bit off, so someone else will probably want to clean this up, but that’s the general idea.

I’ll give a stab at answering this. I want to be upfront that I’m not an expert Julia user; hopefully I can shed some light, but this is as much to educate myself as anything else. Hopefully someone who actually knows what they’re talking about can correct any mistakes.

  1. immutable types can be bits types:
julia> type MutFloat
           x::Float64
       end

julia> immutable ImmFloat
           x::Float64
       end

julia> isbits(MutFloat)
false

julia> isbits(ImmFloat)
true
  2. Let’s compare the performance of these two types. Additionally, we’ll include an immutable type wrapping an array.
immutable ArrFloat
	x::Vector{Float64}
end

floats = rand(100000)
immfloats = map(ImmFloat, floats)
mutfloats = map(MutFloat, floats)
arrfloats = map(x -> ArrFloat([x]), floats)

println("Immutable float")
println(@benchmark sum(t.x for t in immfloats))
println("Mutable float")
println(@benchmark sum(t.x for t in mutfloats))
println("Immutable-wrapped array")
println(@benchmark sum(t.x[1] for t in arrfloats))

Immutable float
Trial(97.806 μs)
Mutable float
Trial(110.693 μs)
Immutable-wrapped array
Trial(936.762 μs)

So empirically, there is a slight performance advantage to having immutables. Wrapping the value in an array is really bad for performance.

What’s going on here? I think with an immutable whose fields are a known leaf type - like ImmFloat in our example - the data passed around are exactly the bits of the fields. So passing an ImmFloat is just passing the 64 bits that make up a Float64; there are no references or indirection. Therefore, if someone gives you an ImmFloat, operating on its value is easy: just look at the bits they handed you.

For mutable types, however, we pass around references. (From the manual: “An object with an immutable type is passed around (both in assignment statements and in function calls) by copying, whereas a mutable type is passed around by reference.”) So if I hand you a MutFloat, you’re not holding the bits of the float itself; you’re holding the address where those bits can be found. To operate on the MutFloat, you have to go where the address tells you to find the bits. This is slower than the corresponding operation on an ImmFloat.
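
A small sketch of that behavioral difference (reusing the types above): a function that receives a MutFloat holds a reference and can mutate the caller's object, while for an ImmFloat it can only build and return a new value.

bump!(m::MutFloat) = (m.x += 1.0; m)     # mutates through the reference
bump(i::ImmFloat)  = ImmFloat(i.x + 1.0) # must construct a new immutable

m = MutFloat(1.0); bump!(m); m.x   # 2.0: the caller's object changed
i = ImmFloat(1.0); i2 = bump(i)    # i.x is still 1.0; i2.x == 2.0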

In the last case, what happens when I give you an immutable type holding an array? I’ve given you exactly the address to that array (whereas if I had given you a mutable type, you’d be holding the address to look for the array’s address). But to actually get the Float64 data out of the array, you have to call getindex which is comparatively slow - bounds checking, function call, etc.
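
One way to sidestep part of that cost (a sketch, not benchmarked here) is to write the reduction as an explicit loop, where @inbounds can be applied directly to the array access:

function sum_arrfloats(v::Vector{ArrFloat})
    s = 0.0
    @inbounds for t in v
        s += t.x[1]   # no bounds check on the inner getindex
    end
    return s
end

sum_arrfloats(arrfloats)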

So the punchline here is: if you want mutability, use a mutable type.
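
Applied to the original example, that just means declaring the field in a mutable type rather than hiding it in a one-element array (a sketch with a made-up name):

type TestMut
    x1::Float64
    x2::Array{Float64,1}
end

test = TestMut(1.0, [1.0])
test.x1 = 2.0      # now an ordinary field assignment
test.x2[1] = 2.0   # still works as before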

  3. I think there may be psychological benefits to immutable types. We tend to use immutables when we’ve carefully thought about what types our data will have and which parts of our programs are invariant. Type specificity is really important for performance:
immutable ImmFloatInt
	x::Union{Float64, Int}
end
immfloatints = map(ImmFloatInt, floats)
println("Immutable union")
println(@benchmark sum(t.x for t in immfloatints))

Immutable union
Trial(3.453 ms)

Here, adding up the same 100,000 floating point numbers is about 35x faster if they’re held in ImmFloats rather than ImmFloatInts, the latter of which has a union type. (I believe union performance may be improving in the near future, not sure; this is on Julia 0.5.1.)
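
If the reason for the Union is that the field may hold either type, a parametric immutable is the usual way to keep the field concretely typed per instance (a sketch, hypothetical name):

immutable ImmNum{T<:Real}
	x::T
end

ImmNum(1.0)   # ImmNum{Float64}: the field type is concrete for this instance
ImmNum(1)     # ImmNum{Int64}
immnums = map(ImmNum, floats)   # a concretely typed Vector{ImmNum{Float64}}
sum(t.x for t in immnums)       # should perform like the plain ImmFloat case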

I didn’t catch that the OP wanted to use an array just to make a “mutable field”. Yeah, don’t do that. However, your example probably wouldn’t be quite as bad if you added @inbounds. But Ref exists for this reason.

I figured @inbounds would help, but I couldn’t find where to stick it and get any improvement. (Maybe it’s just a limitation of using a generator expression inside the sum?)

julia> @benchmark sum(t.x[1] for t in arrfloats)
BenchmarkTools.Trial:
  memory estimate:  48 bytes
  allocs estimate:  3
  --------------
  minimum time:     936.763 μs (0.00% GC)
  median time:      965.840 μs (0.00% GC)
  mean time:        978.484 μs (0.00% GC)
  maximum time:     1.552 ms (0.00% GC)
  --------------
  samples:          4923
  evals/sample:     1

julia> @benchmark @inbounds sum(t.x[1] for t in arrfloats)
BenchmarkTools.Trial:
  memory estimate:  48 bytes
  allocs estimate:  3
  --------------
  minimum time:     936.433 μs (0.00% GC)
  median time:      966.832 μs (0.00% GC)
  mean time:        979.603 μs (0.00% GC)
  maximum time:     1.595 ms (0.00% GC)
  --------------
  samples:          4934
  evals/sample:     1

julia> @benchmark sum(@inbounds t.x[1] for t in arrfloats)
ERROR: syntax: unexpected ")"

@inbounds doesn’t propagate through function calls which aren’t inlined, so it has to go inside this one. But you don’t want the full expression to be captured, so it should just be

@benchmark sum(@inbounds(t.x[1]) for t in arrfloats)

Doesn’t seem to work:

ERROR: MethodError: no method matching +(::Void, ::Void)
Closest candidates are:
  +(::Any, ::Any, ::Any, ::Any...) at operators.jl:138
 in mapfoldl_impl(::Base.#identity, ::Base.#+, ::Void, ::Base.Generator{Array{ArrFloat,1},##36#38}, ::Int64) at .\reduce.jl:46

Yeah, peculiar. I can’t figure out how to @inbounds the generator either. Also, the Ref form

immutable RefFloat
	x::Ref{Float64}
end
reffloats = map(x -> RefFloat(Ref{Float64}(x)), floats)
println(@benchmark sum(t.x[] for t in reffloats))

is slower than the array form, which contradicts things I’ve seen before. Ehh, I’m off today.

Ref is an abstract type; you probably want RefValue here.
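
That is, something like this sketch (made-up name), so the field has a concrete type:

immutable RefValFloat
	x::Base.RefValue{Float64}
end

r = RefValFloat(Ref(1.0))   # Ref(1.0) constructs a Base.RefValue{Float64}
r.x[] = 2.0                 # mutate through the RefValue
r.x[]                       # 2.0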

Not to be overly pedantic, but that’s not why Ref was added. It was specifically added to act like a view operation on one element, primarily for aiding in C-interop. In fact, it is an abstract type specifically because that allows it to perform its intended function (despite also making it sub-optimal as an annotation for making a field mutable).
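
For example, the C-interop use looks roughly like this (a sketch; it assumes libm’s frexp is loadable under that name on your platform):

# double frexp(double value, int *exp): returns the mantissa and writes the
# exponent through the pointer. A Ref{Cint} stands in for the int* argument.
e = Ref{Cint}(0)
m = ccall((:frexp, "libm"), Float64, (Float64, Ref{Cint}), 2.5, e)
(m, e[])   # (0.625, 2), since 2.5 == 0.625 * 2^2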

From a performance standpoint, it’s usually best to make the whole object immutable, or the whole object mutable. It’s better (for both performance and memory) to wrap an entire immutable object in a mutable object (including in a mutable struct or an Array), rather than using Ref (or RefValue, Array, or another mutable type) to selectively make specific fields mutable. This flattens out the memory hierarchy, meaning the computer needs to do fewer memory fetches to compute the answer.
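
A sketch of what that looks like in practice (made-up names): keep the data in an isbits immutable, and when it needs to change, replace the whole value inside a single mutable container.

immutable State
	a::Float64
	b::Float64
end

cell = [State(1.0, 2.0)]          # one mutable container, fields stored inline
cell[1] = State(3.0, cell[1].b)   # "mutate" by swapping in a whole new immutable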

Very helpful. Thanks all. To summarize and see if I got this right:

  • Passing an immutable(Float64) to a function passes the actual Float64
  • Passing a mutable(Float64) to a function passes a pointer pointing to the Float64
  • Passing an immutable(Array) to a function passes a pointer pointing to the Array
  • Passing a mutable(Array) to a function passes a pointer pointing to the pointer pointing to the Array
  • Operations on arrays incur overhead from the call to getindex (bounds checking, a function call), which can be reduced by disabling bounds checking via @inbounds or by using Ref (a type that originated to aid C-interop) instead of an array.
  • Although the performance cost of making an immutable field mutable via an array can be lessened by disabling bounds checking, data is better organized in memory by wrapping mutable data in mutable types and immutable data in immutable types, which makes the code more performant.

I don’t understand what you mean by mutable() and immutable(), but it’s not correct in general to assume that immutable objects are unboxed (i.e. passed as raw bits). Generally, only isbits objects are passed to functions unboxed. The definition of isbits is roughly the same as the definition of POD in other languages.
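
For example (reusing ImmFloat from above and sketching a second type): an immutable is only isbits, and hence passed unboxed, when all of its fields are isbits too.

immutable HasArray
	v::Vector{Float64}
end

isbits(ImmFloat)   # true: a single Float64 field, passed as raw bits
isbits(HasArray)   # false: it holds a reference to a mutable, heap-allocated Array,
                   # so it is not isbits and is not passed as raw bits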