This is a really good question with a kind of unexpectedly philosophical answer!
If you have mutable value and reference types, then you end up with a bifurcated type system that has two fundamentally different kinds of values with incompatible semantics. In the following example, imagine `T` is some kind of mutable struct with a field `field::Int`, but we don't know if it's a hypothetical mutable value type or a normal mutable reference type:

```julia
function mutate_maybe!(x::T, v::Int)
    x.field = v
end

x = T(1)
mutate_maybe!(x, 2)
println(x.field)
```
Does this code print `1` or `2`? In other words: are mutations to a value visible outside of the function in which the mutation occurs, or not?
The answer is that you have no idea: it depends on whether `T` is passed by value or by reference. This is what I mean by "incompatible semantics". This kind of thing is bad enough in a language like C# with static typing, where at least you know from the types in the program which case you're dealing with. Now imagine it in a language like Julia where type annotations are optional! I could have left the types off of the signature of `mutate_maybe!` and it would mutate some kinds of values and do nothing to other kinds. The very semantics of the language are unclear without knowing whether you're dealing with a value type or a reference type. Not great, Bob.
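For concreteness, here is how this plays out in Julia as it actually exists, where the two kinds of struct are spelled differently (the names `RefT` and `ValT` are just for illustration, and the type annotation is left off the signature, as described above):

```julia
# A mutable struct has reference semantics: mutations made inside a
# function are visible to the caller.
mutable struct RefT
    field::Int
end

# An immutable struct cannot be mutated at all, so the ambiguous case
# simply cannot arise.
struct ValT
    field::Int
end

function mutate_maybe!(x, v::Int)
    x.field = v
end

r = RefT(1)
mutate_maybe!(r, 2)
println(r.field)      # prints 2: the caller sees the mutation

s = ValT(1)
mutate_maybe!(s, 2)   # ERROR: immutable struct of type ValT cannot be changed
```

Julia sidesteps the dilemma by making mutability a property of the type rather than of how it's passed: a mutation is either always visible or always an error.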
Note that this also makes function call boundaries rather bizarrely significant: you can't just inline the operations of a function when its arguments are passed by value, since you have to make sure that mutations which happened across a function boundary have no effect, while also making sure that mutations which didn't happen across a function boundary do have an effect. If I inline the logic of a function, whether automatically or manually, it changes the meaning of those operations! Oops.
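To make the inlining hazard concrete, here's a sketch that simulates pass-by-value in current Julia with an explicit copy at the call boundary (the `Box` type and the `deepcopy` stand-in are illustrative assumptions, not how a compiler would actually implement value semantics):

```julia
mutable struct Box
    field::Int
end

# Simulated pass-by-value: the copy happens at the call boundary, so the
# callee only ever mutates its own private copy.
function set_by_value!(b::Box, v::Int)
    b = deepcopy(b)   # stand-in for "the argument was passed by value"
    b.field = v
end

b = Box(1)
set_by_value!(b, 2)
println(b.field)   # prints 1: the caller's b is untouched

# Naively inlining the body drops the call boundary, and with it the copy:
b.field = 2
println(b.field)   # prints 2: the "same" operation now means something else
```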
If, on the other hand, we disallow mutating some types, then this problem vanishes completely: all values have the same semantics, and we're free to pass the immutable ones by value or by reference as we see fit, because there's no way to tell the difference!
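You can even observe this indistinguishability directly with `===`, Julia's egal comparison, which compares immutable values by contents but mutable objects by identity (type names again just for illustration):

```julia
struct ValT           # immutable: a value type
    field::Int
end

mutable struct RefT   # mutable: a reference type
    field::Int
end

# Two immutable values built from the same contents are indistinguishable;
# Julia may copy or share their storage and you can't tell which it did.
println(ValT(1) === ValT(1))   # true

# Two mutable objects are distinct even with identical contents, because
# each one has its own identity.
println(RefT(1) === RefT(1))   # false
```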
Moreover, immutability is a genuine feature for many kinds of objects. Imagine if you could mutate the value of an integer. This was actually possible due to a bug in an early Fortran compiler: you could change the value of `2` to be `3`, and after that, anywhere anyone used the value `2`, they got a `3` instead.
The intuitive wrongness of that Fortran compiler bug gets at the fundamental fact that numbers are a value type: if you increment the number 2, you do not get the same number with a different value, you get a different number. It fundamentally does not make sense to mutate a number. The fact that Julia allows user-defined types to be immutable in the same way is a powerful feature: you can make your own types with this same number-like property, where if you change them, they are a different thing, not the same thing with a different value.
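Here is what that number-like property looks like for a user-defined type; the `Point` type and `move` function are purely illustrative:

```julia
# An immutable type with number-like semantics: "changing" it means
# constructing a different value, never modifying the original in place.
struct Point
    x::Float64
    y::Float64
end

# Like `2 + 1` giving you 3 rather than changing 2, this returns a new
# Point and leaves `p` untouched.
move(p::Point, dx, dy) = Point(p.x + dx, p.y + dy)

p = Point(0.0, 0.0)
q = move(p, 1.0, 2.0)
println(p)   # Point(0.0, 0.0): the original value, unchanged
println(q)   # Point(1.0, 2.0): a different value
```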
Bottom line: value types should not be mutable. The fundamental question about a type is: "if I change the value of this thing, is it a different thing, or the same thing with different content?" If the answer is that it's a different thing, then you have a value type and it should be immutable. If the answer is that it's the same thing with a different value, then the object must have an identity independent of its contents, so it's a reference type and may be mutable. The very concept of a mutable value type is muddled and ill-conceived.