I was wondering if anyone could explain why changing the type of a variable is problematic in Julia.
I (think) I understand why type instabilities are a problem when they arise conditional on the value (rather than the type) of input arguments. When that happens, as I understand it, the compiler is in trouble because it only conditions its code on the function's input types, not input values, so it can't make a clean type inference. For example:
function x(a::Integer)
    if a % 2 == 0
        return a
    else
        return "a is odd"
    end
end
is type unstable, because if a is even the function returns an integer, and if it's odd it returns a string. The compiler doesn't condition on the value of a, just its type (here, Integer). But I don't see why
function foo()
    x = 1
    for i = 1:10
        x = x / bar()
    end
    return x
end
is problematic, since it seems like the changes in variable type are fully deterministic and thus, I would have thought, fully transparent to the compiler.
Could anyone explain why that’s something the compiler can’t handle?
In theory the compiler could do this, but in practice it's difficult. Note that I'm speaking as someone who doesn't hack on the compiler, so the compiler devs might want to correct some details, but here's how I understand it.
Think about implementing it. You have to keep your floats and ints separate, so in the typed code you'd want to create a new _x to hold the float, initializing it with _x = convert(Float64, x), then use _x in the loop and know to return _x. That would work in this case, but there are a lot of cases where it wouldn't. If x is mutable (an array, say), then the function bar could in theory be mutating it. Or you could want to mutate x after the return, so you'd have to be able to prove that returning _x looks exactly the same to the user as returning x.
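Spelled out by hand, that rewrite would look something like this (a sketch of the transformation, with bar stubbed out to return a Float64 since the original example didn't define it):

bar() = 2.0  # stand-in; any function returning a Float64 works here

function foo_shadow()
    x = 1                       # x keeps its Int binding and is never touched again
    _x = convert(Float64, x)    # shadow variable with a fixed, concrete type
    for i = 1:10
        _x = _x / bar()         # every operation on _x is now fully inferred
    end
    return _x                   # return the shadow in place of x
end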
The assumption that made your example work is that x is immutable, so no side effects can happen. With the new optimizer, someone could in theory write an optimization pass such that, if a type is immutable and you get a Union out of something that was strictly typed, it checks whether there's a point where a shadow variable would make the computation fully inferred. I can see this pass taking quite a bit of time, though, if you wanted to check every single operation for a good point to recover type stability, so it would need some good heuristics. And even then it won't get the mutable case, where you'd have to work out that x cannot be changed in the future via an escape analysis or something like that.
But you can play around with macros and generated functions to see if you can come up with a way to do this. That would be interesting.
function foo()
    x = 1
    for i = 1:10
        x = x / 2
    end
    return x
end
improved a lot from 0.6 to 0.7: from 162.397 ns (19 allocations: 304 bytes) down to 18.397 ns (0 allocations: 0 bytes) (although the type-stable version is still a lot faster, at 1.686 ns (0 allocations: 0 bytes)). So the compiler now does in fact do lots of optimizations on small Union types.
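(Numbers in that format come from BenchmarkTools' @btime. For anyone who wants to reproduce them, the type-stable comparison is presumably just the same loop started from a Float64, so x never changes type:)

using BenchmarkTools

function foo()
    x = 1            # Int at first, Float64 after the first division
    for i = 1:10
        x = x / 2
    end
    return x
end

function foo_stable()
    x = 1.0          # Float64 from the start, so x has one type throughout
    for i = 1:10
        x = x / 2
    end
    return x
end

@btime foo()         # prints time plus allocation counts, as quoted above
@btime foo_stable()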
Regarding mutability, I'm still a little confused. Are there situations where one can mutate an object in place in a manner that changes its type? I thought that was disallowed for exactly this reason.
I guess the reason I'm most confused is that this isn't about changing the type of the underlying object (which, as you say, would have side effects), but rather about re-binding the symbol to a different object altogether. And I'm having trouble thinking of how that could ever have side effects (though maybe that's a failure of imagination?).
Let’s modify the above to work with a mutable object:
function foo()
    x = Integer[1]
    y = x[1] / 2
    x = Float64[y]
    return x
end
We didn't mutate the vector associated with x; we re-bound x to a new vector. And that seems (to me) always predictable (if we don't have some conditional branch à la standard type instability).
No, you're not getting it. You cannot ever do x = Integer[1] and then x = Float64[y]. The idea that this can work is an abstraction provided by a higher-level language. Think of it like C: when you declare x, you can only make it an integer or a float. If you want it to be both, you can make it a Box struct that does type-checking before operations. That method always works, and it is Julia's default compiler setup. However, the compiler could in theory create a new variable. So the user code says x = ..., but you interpret that as _x = ... and then declare _x with the correct type. But if you create a new variable, you have to make sure that using the new variable gives you exactly the same result as using the boxed variable (otherwise the compiler will silently be generating wrong code) which is difficult.
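To make the Box idea concrete, here's a rough sketch in Julia itself (an illustration of the concept, not the actual runtime machinery):

# Illustration only: a box that holds a value of unknown type. Every use
# has to go through a runtime type check / dynamic dispatch on `value`.
mutable struct Box
    value::Any      # could hold an Int, a Float64, a Vector, ...
end

function halve!(b::Box)
    b.value = b.value / 2   # result type is only known at runtime
    return b
end

b = Box(1)    # starts out holding an Int
halve!(b)     # now holds a Float64; legal, but every access pays for the check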
I'll muse on that more, though initially, I guess my question is why we can't always use the _x trick when a user re-assigns x to a new variable/object of a different type than its initial assignment.
Sounds like, as you say, "you have to make sure that using the new variable gives you exactly the same result as using the boxed variable (otherwise the compiler will silently be generating wrong code) which is difficult." But I guess I'm just not clear on why that's the case in a re-assignment situation. It seems like it would be straightforward there.
Edit: to make my concern more clear:
As you noted before, it seems like shadow vars have issues when there are side effects. But it seems like the rule "if the user assigns a value of a new type to an already-assigned symbol, use a shadow var" would be safe, because (unless I'm confused about this?) there are no type-changing in-place mutations in Julia (to ensure type stability), so those situations would never result in side effects.
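(A quick REPL check of that premise: assigning an element of a different type either converts to the container's element type or throws; it never changes the container's type:)

julia> x = [1]       # Vector{Int64}
1-element Array{Int64,1}:
 1

julia> x[1] = 2.0    # converted to Int64(2); the eltype is unchanged
2.0

julia> x[1] = 2.5    # cannot convert, so this errors rather than changing the type
ERROR: InexactError: Int64(2.5)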
Well, I don't know the new IR. If it does this, then hey, sounds like this optimization has already come.
Something like:
y = [1]

function foo()
    x = y
    for i in 1:10
        x = bar(x)
    end
    x
end

function bar(x)
    global y += 1
    x./y
end
where you're silently modifying x from a distance. If you made x and _x refer to different things without correcting for this, the generated code would be wrong, so you'd have to look out for this. So at least with v0.6's IR, where variables had one type throughout a given scope, this would have had to be accounted for.
But x is only silently modified in the first iteration. Using henchmen unrolling and the _x trick, the following seem to produce the same result:
julia> y = [1]
1-element Array{Int64,1}:
 1

julia> function foo()
           x = y
           for i in 1:10
               x = bar(x)
           end
           x
       end
foo (generic function with 1 method)

julia> function bar(x)
           global y += 1
           x./y
       end
bar (generic function with 1 method)

julia> foo()
1-element Array{Float64,1}:
 2.50521e-8

julia> y = [1]
1-element Array{Int64,1}:
 1

julia> function foo()
           x = y
           _x = bar(x)
           for i in 2:10
               _x = bar(_x)
           end
           _x
       end
foo (generic function with 1 method)

julia> function bar(x)
           global y += 1
           x./y
       end
bar (generic function with 1 method)

julia> foo()
1-element Array{Float64,1}:
 2.50521e-8
Well, in this case henchmen unrolling does the trick, but since Julia doesn't do this (at least on v0.6), that's a pretty firm answer as to why it's not optimized yet, and how it could be in the future. I'm trying to think of a more difficult example involving references, but it seems that a type change requires a reference change, so maybe there isn't an "action at a distance" example in the type-change case. If there isn't one, then henchmen unrolling is all you need, as long as you only apply the shadow variable where there is a type change (it's easy, of course, to find an example where there isn't a type change and changing a reference is incorrect).
Or maybe the new optimizer does it all. A few people here could give us an update on that.
Peeling off one loop iteration does the trick in this case. The hard part about this optimization is the potential for exponential code inflation in the case of deeply nested loops.
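To see how the inflation compounds, consider a nested version (my own sketch, not something from the compiler):

function nested_sum(n)
    t = 0                        # Int here, Float64 after the first +=
    for i = 1:n
        for j = 1:n
            t += 1 / (i + j)     # t changes type on the very first pass
        end
    end
    return t
end

# Peeling by hand, you'd need a peeled first iteration of the inner loop
# both inside the peeled first iteration of the outer loop and inside the
# remaining outer iterations: four loop bodies instead of one, and it
# roughly doubles again with every additional level of nesting.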
Ah, ok – so if I follow this correctly, the problem only emerges when there are loops, and it’s emerging because loops are kinda like little functions that are, in effect, type unstable (if we think of them as functions that take the index as the input)?
And if we just unrolled all loops this wouldn’t be an issue, but that’s infeasible / not computationally worthwhile? (I have very little understanding of how loops work at a low (assembly) level…)
The optimization is doable, it's just never really been a top priority. Type-unstable loops are often the cause of whole-function type instability, so it's usually better to just fix the instability, and the slowness acts as a kind of red flag anyway. For example:
function sum_inverses(n)
    t = 0
    for i = 1:n
        t += 1/i
    end
    return t
end
You can optimize this by peeling one loop iteration off like so:
function sum_inverses(n)
    t = 0
    if n ≤ 0
        return t # always Int (i.e. `0`)
    else
        t += 1/1
        for i = 2:n
            t += 1/i
        end
        return t # always Float64
    end
end
However, this still has the problem that t is an Int (i.e. 0) when n ≤ 0 and a Float64 when n > 0. So generally, we’ve leaned towards documenting this and encouraging people to write type-stable code instead of putting a bandaid on it.
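(For reference, the type-stable version being encouraged here is just to initialize t with the type it will end up having:)

function sum_inverses(n)
    t = 0.0          # Float64 from the start, so t has one type throughout
    for i = 1:n
        t += 1/i
    end
    return t
end

Now t is a Float64 on every path, including the empty n ≤ 0 case.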
That one really confused me because, before thinking about these weird loop-based cases, I couldn't figure out why it should be a problem – it seems easy to see what's going on, and seems like just a normal use of a dynamic language. But it does get flagged by @code_warntype, and apparently really slows things down.
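(By "flagged" I mean running

julia> @code_warntype sum_inverses(10)

and seeing the accumulator reported as t::Union{Float64, Int64} rather than a concrete type.)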
Any chance this is something that could be easily fixed outside of loops?