My point was whether type instability is exclusively defined by whether the returned object has a concrete type, because if that’s the case, maybe it could create some issues?
In any case, now that I downloaded the package, I’m a little bit confused about the following, which could be related.
```julia
using DispatchDoctor

function foo1(x)
    y = Tuple(x)
    sum(y[1:2])
end

x = [1, 2, 3]
@code_warntype foo1(x)  # type unstable because of the vararg

@stable function foo2(x)
    y = Tuple(x)
    sum(y[1:2])
end

foo2(x)                 # returns the output
@code_warntype foo2(x)  # the red marks go away, and now it looks type stable
```
This is because the body of foo2 has been moved out into a separate function and replaced by the type-stability-checking code, as explained by @MilesCranmer above. So if the code you wrote was type stable, @code_warntype foo2(x) will be all blue (green?) by design. To see the inference of the function body you wrote, you need to figure out the gensymmed name and run @code_warntype on that.
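Since the gensymmed name isn’t predictable, here’s one way to hunt for it (a pure-Julia sketch; `foo2` is given a stand-in definition so the snippet runs standalone, without DispatchDoctor):

```julia
# Sketch: after `@stable`, the original body lives in a hidden, gensym-named
# function. Listing all symbols of the module (including hidden ones) can
# help locate any name that mentions "foo2":
foo2(x) = sum(Tuple(x)[1:2])  # stand-in definition so this snippet is self-contained

candidates = filter(n -> occursin("foo2", String(n)), names(@__MODULE__; all = true))
```

Running @code_warntype on the gensymmed function found this way shows the inference of the original body.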
Here you get the red Tuple{Vararg{Int64}} as expected. But note that the function is still type-stable, since the return type is fully inferred from the argument types. Type stability only implies that the Body::T line in @code_warntype is concrete, not that every internal variable has a concrete inferred type.
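One quick way to see that distinction is Base.promote_op, which reports the inferred return type for given argument types (a sketch, reusing the foo1 from above):

```julia
function foo1(x)
    y = Tuple(x)   # inferred as Tuple{Vararg{Int64}} for x::Vector{Int} — abstract
    sum(y[1:2])    # yet the result is still inferred concretely
end

# The return type is fully inferred from the argument types:
Base.promote_op(foo1, Vector{Int})  # Int64
```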
To add one more tip @alfaromartino — I would recommend using Cthulhu.jl for this as it lets you descend into the function body.
I guess another option is for me to leave the function body in both functions? Then inspection tools like @code_warntype would work. And I suppose it could improve source tracking for errors, and simplify the propagation of other macros (@matthias314 - relevant to your question too).
You would still generate the second function and call promote_op on it; but then just not actually use it. It would exclusively be for testing the type inference of itself.
To detect type instabilities inside a function when they don’t affect the inferrability of the output, you can also use JET.jl with the macros @test_opt and @report_opt.
So once that merges then @code_warntype will still be useful on stabilized functions.
Basically now the function body is copied rather than moved. And it uses the “simulator function” just to check the inferred type, but doesn’t actually call it.
It results in more codegen but it means fewer changes to the original function.
It will also simplify propagation of other macros like @propagate_inbounds
Thanks for all the responses! I’m interested in using it to discipline my work. However, although not related to the package, I’m always conflicted about how we should teach and understand type stability to non-programmers, or more generally people who only use Julia for applied work (I’m one of those).
In the sense of distinguishing between only inferring a concrete type for the output (stability) or inferring a concrete type for all expressions (groundedness). While the former is the definition used, like in this package, the latter is how we usually think about our code (by “we”, I mean people using Julia for applied work exclusively, without ever creating packages).
Is there any reason why type stability is defined in terms of output? I guess checking concrete types for each expression takes you down a rabbit hole? Or that any performance issue of a stable function would only be confined to that function, without propagating?
Just to note, the definition of “type instability” in DispatchDoctor is just a single function:
```julia
type_instability(::Type{T}) where {T} = !Base.isconcretetype(T)
type_instability(::Type{Union{}}) = false
```
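A pure-Julia sketch of what these definitions imply — note in particular that Vector{Any} counts as concrete, which is exactly why you might want to add an extra method for it:

```julia
# Mirror of the definitions above, runnable without the package:
type_instability(::Type{T}) where {T} = !Base.isconcretetype(T)
type_instability(::Type{Union{}}) = false

type_instability(Int64)          # false: concrete
type_instability(AbstractFloat)  # true: abstract
type_instability(Vector{Any})    # false: Vector{Any} is itself a concrete type!
```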
You could totally always add a method for whatever other definitions you want:
```julia
import DispatchDoctor as DD

DD.type_instability(::Type{<:AbstractArray{Any}}) = true
```
which results in
```julia
julia> DD.@stable f() = Any[1, 2, 3]
f (generic function with 1 method)

julia> f()
ERROR: TypeInstabilityError: Instability detected in `f` defined at
REPL[4]:1. Inferred to be `Vector{Any}`, which is not a concrete type.
```
I think JET.jl and Cthulhu.jl are already great at this. I want DispatchDoctor.jl to be more so something you can just “leave on” to alert you to any type instabilities that pop up, but be removed by the compiler if things are stable. It’s hard to add Test.@inferred for every possible combination of input arguments to every internal method.
I’ve also noticed there is a bit of a mental overhead from needing to manually test for type instability which leads to avoidance of doing so. But having a flag you can just turn on for your whole codebase makes it much easier.
So the compiler was smart enough to make that integer a float for us.
And thus if we use @stable on it, the doctor is happy:
```julia
julia> @stable function f(x)
           dx = x > 0 ? -1.0 : 1
           return x + dx
       end
f (generic function with 1 method)

julia> f(1.0)
0.0
```
whereas looking at it with @code_warntype would highlight a type instability.
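You can confirm that only the internal variable is abstract while the return type stays concrete (a sketch using Base.promote_op):

```julia
function f(x)
    dx = x > 0 ? -1.0 : 1   # dx is inferred as Union{Float64, Int64}
    return x + dx           # but Float64 + Union{Float64, Int64} is always Float64
end

Base.promote_op(f, Float64)  # Float64
```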
Obviously sometimes internal type instabilities matter. But there is also a reason to not worry about them, and just treat entire functions as the “atomic unit” of instability checking.
I belong to this “we” most of the time, but I disagree that this is how I think about my code, and I think DispatchDoctor makes the right choice in focusing on functions as the unit of stability. The implied meaning of @stable foo(...) = ... is that this method of foo will not trigger dynamic dispatch in its callers. Dynamic dispatch within foo is a separate concern, and if I’m concerned about that I’ll factor out the problematic lines from foo into separate functions and apply @stable to them as well. @code_warntype is a great tool for discovering these lines, so it’s great that it’s no longer broken by @stable.
Oh maybe my question was wrongly understood. I was literally asking why developers focus on the types of outputs exclusively, rather than all the expressions. It was just to learn, wasn’t related to the package.
My point is that developers and the official documentation emphasize type stability for performance. But then I can come up with the following function, which is type stable by construction but problematic.
```julia
function foo(x)
    y = sum(x[1:2])
    y::Int64
end

foo([1, 2, "hello"])
```
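One way to see that this passes the output-based check despite the Any element type (a sketch using Test.@inferred, which errors only if the inferred return type isn’t concrete):

```julia
using Test

function foo(x)
    y = sum(x[1:2])  # x is a Vector{Any}, so this involves dynamic dispatch
    y::Int64         # but the type assertion makes the return type concrete
end

@inferred foo([1, 2, "hello"])  # passes: inferred return type is Int64
```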
So, I’ve always wondered what kind of (technical?) reason there is for using type stability, rather than “groundedness” in the sense of this paper.
@alfaromartino Perhaps you are making an assumption that the definitions given in this paper are the same as used in the Julia community? As far as I can tell they are not. I haven’t heard groundedness before; most others just use “type stability” loosely to refer to either situation mentioned here.
Keep in mind that these analysis packages are just tools that a developer can use to improve their code. They still depend on the user to use them effectively. For example as @danielwe mentioned, if using DispatchDoctor, one might want to split out potentially problematic lines into separate functions so that they would get flagged correctly. But there will of course be many scenarios where it fails to catch an inference problem in code.
As mentioned previously, both Cthulhu.jl and JET.jl are other tools in the developer’s arsenal. Those are both quite popular, and work at the level of individual variables (Cthulhu successfully flags the problem in foo), so I think you may be jumping to conclusions if you ask “why developers focus on the types of outputs exclusively” – I do not actually think this is the case. It is just this new package DispatchDoctor.jl that uses this approach.
Indeed, people usually want every variable to be correctly inferred. However, you made a crucial observation here:
If you have a function that does weird things under the hood but returns an inferrable value, then it cannot cause too much harm. It’s the same principle underlying function barriers: if type-instability is unavoidable, try to limit its reach.
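A minimal sketch of that function-barrier principle (the names here are illustrative):

```julia
# The container's element type is only known at runtime, so the caller is
# unstable; the barrier confines the dynamic dispatch to a single call site.
process(v::Vector{T}) where {T<:Number} = sum(abs2, v)

function unstable_caller(flag::Bool)
    data = flag ? Int[1, 2, 3] : Float64[1.0, 2.0, 3.0]  # union of two vector types
    return process(data)  # barrier: each concrete Vector type gets its own compiled method
end
```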
As @gdalle mentions, since inferring the type of each variable is the ideal situation, maybe the official documentation should mention this? Although I assume there’s a reason why Julia’s developers stick to this definition.
I’m just going too far off topic. The important thing is: great package!
I am a bit confused since that second page you linked actually has the following subsection immediately underneath it:
This is specifically about type instability in variables because the return type here is stable. Maybe you missed those parts of the docs because they don’t refer to the “groundedness” term from that paper?
> For each type T, Type{T} is an abstract parametric type whose only instance is the object T.
So Type{T} contains the object T. But… Val{x} is a type that contains the object x – and Val{x} is type stable.
So even if the relation typeof(T) !== Type{T} that defines isconcretetype is not satisfied (because typeof returns DataType rather than Type{T}), I feel like you wouldn’t want to actually call Type{T} “unstable”.
Because otherwise
```julia
f(::Type{T}) where {T} = T
```
is technically type unstable for the input Int64! Which doesn’t make sense to me.
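A concrete illustration of the edge case in pure Julia:

```julia
f(::Type{T}) where {T} = T

typeof(f(Int64))       # DataType, not Type{Int64}
Int64 isa Type{Int64}  # true: Int64 is the unique instance of Type{Int64}
```

So the runtime typeof collapses to DataType even though inference can know the more precise Type{Int64}.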
I guess I’ll just add a workaround for this… Having functions return a type that is known at compile-time should certainly not be considered type-unstable, that’s a weird edge case.
Also, I realized the issue may have just been because DispatchDoctor is using its own specializing_typeof rather than typeof, so that it can get a Type{T} rather than DataType. This is so that the Base.promote_op call uses Type{T} rather than DataType in the argtypes... (otherwise everything would show up as an instability).
But in doing so, I was effectively also doing Base.isconcretetype(specializing_typeof(T)) which led to the issue.
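A hypothetical sketch of what such a specializing typeof could look like (my names, not necessarily DispatchDoctor’s actual implementation):

```julia
# Preserve Type{T} for type-valued arguments instead of collapsing them to DataType:
my_specializing_typeof(x) = typeof(x)
my_specializing_typeof(::Type{T}) where {T} = Type{T}

my_specializing_typeof(1.0)    # Float64
my_specializing_typeof(Int64)  # Type{Int64}
typeof(Int64)                  # DataType: the parameter T is lost
```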
IIRC when making a closure over a type, inference is marking it as DataType, which loses information that could be conferred to consumers of the closure if it were marked as Type{T}. So it seems like this is somewhat of an edge case of type inference itself.