This package provides the @stable macro as a more ergonomic way to use Test.@inferred within a codebase:
using DispatchDoctor: @stable

@stable function f(x)
    if x > 0
        return x
    else
        return 1.0
    end
end
which will then throw an error for any type instability:
julia> f(2.0)
2.0

julia> f(1)
ERROR: return type Int64 does not match inferred return type Union{Float64, Int64}
Stacktrace:
 [1] error(s::String)
   @ Base ./error.jl:35
 [2] f(x::Int64)
   @ Main ~/PermaDocuments/DispatchDoctor.jl/src/DispatchDoctor.jl:18
 [3] top-level scope
   @ REPL[4]:1
I could see this being useful for maintaining type hygiene in a codebase – you see type instabilities early, rather than needing to fix things when code is already slow.
The @stable macro itself is pretty simple (using MacroTools):
using Test
using MacroTools: splitdef, combinedef

function _stable(fex::Expr)
    fdef = splitdef(fex)
    closure_func = gensym("closure_func")
    # Wrap the original body in a zero-argument closure and run @inferred on it:
    fdef[:body] = quote
        let $(closure_func)() = $(fdef[:body])
            $(Test).@inferred $(closure_func)()
        end
    end
    return combinedef(fdef)
end
However, this @inferred call is quite slow – a massive 400ns per call.
Is there anything I can do to only trigger the Test.@inferred on the first call with the given set of input types? (Is my only option to use @generated?)
Here’s a benchmark:

julia> using DispatchDoctor: @stable

julia> using BenchmarkTools

julia> @stable f(x) = x > 0 ? x : 0.0;

julia> @btime f(1.0);
  567.568 ns (12 allocations: 752 bytes)

julia> g(x) = x > 0 ? x : 0.0;

julia> @btime g(1.0);
  0.875 ns (0 allocations: 0 bytes)
Any tricks I should try?
Ideally I would like to have the Test.@inferred completely compiled away by the second run… Not sure if that’s possible or not.
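One naive approach I can think of is to memoize on the argument types. A rough sketch (check_first_call and _checked are illustrative names, not anything in the package); note the Set lookup itself still runs on every call, so this wouldn't fully compile away:

using Test

const _checked = Set{Any}()

function check_first_call(f::F, args...) where {F}
    sig = (F, map(typeof, args)...)
    if !(sig in _checked)
        # Pay the @inferred cost only the first time this signature is seen:
        Test.@inferred f(args...)
        push!(_checked, sig)
    end
    return f(args...)
end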
I’d wager the stated problem is solvable, even without metaprogramming. However, I don’t like the idea: it would incur a heavy penalty on the first call, and it seems like it’d make using a debugger less pleasant.
IMO using @inferred in the test suite is preferable.
It would be quite tedious to explicitly test inference over every permutation of input types for every internal function in a library, especially for deeply nested functions, where a failed inference may not be picked up by a top-level @inferred and would otherwise require manual @descend work; those methods are not practical to cover by hand. But tagging it at the call site would let you automate this.
Anyway, I’m not looking to convince anyone of the utility at this stage. I hate type instabilities and I hate hunting for them, so I want to make this @stable fast enough to use in my own stuff.
I’ve always wanted something small and convenient like this! I’ve also seen a macro floating around for erroring on all allocations inside a macro-ed function; that could also live in such a package (perhaps combined into @stable?).
Hmm, you’re right. What about hiding this behavior behind a compile-time preference, using Preferences.jl? That way it could be turned off for production but turned on in the test suite.
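A minimal sketch of how that could look, assuming a boolean preference named instability_check (a hypothetical name), read once inside the package module:

using Preferences

# Read once at package load/precompile time (this must run inside the package
# module, since preferences are keyed by the package's UUID). The preference
# name "instability_check" is hypothetical.
const INSTABILITY_CHECK = @load_preference("instability_check", true)

macro stable(fex)
    # With the preference disabled, @stable becomes a zero-cost no-op:
    INSTABILITY_CHECK || return esc(fex)
    return esc(_stable(fex))   # _stable as defined earlier in the thread
end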
How does this sound for working with keywords? The downside is that it has to call the internal function Core.kwcall, but it seems like promote_op doesn’t define a keyword-compatible method:
function stable_wrap(f::F, args...; kwargs...) where {F}
    T = if isempty(kwargs)
        Base.promote_op(f, map(typeof, args)...)
    else
        Base.promote_op(Core.kwcall, typeof(NamedTuple(kwargs)), F, map(typeof, args)...)
    end
    Base.isconcretetype(T) || error("...")
    return f(args...; kwargs...)::T
end
It seems to work for a variety of scenarios too, which is great:
@testitem "smoke test" begin
using DispatchDoctor
@stable f(x) = x
@test f(1) == 1
end
@testitem "with error" begin
using DispatchDoctor
@stable f(x) = x > 0 ? x : 1.0
# Will catch type instability:
@test_throws TypeInstabilityError f(1)
@test f(2.0) == 2.0
end
@testitem "with kwargs" begin
using DispatchDoctor
@stable f(x; a=1, b=2) = x + a + b
@test f(1) == 4
@stable g(; a=1) = a > 0 ? a : 1.0
@test_throws TypeInstabilityError g()
@test g(; a=2.0) == 2.0
end
@testitem "tuple args" begin
using DispatchDoctor
@stable f((x, y); a=1, b=2) = x + y + a + b
@test f((1, 2)) == 6
@test f((1, 2); b=3) == 7
@stable g((x, y), z=1.0; c=2.0) = x > 0 ? y : c + z
@test g((1, 2.0)) == 2.0
@test_throws TypeInstabilityError g((1, 2))
end
Slightly related question… does anybody know how to unit-test that the generated LLVM IR is as expected?
julia> using DispatchDoctor

julia> @stable f(x) = x
f (generic function with 1 method)

julia> @code_llvm f(1)
;  @ /Users/mcranmer/PermaDocuments/DispatchDoctor.jl/src/DispatchDoctor.jl:65 within `f`
define i64 @julia_f_460(i64 signext %0) #0 {
top:
  ret i64 %0
}
I can do this check manually but would prefer to have the CI scream at me when Julia no longer compiles away the check.
Replace str with some IR that shows up when the check isn’t compiled away.
Plenty of examples can be found via GitHub code search, and I imagine CUDA.jl, GPUCompiler.jl, and LLVM.jl also have more examples.
(And btw, do you foresee any issues with the use of Core.kwcall? I noticed it wasn’t available in earlier Julia versions, so I’m basically having @stable be a no-op on Julia earlier than 1.10.)
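For reference, that gate is just a compile-time branch (a sketch, with _stable as defined earlier):

@static if VERSION >= v"1.10.0-"
    macro stable(fex)
        return esc(_stable(fex))
    end
else
    # Core.kwcall isn't available, so @stable becomes a no-op:
    macro stable(fex)
        return esc(fex)
    end
end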
vector.body is a name LLVM typically gives to vectorized loop bodies, so this code checks to make sure a gigantic broadcast vectorized.
You could do things like add the debuginfo=:none kwarg to code_llvm, and then check the number of lines.
Or, for totally trivial cases, you could try comparing string distance with what the optimized IR is supposed to look like (with debuginfo=:none of course; e.g. we don’t care about LineNumberNode paths matching).
EDIT: you should maybe replace the String(take!(io)) from FastBroadcast’s tests with sprint.
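Putting those suggestions together, a minimal sketch of such a CI test (the line-count threshold and the jl_throw substring are assumptions about what the IR looks like when the check does or doesn't fold away):

using Test, InteractiveUtils
using DispatchDoctor: @stable

@stable f(x) = x

ir = sprint(io -> code_llvm(io, f, Tuple{Int}; debuginfo=:none))

# If the check compiled away, the optimized body should be tiny:
@test count(==('\n'), ir) < 10
# And no error-throwing branch should survive:
@test !occursin("jl_throw", ir)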
It’s great that you’ve done this! Maybe you or someone else can be nerd-sniped into making improvements building on it, e.g. applying one or more macros globally, like a REPL mode that would implicitly apply @stable to your f.
That is, in that mode you wouldn’t need to write the macro yourself; you’d only need it at the regular Julia prompt, which you might no longer use most of the time.
[We already have a package for checked arithmetic, and a package for a REPL mode that enables it; the above REPL mode could include that too, and be called debug…]
Ideally all functions (that you care about) would be type-stable (and non-allocating where it matters), but it’s a learning curve, and I don’t think it can be checked at compile time for arbitrary types. Your example relu code isn’t type-stable, since it uses 0.0; it should use zero(x) to also work for e.g. Float32 (and one(x) where that applies). Division with / (and I guess \) gives Float64 for integer arguments, another stability trap.
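Concretely, for that pattern:

relu_unstable(x) = x > 0 ? x : 0.0      # Float32 input infers Union{Float32, Float64}
relu_stable(x)   = x > 0 ? x : zero(x)  # return type always matches typeof(x)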
Would you want to check for such things, to get type stability at compile time for most or all generic code? Often it’s enough to know the code is type-stable for the types I actually use at runtime. You merged credit for a performance trick minutes ago; is there now no overhead when the code is type-stable (for some types, that is; for others you’d get a type-instability error)?
The hard part would be handling include.
At that point, it may be worth playing with Core.Compiler/inference instead, to see if you can create a module-level Base.Experimental option like @optlevel or @max_methods.
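For reference, the existing module-level options look like this; a stability-checking analogue would be hypothetical:

module MyPackage

# Existing module-level Base.Experimental options:
Base.Experimental.@optlevel 1
Base.Experimental.@max_methods 2

# A hypothetical analogue for stability checking (does not exist today):
# Base.Experimental.@stability_check true

end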
Btw, I found a weird case of Julia’s specialization rules interfering with this interface:
using DispatchDoctor

@stable f(a, t::Type{T}) where {T} = sum(a; init=zero(T))

f([1f0, 1f0], Float32)
Despite the plain function being type-stable, this actually fails the stability check because of Julia’s specialization rules:
ERROR: TypeInstabilityError: Instability detected in function `f`
with arguments `(Vector{Float32}, DataType)`. Inferred to be
`Any`, which is not a concrete type.
Stacktrace:
 [1] #_stable_wrap#1
   @ ~/PermaDocuments/DispatchDoctor.jl/src/DispatchDoctor.jl:25 [inlined]
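The likely culprit: typeof(Float32) is just DataType (as the error message shows), so the promote_op call asks inference about a non-concrete DataType argument. A hedged workaround sketch (specializing_typeof is an illustrative name):

# Recover the Type{T} information that plain typeof discards:
specializing_typeof(x) = typeof(x)
specializing_typeof(::Type{T}) where {T} = Type{T}

# Then, inside stable_wrap, use it in place of typeof:
#     T = Base.promote_op(f, map(specializing_typeof, args)...)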