Why doesn't `@nospecialize` work in this example?

Just discovered that using @nospecialize on a method loses its effect as soon as it’s called by a specialized method.

julia> using MethodAnalysis

julia> struct Foo{X}
           x::X
       end

julia> @nospecialize

julia> has_trait(T::Type) = false
has_trait (generic function with 1 method)

julia> has_trait(T::Type{<:Foo}) = true
has_trait (generic function with 2 methods)

julia> @specialize

julia> has_trait(typeof(Foo(1)))
true

julia> has_trait(typeof(Foo("a")))
true

# `has_trait` doesn't specialize on the parametric type of `Foo`
julia> methodinstances(has_trait) 
1-element Vector{Core.MethodInstance}:
 MethodInstance for has_trait(::Type{<:Foo})

julia> do_something_with_trait(x) = has_trait(typeof(x)) ? "success" : "failure"
do_something_with_trait (generic function with 1 method)

julia> do_something_with_trait(Foo(1))
"success"

julia> do_something_with_trait(Foo("a"))
"success"

# `do_something_with_trait` specializes, forcing `has_trait` to be specialized?
julia> methodinstances(has_trait) 
3-element Vector{Core.MethodInstance}:
 MethodInstance for has_trait(::Type{Foo{Int64}})
 MethodInstance for has_trait(::Type{Foo{String}})
 MethodInstance for has_trait(::Type{<:Foo})

Is there some sort of trick to this, similar to how using where {T} negates despecialization?

It may not be specialized, but it may be inlined and/or const-propped. Both of those force specialization; @nospecialize is a hint, not a command.

If you want, you can turn off both inlining and const-prop with

@noinline Base.@constprop :none has_trait...
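One hedged way to see this at the call site (a sketch; exact output depends on the Julia version): inspect the caller’s typed IR. If constant propagation succeeded, the has_trait call has been folded away and the body reduces to the constant "success".

code_typed(do_something_with_trait, (Foo{Int},))  # check whether a call to `has_trait` survives in the IR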

My understanding was that @nospecialize affects code generation, not inference, so I assumed that code that wasn’t specialized could still be inlined and benefit from constant propagation.

On v1.8.2 I still get method specialization when swapping in these lines…

@noinline Base.@constprop :none has_trait(@nospecialize T::Type) = false
@noinline Base.@constprop :none has_trait(@nospecialize T::Type{<:Foo}) = true

Is the Type{<:Foo} forcing specialization? What if you don’t have that?


julia> using MethodAnalysis

julia> struct Foo{X}
           x::X
       end

julia> @noinline Base.@constprop :none has_trait(@nospecialize T::Type) = T <: Foo
has_trait (generic function with 1 method)

julia> has_trait(typeof(Foo(1)))
true

julia> has_trait(typeof(Foo("a")))
true

julia> methodinstances(has_trait)
1-element Vector{Core.MethodInstance}:
 MethodInstance for has_trait(::Type)

julia> do_something_with_trait(x) = has_trait(typeof(x)) ? "success" : "failure"
do_something_with_trait (generic function with 1 method)

julia> do_something_with_trait(Foo(1))
"success"

julia> do_something_with_trait(Foo("a"))
"success"

julia> methodinstances(has_trait)
3-element Vector{Core.MethodInstance}:
 MethodInstance for has_trait(::Type)
 MethodInstance for has_trait(::Type{Foo{Int64}})
 MethodInstance for has_trait(::Type{Foo{String}})

@nospecialize is targeted at the callee, but the caller is still allowed to try and guess what type is returned. It therefore doesn’t block inference directly, though loss of specialization can indirectly block inference. https://github.com/JuliaLang/julia/pull/41931

You might need Base.inferencebarrier:

julia> do_something_with_trait(x) = has_trait(typeof(Base.inferencebarrier(x))) ? "success" : "failure"
do_something_with_trait (generic function with 1 method)

julia> do_something_with_trait(Foo(1))
"success"

julia> do_something_with_trait(Foo("a"))
"success"

julia> methodinstances(has_trait)
2-element Vector{Core.MethodInstance}:
 MethodInstance for has_trait(::Type)
 MethodInstance for has_trait(::DataType)

Hmm, even @noinline Base.@constprop :none has_trait(@nospecialize x) = true ends up creating new method instances.

Is all this expected behavior or should I file an issue?

Should Base.inferencebarrier be documented?

It’s all expected. Again, the caller is allowed to specialize with respect to inference, and in cases like this, where the return type is Const, that’s the end of the story. But in normal cases there will be only one LLVM implementation generated for all those “specializations.” MethodInstance ≠ LLVM code: @nospecialize does not directly affect caller inference, only codegen.


Thanks for your thoughtful responses. I’m assuming the generation of these method instances increases compile time even if it results in only one LLVM method?

Inference definitely takes time. That’s the motivation for a “stronger” version, @noinfer (re-posting the link: https://github.com/JuliaLang/julia/pull/41931). But runtime performance can sometimes be greatly enhanced by letting the caller introspect on the return type, hence the specific design of @nospecialize.
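As a hedged illustration of that trade-off (using the definitions above; results may vary by Julia version), the caller’s own return type can still be inferred concretely even though the callee is unspecialized:

Base.return_types(do_something_with_trait, (Foo{Int},))  # expected to report String, keeping downstream code type-stable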

Thanks Tim!
@noinfer would be a great tool to have, and I commented on the related PR in case public interest has any impact on moving it forward.
I think a complete solution to the problem I’m trying to solve would allow specializing on just the parametric wrapper, without interfering with inference.
This would help with inheritance by construction when all we want to do is f(x) = f(parent(x)), without a new method instance of f when x has different fields accompanying its parent data.
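For concreteness, here is a minimal sketch of the pattern I mean (the Wrapper type and its fields are invented purely for illustration); as things stand, each distinct Wrapper{P,M} still gets its own MethodInstance of the forwarding method:

struct Wrapper{P,M}
    parent::P
    meta::M          # extra fields that ride along with the parent data
end
Base.parent(w::Wrapper) = w.parent

f(x::AbstractVector) = sum(x)   # illustrative method defined on the parent type
f(w::Wrapper) = f(parent(w))    # inheritance by construction: just forward to the parent

f(Wrapper([1, 2, 3], "tag"))    # returns 6

# Ideally f(Wrapper([1, 2], "a")) and f(Wrapper([1, 2], :b)) would share one
# instance of the forwarding method, specializing only on the parent's type.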