Surprising type instability in apply_type

It seems surprising that type inference doesn’t work here:

f(Tp) = Tp{Int, Int}
julia> f(Tuple)
Tuple{Int64, Int64}

julia> @code_warntype f(Tuple)
MethodInstance for f(::Type{Tuple})
  from f(Tp) in Main at REPL[5]:1
1 ─ %1 = Core.apply_type(Tp, Main.Int, Main.Int)::Any
└──      return %1

It seems like the compiler has all the information it needs to figure out the output type. Is this a conscious limitation of the inference system, for performance reasons?

If the first argument to apply_type is a constant, inference works just fine, for example g(Tp) = Tuple{Tp, Tp}.
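A minimal sketch of that constant-head case (run in a fresh session; `h` is just an illustrative name chosen to avoid clashing with the `g` defined below):

```julia
# Here the head of the type application (Tuple) is a compile-time constant,
# so Core.apply_type(Tuple, Tp, Tp) can be inferred even though Tp is a
# runtime value argument.
h(Tp) = Tuple{Tp, Tp}

h(Int)  # Tuple{Int64, Int64} on a 64-bit build, where Int === Int64
```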

I’m running Julia 1.8.

To pass types as arguments, I think the better (inference-compatible) way is to use type selectors, i.e. a ::Type{Tp} signature. Otherwise, the only thing the compiler can infer statically is the type of Tuple (which is DataType), not the actual value Tuple, which is the type you want to apply.

See here for instance:

julia> g(::Type{Tp}) where {Tp} = Tp{Int, Int}
g (generic function with 1 method)

julia> @code_warntype g(Tuple)
MethodInstance for g(::Type{Tuple})
  from g(::Type{Tp}) where Tp in Main at REPL[2]:1
Static Parameters
  Tp = Tuple
Body::Type{Tuple{Int64, Int64}}
1 ─ %1 = Core.apply_type($(Expr(:static_parameter, 1)), Main.Int, Main.Int)::Core.Const(Tuple{Int64, Int64})
└──      return %1
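With the ::Type{Tp} signature, Tp becomes a static parameter and the call is fully inferable. One way to check this (a sketch using the Test stdlib, not from the original thread):

```julia
using Test

# Type-selector signature: Tp is a static parameter, known to the compiler.
g(::Type{Tp}) where {Tp} = Tp{Int, Int}

g(Tuple)            # Tuple{Int64, Int64}, same value as the original f
@inferred g(Tuple)  # passes: the return type is inferred concretely
```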

Ok that’s helpful, thanks!