Force specialization on kwargs

If I have a function like this:

f_vararg(varargs::Int...) = tuple(varargs...)

I can force specialization on varargs with either of the following:

g_vararg(varargs::Vararg{Int, N}) where {N} = tuple(varargs...)
h_vararg(varargs::Vararg{<:Any, N}) where {N} = tuple(varargs...)
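
I usually check whether specialization actually happened by calling the function and then inspecting the method’s recorded specializations (the exact printing depends on the Julia version; recent versions also have a Base.specializations helper, if I remember right):

g_vararg(1, 2, 3)                              # trigger compilation
(@which g_vararg(1, 2, 3)).specializations     # should now include an entry for three Int64 arguments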

Now suppose instead that I have a function like this:

f_kwarg(; kwargs...) = (; kwargs...)

How can I force specialization on the keyword arguments?

How do you check specialization of functions with keyword arguments? Does the following show that they are apparently specialized automatically?

julia> f_kwarg(; kwargs...) = (; kwargs...)
f_kwarg (generic function with 1 method)

julia> (@which f_kwarg(a=1, b=2.0, c=:three)).specializations

julia> @code_lowered f_kwarg(a=1, b=2.0, c=:three)
CodeInfo(
1 ─      kwargs... = Base.pairs(@_2)
│   %2 = Main.:(var"#f_kwarg#3")(kwargs..., @_3)
└──      return %2
)

julia> (@which var"#f_kwarg#3"((a=1, b=2.0, c=:three), f_kwarg)).specializations

julia> f_kwarg(a=1, b=2.0, c=:three)
(a = 1, b = 2.0, c = :three)

julia> (@which f_kwarg(a=1, b=2.0, c=:three)).specializations
Core.TypeMapEntry(nothing, Tuple{var"#f_kwarg##kw",NamedTuple{(:a, :b, :c),Tuple{Int64,Float64,Symbol}},typeof(f_kwarg)}, nothing, svec(), 0x0000000000000001, 0xffffffffffffffff, MethodInstance for (::var"#f_kwarg##kw")(::NamedTuple{(:a, :b, :c),Tuple{Int64,Float64,Symbol}}, ::typeof(f_kwarg)), true, true, false)

julia> (@which var"#f_kwarg#3"((a=1, b=2.0, c=:three), f_kwarg)).specializations
Core.TypeMapEntry(nothing, Tuple{var"##f_kwarg#3",Base.Iterators.Pairs{Symbol,Any,Tuple{Symbol,Symbol,Symbol},NamedTuple{(:a, :b, :c),Tuple{Int64,Float64,Symbol}}},typeof(f_kwarg)}, nothing, svec(), 0x0000000000000001, 0xffffffffffffffff, MethodInstance for #f_kwarg#3(::Base.Iterators.Pairs{Symbol,Any,Tuple{Symbol,Symbol,Symbol},NamedTuple{(:a, :b, :c),Tuple{Int64,Float64,Symbol}}}, ::typeof(f_kwarg)), true, true, false)

Julia does not [yet] dispatch on keyword arguments.
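
If the goal is just to get specialized code for the keyword argument types, one workaround (a sketch; `_f_kwarg` is a made-up helper name) is to forward them as a NamedTuple positional argument, which Julia specializes on like any other argument:

f_kwarg(; kwargs...) = _f_kwarg(values(kwargs))   # values(::Pairs) returns the underlying NamedTuple
_f_kwarg(nt::NamedTuple) = (; nt...)              # positional argument, so the usual specialization rules apply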

I don’t think we can trust the output of @code_lowered etc. when it comes to specialization. See e.g. the issue “Can we support a way to make `@code_typed` stop lying to us all? :)” (JuliaLang/julia#32834) and the Julia performance tips at https://github.com/JuliaLang/julia/blob/master/doc/src/manual/performance-tips.md:

Note that @code_typed and friends will always show you specialized code, even if Julia would not normally specialize that method call.


Well, dispatch is a different question entirely. Keyword arguments do not participate in dispatch. But I’m asking if I can get Julia to compile specialized code for the types of the keyword arguments.

If I understand correctly, specialization on keyword arguments would not require keyword arguments to participate in dispatch.


I’m using @code_lowered just to get the name of the “body function” var"#f_kwarg#3", since it looks like (@which f_kwarg(a=1, b=2.0, c=:three)).specializations is showing the specializations of the “kwsorter” function.

In that REPL session, AFAICT both the keyword sorter and the body function are specialized (as I see Tuple{Int64,Float64,Symbol} in both). But I don’t know if this is the right way to check.

Yeah, it certainly does seem as if specialization is happening.

I am skeptical though. If specialization does not happen by default on varargs (thus requiring us to add the Vararg type parameter), why would it happen by default on keyword arguments?

I think the more likely situation is that we are seeing misleading output, and specialization is not actually happening here.

Yeah, I agree it’s strange and that’s why I feel like I’m missing something.

But a function call with keyword arguments is lowered directly to the construction of a named tuple:

julia> Meta.@lower f_kwarg(a=1, b=2.0, c=:three)
:($(Expr(:thunk, CodeInfo(
    @ none within `top-level scope'
1 ─ %1 = Core.tuple(:a, :b, :c)
│   %2 = Core.apply_type(Core.NamedTuple, %1)
│   %3 = Core.tuple(1, 2.0, :three)
│   %4 = (%2)(%3)
│   %5 = Core.kwfunc(f_kwarg)
│   %6 = (%5)(%4, f_kwarg)
└──      return %6
))))

(See also the “Julia Functions” page of the Julia documentation.)

So maybe that’s why it’s easier for the compiler to handle this?
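
To make that concrete, here is roughly the same call written out by hand on the Julia version used here (Core.kwfunc is internal, and newer versions lower to Core.kwcall instead, so treat this as illustrative only):

julia> nt = NamedTuple{(:a, :b, :c)}((1, 2.0, :three))
(a = 1, b = 2.0, c = :three)

julia> Core.kwfunc(f_kwarg)(nt, f_kwarg)   # what f_kwarg(a=1, b=2.0, c=:three) lowers to
(a = 1, b = 2.0, c = :three)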

Could you elaborate on this? If you have multiple specializations of a method for different combinations of kwarg types, how would you utilize, i.e. call, these different specializations if not through dispatch?


I recall reading a post by Jeff, Stefan, or some other core dev saying that code is specialized on kwarg types, just not dispatched on them. I cannot remember where it was, sorry.

Example:

julia> f(;kw) = values(kw)
f (generic function with 1 method)

julia> @code_llvm f(;kw = 1.0)

;  @ REPL[1]:1 within `f##kw'
define double @"julia_f##kw_17248"([1 x double] addrspace(11)* nocapture nonnull readonly dereferenceable(8)) {
top:
; ┌ @ namedtuple.jl:95 within `getindex'
   %1 = getelementptr inbounds [1 x double], [1 x double] addrspace(11)* %0, i64 0, i64 0
; └
  %2 = load double, double addrspace(11)* %1, align 8
  ret double %2
}

julia> @code_llvm f(;kw = 1)

;  @ REPL[1]:1 within `f##kw'
define i64 @"julia_f##kw_17215"([1 x i64] addrspace(11)* nocapture nonnull readonly dereferenceable(8)) {
top:
; ┌ @ namedtuple.jl:95 within `getindex'
   %1 = getelementptr inbounds [1 x i64], [1 x i64] addrspace(11)* %0, i64 0, i64 0
; └
  %2 = load i64, i64 addrspace(11)* %1, align 8
  ret i64 %2
}
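
Since, as quoted above, @code_llvm and friends always show specialized code even when Julia would not normally specialize, a cross-check in the spirit of the earlier .specializations inspection is to actually call the function first; if the specialization is real, the kwsorter’s specializations should include entries for both NamedTuple{(:kw,),Tuple{Float64}} and NamedTuple{(:kw,),Tuple{Int64}}:

julia> f(kw = 1.0); f(kw = 1);

julia> (@which f(kw = 1)).specializations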

This is my recollection also: kwargs are specialized on. I believe it was a 1.0 change, but I can’t find it in the release notes.


Thanks for the example. I understand that kwargs are specialized on. But how is this example different from actually having two methods and dispatching to either one based on the kwargs types?

Otherwise put, what is the difference between methods (in the dispatch sense), of which we have one in your example, and specializations, i.e. different natively compiled code for a single method (at least two in your example)?

Isn’t the fact that different specializations get called depending on the type of your kwargs just a form of dispatch?

Please enlighten me :slight_smile:

I couldn’t really say, but I suspect dispatch is much more complex than specialization. You can constrain the signature of a method as a whole in many ways that are not just the product of the types of each argument. For example, f(x::T, y::T) where {T} establishes a correlation between argument types. I suspect the only reason for not allowing multiple dispatch on kwargs is this added complexity. If we just had simple dispatch like f(x::Int, y::Float64), it would indeed be much the same as specialization, as far as I understand.
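
To make that concrete (a throwaway example, nothing kwarg-specific):

julia> samekind(x::T, y::T) where {T} = true;

julia> samekind(x, y) = false;

julia> samekind(1, 2), samekind(1, 2.0)
(true, false)

Choosing between those two methods is genuinely dispatch; no per-call specialization of a single method body could express the ::T/::T constraint.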

I guess the major difference is that dispatch, through methods, allows different semantics whereas specialization optimizes a single function body (the meaning is fixed).
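
Or, as a tiny sketch of that difference:

# Dispatch: two methods, two different meanings.
describe(x::Integer) = "an integer"
describe(x::AbstractString) = "a string"

# Specialization: one method, one meaning; the compiler may still emit
# separate native code for double(1) and double(1.0).
double(x) = 2x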


From reading this thread, it seems like specialization does occur on keyword arguments, but not dynamic dispatch (which I don’t totally understand, because in my mental model that makes the specialization seem pointless). Is there a way to use @nospecialize on keyword arguments?
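
One thing I might try, though I’m not sure it has the intended effect: forward the kwargs to a positional helper (here a made-up `_g` with a placeholder body) and put @nospecialize on that argument:

g(; kwargs...) = _g(values(kwargs))   # forward the kwargs as a NamedTuple
_g(@nospecialize(nt)) = length(nt)    # @nospecialize applies to an ordinary positional argument; length(nt) is just a placeholder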