De-`NamedTuple`izing keyword arguments

I’m trying to understand if there’s optimization potential in Makie regarding its ubiquitous use of keyword arguments. My idea is to write a macro that makes a function gather its keyword arguments not into a NamedTuple but into a Dict{Symbol,Any}. That type, when passed down to callees, only causes one compilation, while a NamedTuple needs a new one for each combination of keyword names and value types.
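A very rough sketch of what such a macro could look like (the name @dict_kwargs is made up, and this only handles the plain function f(...; kwargs...) ... end form with a trailing slurp):

macro dict_kwargs(fdef)
    sig  = fdef.args[1]           # the call signature, e.g. :(myfunction(; kwargs...))
    body = fdef.args[2]
    params = sig.args[2]          # Expr(:parameters, ...) holding the keyword arguments
    slurp  = params.args[end]     # the trailing slurp, Expr(:..., :kwargs)
    kwname = slurp.args[1]
    # rebind the slurped Base.Pairs to a Dict{Symbol,Any} before the original body runs
    fdef.args[2] = quote
        $kwname = Dict{Symbol,Any}($kwname)
        $body
    end
    return esc(fdef)
end

@dict_kwargs function plotfunc(; kwargs...)
    do_something_else(kwargs)     # kwargs is now a Dict{Symbol,Any}
end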

The simplest thing I can do is of course just to gather all keyword arguments into a dict manually wherever needed:

function myfunction(; kwargs...)
    d = Dict{Symbol,Any}(kwargs)
    do_something_else(d)
end

But this still incurs a NamedTuple specialization at the level of Core.kwcall for every call, I think.
Has anyone ever measured whether all these NamedTuples are a problem for the compiler? For example, every one of these calls compiles a new specialization:

scatter(...; color = :red)
scatter(...; color = "red")
scatter(...; color = "red", marker = :rect)
scatter(...; color = "red", marker = :rect, markersize = 1)
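Outside of Makie the effect is easy to see from the NamedTuple types those keyword sets turn into:

# each keyword combination is a distinct NamedTuple type,
# so Core.kwcall sees a new call signature every time
typeof((; color = :red))                                   # one Symbol field
typeof((; color = "red"))                                  # same name, different field type
typeof((; color = "red", marker = :rect))                  # more names
typeof((; color = "red", marker = :rect, markersize = 1))  # and yet another type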

But it seems this behavior is baked deep into the language, because it happens directly at the lowering stage, both when defining a function with keyword arguments and when calling it. I understand it’s better for final performance, but for “bag of keyword arguments” interfaces like Makie’s it’s not great.
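The call-site part of that is easy to check in the lowered code:

Meta.@lower myfunction(; color = :red)
# the lowered code constructs the NamedTuple (color = :red,) and then calls
# Core.kwcall(nt, myfunction), so the NamedTuple type is baked into every call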

If I wanted to overload the kwcall behavior, can I overload the helper function that gets created at function definition time? Its name is gensymmed and I’m not sure how it’s supposed to be retrieved after the fact (var"#myfunction#131" in this case):

julia> Core.kwcall(Dict(:a => 1), myfunction)
ERROR: MethodError: no method matching var"#myfunction#131"(::Dict{Symbol, Int64}, ::typeof(myfunction))

Closest candidates are:
  var"#myfunction#131"(::Base.Pairs{Symbol, V, Tuple{Vararg{Symbol, N}}, NamedTuple{names, T}} where {V, N, names, T<:Tuple{Vararg{Any, N}}}, ::typeof(myfunction))
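(If I remember right, there is an internal, non-public helper in Base that SnoopCompile uses to map a method to its gensymmed keyword-body function, which would at least avoid hard-coding the name; take this with a grain of salt:)

m = first(methods(myfunction))
Base.bodyfunction(m)    # should give back var"#myfunction#131", if I'm remembering the helper correctly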

In Makie, there are many methods that take both a positional ::Attributes argument (which is basically a special Dict) and ; kwargs..., so it would be nice to be able to override the kwcall behavior for a whole set of functions so that their kwargs are immediately merged into the positional Attributes argument automatically, without manually doing this step everywhere (though maybe that is the easiest path after all).
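The manual version of that step would look something like this (a sketch; myrecipe is made up, and Attributes can be indexed like a Dict):

function myrecipe(attr::Attributes, args...; kwargs...)
    # this is the merge I'd like the kwcall overload to do automatically
    for (key, value) in kwargs
        attr[key] = value
    end
    # ... from here on only attr gets passed further down
end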

A solution for optional despecialization of keyword arguments would be useful. Overloading anything from Core seems like it will eventually bite you. Could you do what DataFrames.jl did and just have the option of the last positional argument being Pair{Symbol, Any}...?
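Roughly like this, with made-up names (note it would probably have to be Pair{Symbol} rather than Pair{Symbol, Any}, so that concrete pair types like Pair{Symbol, Symbol} still match):

# options come in as trailing Pair positional arguments and go straight into a Dict,
# so no NamedTuple type ever appears in the call
plot_something(data, options::Pair{Symbol}...) = plot_something(data, Dict{Symbol,Any}(options))
plot_something(data, options::Dict{Symbol,Any}) = options   # real work would go here

plot_something([1, 2, 3], :color => :red, :markersize => 10)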

Can’t really do that because all these functions already take args.... One could in principle turn that into a single positional argument, now that I think about it, but I’m not sure it’s worth the gain.

I hacked around a little with Core.kwcall, and while this particular example does not give me a gain in inference time (rather a small loss), I did see a gain for recompilation in Makie, where the logic that needs to be reinferred is more complex than x + y.

using SnoopCompile

# keyword entry point; its body never runs because the Core.kwcall overload
# below redirects keyword calls to the Dict method
function func(; kwargs...) end

# the actual work happens on a plain Dict{Symbol,Any}, independent of which
# keywords were passed, so only this one method instance gets compiled
function func(kwargs::Dict)
    return kwargs[:x] + kwargs[:y]
end

function Core.kwcall(@nospecialize(nt::NamedTuple), ::typeof(func), args...)
    kwargs = Dict{Symbol, Any}()
    # this iteration scheme is supposed to cause less specialization for different namedtuples
    names = fieldnames(typeof(nt))
    for name in names
        kwargs[name] = getfield(nt, name)
    end
    return func(kwargs, args...)
end

func(; x = 1, y = 2)                                 # warm-up call
tinf = @snoopi_deep func(; x = 1, y = 2, z = 3.0)    # inference triggered by a new keyword combination

# reference version that goes through the normal NamedTuple kwargs machinery
func2(; kwargs...) = kwargs[:x] + kwargs[:y]
func2(; x = 1, y = 2)                                # warm-up call

tinf2 = @snoopi_deep func2(; x = 1, y = 2, z = 3.0)
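To compare the two I then just looked at the totals of the two inference trees; if I remember SnoopCompile's accessors correctly, that's:

# rough comparison: total time recorded in each inference tree (accessor name from memory)
inclusive(tinf), inclusive(tinf2)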

There was a discussion recently about (I think) a similar problem in the context of stack traces: DifferentialEquations Package Kills Performance Everywhere when TruncatedStacktraces is used

I’m not knowledgeable enough to determine whether that applies here, but maybe you’ll find something useful in the discussion; some solution approaches are discussed near the end of the thread.

Do NamedTuple kwargs really cause major overhead, compared to compiling the actual underlying functions? I’m curious to see how much that is…
Not sure about Makie, but commonly kwargs are used close to the user-facing level, turning into regular positional args downstream.

While this is not the case in Makie (we just have bags of options being passed very far down), turning them into positional args doesn’t help either, because that still causes specialization. But yeah, in our case the best course of action seems to be just converting to a Dict at the surface and going from there.

There’s no deep specialization for each combination of passed parameters in this case. They just get unpacked on the top level, stuff like f(x; kwargs...) = _f(x, get(kwargs, :init, 0), get(kwargs, :default, 0)).

AFAICT, a recommended solution for these problems is to define a custom struct representing the underlying parameters. Then either make this struct part of the user-facing API, or convert kwargs to this struct at the very top level, filling in defaults for omitted kwargs.
Or a couple of such structs, if there are several sets of orthogonal options (e.g. line options and figure options).
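Roughly like this (all names made up):

Base.@kwdef struct ScatterOptions
    color::Any          = :black
    marker::Symbol      = :circle
    markersize::Float64 = 9.0
end

# kwargs are converted exactly once at the user-facing level; everything further
# down only ever sees the single concrete ScatterOptions type
myscatter(args...; kwargs...) = myscatter_impl(ScatterOptions(; kwargs...), args...)
myscatter_impl(opts::ScatterOptions, args...) = nothing   # downstream work goes here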
