Dispatch time using `Val` is an order of magnitude slower than looking up with `Dict`

I was going to post this as an issue, but then I reminded myself that I know very little about what’s really going on here, and thought better of it (apologies for any issues I might have posted in the past out of ignorance!).

Anyway, I was thinking that multiple dispatch of functions like f(::Type{Val{$x}}) should be essentially the same as looking up functions in a Dict and then executing them. Here’s what I did:

__precompile__(true)

module TestValDispatch

const N = 0x80
const FUNC_NAME = :f

function makefunc(n::Integer)
    :($FUNC_NAME(::Type{Val{$n}}) = 1)
end

macro definefuncs()
    args = [makefunc(n) for n ∈ 0x00:N]
    esc(quote
        $(args...)
        # Val{x} depends on the runtime value of x, so this call is a dynamic dispatch
        $FUNC_NAME(x::UInt8) = $FUNC_NAME(Val{x})
    end)
end

macro export_()
    esc(:(export $FUNC_NAME))
end

@definefuncs
@export_

const dict = Dict(n=>1 for n ∈ 0x00:N)  # const, so benchmarking isn’t penalized by a non-const global
export dict

end # module TestValDispatch

then

using TestValDispatch
using BenchmarkTools

const M = 0x0f

urand() = rand(0x00:M)  # a function, so each call draws a fresh key

bdispatch = @benchmark f($(urand()))
bdict     = @benchmark dict[$(urand())]

And the results

julia> bdispatch
BenchmarkTools.Trial: 
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     365.854 ns (0.00% GC)
  median time:      366.824 ns (0.00% GC)
  mean time:        367.376 ns (0.00% GC)
  maximum time:     511.533 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     199

julia> bdict
BenchmarkTools.Trial: 
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     28.011 ns (0.00% GC)
  median time:      28.249 ns (0.00% GC)
  mean time:        28.310 ns (0.00% GC)
  maximum time:     70.839 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     995

As you might expect, dispatch times increase with N, although from a few samples it seems the dispatch time roughly “converges” for large N. That makes sense, but it raises the question of why there is such a big performance difference between dispatch and Dict even for N = 1.
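As a sanity check on where the time goes (my own aside, not something I measured): x is only known at runtime, so Val{x} is not a concrete type at compile time, and the inner call in f(x::UInt8) has to be dispatched dynamically on every invocation. Something like

@code_warntype f(0x03)

should flag the resulting Any return type.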

Granted, I’m not 100% sure that I’m making a sensible “apples-to-apples” comparison between dispatch and Dict look-up. If I’m not, perhaps someone can tell me where I went wrong.

I know that this is probably a flagrant abuse of Val, however, it has me curious.

  1. You must not use Val this way in real code. And that’s exactly because:

    I know that this is probably a flagrant abuse of Val

  2. Dispatch is fundamentally more complex than a Dict lookup, because type matching is much more complex than hashing and direct value comparison. Dict lookup also doesn’t need to worry about multiple matches, specificity, or ambiguity. It may be possible to make dispatch match Dict-lookup performance, but it will never be better, and in general it will be much worse.
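To make that second point concrete, here is a toy sketch (the function g and its methods are hypothetical, purely for illustration): dispatch must rank every applicable method by specificity before it can pick one, which has no analogue in a hash lookup.

g(::Type{Val{1}})           = "exact match"
g(::Type{T}) where {T<:Val} = "fallback for any Val"

g(Val{1})   # "exact match": both methods apply, and the more specific one wins
g(Val{2})   # "fallback for any Val": only the second method applies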

That’s more or less what I figured, at least in the general case, but I thought that, since Val is so weakly coupled to the rest of the type hierarchy, it’d basically fall back on something like a hash lookup. Oh well, hopefully this post will prove interesting to someone who is curious about this in the future.
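Incidentally, the contrast with Val’s intended use is easy to demonstrate: when the value is a compile-time constant, the method is selected statically and the dispatch cost disappears. A sketch reusing f from above (the timing comment is my expectation, not a measured result):

h() = f(Val{0x03})   # 0x03 is a literal, so the method is resolved at compile time

@benchmark h()       # should take at most a few ns, versus the ~366 ns above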

Well, even if you limit your use case to Val{x::UInt8}, it is still no less expressive than a Dict{UInt8}. Moreover, this is an abuse of dispatch, and therefore not what the dispatch code is optimized for…

Got you. This was not intended as a complaint, nor was I in any way suggesting Val should be made to work this way. I had a use case where I thought “Wouldn’t it be cool if…”, but there’s no real reason not to use a Dict.
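For anyone who lands on this thread later, here is a minimal sketch of the “look the function up in a Dict and then execute it” pattern from the original post (the names FUNCS and call_f are mine):

using BenchmarkTools

# a table of trivial functions keyed by UInt8, mirroring the generated methods
const FUNCS = Dict{UInt8,Function}(n => (() -> 1) for n ∈ 0x00:0x80)

# a hash lookup plus a call through `Function`; the compiler cannot
# specialize on the callee, but there is no method-table search either
call_f(x::UInt8) = FUNCS[x]()

@benchmark call_f($(rand(0x00:0x0f)))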