Best way to select implementation based on runtime argument? (dispatch vs. control flow)

I’ve been running into the following question: What is the most “sensible” or idiomatic way to select a function implementation based on a runtime argument (by which I mean, it could potentially be passed e. g. as a command line argument by the user) identifying the implementation to use?

For illustration, I’ll use the example of reading data from different file formats. A function read_file(path, format) could take arguments path (giving the file path) and format, which could be a Symbol or String indicating what format the given file is in. How should this choice between different versions of read_file (one for each possible file format) be implemented?

It seems like a mundane thing, but somehow with Julia, the choice doesn’t necessarily seem obvious to me. It might be because “TIMTOWTDI” (there’s more than one way to do it), whereas for example with Python, approach (2b) (see below) seems like the most common and natural way of doing it. (Although I suppose something like approach (1b) could still be implemented in a similar fashion using OOP/classes. I think this is probably just my distaste for approach (1b) steering me away from it in general.)

Another thing to note is that these functions will generally be expensive to call (e. g. doing file IO), so it doesn’t matter so much if some of the approaches add a little more overhead to the function call than others.

Generally, I see two main ways of implementing the function/method selection: Either

  1. by using Julia’s dispatch system, i. e. the “file format” information is translated to the type domain, or
  2. just using normal control flow branches, i. e. if/else.

I came across two versions for approach (1):

Approach (1a)

read_file(path, ::Val{:hdf5}) = ...
read_file(path, ::Val{:binary}) = ...
read_file(path, ::Val{:text}) = ...
read_file(path, format::Symbol) = read_file(path, Val(format))

I’m aware of the performance tips on “values-as-parameters”, but as explained above, some function call overhead won’t really make much of a difference. On the other hand, if doing this somehow has a more serious impact than I think (e. g. by somehow slowing down compilation even in cases where the read_file function is not involved), that would of course be relevant. Otherwise, syntactically and from an organizational perspective, this looks really clear and neat to me!

Approach (1b)

abstract type FileFormat end
struct hdf5 <: FileFormat end
struct binary <: FileFormat end
struct text <: FileFormat end
get_format_type(format)::FileFormat = ...
read_file(path, ::hdf5) = ...
read_file(path, ::binary) = ...
read_file(path, ::text) = ...
read_file(path, format) = read_file(path, get_format_type(format))

This looks similar to traits, although it doesn’t actually have anything to do with traits since no other types are involved. I would perhaps rather call this a “type enum”, i. e. an enumeration whose elements are separate types. I ran across this in several threads here on Discourse.

However, it actually doesn’t seem very effective, since it’s still necessary to define a function (here get_format_type) to actually convert from the runtime value format to the corresponding type. So really, this just looks like manually re-implementing what Val does.
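For concreteness, a minimal sketch of what get_format_type might look like (the branch on the runtime Symbol is exactly the value-to-type conversion that Val would otherwise perform automatically):

```julia
abstract type FileFormat end
struct hdf5 <: FileFormat end
struct binary <: FileFormat end
struct text <: FileFormat end

# The manual value-to-type conversion step: a branch on the runtime Symbol
function get_format_type(format::Symbol)::FileFormat
    format === :hdf5   ? hdf5()   :
    format === :binary ? binary() :
    format === :text   ? text()   :
    error("Unknown format: $format")
end
```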

For approach (2), I also have two versions (with another two variants), but the differences here are mostly minor.

Approach (2a)

@enum FileFormat hdf5 binary text

read_file_hdf5(path) = ...
read_file_binary(path) = ...
read_file_text(path) = ...

function read_file(path, format::FileFormat)
	if format == hdf5
		read_file_hdf5(path)
	elseif format == binary
		read_file_binary(path)
	elseif format == text
		read_file_text(path)
	else
		error("Unreachable")
	end
end

I found this question here on Discourse about dispatching on enum values, where the answers were either to use approach (1a) or (2a) (this one). What I don’t like about this is that I have to use function names to carry the file format information. It seems much cleaner to actually keep the file format argument as an argument, as in approach (1a). Further, this violates “DRY” (don’t repeat yourself) and there is potential for mistakes here, since you manually have to make sure to use the correct function name again in each branch (possible copy-paste errors).

Approach (2b)

read_file_hdf5(path) = ...
read_file_binary(path) = ...
read_file_text(path) = ...
function read_file(path, format::Symbol)
	if format == :hdf5
		read_file_hdf5(path)
	elseif format == :binary
		read_file_binary(path)
	elseif format == :text
		read_file_text(path)
	else
		error("Unknown format")
	end
end

This is really the same as (2a), just using Symbol or String values instead of defining a separate enum. More terse, but less rigorous since there are now invalid values of format. On the other hand, I suppose I’m actually glossing over how the user input is converted into an enum value, so in some sense something equivalent to this has to be done somewhere in the program.
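(For illustration, a hypothetical version of that glossed-over conversion step, e. g. from a command line string, could be as trivial as the following; parse_format is a made-up helper name:)

```julia
# Hypothetical helper: normalize a user-supplied string (e.g. from ARGS)
# into the Symbol that gets passed on to read_file
parse_format(s::AbstractString) = Symbol(lowercase(s))

parse_format("HDF5")  # === :hdf5
```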

Approach (2a’)

@enum FileFormat hdf5 binary text

const read_file_map = Dict(
	hdf5 => path -> ...,
	binary => path -> ...,
	text => path -> ...,
)

read_file(path, format::FileFormat) = read_file_map[format](path)

This solves my main problems with (2a/b), but I still have to define read_file_map as a separate global object which really only exists for the purposes of the read_file function. At this point, I feel like I’m just re-implementing the equivalent of what Julia’s dispatch mechanism would automatically do for me in approach (1a) anyway!

Approach (2b’)

const read_file_map = (
	hdf5 = path -> ...,
	binary = path -> ...,
	text = path -> ...,
)

read_file(path, format::Symbol) = read_file_map[format](path)

Again the same as (2a’), but just using Symbol (like (2b)) instead of an enum (like (2a)).


Maybe use FileIO.jl (GitHub: JuliaIO/FileIO.jl, the main package for IO, loading all different kinds of files)?

I encourage approach 2a. Approach 1 places you at risk of bad runtime and especially compile time performance, but has the advantage of extensibility without modifying the source code.

I think runtime and compile time are much more important than extensibility without modifying source. Just modify the source instead of writing other source elsewhere to extend, and avoid the runtime and compile time costs.

If you do go with some variant of 1, I suggest at least asserting the return type.

read_file(path, ::Val{:hdf5}) = ...
read_file(path, ::Val{:binary}) = ...
read_file(path, ::Val{:text}) = ...
read_file(path, format::Symbol) = read_file(path, Val(format))::File

where Base.isconcretetype(File) == true to make sure that no type instabilities will propagate from this dynamic dispatch.
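For example (File here is a hypothetical concrete wrapper type, and the reader body is just a placeholder):

```julia
# Hypothetical concrete return type shared by all readers;
# isconcretetype(File) == true, so the assertion fully pins the type
struct File
    data::Vector{UInt8}
end

read_file(path, ::Val{:binary}) = File(read(path))  # placeholder reader

# The assertion stops type instability at the dynamic-dispatch boundary:
read_file(path, format::Symbol) = read_file(path, Val(format))::File
```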

If someone goes with 1, that means they prioritize external extensibility. Thus, I would recommend adding

Base.Experimental.@max_methods 1 function read_file end

before your first definition, so that we always have dynamic dispatches, and thus don’t need to recompile all code calling it when someone defines a new method.
If you’re already defining enough methods to go over the default limit (3), then this isn’t strictly necessary.

The single dynamic dispatch (single, because you’ve asserted the return type to make sure type instabilities don’t propagate, so no reason for more) is fast enough that it’s fine from a performance perspective, when the function you’re calling is as slow as reading a file. It is like 20ns.


The whole read_file business was just an example. It’s not just about reading an entire file into memory or creating an IO stream, but generally how to handle this kind of situation. FileIO.jl will also have implemented some kind of solution to a similar problem, but the considerations involved were likely very different because the mechanism and heuristics used there are much more complicated than in my example.

Thanks! Although actually, neither runtime/compile time (again, unless the consequences reach further than just calling this function) nor external extensibility are really my main concern at the moment. Instead, I’m focused more on making the code “clean”/maintainable/readable, although indeed, if that has global performance implications, it’s not really worth it. I guess my question is somewhat turning into “is there any serious problem with variant (1a)”, since it kind of looks like it’s turning out to be my favorite.

I think I have two questions about what you wrote:

  1. About asserting the return type: Could that also just be written by giving the return type, i. e. read_file(path, format::Symbol)::File = read_file(path, Val(format)), or is there some reason to prefer your version?
  2. What about the dynamic dispatch makes this type assertion necessary? In the other variants, it isn’t guaranteed either that all of the different functions (read_file_hdf5, read_file_binary, etc.) have the same return type, is it? (Apologies, I don’t have a deep understanding of type instabilities yet.)

And finally, when you refer to runtime and compile time costs, I’m assuming these are mitigated by the two suggestions (asserting the return type and using Base.Experimental.@max_methods)? Or is there anything else to watch out for beyond the 20ns dispatch overhead?

People not caring about compile time is a tragedy of the commons.
Is it worth an hour of your time to shave 5 seconds off of load time? Probably not. That isn’t worth anyone’s time.
So no one does it, and thus we all spend our time waiting for other people’s code to load and JIT compile.


Note that it’s easy to typo a symbol, but not an enum.
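A quick illustration of the difference, assuming an enum like in (2a):

```julia
@enum FileFormat hdf5 binary text

# A typo'd Symbol is still a perfectly valid Symbol, so the mistake only
# surfaces later, e.g. as a runtime MethodError from the Val dispatch:
:binray isa Symbol   # true

# A typo'd enum name is simply an undefined variable and errors immediately:
# julia> binray
# ERROR: UndefVarError: `binray` not defined
```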

Your version calls convert before the type assert. Mine just calls the type assert.
I prefer to disallow things convertible to a File.
Moreover, the convert call itself will be slow – a dynamic dispatch on an object of unknown type – while doing nothing, because the value is already the type you’re converting to.

I did say “neither runtime/compile time […] are really my main concern at the moment” 😉

Indeed. I suppose I’m missing a variant (1c), where I’m using an enum instead of Symbols, in analogy to (2a) vs (2b).
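A sketch of that hypothetical variant (1c), with placeholder return values standing in for the actual implementations:

```julia
@enum FileFormat hdf5 binary text

# Dispatch on Val of the enum value instead of Val of a Symbol
read_file(path, ::Val{hdf5})   = "read as HDF5"    # placeholder bodies
read_file(path, ::Val{binary}) = "read as binary"
read_file(path, ::Val{text})   = "read as text"
read_file(path, format::FileFormat) = read_file(path, Val(format))
```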

Ah, OK – makes sense.

Do you have a good explanation for the other question (if/why the Val approach specifically needs the type assert)? Intuitively, I don’t see why that would be necessary only in this particular case, since none of the variants are guaranteed to be type-stable without looking at all the function implementations. But perhaps the compiler has more trouble figuring out the return types with the value types version than with the others?

I am assuming every implementation method is type stable.
If so, it should be clear why the “2” methods are type stable, correct?

The compiler will realize that every branch results in the same type, so there is only one possible type it can return.

The “1” variants are trickier.
Val(format) is type unstable, the return type depends on the value of format. That is, Val(:text) returns an object of type Val{:text}, etc.

However, when there are a small number of potentially matching methods (<= 3 by default), the compiler can create type checks for all three of them.

julia> @enum Fruit apple orange kiwi

julia> e(::Val{apple}) = 1
e (generic function with 1 method)

julia> e(::Val{orange}) = 2
e (generic function with 2 methods)

julia> e(::Val{kiwi}) = 3
e (generic function with 3 methods)

julia> e(f::Fruit) = e(Val(f))
e (generic function with 4 methods)

julia> @code_typed e(apple)
CodeInfo(
1 ─ %1  = Core.apply_type(Base.Val, f)::Type{Val{_A}} where _A
│   %2  = %new(%1)::Val
│   %3  = (isa)(%2, Val{apple})::Bool
└──       goto #3 if not %3
2 ─       goto #8
3 ─ %6  = (isa)(%2, Val{orange})::Bool
└──       goto #5 if not %6
4 ─       goto #8
5 ─ %9  = (isa)(%2, Val{kiwi})::Bool
└──       goto #7 if not %9
6 ─       goto #8
7 ─ %12 = Main.e(%2)::Int64
└──       goto #8
8 ┄ %14 = φ (#2 => 1, #4 => 2, #6 => 3, #7 => %12)::Int64
└──       return %14
) => Int64

So the compiler realized that it must return an Int64.
However, there are three issues here:

  1. The Val call is still there and hasn’t been optimized away. Thus, @benchmark e($apple) takes around 90ns for me.
  2. You’re relying on a compiler optimization rather than being explicit.
  3. The benefit of this approach is that you can add more methods later – but then the compiled function above would be invalid! This is especially true with Symbol, since @enum won’t let you add more enum values, while new Val methods can be added at any time. Regardless: if the compiler thinks a new method is callable – the benefit of the dispatch approach – then the method above is invalid, as it only branches between the old methods. The compiler suddenly needs to recompile the e(::Fruit) function, every function that called the old e(::Fruit) needs to be invalidated and replaced, then their callers, and so on. This happens transparently, but you may notice the added compilation time when Julia needs to recompile huge swaths of code for this reason.

So, because the advantage of the methods approach is to allow people to add methods, @max_methods is useful to avoid ever needing to recompile:

julia> Base.Experimental.@max_methods 1 function m end

julia> m(::Val{apple}) = 1
m (generic function with 1 method)

julia> m(::Val{orange}) = 2
m (generic function with 2 methods)

julia> m(::Val{kiwi}) = 3
m (generic function with 3 methods)

julia> m(f::Fruit) = m(Val(f))
m (generic function with 4 methods)

julia> @code_typed m(apple)
CodeInfo(
1 ─ %1 = Core.apply_type(Base.Val, f)::Type{Val{_A}} where _A
│   %2 = %new(%1)::Val
│   %3 = Main.m(%2)::Any
└──      return %3
) => Any

Max methods controls how many matches it considers.

We now have a dynamic dispatch. This won’t need to be updated if we add more possible m methods, because the old definition will automatically be able to look up the correct new definition.
But note that it is now type unstable!

Because it is no longer considering all possibilities, and because it may also call new possibilities in the future that don’t exist yet, it returns Any.

So, if we want to make the return type a part of our API, we can just add an assert

julia> m(f::Fruit) = m(Val(f))::Int
m (generic function with 4 methods)

julia> @code_typed m(apple)
CodeInfo(
1 ─ %1 = Core.apply_type(Base.Val, f)::Type{Val{_A}} where _A
│   %2 = %new(%1)::Val
│   %3 = Main.m(%2)::Any
│        Core.typeassert(%3, Main.Int)::Int64
│   %5 = π (%3, Int64)
└──      return %5
) => Int64

and now the return type is known, and we don’t have to worry about invalidations here at all.

As for runtime performance:

julia> @benchmark e($apple)
BenchmarkTools.Trial: 10000 samples with 956 evaluations.
 Range (min … max):  90.408 ns … 743.141 ns  ┊ GC (min … max): 0.00% … 83.01%
 Time  (median):     91.541 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   92.017 ns ±  10.843 ns  ┊ GC (mean ± σ):  0.19% ±  1.44%

  ▂▂▂▂▃▃▃▄▆███████▆▄▄▄▅▅▅▅▄▄▄▃▃▃▃▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▁▂▂▂▂▂▂▂▂▂▂▂▂▂ ▃
  90.4 ns         Histogram: frequency by time         95.4 ns <

 Memory estimate: 16 bytes, allocs estimate: 1.

julia> @benchmark m($apple)
BenchmarkTools.Trial: 10000 samples with 946 evaluations.
 Range (min … max):  98.060 ns … 880.346 ns  ┊ GC (min … max): 0.00% … 86.69%
 Time  (median):     99.654 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   99.987 ns ±  12.748 ns  ┊ GC (mean ± σ):  0.21% ±  1.48%

  ▂▁▂▁▂▂▃▃▄▄▅▇███████████████████████▇▇▆▆▅▅▄▄▄▄▄▄▃▃▃▃▃▃▃▃▃▃▂▂▂ ▅
  98.1 ns         Histogram: frequency by time          102 ns <

 Memory estimate: 16 bytes, allocs estimate: 1.

This cost is pretty small, way lower than the cost of recompiling.

Meaning if you want extensibility, I think the @max_methods 1 is a good choice.
I guess Symbol is better for extensibility than @enum, too.

Approach “2” is “much” faster in terms of runtime:

julia> function b(f::Fruit)
           if f == apple
               1
           elseif f == orange
               2
           elseif f == kiwi
               3
           else
               throw("Unknown fruit")
           end
       end
b (generic function with 1 method)

julia> @code_typed b(apple)
CodeInfo(
1 ─ %1  = Main.apple::Core.Const(apple)
│   %2  = (f === %1)::Bool
└──       goto #3 if not %2
2 ─       return 1
3 ─ %5  = Main.orange::Core.Const(orange)
│   %6  = (f === %5)::Bool
└──       goto #5 if not %6
4 ─       return 2
5 ─ %9  = Main.kiwi::Core.Const(kiwi)
│   %10 = (f === %9)::Bool
└──       goto #7 if not %10
6 ─       return 3
7 ─       Main.throw("Unknown fruit")::Union{}
└──       unreachable
) => Int64

julia> @benchmark b($apple)
BenchmarkTools.Trial: 10000 samples with 1000 evaluations.
 Range (min … max):  2.404 ns … 6.242 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     2.413 ns             ┊ GC (median):    0.00%
 Time  (mean ± σ):   2.421 ns ± 0.090 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

      ▃█  ▄▇▄                                                
  ▆▅▃▄██▇▆███▄▂▂▃▃▃▃▃▃▂▂▂▂▂▂▂▂▂▁▁▂▁▁▁▂▁▂▁▁▁▁▂▁▁▂▁▂▂▁▁▁▁▂▃▅▅ ▃
  2.4 ns         Histogram: frequency by time       2.46 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.

That is, “much” in relative terms.
If the code doing actual work took 1 microsecond instead of being free, we’d be looking at 1.1 vs 1 microsecond instead of 0.1 vs 0.
If the code doing actual work took 1 millisecond…

So, I think it’s fine to go with the Val(::Symbol) approach, as long as you set max_methods and type assert the return.
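To summarize, the combined pattern might look like the following sketch (File and the reader bodies are placeholders):

```julia
# Stand-in for whatever concrete return type the readers share
struct File
    data::Vector{UInt8}
end

# Force a single dynamic dispatch, so adding methods later
# doesn't invalidate code that calls read_file:
Base.Experimental.@max_methods 1 function read_file end

read_file(path, ::Val{:hdf5})   = File(read(path))   # placeholder readers
read_file(path, ::Val{:binary}) = File(read(path))
read_file(path, ::Val{:text})   = File(read(path))

# Entry point: the assertion pins the return type, so the
# instability of the dynamic dispatch doesn't propagate
read_file(path, format::Symbol) = read_file(path, Val(format))::File
```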


Would it make sense to have special enums that can assist runtime dispatch by creating specialized methods for Val?

julia> macro dispatch_enum(basetype, args...)
           name = basetype isa Expr ? basetype.args[1] : basetype
           esc(quote
               @enum($((basetype, args...)...))
               @generated Base.Val(var"#"::$name) = let ifex(enum)=:(if $enum≡var"#"; Val{$enum}() end),
                       enums = collect(values(Base.Enums.namemap($name))), ret = ifex(enums[1]), expr = ret
                   for enum = enums[2:end];  expr = last(push!(expr.args, ifex(enum)))  end
                   expr.head, expr.args = expr.args[end].head, expr.args[end].args
                   ret
               end
           end)
       end
@dispatch_enum (macro with 1 method)

julia> @dispatch_enum foo a b c

julia> methods(Val)
# 2 methods for type constructor:
 [1] Val(var"#"::foo)
     @ Main REPL[1]:5
 [2] Val(x)
     @ essentials.jl:801

julia> @code_warntype Val(a)
MethodInstance for Val(::foo)
  from Val(var"#"::foo) @ Main REPL[1]:5
Body::Union{Val{a}, Val{c}, Val{b}}
1 ─ %1  = (Main.a ≡ #)::Bool
└──       goto #3 if not %1
2 ─ %3  = Core.apply_type(Main.Val, Main.a)::Core.Const(Val{a})
│   %4  = (%3)()::Core.Const(Val{a}())
└──       return %4
3 ─ %6  = (Main.c ≡ #)::Bool
└──       goto #5 if not %6
4 ─ %8  = Core.apply_type(Main.Val, Main.c)::Core.Const(Val{c})
│   %9  = (%8)()::Core.Const(Val{c}())
└──       return %9
5 ─ %11 = Core.apply_type(Main.Val, Main.b)::Core.Const(Val{b})
│   %12 = (%11)()::Core.Const(Val{b}())
└──       return %12

Thanks a lot for the detailed explanation – that’s definitely cleared up some things for me!