Performance of hasmethod vs try-catch on MethodError

To help me better understand Julia's performance behavior, I'm trying to understand the large difference between two strategies for handling potentially undefined methods: hasmethod vs try-catch.

I’ve read through Are exceptions in Julia "Zero Cost" - #8 by StefanKarpinski, but that thread seemed to demonstrate negligible differences, whereas the difference in this example is quite significant.

# Manual check
function f(x)
    hasmethod(cos, typeof((x,))) && return cos(x)
    return one(typeof(x))
end

# Try-catch
function g(x)
    try
        return cos(x)
    catch
    end
    return one(typeof(x))
end

Both of these functions do the same thing, returning cos(x) where possible, otherwise just one of the input type.

However, these have a massive performance difference:

julia> @btime f(x) setup=(x="Test");
  1.188 μs (4 allocations: 176 bytes)

julia> @btime g(x) setup=(x="Test");
  211.333 μs (6 allocations: 224 bytes)

Is this behavior expected? Can some kinds of errors be more expensive to catch than others? A MethodError in particular might be costly due to the lookup involved, but I would have expected hasmethod to do essentially the same work.


Additional info:

julia> versioninfo()
Julia Version 1.9.0
Commit 8e630552924 (2023-05-07 11:25 UTC)
Platform Info:
  OS: macOS (arm64-apple-darwin22.4.0)
  CPU: 8 × Apple M1 Pro
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-14.0.6 (ORCJIT, apple-m1)
  Threads: 6 on 6 virtual cores
Environment:
  JULIA_NUM_THREADS = auto
  JULIA_FORMATTER_SO = /Users/mcranmer/julia_formatter.so
  JULIA_EDITOR = code

I ran with -O3.

When I run with a valid input type, the try-catch version actually wins:

julia> @btime f(x) setup=(x=1.0);
  163.068 ns (2 allocations: 112 bytes)

julia> @btime g(x) setup=(x=1.0);
  1.208 ns (0 allocations: 0 bytes)

If I rethrow non-MethodErrors to make these completely equivalent, I don’t see much of a change:

function g(x)
    try
        return cos(x)
    catch e
        isa(e, MethodError) || rethrow(e)
    end
    return one(typeof(x))
end

@btime g(x) setup=(x="Test");
#  214.042 μs (6 allocations: 224 bytes)

Yes. Branching on hasmethod and catching a MethodError are two very different operations, so it makes sense that they have different performance. catch intercepts any error raised inside the try block, and setting up and unwinding that machinery has a computational cost. hasmethod, on the other hand, just checks whether a matching method signature exists.
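For illustration, hasmethod is only a method-table lookup; no stack-unwinding machinery is involved:

```julia
# hasmethod only consults the method table for a matching signature:
hasmethod(cos, Tuple{Float64})  # true
hasmethod(cos, Tuple{String})   # false; nothing is thrown or caught
```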

The two also have their own optimizations (although try/catch currently has few). For example, from Julia 1.10 onwards, hasmethod can be resolved at compile time, which makes f run a lot faster:

julia> # Manual check
       function f(x)
           hasmethod(cos, typeof((x,))) && return cos(x)
           return one(typeof(x))
       end
f (generic function with 1 method)

julia> # Try-catch
       function g(x)
           try
               cos(x)
           catch
           end
           return one(typeof(x))
       end
g (generic function with 1 method)

julia> @btime f(x) setup=(x="Test");
  3.041 ns (0 allocations: 0 bytes)

julia> @btime g(x) setup=(x="Test");
  58.125 μs (2 allocations: 48 bytes)

julia> versioninfo()
Julia Version 1.10.0-DEV.1431
Commit 0c774c7fb8* (2023-06-03 05:13 UTC)

Thanks!

Part of the reason I am curious about this is that I need to catch a MethodError raised deeper in the call stack. Say I define some generic function like:

kernel(x) = cos(x)

Now, hasmethod has lost its usefulness (hasmethod(kernel, Tuple{String})==true). But the try-catch will still work!
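To spell that out:

```julia
# The generic signature matches any argument type...
kernel(x) = cos(x)
hasmethod(kernel, Tuple{String})  # true

# ...but the call still fails: the MethodError comes from the inner
# cos(::String) call, which hasmethod never looks at.
# kernel("Test")  # throws MethodError
```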

Are there any sort of recursive hasmethods to deal with this sort of thing? Or am I stuck with the significantly slower try-catch?


Use try/catch (recommended), or implement a CassetteOverlay-like approach to change the behavior of function calls with no method match (if it’s absolutely required).

Since there is no return before cos(x), I would have thought the compiler could remove the entire try catch block. Why doesn’t that happen?

Oops, my mistake. I’ll edit the original post. (Does not affect the timings though)

Interesting package, thanks for pointing it out. This feels a bit heavy-handed though. I guess the best option would just be to have a more optimized version of try/catch in Julia.

GitHub - JuliaPreludes/Try.jl: Zero-overhead and debuggable error handling was intended to be a modern-style error handling system that has a lot of good ideas.


Thanks, this looks useful!

Although from reading through the README, I’m not sure it solves the problem? It sounds like it gives you a way of avoiding throwing MethodErrors in your own code, rather than a faster way of catching the MethodErrors produced by the Julia Base library. But I could be mistaken?

Yes - it requires rewriting a bunch of functions to return failure values instead of throwing. Takafumi started working on that at the time, but there hasn’t been much activity lately.

I guess a faster way to recursively check hasmethod would be to still use try/catch, but to cache the result when a MethodError is raised, i.e. something like:

const HasMethodCache = Dict{Type,Bool}()
const Lock = Threads.SpinLock()

# Example function:
f(args...) = f_impl(args...)

function call_f(args...)
    T = typeof(args)

    # Read through the lock as well, so the Dict is never read while
    # another thread is mutating it:
    cached = lock(() -> get(HasMethodCache, T, nothing), Lock)
    if cached !== nothing
        cached || return nothing
        return f(args...)
    end

    f_has_method = true
    output =
        try
            f(args...)
        catch e
            !isa(e, MethodError) && rethrow(e)
            f_has_method = false
            nothing
        end

    lock(Lock) do
        HasMethodCache[T] = f_has_method
    end
    return output
end

It would break if you add new methods to an existing function, though. But I think the tradeoff could be worth it?
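A self-contained, single-threaded sketch of this caching idea (g_impl here is a hypothetical implementation that only accepts numbers):

```julia
# Minimal single-threaded sketch of the caching approach above.
const METHOD_CACHE = Dict{Type,Bool}()

g_impl(x::Number) = cos(x)  # hypothetical implementation

function call_g(args...)
    T = typeof(args)
    if haskey(METHOD_CACHE, T)
        # Cached: skip the try/catch entirely
        METHOD_CACHE[T] || return nothing
        return g_impl(args...)
    end
    ok = true
    out = try
        g_impl(args...)
    catch e
        e isa MethodError || rethrow()
        ok = false
        nothing
    end
    METHOD_CACHE[T] = ok
    return out
end

call_g(0.0)     # 1.0, and caches Tuple{Float64} => true
call_g("Test")  # nothing, and caches Tuple{String} => false
```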


Or instead of a cache, you create a struct to wrap f, and dispatch for only those types that you have verified will work?

Here’s a better version:

struct FuncWrapper{F}
    f::F
end

function call_well_defined(f::F, x::T) where {F,T}
    wrapper = FuncWrapper{F}(f)
    hasmethod(wrapper, Tuple{T}) && return wrapper(x)
    good = true
    output = try
        f(x)
    catch e
        !isa(e, MethodError) && rethrow(e)
        good = false
        nothing
    end
    if good
        @eval (w::FuncWrapper{$F})(x::$T) = w.f(x)
    else
        @eval (::FuncWrapper{$F})(::$T) = nothing
    end
    return output
end

This will cache the result of try/catch using multiple dispatch. So there should be a much smaller penalty for invalid functions now. For example:

julia> @btime call_well_defined(cos, x) setup=(x="1");
  139.326 ns (2 allocations: 112 bytes)

julia> @btime call_well_defined(cos, x) setup=(x=1.0);
  174.566 ns (2 allocations: 112 bytes)

Even though we had to go through try/catch for cos("1"), we can see that the result is immediately cached, as we create:

(::FuncWrapper{typeof(cos)})(::String) = nothing

on the first pass, which makes the second pass not have to check.

The nice part about this strategy is that it does a deeper check for MethodError, unlike hasmethod, which only checks the outermost call signature.


The downside is that it only caches the first call of try/catch, so if I overload cos in the above example, it doesn’t change the behavior:

julia> Base.cos(x::String) = 1.0

julia> call_well_defined(cos, "1") # nothing

This approach also breaks if your function might hit a MethodError only for a specific input value that triggers a different branch of code.
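A tiny illustration of that failure mode, with a hypothetical function whose MethodError depends on the value rather than the type:

```julia
# Hypothetical: the error path only triggers for negative inputs
branchy(x::Int) = x >= 0 ? float(x) : cos("oops")  # cos(::String) has no method

branchy(1)   # 1.0, so a first-call cache would record "Int is fine"...
# branchy(-1) # ...yet this throws a MethodError for the same input type
```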

Maybe static_hasmethod from GitHub - oxinabox/Tricks.jl: Cunning tricks though the julia compiler internals would be useful here.


Nice!! This makes the overhead negligible!

Here’s the updated version. This one also gives a static output type: the second output is a flag, for whether the function actually executed.

using Tricks: static_hasmethod

struct FuncWrapper{F}
    f::F
end

function safe_call(f::F, x::T, default=one(T)) where {F,T}
    wrapper = FuncWrapper{F}(f)
    static_hasmethod(wrapper, Tuple{T}) && return wrapper(x)
    output = try
        (f(x), true)
    catch e
        !isa(e, MethodError) && rethrow(e)
        (default, false)
    end
    if output[2]
        @eval (w::FuncWrapper{$F})(x::$T) = (w.f(x), true)
    else
        @eval (::FuncWrapper{$F})(::$T) = ($default, false)
    end
    return output
end

Here are the timings:

julia> @btime safe_call(f, x) setup=(f=cos; x="1");
  2.000 ns (0 allocations: 0 bytes)

julia> @btime safe_call(f, x) setup=(f=cos; x=1.0);
  1.208 ns (0 allocations: 0 bytes)

Note that safe_call(cos, 1.0)[1] is the output, and safe_call(cos, 1.0)[2] is whether it actually executed, or if there was a MethodError.

For multiple arguments, you can just call

safe_call(splat(f), (x, y, z), default)

without any overhead.
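(For reference, splat just converts an N-argument function into a function of a single tuple, so the one-argument safe_call machinery applies unchanged:)

```julia
add3(x, y, z) = x + y + z
# Base.splat(add3) takes one tuple and splats it into add3:
Base.splat(add3)((1, 2, 3))  # 6, same as add3(1, 2, 3)
```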


I note you could avoid the FuncWrapper and instead @eval kernel(f::$F, x::$T) = $(f)(x) directly, but that makes me worried about world-age issues if a user passes functions in from a different scope. I think this way is safer (and should have the same performance if the compiler does a good job).


Warning: The downside is that it only checks the first call for a MethodError, so it will not know about new methods:

julia> safe_call(cos, "1")
("", false)

julia> Base.cos(x::String) = 1.0

julia> safe_call(cos, "1")
("", false)

julia> cos("1")
1.0

This approach also breaks if the MethodError depends on the value input to the function.

I am not sure if someone already pointed this out, but in some rare cases a method exists yet explicitly throws a MethodError. This is often done to give the MethodError a better message. If you search the forum, you can see that the standard library sometimes does this, and it has caused confusion in the past.
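A hypothetical example of that pattern:

```julia
# The method *exists* for any Number, but deliberately throws a
# MethodError for non-integers to give a more targeted message:
strict(x::Number) = x isa Integer ? x : throw(MethodError(strict, (x,)))

hasmethod(strict, Tuple{Float64})  # true
# strict(1.5)                      # nonetheless throws a MethodError
```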


Thanks. I guess this is another reason to use this safe_call method rather than hasmethod.

There’s an alternative approach I use based on promote_op. You may want to take a look at this sample PR where someone (…) implemented this approach in LoopVectorization.jl… https://github.com/JuliaSIMD/LoopVectorization.jl/pull/431


Haha, I forgot about that PR :smiley:

Here’s the equivalent with promote_op:

function safe_call2(f::F, x::T, default=one(T)) where {F,T}
    promoted_op = Base.promote_op(f, T)
    promoted_op === Union{} && return (default, false)
    return (f(x), true)
end
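For reference, Base.promote_op returns the inferred return type of the call, and Union{} when inference can prove the call cannot succeed:

```julia
Base.promote_op(cos, Float64)  # Float64
Base.promote_op(cos, String)   # Union{}: no applicable method
```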

However, it seems like safe_call is a bit faster, presumably because (after the first pass) it gets to skip the overhead at the start:

julia> @btime safe_call(f, x) setup=(f=cos; x="1");
  2.000 ns (0 allocations: 0 bytes)

julia> @btime safe_call2(f, x) setup=(f=cos; x="1");
  2.833 ns (0 allocations: 0 bytes)

If there was a static_promote_op I bet that would equalize the timings here.

Note that static_hasmethod isn’t recommended, since it’s a bit of a hack on the compiler internals. IIRC it’s broken on Julia master (v1.10), but @jameson added these optimizations, so once that is released the standard hasmethod should act at compile time like static_hasmethod. See the extended discussion in Cannot precompile on latest Julia master: no method matching length(::Nothing) · Issue #1920 · SciML/OrdinaryDiffEq.jl · GitHub


When types are inferred, promote_op acts at compile time. If it didn’t, the call would take much more than 3ns.