Performance of hasmethod vs try-catch on MethodError

Thanks for the tips. It looks like static_hasmethod is automatically defined as plain hasmethod on 1.10, so I guess it is safe to use? https://github.com/oxinabox/Tricks.jl/blob/019aeb9c9d471c09ddd753e3e89621a69aba9c69/src/Tricks.jl#L43-L48
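
For reference, here is a minimal sketch (the name safe_call_hasmethod and the exact signature are mine, not from the thread) of how static_hasmethod could replace the try/catch:

using Tricks: static_hasmethod

# Illustrative only: return (f(x), true) if a matching method exists,
# otherwise (default, false), without ever throwing a MethodError.
function safe_call_hasmethod(f::F, x::T, default::D) where {F,T,D}
    static_hasmethod(f, Tuple{T}) || return (default, false)
    return (f(x)::D, true)
end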

You are right. It looks like the LLVM IR is identical for valid types:

julia> @code_llvm safe_call(cos, 1.0)
;  @ REPL[5]:1 within `safe_call`
define void @julia_safe_call_1000({ double, i8 }* noalias nocapture noundef nonnull sret({ double, i8 }) align 8 dereferenceable(16) %0, double %1) #0 {
top:
;  @ REPL[5]:2 within `safe_call` @ REPL[5]:3
; ┌ @ REPL[5]:11 within `FuncWrapper`
   %2 = call double @j_cos_1002(double %1) #0
; └
;  @ REPL[5]:2 within `safe_call`
  %.sroa.0.0..sroa_idx = getelementptr inbounds { double, i8 }, { double, i8 }* %0, i64 0, i32 0
  store double %2, double* %.sroa.0.0..sroa_idx, align 8
  %.sroa.2.0..sroa_idx = getelementptr inbounds { double, i8 }, { double, i8 }* %0, i64 0, i32 1
  store i8 1, i8* %.sroa.2.0..sroa_idx, align 8
  ret void
}

julia> @code_llvm safe_call2(cos, 1.0)
;  @ REPL[1]:1 within `safe_call2`
define void @julia_safe_call2_1003({ double, i8 }* noalias nocapture noundef nonnull sret({ double, i8 }) align 8 dereferenceable(16) %0, double %1) #0 {
top:
;  @ REPL[1]:2 within `safe_call2` @ REPL[1]:4
  %2 = call double @j_cos_1005(double %1) #0
;  @ REPL[1]:2 within `safe_call2`
  %.sroa.0.0..sroa_idx = getelementptr inbounds { double, i8 }, { double, i8 }* %0, i64 0, i32 0
  store double %2, double* %.sroa.0.0..sroa_idx, align 8
  %.sroa.2.0..sroa_idx = getelementptr inbounds { double, i8 }, { double, i8 }* %0, i64 0, i32 1
  store i8 1, i8* %.sroa.2.0..sroa_idx, align 8
  ret void
}

but slightly different for invalid types:

julia> @code_llvm safe_call(cos, "1")
;  @ REPL[5]:1 within `safe_call`
define void @julia_safe_call_1006({ {}*, i8 }* noalias nocapture noundef nonnull sret({ {}*, i8 }) align 8 dereferenceable(16) %0, [1 x {}*]* noalias nocapture noundef nonnull align 8 dereferenceable(8) %1, {}* noundef nonnull %2) #0 {
top:
;  @ REPL[5]:2 within `safe_call`
  %3 = getelementptr inbounds [1 x {}*], [1 x {}*]* %1, i64 0, i64 0
  store {}* inttoptr (i64 5260606768 to {}*), {}** %3, align 8
  store { {}*, i8 } { {}* inttoptr (i64 5260606768 to {}*), i8 0 }, { {}*, i8 }* %0, align 8
  ret void
}

julia> @code_llvm safe_call2(cos, "1")
;  @ REPL[1]:1 within `safe_call2`
define void @julia_safe_call2_1008({ {}*, i8 }* noalias nocapture noundef nonnull sret({ {}*, i8 }) align 8 dereferenceable(16) %0, [1 x {}*]* noalias nocapture noundef nonnull align 8 dereferenceable(8) %1, {}* noundef nonnull %2) #0 {
top:
;  @ REPL[1]:2 within `safe_call2`
; ┌ @ strings/basic.jl:262 within `one`
   %3 = call nonnull {}* @j_convert_1010({}* inttoptr (i64 5260606768 to {}*)) #0
; └
;  @ REPL[1]:2 within `safe_call2` @ REPL[1]:3
  %4 = insertvalue { {}*, i8 } zeroinitializer, {}* %3, 0
  %5 = insertvalue { {}*, i8 } %4, i8 0, 1
;  @ REPL[1]:2 within `safe_call2`
  %6 = getelementptr inbounds [1 x {}*], [1 x {}*]* %1, i64 0, i64 0
  store {}* %3, {}** %6, align 8
  store { {}*, i8 } %5, { {}*, i8 }* %0, align 8
  ret void
}

In terms of practical usage, the docstring for Base.promote_op warns against using it, so perhaps the statically-cached try/catch might be more robust?

  │ Warning
  │
  │  Due to its fragility, use of promote_op should be avoided. It is
  │  preferable to base the container eltype on the type of the actual
  │  elements. Only in the absence of any elements (for an empty result
  │  container), it may be unavoidable to call promote_op.
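
For a concrete sense of what Base.promote_op gives back (illustrative REPL output; it is backed by type inference, which is where the fragility comes from):

julia> Base.promote_op(cos, Float64)
Float64

julia> Base.promote_op(cos, String)   # no matching method, so inference returns Union{}
Union{}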

Yeah, it looks like it has the proper backwards-compatibility fallback, so it's safe.


Here is a brief update in case other people find this on Google.

The original answer here faces problems when you start using multi-threading:

When you access safe_call from multiple threads, performance can slow down by more than 100x relative to a serial version. Even wrapping the @eval in a Threads.SpinLock() did not seem to fix things. [1]

Here’s an updated solution. It’s a bit overkill and definitely not elegant, but it fixes the speed issues from multithreading. It adds a ~10 ns overhead, but still lets the compiler inline the function call.

@enum IsGood::Int8 begin
    Good
    Bad
    Undefined
end

const SafeFunctions = Dict{Type,IsGood}()
const SafeFunctionsLock = Threads.SpinLock()

function safe_call(f::F, x::T, default::D) where {F,T<:Tuple,D}
    # Fast path: check whether this (function, argument-types) combination is
    # already known to succeed or fail.
    status = get(SafeFunctions, Tuple{F,T}, Undefined)
    status == Good && return (f(x...)::D, true)
    status == Bad && return (default, false)
    # Slow path, taken only on the first call for a given signature: try the
    # call once under a lock and cache whether it worked.
    return lock(SafeFunctionsLock) do
        output = try
            (f(x...)::D, true)
        catch e
            !isa(e, MethodError) && rethrow(e)
            (default, false)
        end
        if output[2]
            SafeFunctions[Tuple{F,T}] = Good
        else
            SafeFunctions[Tuple{F,T}] = Bad
        end
        return output
    end
end
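
Usage looks like this (illustrative REPL output):

julia> safe_call(cos, (1.0,), 0.0)
(0.5403023058681398, true)

julia> safe_call(cos, ("1",), 0.0)
(0.0, false)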

There is an overhead relative to the above solution, but it’s faster than vanilla try/catch, and safer than using @eval in a multithreading context.

julia> @btime safe_call(f, (x,), x) setup=(f=cos; x=1.0);
  13.861 ns (0 allocations: 0 bytes)

julia> @btime safe_call(f, (x,), x) setup=(f=cos; x="1");
  9.342 ns (0 allocations: 0 bytes)
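
For reference, the “vanilla try/catch” being compared against would look something like the sketch below (not the exact code from earlier in the thread); it pays the full catch machinery on every failing call instead of hitting the cache:

function safe_call_trycatch(f::F, x::T, default::D) where {F,T<:Tuple,D}
    try
        (f(x...)::D, true)
    catch e
        !isa(e, MethodError) && rethrow(e)
        (default, false)
    end
end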

Note also that the cached safe_call above allows multiple arguments to be passed, as it splats the input tuple.
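
For instance (illustrative output; + has a two-argument method, so the call succeeds):

julia> safe_call(+, (1.0, 2.0), 0.0)
(3.0, true)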


  1. Running serially first, and then with multi-threading (after all the safe_calls had been compiled), seemed to fix things. I guess running @eval from a thread messes with static_hasmethod and multiple dispatch. ↩︎
