Best way to make myisapprox(a,b;kwds...) = Base.isapprox(a,b;kwds...) type stable?

It is sometimes useful to extend a Base function's behaviour by creating another function that passes through to it, for example:

myisapprox(a,b;kwds...) = Base.isapprox(a,b;kwds...)
# other overrides of myisapprox, which may include Base types
# so you don't want to overload Base.isapprox directly

Unfortunately, the above override is not type stable in 0.6-rc1:

julia> @code_warntype myisapprox(1.0,2.0)
Variables:
  #self#::#myisapprox
  a::Float64
  b::Float64

Body:
  begin 
      return $(Expr(:invoke, MethodInstance for #myisapprox#1(::Array{Any,1}, ::Function, ::Float64, ::Float64), :(Main.#myisapprox#1), :($(Expr(:foreigncall, :(:jl_alloc_array_1d), Array{Any,1}, svec(Any, Int64), Array{Any,1}, 0, 0, 0))), :(#self#), :(a), :(b)))
  end::Any

Is there a way to say "copy all definitions of Base.isapprox to myisapprox, so type stability is maintained"? I suppose I could copy all of the keyword signatures from Base, but this is annoying: if I just overloaded Base.isapprox directly I wouldn't need to do this.
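One manual workaround is to forward the keywords explicitly instead of slurping them into `kwds...`, which gives the compiler a concrete signature to work with. A sketch, assuming you only care about `rtol` and `atol` (the defaults below are illustrative, not copied from Base; `myisapprox2` is a hypothetical name):

```julia
# Forward each keyword one-to-one instead of collecting them into kwds...
# The default values here are placeholders, not Base's actual defaults.
myisapprox2(a, b; rtol = sqrt(eps(Float64)), atol = 0.0) =
    Base.isapprox(a, b; rtol = rtol, atol = atol)
```

The downside is exactly the one raised above: you have to keep this signature in sync with Base by hand.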

For the specific case of isapprox, which always returns a Bool, you can annotate the return type:

myisapprox(a,b;kwds...)::Bool = Base.isapprox(a,b;kwds...)

Note that isapprox itself is unstable with keywords:
https://github.com/JuliaLang/julia/issues/21794

Interestingly, the type-stable version looks slower than the unstable one:

julia> myisapprox(a,b;kwds...) = Base.isapprox(a,b;kwds...)
myisapprox (generic function with 1 method)

julia> @benchmark myisapprox(1.0, 2.0)
BenchmarkTools.Trial: 
  memory estimate:  80 bytes
  allocs estimate:  1
  --------------
  minimum time:     31.179 ns (0.00% GC)
  median time:      34.251 ns (0.00% GC)
  mean time:        39.566 ns (10.66% GC)
  maximum time:     1.622 μs (97.12% GC)
  --------------
  samples:          10000
  evals/sample:     994

julia> myisapprox(a,b;kwds...)::Bool = Base.isapprox(a,b;kwds...)
myisapprox (generic function with 1 method)

julia> @benchmark myisapprox(1.0, 2.0)
BenchmarkTools.Trial: 
  memory estimate:  80 bytes
  allocs estimate:  1
  --------------
  minimum time:     93.442 ns (0.00% GC)
  median time:      96.119 ns (0.00% GC)
  mean time:        102.693 ns (4.08% GC)
  maximum time:     1.430 μs (91.55% GC)
  --------------
  samples:          10000
  evals/sample:     951

That's expected. Since the compiler is likely inlining the function call, your annotated version compiles to exactly the same thing plus an extra type check / conversion afterwards, so it is doing strictly more work.

Type-instability (or, more precisely here, type-inference) issues are not necessarily a performance problem at their source. The real issue is that if you use a result whose type is not strictly inferred, code further down will not know the type, causing dynamic dispatch in function calls and "propagation of the type-uncertainty" (i.e. everything downstream also fails to infer strictly unless you're careful!). So the lack of inference itself isn't the problem; rather, lack of inference in one place can make it impossible to infer types in subsequent code.
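A minimal sketch of that propagation, and of the usual fix, a function barrier. All the names here (`unstable`, `no_barrier`, `kernel`, `with_barrier`) are hypothetical, for illustration only:

```julia
# Returns Int or Float64 depending on a runtime value, so the result
# is inferred as a Union — the "source" of the instability.
unstable(flag) = flag ? 1 : 1.0

# Here the uncertainty propagates: x's type is unknown at compile time,
# so every operation in the loop may be dynamically dispatched.
function no_barrier(flag, n)
    x = unstable(flag)
    s = zero(x)
    for _ in 1:n
        s += x * x
    end
    return s
end

# Function barrier: kernel is compiled separately for each concrete type
# of x, so only the single call through the barrier is dynamic.
kernel(x, n) = sum(x * x for _ in 1:n)
with_barrier(flag, n) = kernel(unstable(flag), n)
```

Both versions compute the same result; the difference is that in the second, the uncertainty is confined to one dynamic call instead of poisoning the whole loop.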

I think the problem may be due to keyword arguments. I don't think there's a better fix until keyword arguments actually dispatch correctly (there's a PR for that), so a manual conversion should be fine. It's ~60 ns; you'll live.

I'm more concerned with allocations than time, as my particular use case builds up to MBs. It's very interesting that using keywords causes such a big difference:

julia> @btime isapprox(1.0,2.0;rtol=0.0001)
  249.653 ns (3 allocations: 128 bytes)
false

julia> @btime isapprox(1.0,2.0)
  6.796 ns (0 allocations: 0 bytes)
false

Hmm, just realized `myeps(x...) = Base.eps(x...)` also destroys type stability for `myeps(Float64)`... I would have thought this would be type stable, as the `x...` would infer the type of `x` as a `Tuple{Type{Float64}}`.
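For comparison, a sketch of forwarding without the splat, which gives the compiler a concrete one-argument signature to specialize on (`myeps_direct` is a hypothetical name; whether the splatted version infers has varied across Julia versions, so this is the conservative form):

```julia
# One-to-one forwarding: no varargs slurp, so each call site sees a
# concrete single-argument method to specialize.
myeps_direct(x) = Base.eps(x)
```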

My take is, it's simply a pre-1.0 issue and the solution is right around the corner, so I wouldn't let it influence designs. Just design assuming it will infer correctly in a few months.


A few months away is a bit optimistic for a PR that was only getting looked at again just 7 days ago. Unless you are one of the people who thinks 1.0 is still going to be released at JuliaCon 2017 :stuck_out_tongue_winking_eye:

But yes, on the software side you are right about just waiting.