Best way to make myisapprox(a,b;kwds...) = Base.isapprox(a,b;kwds...) type stable?


#1

It is sometimes useful to extend the behaviour of a Base function by creating another function that passes through to it, for example:

myisapprox(a,b;kwds...) = Base.isapprox(a,b;kwds...)
# other overrides of myisapprox, which may include Base types
# so you don't want to overload Base.isapprox directly

Unfortunately, the above override is not type stable in 0.6-rc1:

julia> @code_warntype myisapprox(1.0,2.0)
Variables:
  #self#::#myisapprox
  a::Float64
  b::Float64

Body:
  begin 
      return $(Expr(:invoke, MethodInstance for #myisapprox#1(::Array{Any,1}, ::Function, ::Float64, ::Float64), :(Main.#myisapprox#1), :($(Expr(:foreigncall, :(:jl_alloc_array_1d), Array{Any,1}, svec(Any, Int64), Array{Any,1}, 0, 0, 0))), :(#self#), :(a), :(b)))
  end::Any

Is there a way to say “copy all definitions of Base.isapprox to myisapprox, so type stability is maintained”? I suppose I could copy all of the keywords from Base, but this is annoying: if I just overloaded Base.isapprox directly I wouldn’t need to do this.
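For illustration, the manual-copy workaround would look something like the sketch below. The keyword names match Base.isapprox, but the defaults and the Float64 restriction here are assumptions for the example, not copied exactly from Base:

```julia
# Hypothetical manual workaround: restate the keyword arguments explicitly,
# with concrete types and defaults, instead of splatting kwds...
# (defaults below are illustrative, not Base's exact definitions).
myisapprox(a::Float64, b::Float64;
           rtol::Float64 = sqrt(eps(Float64)),
           atol::Float64 = 0.0) =
    Base.isapprox(a, b; rtol = rtol, atol = atol)
```

The annoyance is exactly what the post describes: every keyword Base ever adds has to be mirrored here by hand.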


#2

For the specific case of isapprox that always returns a Bool:

myisapprox(a,b;kwds...)::Bool = Base.isapprox(a,b;kwds...)

Note that isapprox itself is type-unstable when called with keywords.

Interestingly, the type-stable version benchmarks slower than the unstable one:

julia> myisapprox(a,b;kwds...) = Base.isapprox(a,b;kwds...)
myisapprox (generic function with 1 method)

julia> @benchmark myisapprox(1.0, 2.0)
BenchmarkTools.Trial: 
  memory estimate:  80 bytes
  allocs estimate:  1
  --------------
  minimum time:     31.179 ns (0.00% GC)
  median time:      34.251 ns (0.00% GC)
  mean time:        39.566 ns (10.66% GC)
  maximum time:     1.622 μs (97.12% GC)
  --------------
  samples:          10000
  evals/sample:     994

julia> myisapprox(a,b;kwds...)::Bool = Base.isapprox(a,b;kwds...)
myisapprox (generic function with 1 method)

julia> @benchmark myisapprox(1.0, 2.0)
BenchmarkTools.Trial: 
  memory estimate:  80 bytes
  allocs estimate:  1
  --------------
  minimum time:     93.442 ns (0.00% GC)
  median time:      96.119 ns (0.00% GC)
  mean time:        102.693 ns (4.08% GC)
  maximum time:     1.430 μs (91.55% GC)
  --------------
  samples:          10000
  evals/sample:     951

#3

That’s expected. Since the compiler is likely inlining the function call, your version compiles to exactly the same thing, plus an extra type check/conversion afterwards. So your version is doing strictly more work.

Type-instability (or, more precisely here, a type-inference failure) is not necessarily a performance problem at its source. The issue is that if you use a result whose type is not strictly inferred, code further down will not know the type, causing dynamic dispatch in function calls and propagation of the type uncertainty (i.e. everything downstream also fails to infer strictly if you’re not careful). So the lack of inference itself isn’t the problem; rather, lack of inference in one place can make it impossible to infer types in later code.
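To make the propagation point concrete, here is a small sketch (the names are made up for illustration): on 0.6 it is the wrapper’s Any return type, not the wrapper call itself, that hurts downstream code:

```julia
# On 0.6, the keyword-splatting wrapper's return type is inferred as Any.
unstable_approx(a, b; kwds...) = Base.isapprox(a, b; kwds...)

function count_close(xs, ys)
    n = 0
    for (x, y) in zip(xs, ys)
        # r is inferred as Any, so the branch below dynamically dispatches;
        # anything computed from r would inherit the same uncertainty.
        r = unstable_approx(x, y)
        if r
            n += 1
        end
    end
    return n
end
```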

I think the problem may be due to keyword arguments. I don’t think there’s a better fix until keyword arguments actually dispatch correctly (there’s a PR for that), so a manual conversion should be fine. It’s ~60ns, you’ll live.
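The manual conversion can also be done at the call site rather than in the method definition. A sketch of that variant:

```julia
myisapprox(a, b; kwds...) = Base.isapprox(a, b; kwds...)

function all_close(xs, ys)
    for (x, y) in zip(xs, ys)
        # The ::Bool assertion at the call boundary stops the Any result
        # from propagating: everything after it infers concretely again.
        (myisapprox(x, y)::Bool) || return false
    end
    return true
end
```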


#4

I’m more concerned with allocations than time, as my particular use case builds these up to MBs. It’s very interesting that using keywords causes such a big difference:

julia> @btime isapprox(1.0,2.0;rtol=0.0001)
  249.653 ns (3 allocations: 128 bytes)
false

julia> @btime isapprox(1.0,2.0)
  6.796 ns (0 allocations: 0 bytes)
false
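Consistent with those numbers, one way to avoid the keyword-call allocations in a hot loop is to write the tolerance check positionally. This is a sketch with the relative-tolerance semantics written out by hand (an assumption for the example, not Base’s exact definition):

```julia
# Positional relative-tolerance check: no keyword sorting, no allocation.
approx_rtol(x::Float64, y::Float64, rtol::Float64) =
    abs(x - y) <= rtol * max(abs(x), abs(y))

approx_rtol(1.0, 2.0, 0.0001)  # false, like isapprox(1.0, 2.0; rtol=0.0001)
```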

#5

Hmm, just realized myeps(x...) = Base.eps(x...) also destroys type stability for myeps(Float64)… I would have thought this would be type stable, as the x... would infer the type of x as a Tuple{Float64}.
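Written out, the splatting pass-through in question is:

```julia
# Varargs pass-through; on 0.6, inference did not propagate a concrete
# tuple type through the splat, so the result was not inferred.
myeps(x...) = Base.eps(x...)

myeps(Float64)  # same value as eps(Float64)
```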


#6

My take is that this is simply a pre-1.0 issue and the solution is right around the corner, so I wouldn’t let it influence designs. Just design assuming it will infer correctly in a few months.


#7

A few months away is a bit optimistic for a PR that was only getting looked at again 7 days ago. Unless you’re one of the people who thinks 1.0 is still going to be released at JuliaCon 2017 :stuck_out_tongue_winking_eye:

But yes, on the software side you are right about just waiting.