What is a root near NaN?
Replacing the bisect function with this should help with the Inf issue:
@inline bisect(a, b) = a/2 + b/2  # halve before adding, so a + b can't overflow to Inf
Now:
julia> prevfloat(typemax(Float64))/2 + prevfloat(typemax(Float64),3)/2
1.7976931348623155e308
julia> ans == prevfloat(typemax(Float64),2)
true
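For comparison, the naive `(a + b)/2` midpoint overflows to `Inf` on those same inputs:
julia> (prevfloat(typemax(Float64)) + prevfloat(typemax(Float64),3))/2
Inf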
It’s a super simple algorithm, so you should be able to diagnose why something’s going wrong and make it robust to handle that.
(Of course, unit tests would help to make sure you’re not suddenly failing on some other edge case.)
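For instance, a minimal sketch of such edge-case tests, assuming the `bisect` above (the exact cases are up to you):
julia> using Test
julia> @test bisect(prevfloat(typemax(Float64)), prevfloat(typemax(Float64), 3)) == prevfloat(typemax(Float64), 2)
Test Passed
julia> @test isfinite(bisect(-prevfloat(typemax(Float64)), prevfloat(typemax(Float64))))
Test Passed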
@Raf
Okay, yeah, simulations with millions of calls are my use case too.
I’m guessing that’s true for a lot of other people too, so I’d support a simulation-focused version, at least as an option.
For the “low compile time” version (perhaps for when someone is debugging a function they’re root finding, and is thus constantly redefining the method and forcing recompilation), FunctionWrappers.jl looks like a good fit:
# 0.7, cleaned up dep warnings / started with --depwarn=no
julia> using Test, BenchmarkTools
julia> import FunctionWrappers: FunctionWrapper
julia> const F64F64Func = FunctionWrapper{Float64,Tuple{Float64}}
FunctionWrapper{Float64,Tuple{Float64}}
julia> f(x) = exp(x) - x^4
f (generic function with 1 method)
julia> FWf = F64F64Func(f)
FunctionWrapper{Float64,Tuple{Float64}}(Ptr{Nothing} @0x00007f2fa4060ca0, Ptr{Nothing} @0x00007f2fa71a47f0, Base.RefValue{typeof(f)}(f), typeof(f))
julia> @inferred f(8.6)
-38.422008637020554
julia> @inferred FWf(8.6)
-38.422008637020554
julia> typeof(FWf)
FunctionWrapper{Float64,Tuple{Float64}}
julia> struct UnstableWrap
f::Function
end
julia> (UW::UnstableWrap)(x...) = UW.f(x...)
julia> uf = UnstableWrap(f)
UnstableWrap(f)
julia> uf(8.6)
-38.422008637020554
julia> @inferred uf(8.6)
ERROR: return type Float64 does not match inferred return type Any
Stacktrace:
[1] error(::String) at ./error.jl:33
[2] top-level scope at none:0
So the `FunctionWrapper` version is type stable and, as the benchmarks below show, non-allocating.
julia> @benchmark f(8.6)
BenchmarkTools.Trial:
memory estimate: 0 bytes
allocs estimate: 0
--------------
minimum time: 2.992 ns (0.00% GC)
median time: 2.997 ns (0.00% GC)
mean time: 3.009 ns (0.00% GC)
maximum time: 14.001 ns (0.00% GC)
--------------
samples: 10000
evals/sample: 1000
julia> @benchmark $FWf(8.6)
BenchmarkTools.Trial:
memory estimate: 0 bytes
allocs estimate: 0
--------------
minimum time: 129.376 ns (0.00% GC)
median time: 129.572 ns (0.00% GC)
mean time: 131.515 ns (0.00% GC)
maximum time: 215.137 ns (0.00% GC)
--------------
samples: 10000
evals/sample: 886
julia> @benchmark $uf(8.6)
BenchmarkTools.Trial:
memory estimate: 32 bytes
allocs estimate: 2
--------------
minimum time: 174.538 ns (0.00% GC)
median time: 176.032 ns (0.00% GC)
mean time: 194.097 ns (7.44% GC)
maximum time: 98.510 μs (99.77% GC)
--------------
samples: 10000
evals/sample: 714
Versus full specialization, that’s only about an eighth of a second of overhead over a million calls on this computer, but I’d still prefer a dedicated simulation version that just specializes on the function.
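(For reference, that figure is just the difference in minimum times scaled to a million calls:)
julia> round((129.376 - 2.992) * 1e6 / 1e9; digits=3)  # ns/call overhead × 1e6 calls, in seconds
0.126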
I do think using this looks like a good upgrade for everything that currently avoids specializing on functions by storing them as abstract fields and paying a dynamic dispatch on every call. That includes at least Roots.jl and Optim (e.g. `oncedifferentiable.jl` in JuliaNLSolvers/NLSolversBase.jl on GitHub).
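A hypothetical sketch of that upgrade, reusing the `F64F64Func` alias from above — store the concretely typed wrapper instead of an abstract `Function` field, and inference goes through:
julia> struct StableWrap
           f::F64F64Func  # concrete field type, unlike UnstableWrap's ::Function
       end
julia> (SW::StableWrap)(x) = SW.f(x)
julia> sf = StableWrap(F64F64Func(f));
julia> @inferred sf(8.6)
-38.422008637020554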
It would need to recompile more for different input types (e.g. `SArray{Tuple{3},Float64,1,3}` vs `SArray{Tuple{4},Float64,1,4}`), but I think it’s a good compromise. Changing the input type is going to force a lot of recompilation anyway.
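To illustrate that point (a hypothetical sketch assuming StaticArrays is loaded, with `g` as a stand-in function): the argument types are part of the wrapper’s type, so each concrete input type needs its own wrapper and its own compilation:
julia> using StaticArrays
julia> g(v) = sum(v)  # stand-in for the wrapped function
g (generic function with 1 method)
julia> FWg3 = FunctionWrapper{Float64,Tuple{SVector{3,Float64}}}(g);  # SVector{3,Float64} == SArray{Tuple{3},Float64,1,3}
julia> FWg3(SVector(1.0, 2.0, 3.0))
6.0
julia> FWg4 = FunctionWrapper{Float64,Tuple{SVector{4,Float64}}}(g);  # different length → new wrapper type → fresh compile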