Repo: GitHub - Eliassj/ARS.jl
Docs: eliassj.github.io/ARS.jl/dev/
While working on another project I found that AdaptiveRejectionSampling.jl, while very good in other ways, does not allow numeric types other than Float64 (I needed BigFloat). For this reason (and for fun) I implemented another version with no restriction on element types, a configurable differentiation backend (through DifferentiationInterface.jl), and (seemingly) faster sampling with fewer allocations. In favor of simplicity and speed, ARS.jl currently requires the user to supply initial points and a target function already in log form, whereas AdaptiveRejectionSampling.jl will try to find initial points and log-transform the target function unless told otherwise.
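To illustrate the BigFloat use case, here is a minimal sketch based on the constructor and sampler calls from the benchmark below. Whether the element type is picked up from the initial points is my assumption from the package's stated flexibility; check the docs for the exact API.

```julia
using ARS, DifferentiationInterface, ForwardDiff
using Distributions

# Log-density to sample from; Distributions' logpdf propagates BigFloat inputs.
f(x) = logpdf(Laplace(0.0, 0.5), x) + logpdf(Normal(0.0, 2.0), x)

# Assumption: passing BigFloat initial points makes the sampler work in
# BigFloat precision throughout (this is the element-type flexibility
# AdaptiveRejectionSampling.jl lacks).
sampler = ARS.ARSampler(ARS.Objective(f, AutoForwardDiff()),
                        BigFloat[-0.5, 0.5], (-Inf, Inf))
samples = ARS.sample!(sampler, 1_000, true, 25)
```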
Benchmark against AdaptiveRejectionSampling.jl
using ARS, AdaptiveRejectionSampling
using Chairmarks # provides the @be benchmarking macro used below
using DifferentiationInterface, Distributions
using ForwardDiff
# Define function to sample from
f(x) = logpdf(Laplace(0.0, 0.5), x) + logpdf(Normal(0.0, 2.0), x)
#= ARS.jl =#
sam_ARS = ARS.ARSampler(ARS.Objective(f, AutoForwardDiff()), [-0.5, 0.5], (-Inf, Inf))
@be deepcopy(sam_ARS) ARS.sample!(_, 100000, true, 25) samples=100 evals=1
# Benchmark: 100 samples with 1 evaluation
# min 6.739 ms (17 allocs: 784.492 KiB)
# median 7.089 ms (20 allocs: 784.586 KiB)
# mean 7.367 ms (21.26 allocs: 785.465 KiB, 1.42% gc time)
# max 18.497 ms (29 allocs: 789.367 KiB, 59.08% gc time)
#= AdaptiveRejectionSampling.jl =#
sam_other = RejectionSampler(f, (-Inf, Inf), (-0.5, 0.5); logdensity=true, max_segments=25)
@be deepcopy(sam_other) run_sampler!(_, 100000) samples=100 evals=1 seconds=1000
# Benchmark: 100 samples with 1 evaluation
# min 32.249 ms (2201420 allocs: 37.416 MiB, 6.99% gc time)
# median 32.912 ms (2201734 allocs: 37.421 MiB, 9.80% gc time)
# mean 34.020 ms (2201755.93 allocs: 37.421 MiB, 10.80% gc time)
# max 50.222 ms (2202237 allocs: 37.429 MiB, 22.42% gc time)