@stillyslalom are you able to run R code on your machine as well? I’m not sure whether the two benchmarks are doing exactly the same thing, but I see a considerable speed difference with the following (a per-draw sketch follows the two snippets):
- In Julia:
julia> using BenchmarkTools, Distributions, LinearAlgebra
julia> m = InverseWishart(20, Matrix{Float64}(1.0I, 20, 20))
julia> @btime for i ∈ 1:50_000 rand($m) end
8.098 s (2928004 allocations: 1001.40 MiB)
julia> versioninfo()
Julia Version 1.1.0
Commit 80516ca202 (2019-01-21 21:24 UTC)
Platform Info:
OS: Windows (x86_64-w64-mingw32)
CPU: Intel(R) Core(TM) i5-6300U CPU @ 2.40GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-6.0.1 (ORCJIT, skylake)
Environment:
JULIA_EDITOR = "C:\ProgramData\LONNG9\atom\app-1.34.0\atom.exe" -a
JULIA_NUM_THREADS = 2
- In R:
> system.time(replicate(50000, rWishart(1, df = 20, Sigma = diag(20))))
user system elapsed
2.03 0.17 2.22
> sessionInfo()
R version 3.5.1 (2018-07-02)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)
Matrix products: default
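One thing that might make the numbers easier to compare is timing a single draw from each distribution rather than the whole loop. Note also that R’s `rWishart` draws from a plain Wishart, whereas the Julia code above draws from an `InverseWishart`, which presumably pays for an extra matrix inversion per sample. Here is a minimal sketch on the Julia side (the names `iw`/`w` are just illustrative), using the same parameters as above:

```julia
using BenchmarkTools, Distributions, LinearAlgebra

# Same parameters as above: df = 20 and a 20×20 identity scale matrix
iw = InverseWishart(20, Matrix{Float64}(1.0I, 20, 20))
w  = Wishart(20, Matrix{Float64}(1.0I, 20, 20))  # plain Wishart, like R's rWishart

@btime rand($iw);  # one inverse-Wishart draw
@btime rand($w);   # one Wishart draw, closer to what rWishart is doing
```

Interpolating the distribution with `$` keeps the global variables out of the timing, so whatever `@btime` reports per draw should reflect the sampler itself.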
So it’s not quite the 8-fold gap that the OP found, but still a difference of roughly a factor of 3.6 (8.098 s vs. 2.22 s). I don’t know enough about benchmarking in R to judge whether this is a useful comparison, but it roughly looks like the two scripts are doing the same thing.
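Just to spell out the arithmetic behind that factor, using only the totals reported above (nothing new measured here):

```julia
n = 50_000
julia_total = 8.098   # s, @btime total for the InverseWishart loop above
r_total     = 2.22    # s, "elapsed" from system.time for the rWishart call above

@show julia_total / r_total   # ≈ 3.65, the overall ratio
@show julia_total / n * 1e6   # ≈ 162 µs per draw in Julia
@show r_total / n * 1e6       # ≈ 44 µs per draw in R
```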
Maybe some of the benchmarking gurus like @bennedich or @foobar_lv2, or Distributions experts like @simonbyrne, can weigh in on whether this is an artefact of measuring the wrong thing or whether there’s an actual difference in sampling speed!