My guess would be that you’re getting SIMD operations in numpy but not in Julia. Interestingly, I can’t reproduce your relative timings: my Julia timing roughly matches yours, but my numpy result is about 10X slower:
julia> x = randn(10000) .+ 5;
julia> @benchmark log.($x)
BenchmarkTools.Trial:
memory estimate: 78.20 KiB
allocs estimate: 2
--------------
minimum time: 57.371 μs (0.00% GC)
median time: 59.739 μs (0.00% GC)
mean time: 67.735 μs (7.25% GC)
maximum time: 35.763 ms (99.67% GC)
--------------
samples: 10000
evals/sample: 1
In [3]: x = np.random.randn(10000) + 5
In [4]: %timeit np.log(x)
173 µs ± 5.53 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
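To probe the SIMD guess directly, one can look at the code Julia actually generates (a diagnostic sketch, not a definitive answer; interpreting the IR is left to the reader):

```julia
using InteractiveUtils  # provides @code_llvm outside the REPL

# A hand-written reduction over log; look for vector instructions like
# <2 x double> or <4 x double> in the IR. If log compiles to an opaque
# scalar library call (e.g. into openlibm), @simd cannot vectorize it.
function sumlog(x)
    s = 0.0
    @inbounds @simd for i in eachindex(x)
        s += log(x[i])
    end
    return s
end

@code_llvm sumlog(randn(10) .+ 5)
```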
You also might want to consider making a separate thread for this discussion, as it’s only tangentially related to the topic of the original post (which was simply a question of syntax, not of performance optimization).
@rdedit – Thanks for comparing your results. I’m guessing that the difference in our NumPy timings comes from the fact that the NumPy I’m using is linked against MKL (the default for the Anaconda-distributed versions of the NumPy/SciPy toolchain).
@baggepinnen – Thanks for the link to the related discussion. I gave Yeppp a shot, but unfortunately Yeppp.log is about twice as slow on my platform/CPU as Base.log:
julia> using Yeppp
julia> x = randn(10000) .+ 5;
julia> @benchmark Yeppp.log($x)
BenchmarkTools.Trial:
memory estimate: 78.20 KiB
allocs estimate: 2
--------------
minimum time: 133.556 μs (0.00% GC)
median time: 174.349 μs (0.00% GC)
mean time: 193.267 μs (4.51% GC)
maximum time: 3.532 ms (92.44% GC)
--------------
samples: 10000
evals/sample: 1
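Incidentally, the 2 allocations in each trial are just the output array. They can be eliminated with a preallocated, in-place broadcast (plain Base Julia, nothing Yeppp-specific), which makes the timings a purer measure of the log computation itself:

```julia
using BenchmarkTools

x = randn(10_000) .+ 5
y = similar(x)  # preallocate the output buffer once

# In-place broadcast: writes log.(x) into y with no per-call allocation.
@benchmark $y .= log.($x)
```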
For reference, here’s my Julia and system info:
julia> versioninfo()
Julia Version 1.1.0
Commit 80516ca202 (2019-01-21 21:24 UTC)
Platform Info:
OS: macOS (x86_64-apple-darwin14.5.0)
CPU: Intel(R) Core(TM) i9-8950HK CPU @ 2.90GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-6.0.1 (ORCJIT, skylake)