Welcome!
My guess would be that you’re getting SIMD operations in numpy but not in Julia. Interestingly, I can’t fully reproduce your timing numbers: my Julia timing roughly matches yours, but my numpy result is about 10X slower:
julia> x = randn(10000) .+ 5;
julia> @benchmark log.($x)
BenchmarkTools.Trial:
  memory estimate:  78.20 KiB
  allocs estimate:  2
  --------------
  minimum time:     57.371 μs (0.00% GC)
  median time:      59.739 μs (0.00% GC)
  mean time:        67.735 μs (7.25% GC)
  maximum time:     35.763 ms (99.67% GC)
  --------------
  samples:          10000
  evals/sample:     1
In [3]: x = np.random.randn(10000) + 5
In [4]: %timeit np.log(x)
173 µs ± 5.53 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
There was a related discussion of SIMD optimizations for logs here: Performance of logarithm calculation (when not to use the dot operator?) - #4 by kristoffer.carlsson, although I’m afraid I don’t know enough about the details to be sure whether it’s still relevant.
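If SIMD really is the difference, one thing that might be worth trying is LoopVectorization.jl’s vmap, which applies a SIMD-vectorized log across the array. This is just a sketch (that package isn’t mentioned in the linked thread, and I haven’t benchmarked it here), continuing from the same x and BenchmarkTools session as above:

julia> using LoopVectorization

julia> @benchmark vmap(log, $x)  # SIMD-vectorized elementwise log, for comparison with log.(x) above

I’d be curious whether that closes the gap with numpy on your machine.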
You might also want to consider starting a separate thread for this discussion, as it’s only tangentially related to the topic of the original post (which was simply a question of syntax, not of performance optimization).