New Julia interface in Wolfram Mathematica works great

Mathematica added a built-in Julia interface in version 12 last year. I gave it a try and found the interface quite convenient. Here’s a short example using Julia to speed up the calculation of Fibonacci numbers, with an inefficient recursive algorithm chosen just for illustration.

First, here’s the pure Mathematica implementation for calculating the 30th Fibonacci number,

fib[n_] := If[n <= 1, n, fib[n - 2] + fib[n - 1]];

AbsoluteTiming[fib[30]]

The output is

{2.67564, 832040}

About 2.7 seconds were needed to get the result 832040.

Now, using the Julia interface,

juliaSession = StartExternalSession["Julia"]; (* Start Julia session *)

fibJulia = ExternalEvaluate[juliaSession,
    "fib(n) = n<=1 ? n : fib(n-2) + fib(n-1)"
]; (* define a Julia function callable from Mathematica *)

fibJulia[5]; (* JIT warmup for Julia *)

AbsoluteTiming[fibJulia[30]] (* Actual calculation *)

The output is

{0.00839, 832040}

Only about 8 milliseconds were needed (including FFI latency), more than 300 times faster than the pure Mathematica version.

Finally, the Julia session is closed by

DeleteObject[juliaSession]; (* end the external session *)

(Julia packages ZMQ and JSON are required for the interface to work, as stated in the Wolfram documentation.)


My preferred Julia fib implementation:

fib(N::Integer) = fib(Val(N))
fib(::Val{N}) where {N} = N <= 1 ? N : fib(Val(N-2)) + fib(Val(N-1))

The first run yields:

julia> @time fib(30)
  0.000000 seconds

(Credit to T. Papp.)

This is abusing Val-dispatch for memoization.


@Elrod, would it be possible to explain in simple terms why your/T. Papp’s version is much faster than the OP’s? Or provide some reading material.

fib(::Val{N}) where {N} = N <= 1 ? N : fib(Val(N-2)) + fib(Val(N-1))

is a generic method parameterized on the value N, i.e. each value of N gets its own compiled specialization to dispatch to. The compiler can probably determine the output for each value of N statically, at which point you’re “abusing” multiple dispatch as a lookup table for already-computed values (i.e. memoization).
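A minimal sketch of that idea (the function name sq is made up for illustration): each Val{N} argument compiles its own method instance, and the typed code for that instance can contain the answer as a literal.

```julia
# Per-value specialization: every Val{N} gets its own compiled method instance.
sq(::Val{N}) where {N} = N * N
sq(n::Integer) = sq(Val(n))

sq(Val(7))  # compiles and caches a specialization just for Val{7}
# @code_typed sq(Val(7)) should show the constant 49 folded into the return
```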

PS: But that doesn’t explain the fast first execution :thinking:


@time is misleading here, because compilation happens before the timed call. It would probably be better to time like this:

julia> @time @eval fib(30)
  0.045663 seconds (119.59 k allocations: 8.438 MiB, 107.76% compilation time)

fib(N) is slow because the naive recursion makes an exponential number of function calls.
It calls fib(N-2) twice (= fib(3)), fib(N-3) three times (= fib(4)), fib(N-4) five times (= fib(5)), fib(N-5) eight times, …, and fib(1) a total of fib(N) times.
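Those counts can be checked with a small instrumented variant (a sketch; fibcount and the calls dictionary are made up for illustration):

```julia
# Tally how many times the naive recursion visits each argument value.
const calls = Dict{Int,Int}()
function fibcount(n)
    calls[n] = get(calls, n, 0) + 1
    n <= 1 ? n : fibcount(n - 2) + fibcount(n - 1)
end

fibcount(10)
calls[8]  # fib(10 - 2) is visited fib(3) == 2 times
calls[1]  # fib(1) is visited fib(10) == 55 times
```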

If we make N big enough that just evaluating fib normally takes much longer than compiling, e.g. 47 below:

julia> fib(::Val{N}) where {N} = N <= 1 ? N : fib(Val(N-2)) + fib(Val(N-1))
fib (generic function with 1 method)

julia> fib(N::Integer) = fib(Val(N))
fib (generic function with 2 methods)

julia> fibint(n) = n <= 1 ? n : fibint(n-2) + fibint(n-1)
fibint (generic function with 1 method)

julia> @time @eval fib(47)
  0.020000 seconds (188.22 k allocations: 13.272 MiB, 111.50% compilation time)

julia> @time fibint(47)
 13.292764 seconds

julia> @code_typed fib(Val(47))
CodeInfo(
1 ─     goto #3 if not false
2 ─     nothing::Nothing
3 ┄     return 2971215073
) => Int64

then the fact that the Val version only calls each fib once makes it much faster to compile.

Or, put another way, the first time calling fib(47) is much faster with the Val version, because instead of calling fib(30) a total of fib(18) == 2584 times, the Val version only has to call fib(30) once (and that fib(30) is itself faster the first time because it only has to call fib(12) once, etc).


It’s a goofy trick, but if you actually want to memoize functions, there’s a great, easy way using Memoize.jl and the @memoize macro.
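For example, a sketch assuming Memoize.jl is installed (memofib is a made-up name):

```julia
using Memoize

# Cache every computed value, turning the exponential recursion
# into a linear number of distinct subproblem evaluations.
@memoize function memofib(n)
    n <= 1 ? n : memofib(n - 2) + memofib(n - 1)
end

memofib(47)  # fast on the first call: each subproblem is computed once
```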
