First, it looks like you are including the time to initialize Julia and compile your function in your benchmark. (In real-life usage, you wouldn't pay this cost on every function call!)
(Also, the `@eval` in your `jlinv` makes no sense: why are you invoking parse/eval there instead of simply calling `inv.(sm)`? Using `@eval` also forces the evaluation to happen in global scope, which slows things down.)
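For instance, a direct broadcast avoids both parse/eval and global scope. A minimal sketch (`jlinv_direct` and the 3×3 size are my own names/assumptions, not from your code):

```julia
using StaticArrays

# Sketch: call inv.() directly inside a function instead of building an
# expression and @eval-ing it in global scope.
function jlinv_direct(ms)
    return inv.(ms)  # broadcasts inv over each (static) matrix
end

ms = [SMatrix{3,3}(rand(3, 3)) for _ in 1:4]
invs = jlinv_direct(ms)
```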
Also, constructing the `SMatrix` wrapper on the fly like that is not type-stable, so the compiler cannot type-specialize `inv.(sm)` properly even if you omit the `@eval`. In native Julia code, you would pass an array of `SMatrix` directly, in which case the compiler would know the matrix size (it is encoded in the type).
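Concretely, the element type carries the dimensions (a sketch, assuming 3×3 matrices):

```julia
using StaticArrays

# The size is part of the element type: Vector{SMatrix{3,3,Float64,9}}.
ms = [SMatrix{3,3}(rand(3, 3)) for _ in 1:1000]

# Because the compiler knows each element is 3×3 at compile time, inv.()
# can compile to a specialized kernel instead of a generic runtime solve.
invs = inv.(ms)
```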
Not to mention that you are including the time to make multiple copies/transposes of your array in Julia:

- When you call Julia from Python, by default it makes a copy of the row-major NumPy original into a Julia `Array`. (You can avoid this by telling pyjulia to pass a `PyArray` wrapper.)
- You then transpose into `mre`.
- You then make another copy into `sim` instead of inverting in place.
- You then transpose into `simout`.
Basically, you are throwing away a lot of the advantages you would have in native Julia code.
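One way to skip the extra copies, for example, is to reinterpret a contiguous buffer as static matrices and invert in place. This is a sketch under assumptions (a column-major 3×3×N `Float64` array; `inv_inplace!` is a hypothetical helper, not from your code):

```julia
using StaticArrays

# Sketch: view the buffer's memory as a vector of 3×3 static matrices
# and overwrite it with the inverses — no transposes, no extra copies.
function inv_inplace!(buf::Array{Float64,3})
    @assert size(buf, 1) == 3 && size(buf, 2) == 3
    ms = reinterpret(SMatrix{3,3,Float64,9}, vec(buf))
    ms .= inv.(ms)  # write the inverses back into the same memory
    return buf
end
```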
(If you want to call Julia from Python, you will get much better gains if you translate a whole sequence of operations to Julia and call it all at once, rather than wrapping one elementary operation at a time, e.g. a vectorized `inv`. That being said, it is possible to eliminate all of the extra copies/transposes above if you know what you are doing when translating NumPy data structures to Julia, but this requires a bit more low-level understanding.)
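As a sketch of that approach (the function name and pipeline steps here are illustrative, not from your code), the whole sequence lives in one Julia function that Python calls once:

```julia
using StaticArrays, LinearAlgebra

# Hypothetical example: invert every matrix and reduce to one scalar,
# so only a single Python↔Julia call (and data transfer) is needed.
function pipeline(ms::AbstractVector{<:SMatrix})
    invs = inv.(ms)
    return sum(tr, invs)  # sum of traces of the inverses
end
```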
PS. You don't need a semicolon (`;`) at the end of every line in Julia, though it is harmless.