There is a benchmark posted here of which I just became aware today. I will investigate a bit more, but if anyone has had a look already, I would appreciate any comments or insights.

# [ANN] New benchmark comparison

**rdeits**#2

I’m getting about a 2X speedup over their stored benchmarks just by using Julia v0.6 and fixing the deprecation warnings. But it looks like there are some pretty weird things being done in the Julia code, and I suspect a much larger speedup is possible. For example, there are places in which they take the `max`, `min`, and `sort` of scalars.
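The offending code isn’t quoted in the thread, but the anti-pattern described usually looks something like the following sketch (the function names are illustrative, not taken from the benchmark repository):

```julia
# Hypothetical sketch of the reported pattern: wrapping scalars in a
# temporary array just to call array functions allocates on every call.
clamp_slow(x, lo, hi) = sort([lo, x, hi])[2]   # allocates a 3-element vector
clamp_fast(x, lo, hi) = max(lo, min(x, hi))    # pure scalar ops, no allocation
```

Both compute a median-of-three clamp, but the scalar version avoids the per-call allocation, which matters a lot inside a hot assembly loop.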

**rdeits**#3

Oh, I see, that’s because the function appears to have some parts written as if `n_cx` might be a vector, although it never is (and, in fact, can’t be without throwing errors elsewhere). This is very Matlab-y.

**non-Jedi**#5

In the article from the README, the author says the Julia code is a direct transcription of the Fortran code, so some weirdness is to be expected.

**barche**#6

Some observations, only from looking at the README:

- The difference between these solver types is huge; it isn’t fair to compare a direct and an iterative solver, so essentially the comparison is limited to the assembly phase.
- The fact that Matlab outperforms Fortran in the assembly phase probably means there is a lot of room for improvement there.
- Not counting the solve phase, Julia is pretty close to Fortran; not bad for a naive port, I’d say.
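To make the phase-separation point concrete, here is a hedged sketch of timing assembly and solve independently. The assembled matrix is a toy 1-D Laplacian stand-in, not the benchmark’s actual FEM assembly:

```julia
using SparseArrays

# Toy stand-in for the benchmark's assembly phase: build a sparse
# 1-D Laplacian. A real FEM assembly loop would go here instead.
assemble(n) = spdiagm(-1 => fill(-1.0, n - 1),
                       0 => fill( 2.0, n),
                       1 => fill(-1.0, n - 1))

n = 1_000
A = @time assemble(n)   # assembly phase, timed on its own
b = ones(n)
x = @time A \ b         # direct sparse solve, timed separately
```

Reporting the two timings separately keeps a direct-vs-iterative solver choice from swamping the assembly comparison, which is where the languages are actually doing equivalent work.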