1.0 annoyances and Matlab comparison

That’s probably architecture-dependent. If you have an Intel processor, you can get Julia with MKL and should have the same BLAS performance. However, if you have an AMD processor, you cannot (AFAIK) get MATLAB with OpenBLAS.
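For anyone who wants to check what they are actually getting, Julia reports the BLAS it was built against. A rough sketch (`BLAS.vendor()` is the 1.x call; newer versions use `BLAS.get_config()` instead):

```julia
using LinearAlgebra

# Report which BLAS library this Julia build is linked against
# (returns e.g. :openblas or :mkl on 1.x versions).
@show BLAS.vendor()

# A quick, rough check of BLAS-bound performance: a large matrix multiply.
A = rand(2000, 2000); B = rand(2000, 2000);
A * B          # warm up / compile
@time A * B;   # dominated by the underlying BLAS (dgemm)
```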

I like the safe-by-default approach, with the choice to opt in to performance.
I think these macros are primarily for package development. In other languages, users very rarely look into the source code. And if they did, these macros are a lot easier to understand than suddenly facing the C++ needed for performance elsewhere.
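For concreteness (the macros aren’t named here, so take `@inbounds` and `@simd` as the usual examples), the opt-in looks something like this:

```julia
# Safe by default: every indexing operation is bounds-checked.
function sumsq(x)
    s = 0.0
    for i in eachindex(x)
        s += x[i]^2
    end
    return s
end

# Opt-in performance: the macros make the trade-off explicit and visible.
function sumsq_fast(x)
    s = 0.0
    @inbounds @simd for i in eachindex(x)
        s += x[i]^2
    end
    return s
end
```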

This is more safety by default, taking care of the user. The user shouldn’t be doing A = Array{Float64}(undef, m, n)!

```julia
julia> NaN * 0
NaN
```

Getting random NaNs in your matrix, which may then infect more of your code, is an easy mistake to make. And a frustrating one for new users to diagnose.
For general use, the slowdown of creating an array initialized to zero isn’t going to be noticeable at all.
When writing package code, you can deliberately opt in to the uninitialized version and make it clear you are doing so. fill(0.0, m, n) is easy enough to write.
I would much rather direct new users there.
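A minimal sketch of the two idioms, with m and n as placeholder sizes (undef is the 1.0 spelling of the uninitialized constructor):

```julia
m, n = 1_000, 1_000   # placeholder sizes

# Safe default: every element is a well-defined 0.0.
A = zeros(m, n)        # equivalent to fill(0.0, m, n)

# Deliberate opt-in for performance-sensitive code: the contents are whatever
# happened to be in that memory (possibly NaNs) until you overwrite them.
B = Array{Float64}(undef, m, n)
```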

Default safety is a lot less likely to scare people away.

EDIT:
I’m also interested in the case of Fortran being faster than Julia. In a few simple test problems, ifort seems to be faster, but gfortran is not.
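The test problems aren’t specified here, but for anyone reproducing such comparisons, timing a simple loop kernel with BenchmarkTools is the kind of measurement typically put up against ifort/gfortran (the kernel below is just an illustrative assumption):

```julia
using BenchmarkTools

# A simple loop kernel of the sort often used in Julia-vs-Fortran comparisons.
function axpy!(y, a, x)
    @inbounds @simd for i in eachindex(x, y)
        y[i] += a * x[i]
    end
    return y
end

x = rand(10^6); y = rand(10^6);
@btime axpy!($y, 2.0, $x);
```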
