Julia 1.0 released

Maybe this topic should be split…

v0.6 used to tell me about BLAS & LAPACK:

julia> versioninfo()
Julia Version 0.6.3
Commit d55cadc350 (2018-05-28 20:20 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin14.5.0)
  CPU: Intel(R) Core(TM) i5-4258U CPU @ 2.40GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.9.1 (ORCJIT, haswell)

v0.7/1.0 no longer reports either of them. Is that due to the lack of a custom system image? How do I know whether it uses BLAS or not?

julia> versioninfo()
Julia Version 0.7.0
Commit a4cb80f3ed (2018-08-08 06:46 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin14.5.0)
  CPU: Intel(R) Core(TM) i5-4258U CPU @ 2.40GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.0 (ORCJIT, haswell)
Environment:
  JULIA_NUM_THREADS = 4

I think it’s because LinearAlgebra has been moved into the standard library?
Anyway, a few possibilities if you want info regarding your BLAS library:

julia> using LinearAlgebra

julia> BLAS.vendor()
:mkl

julia> Base.libblas_name
"libmkl_rt"

julia> using Libdl

julia> Libdl.dlpath(Libdl.dlopen(Base.libblas_name))
"/home/chriselrod/Documents/languages/jdev/usr/bin/../lib/julia/libmkl_rt.so"

EDIT:

julia> BLAS.openblas_get_config()
"USE64BITINT DYNAMIC_ARCH NO_AFFINITY Zen MAX_THREADS=16"

Will you be writing a new blog post, “7 Gotchas Revisited”? :wink:


I had the same thought :slight_smile:

Wow, congratulations! The development cycle was fast, but maybe too fast. I look forward to successfully upgrading my code, but unfortunately I’m going back to Julia 0.6.4 for now. There were so many breaking changes that it took hours to get a thousand-line module to run under Julia 1.0. But that’s not why I have to revert: there are bugs that prevent me from using 1.0 right now.

First, however, a few comments about the changes. Unfortunately I don’t use Julia at a level where I can appreciate all of them, so excuse me for making this a mostly negative comment. I fully appreciate just how much work has been put into this effort.

My first lament is that so much basic math and linear algebra functionality is now hidden in external modules and libraries, yet there are no pointers to what those libraries are! What is the module name of the linear algebra library? I never found it documented until I just guessed LinearAlgebra. Where is the FFT library that gives me fft? I think that’s FFTW, but no pointer to it is provided. Why do we need a pedantic change from atan2(y,x) to atan(y,x)? atan2 is fully established in many other languages. I now have to import half a dozen libraries just to get the basic functionality I had before. My recommendation would be to have some kind of unifying math library that brings in all the other missing libraries for basic linear algebra, DSP usage, etc.
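
For the record, here is the minimal set of imports I ended up needing to get back most of what used to just work (the first two ship with Julia as stdlibs; FFTW is an external package that has to be installed first):

julia> using LinearAlgebra   # stdlib: norm, dot, eigen, svd, I, ...

julia> using Statistics      # stdlib: mean, std, var, ...

julia> using FFTW            # external package: fft, ifft, ... (install with: ] add FFTW)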

I also found the removal of broadcasting for the + operator intolerable. How often does one write something like indices+1 in your code? Now I have to search for + and replace it with the unwieldy .+, remembering to surround it with spaces. Is it really that hard to determine the intention of +(Array{T}, {T})? I haven’t read the discussions behind the changes that gave us abs. and Float64., but I’m hoping that at least . becomes some kind of generic postfix operator that enables elementwise operation on “elemental” functions, similar to Fortran.
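
To illustrate what I mean (output from a 1.0 session, error message truncated):

julia> indices = [1, 2, 3];

julia> indices + 1
ERROR: MethodError: no method matching +(::Array{Int64,1}, ::Int64)

julia> indices .+ 1
3-element Array{Int64,1}:
 2
 3
 4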

Also, what happened to the transpose postfix operator? I’m guessing that .’ doesn’t fit the new meaning of . . However, contrary to what some of the math folks opined in the forums, we engineers use transpose and Hermitian transpose all the time. Most objects are complex arrays in my world, so I often need a transpose in order to apply matrix multiplication along the right dimension. Is there a postfix operator for it? If you need some ideas, how about .H and .T? x.H for Hermitian transpose and x.T for transpose? I had to define my own T(x) = transpose(x) to maintain my sanity.
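
For reference, here is what the change looks like at the REPL (the error text is as I remember it from my 1.0 session, so treat it as approximate):

julia> A = [1+2im 3+4im];

julia> A.'
ERROR: syntax: the ".'" operator is discontinued

julia> B = transpose(A);   # what .' used to mean: transpose without conjugation

julia> C = A';             # ' is adjoint: it conjugates as well as transposes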

All of the above are just minor annoyances; the reason I have to revert is that PyPlot is segfaulting on some plot routines that worked fine before. Also, @enter doesn’t appear to be working in Atom. Has the debugging facility changed? Do I need to import Gallium? Not sure what’s going on there.

The error occurs in a PyPlot call to clf(). Here is some of the segfault stack trace:
signal (11): Segmentation fault
in expression starting at /data/Projects/Maestro/datacheck.jl:62
function_call.lto_priv.350 at /data/usr/matthew/.julia/packages/Conda/m7vem/deps/usr/lib/libpython3.6m.so (unknown line)
PyObject_Call at /data/usr/matthew/.julia/packages/Conda/m7vem/deps/usr/lib/libpython3.6m.so (unknown line)
macro expansion at /data/usr/matthew/.julia/packages/PyCall/uX707/src/exception.jl:81 [inlined]
__pycall! at /data/usr/matthew/.julia/packages/PyCall/uX707/src/pyfncall.jl:117
_pycall! at /data/usr/matthew/.julia/packages/PyCall/uX707/src/pyfncall.jl:30
#pycall#88 at /data/usr/matthew/.julia/packages/PyCall/uX707/src/pyfncall.jl:16 [inlined]
pycall at /data/usr/matthew/.julia/packages/PyCall/uX707/src/pyfncall.jl:160 [inlined]
gcf at /data/usr/matthew/.julia/packages/PyPlot/jXCXB/src/PyPlot.jl:149
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2182
jl_apply at /buildworker/worker/package_linux64/build/src/julia.h:1536 [inlined]
jl_f__apply at /buildworker/worker/package_linux64/build/src/builtins.c:556
jl_f__apply_latest at /buildworker/worker/package_linux64/build/src/builtins.c:594
#invokelatest#1 at ./essentials.jl:686 [inlined]
invokelatest at ./essentials.jl:685
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2182
jl_apply at /buildworker/worker/package_linux64/build/src/julia.h:1536 [inlined]
jl_f__apply at /buildworker/worker/package_linux64/build/src/builtins.c:556
_pyjlwrap_call at /data/usr/matthew/.julia/packages/PyCall/uX707/src/callback.jl:28
unknown function (ip: 0x7ff1080b3ff4)
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2182
pyjlwrap_call at /data/usr/matthew/.julia/packages/PyCall/uX707/src/callback.jl:49
unknown function (ip: 0x7ff108020544)
_PyObject_FastCallDict at /data/usr/matthew/.julia/packages/Conda/m7vem/deps/usr/lib/libpython3.6m.so (unknown line)
call_function at /data/usr/matthew/.julia/packages/Conda/m7vem/deps/usr/lib/libpython3.6m.so (unknown line)
_PyEval_EvalFrameDefault at /data/usr/matthew/.julia/packages/Conda/m7vem/deps/usr/lib/libpython3.6m.so (unknown line)
_PyEval_EvalCodeWithName at /data/usr/matthew/.julia/packages/Conda/m7vem/deps/usr/lib/libpython3.6m.so (unknown line)
PyEval_EvalCodeEx at /data/usr/matthew/.julia/packages/Conda/m7vem/deps/usr/lib/libpython3.6m.so (unknown line)
function_call.lto_priv.350 at /data/usr/matthew/.julia/packages/Conda/m7vem/deps/usr/lib/libpython3.6m.so (unknown line)
PyObject_Call at /data/usr/matthew/.julia/packages/Conda/m7vem/deps/usr/lib/libpython3.6m.so (unknown line)
macro expansion at /data/usr/matthew/.julia/packages/PyCall/uX707/src/exception.jl:81 [inlined]
__pycall! at /data/usr/matthew/.julia/packages/PyCall/uX707/src/pyfncall.jl:117
_pycall! at /data/usr/matthew/.julia/packages/PyCall/uX707/src/pyfncall.jl:30
#pycall#88 at /data/usr/matthew/.julia/packages/PyCall/uX707/src/pyfncall.jl:16 [inlined]
pycall at /data/usr/matthew/.julia/packages/PyCall/uX707/src/pyfncall.jl:160 [inlined]
#clf#28 at /data/usr/matthew/.julia/packages/PyPlot/jXCXB/src/PyPlot.jl:172
jl_fptr_trampoline at /buildworker/worker/package_linux64/build/src/gf.c:1829
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2182
clf at /data/usr/matthew/.julia/packages/PyPlot/jXCXB/src/PyPlot.jl:169 [inlined]
sfplotframe at /data/Projects/Maestro/StepFrequency.jl:1153
unknown function (ip: 0x7ff1080b8ca6)
jl_fptr_trampoline at /buildworker/worker/package_linux64/build/src/gf.c:1829
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2182
do_call at /buildworker/worker/package_linux64/build/src/interpreter.c:324
eval_value at /buildworker/worker/package_linux64/build/src/interpreter.c:428
eval_stmt_value at /buildworker/worker/package_linux64/build/src/interpreter.c:363 [inlined]
eval_body at /buildworker/worker/package_linux64/build/src/interpreter.c:686
jl_interpret_toplevel_thunk_callback at /buildworker/worker/package_linux64/build/src/interpreter.c:799
unknown function (ip: 0xfffffffffffffffe)
unknown function (ip: 0x7ff114b7ae9f)
unknown function (ip: 0xffffffffffffffff)
jl_interpret_toplevel_thunk at /buildworker/worker/package_linux64/build/src/interpreter.c:808
jl_toplevel_eval_flex at /buildworker/worker/package_linux64/build/src/toplevel.c:787
jl_parse_eval_all at /buildworker/worker/package_linux64/build/src/ast.c:838
jl_load at /buildworker/worker/package_linux64/build/src/toplevel.c:821
include at ./boot.jl:317 [inlined]
include_relative at ./loading.jl:1038
…etc.

Plotting functionality is critical, so unfortunately I can’t yet upgrade. Hopefully this can be resolved somehow soon. Should I file a bug report on it?


Please see PSA: use Julia 0.7 if you are upgrading if you’re upgrading code. You would have gotten nice warnings for essentially every one of the changes you complained about.


I heavily used the 0.7 documentation to find resolutions for backwards-incompatible syntax. My complaints are simply from a user’s scientific programming/computational perspective. I understand that .+ has a new meaning with regard to the fancy broadcasting semantics, but I wonder whether maintaining the simple broadcasting rules for + would break that.

Also, is there really no replacement for .’? I couldn’t actually find one in any documentation. And the broadcasting rules will end up hiding some bugs that would otherwise be caught, like when you add together mismatched arrays and end up with an unintended outer sum. Perhaps that’s a worthy price to pay for the new functionality and performance.


I would also advise using FemtoCleaner; it makes upgrading much easier. It’s true that some changes are annoying, but most of the time there are good reasons behind them.


Just guessing here, but it might perhaps lead to .+ calls inadvertently broadcasting “two levels down”, when only one level was expected.
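
For instance, with nested arrays (a sketch; the error is from a 1.0 session, the alternative outcome is hypothetical):

julia> v = [[1, 2], [3, 4]];   # a vector of vectors

julia> v .+ 1                  # .+ maps + over the outer vector
ERROR: MethodError: no method matching +(::Array{Int64,1}, ::Int64)

If + itself still broadcast scalars, as it did in v0.6, this would instead silently return [[2, 3], [4, 5]]: the scalar would be broadcast down two levels when you may have meant only one.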

This is something I rarely (or never) need. The adjoint ' works for both real and complex matrices. Do you really frequently need to (non-conjugate) transpose complex matrices? That seems like something that should be clearly advertised, using e.g. a function call like transpose.

For this, you should use ordinary, non-broadcasted array addition:

julia> rand(2, 2) + rand(2, 2)
2×2 Array{Float64,2}:
 0.692851  1.24335 
 0.808972  0.966845

julia> rand(2, 2) + rand(2, 1)
ERROR: DimensionMismatch("dimensions must match")
Stacktrace:
 [1] promote_shape at ./indices.jl:129 [inlined]
 [2] promote_shape(::Array{Float64,2}, ::Array{Float64,2}) at ./indices.jl:120
 [3] +(::Array{Float64,2}, ::Array{Float64,2}) at ./arraymath.jl:45
 [4] top-level scope at none:0

The answer to the transpose for complex matrices is yes, one uses the non-conjugated version all the darn time, almost as much as the Hermitian transpose (or adjoint, as you call it). For example, a wireless MIMO channel can be represented as a matrix. What is the channel of the system if you switch the roles of transmitter and receiver? For time-division multiplexing, assuming channel reciprocity, that channel is the transpose of this matrix.
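
To make the distinction concrete (a minimal sketch; the sizes and the channel matrix are made up):

julia> H = rand(ComplexF64, 2, 3);   # downlink channel: 2 receive x 3 transmit antennas

julia> H_rev = transpose(H);         # reciprocal (uplink) channel is the plain transpose

julia> H_rev == H'                   # the adjoint also conjugates, so it is not the same
false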

Adjoint is not the best terminology in linear algebra, since classically it refers to the transpose of the cofactor matrix (the adjugate). The operator adjoint from Hilbert space theory appears to refer to something like the Hermitian transpose. Even exacting math geeks can generate confusing nomenclature.

I’m fairly certain you will generate interesting bugs with .+ simply by doing an operation like this: A * B .+ C. Suppose you thought that A is 1×N, B is N×1, and C is 1×N. You believe that A * B is a 1×1 matrix being added to C and broadcast into a 1×N result.

But suppose you messed up and ended up with A being N×1 and B being 1×N. It might generate surprising results later, when A * B .+ C broadcasts into an N×N matrix instead of a 1×N one. There are a lot of variations like this. You will have to be very careful in your testing.
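
A concrete instance of the trap (a sketch with N = 3; the variable names are from the example above):

julia> A = rand(1, 3); B = rand(3, 1); C = rand(1, 3);

julia> size(A * B .+ C)    # intended: 1×1 broadcast against 1×N
(1, 3)

julia> size(B * A .+ C)    # sizes accidentally swapped: silently N×N
(3, 3)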

However, I’m not knocking this; I agree that broadcasting used judiciously is a very nice feature. It comes at a cost if + entirely loses broadcasting, especially for the case of adding a scalar to an array. Just my opinion. I’d like to see the case where such broadcasting messes things up some number of levels down, just to understand it a bit better.


Well, it’s hard to know what your code will look like, but do you then end up with several transposes sprinkled throughout your equations, like one often does with the ' operator, or is it rather something that happens in ‘problem setup’?

This, I don’t get. If you mess up the sizes of A and B, how is any programming language supposed to help you, unless you specifically check the output? And how would the old behaviour of + help compared to the current one? What behaviour of + or .+ could even conceivably save you from such a mistake?


The PyCall segfault is now fixed in v1.18.2 (thanks to the amazing work of its main developer).


Have you considered permutedims?
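
That is (a quick sketch; the output is from a 1.0 session as I recall it):

julia> M = [1+2im 3+4im; 5+6im 7+8im];

julia> permutedims(M)   # eager transpose: no conjugation, no lazy wrapper
2×2 Array{Complex{Int64},2}:
 1+2im  5+6im
 3+4im  7+8im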


It is not hidden at all, but well documented.

I am not sure I understand what you mean here; . has been used for broadcasting since v0.5.0, released almost 2 years ago.


He has a fair point here, though, as the documentation (except for the navigation tree) doesn’t contain a list of all stdlibs with short explanations anywhere. See Better documentation of stdlibs · Issue #28712 · JuliaLang/julia · GitHub.


I think it would be nice if there were some easy way to locate functionality that has been moved out of Base. For example, after one minute of searching, I still don’t know where fft went (I know, not very impressive searching, but still).

This isn’t about the language, per se, but about the tooling.

Edit: Hehe. Eh, but seriously, where did fft go? Searching for ‘fft’, ‘dft’, and ‘fourier’ gives zero relevant hits on the search page. Isn’t it even in the stdlib? So then it’s in FFTW.jl?


Google is my friend :slight_smile:

https://www.google.com/search?hl=en&q=julia%20fft

Those Google results make me no wiser at all. The most relevant hits are FFTViews.jl and AbstractFFTs.jl. They certainly don’t explain what happened: where did ordinary fft go, and when and why?

If I didn’t know from experience that Julia’s fft is the FFTW one, I’d be totally lost.
