Avoiding a Julia version of the Python 2/3 mess

The Python 3 breaking changes were a big mess for the Python community, because by that time people had built lots of code around Python 2. A similar problem seems to be happening now with updates to TensorFlow.

What areas might lead to similar issues for Julia? What is being done, or can be done, to prevent or mitigate this?

The biggest potential issue I see with community scalability is our approach to “reserved names”. Because type piracy is discouraged and packages are freely composed, you have to be careful not to reuse a name that an existing package (especially a popular one) already uses differently, so I’d expect Julia programming to become gradually more and more constrained. Eventually something may need to shake this up, so it seems like a possible pain point on the horizon.
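To make the type-piracy concern concrete, here is a minimal illustrative sketch (not from any real package) of the pattern the manual warns against:

```julia
# Type piracy: extending a function you don't own (Base.+) on types you
# don't own (Base.Symbol). Loading a package that does this changes
# behavior for *every* other package in the same session.
import Base: +

+(a::Symbol, b::Symbol) = Symbol(string(a), string(b))

:foo + :bar  # now returns :foobar everywhere, whether you wanted it or not
```

If two packages both pirate the same method with different semantics, whichever loads last silently wins, which is why avoiding piracy pushes everyone toward claiming fresh names instead.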

Are there other things like this with some potential benefit to considering well ahead of any big problem? How can we avoid an eventual breaking-changes dumpster fire?


Having self-contained environments and the infrastructure in Pkg.jl seems like a step in the right direction.


The 1.0 adoption path? thread had some great comments on this (I’ve linked to one by @StefanKarpinski).


I find what Guido van Rossum said in a talk last year very interesting:

What we intentionally didn’t do but what I still regret is that we said there will be no way at run-time to combine Python 2 and Python 3 code in the same process, in the same Python interpreter. […] That sadly made it so that everyone had to really convert all their code and all their dependencies, like all the third-party packages they are using, to Python 3 code before they would actually be able to run that code in Python 3 and benefit from the Python 3 features.

I think this is equivalent to what Rust is doing with the edition system and, for packages, what Go is doing with semantic import versioning. Maybe Julia and Julia packages can use a similar approach.


Julia has at least two major advantages: macros, and fast FFI. The first provides a lot of room for tooling like Compat.jl (for better or worse). The latter means that there is very little module code using the libjulia C API, except for unavoidable cases like embedding. The fact that Python 3 introduced CPython C API breakage made supporting 2 + 3 code in-process much more difficult than a hypothetical situation where all C interfacing was done via something like ctypes (not very practical for performance and other reasons; having a JIT helps a lot).
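For comparison, calling into C from Julia needs no glue module at all; a minimal example using libc’s `strlen` (assuming a standard libc is available in the process):

```julia
# Zero-overhead FFI: call a C function resolved from the process's libc
# directly. No wrapper module and no libjulia C API on the package side.
len = ccall(:strlen, Csize_t, (Cstring,), "Julia")
# len == 5
```

Because packages interface with C this way, rather than by compiling against Julia’s own C API, a breaking change to libjulia internals does not ripple through the package ecosystem the way CPython C API breakage did.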


I think there’s a third major and even more important advantage which makes Compat (and its evil cousin @deprecate) work much better than expected: we have a solution to the expression problem. Because of this one can often just write using Compat (or sometimes using Compat: newfunc, NewType) and then happily write module code which uses the new APIs on an old Julia version, without any further thought to compatibility.
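A sketch of how that works under the hood (illustrative only; `MyCompat` and `newfunc` are placeholder names, not the real Compat.jl): because functions are open for extension, a compat package can simply define the new API whenever it is missing:

```julia
module MyCompat  # hypothetical stand-in for a Compat-style package

if isdefined(Base, :newfunc)
    import Base: newfunc      # new Julia: re-export the real definition
else
    newfunc(x) = x + 1        # old Julia: backfill the implementation
end

export newfunc

end # module

using .MyCompat
newfunc(1)  # the same calling code runs on both old and new versions
```

Downstream code written against the new API needs no changes at all; only the `using` line differs.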

This semantic compatibility for types and functions is even more complete than the syntax compatibility provided by @compat (which is limited to processing the syntax accepted by an older version of the Julia parser).


I would be surprised if Julia v2.0 is that different from Julia v1.0, at least in syntax and Base. The only major change I can see a huge demand for is the addition of traits, but this could be done without breaking existing code. The other change is “fixing” the global scope issues when code is run in the REPL, which would impact scripts but not libraries.

StdLib is a different story, though I’d be surprised if it’s still included in Julia v2.0. It feels like we’re heading towards LinearAlgebra, Random, etc. each being separate packages.


I think that 2.0 will also be used as an opportunity to make all those tiny changes which are now shelved because they are considered breaking (API consistency issues, changes to semantics of functions, …; the label linked is just a subset of these).

Just like it happened before 1.0. There is no point in releasing 2.0 unless it is breaking stuff. Upward-compatible extensions can go in 1.x.


Right, I didn’t mean to imply they would not be breaking, just not on the same level as Python 2 -> 3. Actually, I’d bet the changes are less extreme than Julia 0.6->0.7…


What would people think about making view the default indexing behavior in 2.0? It would be a pretty big change, but given that, as of 1.4, views allegedly might be non-allocating, it might make sense.


Users coming from R already seem to find this surprising:

julia> A = [1 2 3 4 5];

julia> B = A;

julia> B[2] = 7;

julia> A
1×5 Array{Int64,2}:
 1  7  3  4  5

Perhaps the same lesson that explains why assignment creates an alias could also explain why slicing would alias while scalar getindex does not.

Personally, I love optimizing code and cutting down on allocations, so I would like slicing as views. But I also don’t find @views hard to write, so I do think it’s worth considering what people would and would not find intuitive.
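A minimal illustration of the current copying semantics versus what view-by-default would mean:

```julia
A = collect(1:5)

B = A[2:4]        # today: slicing allocates a copy
B[1] = 99
A                 # unchanged: [1, 2, 3, 4, 5]

V = @view A[2:4]  # opt-in view: aliases A's memory, no copy
V[1] = 99
A                 # now [1, 99, 3, 4, 5]

# @views rewrites every slice in an expression or block into a view:
f(A) = @views sum(A[2:4])
```

Under view-by-default, the first mutation would also write through to `A`, which is exactly the behavior change being debated.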


Even if views are non-allocating, they can involve an indirection for lookup or suboptimal memory access, so this could make some code (a lot) slower.



That is true, but in general, I feel like views will never be more than a few times slower, so they should be the default. If you want the extra performance in the (IMO less frequent) case, you can always copy manually.

Optimized BLAS libraries pack, copying elements into preallocated blocks for better locality.
Here is a comment explaining how important packing is for performance, by mratsim, who implemented a high-performance BLAS in Nim.

In Fortran, the gfortran compiler uses views when slices are contiguous, but copies otherwise. This is often a good heuristic, but it can prevent vectorization if not inlined when using a struct-of-arrays style memory layout (where what would be the fields of the struct are distributed across the columns of a matrix). If inlined, the compiler will hopefully make the correct perform- vs eliminate-the-copy decision.
If the calling function mutates the view, gfortran also emits an unpack, to maintain the same behavior in both versions.
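That heuristic can be sketched in Julia terms (`maybe_pack` is a hypothetical helper, not an existing API; it assumes `Base.iscontiguous`, an internal predicate on `SubArray`):

```julia
# Sketch of gfortran's pack heuristic: pass contiguous slices through
# as views, but materialize ("pack") a copy for strided ones.
maybe_pack(v::SubArray) = Base.iscontiguous(v) ? v : copy(v)
maybe_pack(v::AbstractArray) = v   # plain arrays are already contiguous

A = rand(4, 4)
maybe_pack(@view A[:, 1])  # contiguous column: stays a view
maybe_pack(@view A[1, :])  # strided row: packed into a copy
```

A full implementation would also need the unpack step for mutated views that gfortran performs.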

Perhaps a view is only ever up to several times slower than a copy, but couldn’t the same normally be said of a copy relative to a view, barring fairly extreme cases such as:

foo(x) = x[1] + 1

function bar(x)
    s = zero(eltype(x))
    @inbounds @simd for i ∈ eachindex(x)
        # view version: allocates nothing
        #s += @views foo(x[i:i])
        # copy version: allocates a one-element array every iteration
        s += foo(x[i:i])
    end
    s
end