List of most desired features for Julia v1.x



x .|> fn1 .|> fn2 .|> fn3? `.|>` works like the dotted form of any other operator, such as `*`.
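For instance, a minimal sketch of chaining broadcasts through `.|>` (the names `xs` and `ys` are just illustrative):

```julia
xs = [1.0, 4.0, 9.0]

# Each `.|>` broadcasts the next function over the collection,
# exactly like any other dotted operator.
ys = xs .|> sqrt .|> abs

ys == [1.0, 2.0, 3.0]   # true
```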


ah… of course!!!


Even better if it’s also allowed to put the continuation symbol at the beginning of the second line instead of at the end of the first. This saves a character on the line that is too long anyway. Or multiple characters if using a longer continuation symbol. A three-character symbol and a space could replace four spaces of indentation. (Advanced editors could render the continuation symbol nearly the same color as the background, so that it resembles indentation.)


For the general case: if you end a line with an unfinished expression, Julia reads the next line to complete it. E.g.

1 +

function lotsofargs(arg1,
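For what it's worth, the parser already behaves this way when a line ends in a binary operator; a small sketch:

```julia
# A trailing `+` leaves the expression unfinished, so the
# parser consumes the next line as its continuation.
x = 1 +
    2

x == 3   # true
```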


If only that metadata carried through to the publication. It won’t. So the only point where it would be useful is when you use the original files. You, or someone you sent it to, or if you host it on a server you have explicit control over.
But I looooove this idea…


Of course, but often I find an old plot and have no idea of the context or I want one specific plot from a folder of countless plots that all look relatively similar. Images can be extracted from PDFs so the metadata might still survive even in digital publication.

I have already done a little bit of work testing how to read it, which turned out to be quite simple. It relies on exiftool, which I see you have experience with.

This is off the main topic though so let’s move further conversation to a new thread. I’ll post something at some point today once I’ve built a full example. I’m still a bit uncertain of how much is possible or the best approach.

Edit: Added link to the new thread


Some way to get guaranteed allocation-free immutable types containing references (in particular, arrays with such an eltype must not be arrays of pointers). I don’t need pretty syntax for such type definitions, but I do need a way of specifying, unambiguously, the resulting data layout.

In other words, I want a guaranteed (not sometimes-maybe-optimized) zero-cost abstraction for bundling reference types together. Currently, the only zero-cost abstractions for bundling values together into a type are available for bits types.

This would solve, e.g., the problem of allocating array views, or the token/semitoken API for trees in DataStructures.jl.

Such zero-cost types need to be available and the resulting data layout needs to be well-defined, but it does not need to be syntactically pretty (e.g. new keyword: bitstype_like / leaftype, requiring users to specify explicitly, for every member field, whether it is supposed to be inplace or as reference, and the partial order of “bitstype_like contains inplace member” must be non-circular, else compile-time error).
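To illustrate the current split (the type names here are made up for the example): a struct of plain bits is stored inline, while a struct holding a reference is not, which is exactly what forces the pointer-array layout described above:

```julia
struct Flat          # only bits fields
    a::Int
    b::Float64
end

struct Boxed         # holds a reference type
    v::Vector{Int}
end

isbitstype(Flat)    # true:  Vector{Flat} stores its elements inline
isbitstype(Boxed)   # false: Vector{Boxed} is an array of pointers
```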

Cf .


Maybe this is a no-go, or has been discussed elsewhere, but if my understanding is correct, all memory allocated in Julia is implicitly pinned, to make interfacing with external code easier and less error-prone. For any future performance work on allocation and GC this imposes some pretty significant constraints on the implementation. Is there any willingness to reconsider this? It feels like the sort of semantics you don’t want to break after a 1.0 release.


I’m not sure I understand what you mean by ‘observable’. In general, settling design and syntax issues for 1.0 that will be hard to change later seems most important. Multi-threading may not have much to do with this. But convenient, stable multi-threading is essential for many projects. I’d have a hard time selling Julia as “reasonably mature” if it can’t do multi-threading. (Of course, I’m not offering to try to fix it right now. And it won’t kill Julia if it is not there.)

Multi-threading by default, please no. At least, not MATLAB style. I repeatedly see things like users unwittingly starting thread-pools in a loop to invert 10x10 matrices. They move the code to a server and half the CPU time is in system calls. It’s hard to explain; someone has to be an enforcer, because people ignore you if they can, etc. Happens with numpy, too.


Thanks @Tamas_Papp. I know, but I still mess this up all the time. I use whitespace liberally to convey meaning, and I don’t always want to reformat my lines. For instance, we write math like:

  a
+ b
= c

so I like to write out operations like:

result = ↵
  this ↵
+ that

or nested with operations lined up, like:

result = ↵
    thing ↵
  * this ↵
+ that

Plus, I think the ↵ symbol seems like an unclaimed and elegant solution. Ok, this isn’t the highest thing on my wishlist, but it seemed easy and helpful in addressing one of the common mistakes.


Have you tried this?

julia> thing = 1; this = 2; that = 3;
julia> result = (
         (  thing
          * this)
       + that)
5

That seems to work fine, no need for extra syntax, just use ( and ).


“Observable” means: can you, just by writing Julia code, tell whether something is happening or not. If you cannot, then it’s not observable. So my point was that if we add threading as an optimization, it should cause no difference in program behavior, aside from working faster.

I’d have a hard time selling Julia as “reasonably mature” if it can’t do multi-threading.

This doesn’t make sense to me – support for threading is orthogonal to language maturity. Lots of very “mature” languages don’t have real support for multithreading. R and Matlab don’t have language-level threading. Python has user-level threads, but because of the GIL, only one thread can do work in Python at a time, so there’s no actual concurrent user-level work (I’m defining user-level work as work implemented by the user in Python code). Julia’s multithreading support is already more capable than all of these mature languages: you can write OpenMP-style “parfor” loops with @threads on a for loop, and as long as the loop body does no I/O, you get real concurrent, user-level multithreading. Of course, if you’re comparing Julia with C++ or Go, then yes, we have some more work to do, but that is well under way.

I agree that Matlab’s threading is not desirable. And R and Python do the same thing, as you note. There’s no user-level ability to express concurrent computation, and no composability – just calls to native libraries that assume that they can and should use as many threads as they can. But this is exactly what I was saying above… these systems don’t actually have any language-level threading, they just call threaded C/C++/Fortran libraries.
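As a concrete sketch of the `@threads` style mentioned above (the function name is made up; start Julia with `JULIA_NUM_THREADS` set to see actual parallelism):

```julia
using Base.Threads

# Iterations are independent and do no I/O, so they can
# safely be split across the available threads.
function squares!(v)
    @threads for i in eachindex(v)
        v[i] = i^2
    end
    return v
end

squares!(zeros(Int, 8))   # == [1, 4, 9, 16, 25, 36, 49, 64]
```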


“Observable” means: can you, just by writing Julia code, tell whether something is happening or not. If you cannot, then it’s not observable. So my point was that if we add threading as an optimization, it should cause no difference in program behavior, aside from working faster.

That’s not too far from saying all Turing-complete languages are equivalent.

But, for the most part, I agree with you.

  1. I was implicitly comparing Julia today with what it can be and with what C and Fortran are, rather than with two of its biggest competitors: MATLAB and Python. The distinction between language-level threading and threads in libraries is enormous.

  2. I understood the original statement to mean that OMP-style, language-level multithreading (modulo I/O) is somehow not reliable. I have not been keeping up, but more than a year ago it worked fine for me for many things. At that point, it was not advertised as a stable feature. Plus, there were some issues, I don’t recall exactly… certain module code that would cause crashes (Distributions?), or something like that. But non-thread-safe libraries are always present.

Anyway, my main point was that automatic, transparent multithreading (eg in broadcasting) is a bad idea. All the more so, given that Julia supports language-level multithreading. It could interact badly with Julia libraries that use multithreading.


I would like to share a discussion panel that has some very good points about evolving a programming language:

Many hopes stated in the video seem to be a reality in Julia nowadays, such as code rejuvenation, a strong type system with minimal annotation, etc.

I understand that the main contributors of the language must be super busy with the first major release. I am sharing the video as a form of support in these stressful times.


I’m not sure how Turing completeness is relevant. If a transformation is “observable” in the sense I defined, then it is by definition not an optimization. Perhaps the confusion stems from the fact that threading can occur in different ways:

  1. Automatically as an optimization. If the compiler is sure that there’s no way to tell the difference between the implementation using one thread or many, it can choose to use a threaded implementation.

  2. Implicitly, as part of the language definition. If the language is defined to be allowed to execute certain constructs concurrently, then whether the result is observable or not is irrelevant since the language is defined to behave concurrently.

  3. Explicitly, because the user asked for it. If the user asks for threads, then they can’t complain if the resulting behavior is observably different from the non-threaded behavior.

I’m talking about 1 while you seem to be talking about 2. If we know that the operation each iteration does is pure and can be executed in any order or concurrently, then we could use threads because we know the program won’t behave differently. Whether it’s faster or not depends on the circumstances.

If it’s strictly an optimization (as I stipulated), then it is by definition not observable whether threads are used or not, which means it cannot “interact badly” with Julia libraries that use multithreading – at least not in the sense of changing their behavior. If you mean “interact badly” in the sense of different levels of threading composing poorly and causing bad performance, then that’s a different matter. That would only happen if we chose a non-composable threading model, which we’re not doing – we’re going with Cilk-style work stealing, which does compose well. So, if the compiler can determine that a comprehension or broadcast operation is pure, it is free to make it concurrent and let the task/thread scheduler decide whether to run on different threads or not. Of course, all of this is predicated on having a work-stealing scheduler that’s good enough, which remains to be proven. All I’m saying is that it’s not impossible to add threading as a pure optimization.
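A sketch of category 3, where the user explicitly asks for concurrency (`psum` is a made-up name; `@spawn` is the task-based primitive from `Base.Threads` in current Julia, which hands tasks to the work-stealing scheduler described above):

```julia
using Base.Threads

# Explicit concurrency: the caller opted in, so any observable
# difference from the serial version is their responsibility.
function psum(xs)
    mid = length(xs) ÷ 2
    t = @spawn sum(@view xs[1:mid])            # schedule one half as a task
    return sum(@view xs[mid+1:end]) + fetch(t)
end

psum(collect(1:100))   # == 5050
```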


I would like a version, let’s say 0.7, where we drop support for all previous versions and make sure the ecosystem is stable: for example, no more deprecation warnings or whatnot. A stable environment would require getting JuliaData, JuliaMath, JuliaStats, etc. playing well with one another, for example. After that is done, 1.x can be used to polish the documentation, debuggers, etc. Personally, I would also like a way for functions to take a struct with named arguments, so that method dispatch works without users having to remember the order of each argument, its keyword status, etc. A general macro, maybe?


You mean this?


Currently there is no method dispatch with keyword arguments.
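A small demonstration (the function name is made up): two definitions that differ only in keyword-argument type do not coexist as methods; the second simply replaces the first, because dispatch considers positional arguments only.

```julia
f(; k::Int)    = :int
f(; k::String) = :string   # overwrites the previous definition

f(k = "hi")    # :string
# f(k = 1)     # throws a TypeError: only the String version survived
```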


NamedTuples just merged two days ago. They are essentially the key to implementing this though, so I wouldn’t be surprised to see this solved soon after 1.0. There’s already a prototype of what that looks like:

That’s not the same as a Base implementation, but it’s a good proof of concept that this really is the only data structure necessary to get this accomplished.
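For reference, a minimal sketch of what the merged `NamedTuple`s look like (the field values here are arbitrary):

```julia
nt = (a = 1, b = "two")

nt.a        # 1
nt[:b]      # "two"
keys(nt)    # (:a, :b)

# The names are part of the type, which is what makes
# keyword-based dispatch implementable on top of them:
typeof(nt) <: NamedTuple{(:a, :b)}   # true
```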


I secretly get super excited when someone mentions one of my packages