Have you tried this?

```julia
julia> thing = 1; this = 2; that = 3;

julia> result = (
           ( thing
           * this)
           + that)
5
```

That seems to work fine, no need for extra syntax; just use `(` and `)`.
“Observable” means: can you, just by writing Julia code, tell whether something is happening or not. If you cannot, then it’s not observable. So my point was that if we add threading as an optimization, it should cause no difference in program behavior, aside from working faster.
> I’d have a hard time selling Julia as “reasonably mature” if it can’t do multi-threading.
This doesn’t make sense to me – support for threading is orthogonal to language maturity. Lots of very “mature” languages don’t have real support for multithreading. R and Matlab don’t have language-level threading. Python has user-level threads, but because of the GIL, only one thread can do work in Python at a time, so there’s no actual concurrent user-level work (I’m defining user-level work as work implemented by the user in Python code). Julia’s multithreading support is already more capable than all of these mature languages: you can do OpenMP-style “parfor” loops with `@threads` on a for loop, and as long as the loop does no I/O you get real concurrent, user-level multithreading. Of course, if you’re comparing Julia with C++ or Go, then yes, we have some more work to do, but that is well under way.
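For what it’s worth, a minimal sketch of the `@threads` pattern described above (the array and loop body are illustrative):

```julia
using Base.Threads

# OpenMP-style "parfor": iterations of the loop are divided among the
# available threads. The body does no I/O and each iteration writes to
# a distinct slot, so it is safe to parallelize.
n = 8
squares = zeros(Int, n)
@threads for i in 1:n
    squares[i] = i^2
end
```

Started with `JULIA_NUM_THREADS` greater than 1, the iterations run concurrently; with a single thread the result is identical, just computed serially.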
I agree that Matlab’s threading is not desirable. And R and Python do the same thing, as you note. There’s no user-level ability to express concurrent computation, and no composability – just calls to native libraries that assume that they can and should use as many threads as they can. But this is exactly what I was saying above… these systems don’t actually have any language-level threading, they just call threaded C/C++/Fortran libraries.
> “Observable” means: can you, just by writing Julia code, tell whether something is happening or not. If you cannot, then it’s not observable. So my point was that if we add threading as an optimization, it should cause no difference in program behavior, aside from working faster.
That’s not too far from saying all Turing-complete languages are equivalent.
But, for the most part, I agree with you.
I was implicitly comparing Julia today with what it can be and with what C and Fortran are, rather than with two of its biggest competitors, MATLAB and Python. The distinction between language-level threading and threads in libraries is enormous.
I understood the original statement to mean that OMP-style, language-level multithreading (modulo I/O) is somehow not reliable. I have not been keeping up, but more than a year ago it worked fine for me for many things. At that point it was not advertised as a stable feature. Plus, there were some issues, I don’t recall exactly… certain module code would cause crashes (Distributions?), or something like that. But non-thread-safe libraries are always present.
Anyway, my main point was that automatic, transparent multithreading (e.g. in broadcasting) is a bad idea, all the more so given that Julia supports language-level multithreading. It could interact badly with Julia libraries that use multithreading.
I would like to share a discussion panel that has some very good points about evolving a programming language:
Many hopes stated in the video seem to be a reality in Julia nowadays, like code rejuvenation, a strong type system with minimal annotation, etc.
I understand that the main contributors of the language must be super busy with the first major release. I am sharing the video as a form of support in these stressful times.
I’m not sure how Turing completeness is relevant. If a transformation is “observable” in the sense I defined, then it is by definition not an optimization. Perhaps the confusion stems from the fact that threading can occur in different ways:
1. Automatically, as an optimization. If the compiler is sure that there’s no way to tell the difference between the implementation using one thread or many, it can choose to use a threaded implementation.
2. Implicitly, as part of the language definition. If the language is defined to be allowed to execute certain constructs concurrently, then whether the result is observable or not is irrelevant, since the language is defined to behave concurrently.
3. Explicitly, because the user asked for it. If the user asks for threads, then they can’t complain if the resulting behavior is observably different from the non-threaded behavior.
I’m talking about 1 while you seem to be talking about 2. If we know that the operation each iteration does is pure and can be executed in any order or concurrently, then we could use threads because we know the program won’t behave differently. Whether it’s faster or not depends on the circumstances.
If it’s strictly an optimization (as I stipulated), then it is by definition not observable whether threads are used or not, which means it cannot “interact badly” with Julia libraries that use multithreading – at least not in the sense of changing their behavior. If you mean “interact badly” in the sense of different levels of threading composing poorly and causing bad performance, then that’s a different matter. That would only happen if we chose a non-composable threading model, which we’re not doing – we’re going with Cilk-style work stealing, which does compose well. So, if the compiler can determine that a comprehension or broadcast operation is pure, it is free to make it concurrent and let the task/thread scheduler decide whether to run on different threads or not. Of course, all of this is predicated on having a work-stealing scheduler that’s good enough, which remains to be proven. All I’m saying is that it’s not impossible to add threading as a pure optimization.
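As a concrete illustration of the composable, Cilk-style model described above: nested task parallelism of the kind that later shipped as `Threads.@spawn` lets code spawn tasks freely and leave placement to the work-stealing scheduler (the `psum` function below is an illustrative sketch, not a Base API):

```julia
using Base.Threads: @spawn

# Divide-and-conquer sum: each half may run on a different thread.
# Because lightweight tasks (not raw OS threads) are spawned, nested
# parallelism composes instead of oversubscribing the machine.
function psum(xs)
    length(xs) <= 2 && return sum(xs)
    mid = length(xs) ÷ 2
    left = @spawn psum(view(xs, 1:mid))       # scheduler picks a thread
    right = psum(view(xs, mid+1:length(xs)))  # current task keeps working
    return fetch(left) + right
end
```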
I would like a version, let’s say 0.7, where we drop support for all previous versions and make sure the ecosystem is stable – for example, no more deprecation warnings or whatnot. A stable environment would require getting JuliaData, JuliaMath, JuliaStats, etc. playing well with one another. After that is done, 1.x can be used to polish the documentation, debuggers, etc. Personally, I would also like a way for functions to take a struct with named arguments, so that method dispatch works without users having to remember the order of each argument, its keyword status, etc. A general macro, maybe?
You mean this?
NamedTuples just merged two days ago. They are essentially the key to implementing this though, so I wouldn’t be surprised to see this solved soon after 1.0. There’s already a prototype of what that looks like:
https://github.com/bramtayl/Keys.jl
That’s not the same as a Base implementation, but it’s a good proof of concept that this really is the only data structure that’s necessary to get this accomplished.
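A minimal sketch of how NamedTuples enable this, with hypothetical function and field names (this is not the Keys.jl API):

```julia
# Arguments travel as a NamedTuple, so callers access them by name
# and need not remember any positional order.
area(args::NamedTuple) = args.width * args.height

a1 = area((width = 3, height = 4))
a2 = area((height = 4, width = 3))  # different order, same result
```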
I secretly get super excited when someone mentions one of my packages
This would be a number-one request for me also – I am mainly interested in discovering as many errors as possible at compile time, at least the trivial ones like misspelling a function name or passing incorrect types. I understand it is not possible to find all errors, and some suspicious code might actually be correct, but that is fine. For example, the “unsafe” or “advanced” code could be isolated and marked as such; the rest could be verified statically. Currently, too many errors appear only when the code is run, which makes the development cycle slow. Unit tests help, but not everything can be tested on small examples. I know Matlab and Python are the same in this respect, but Julia should be better, right? I am not hoping for “if it compiles, it is correct”, but we should be moving towards “if it compiles, it will run”.
Yes. Debugging used to work nicely but sadly no longer does.
Actually, I wish that as Julia matures, backward compatibility is given more and more importance and breaking changes become increasingly rare. As an illustration, I recently revived my project in Julia after about a year, upgrading from Julia 0.5 to Julia 0.6. There are so many breaking changes in Julia (e.g. the Array constructor syntax) and in packages (like Images) that I have spent several days fixing my code, and I am in no way sure that I have fixed everything.
Julia is great, please carry on. Yours,
Jan
Note that compilation per se won’t catch any runtime errors for you, the way it does in statically typed languages. Consider `f(x) = g(x) + 1` with `g` undefined. Now suppose you “compile” this for the signature `Tuple{Float64}`. Is this an error? Not necessarily, because `g(::Float64)` could be defined before you run it, and then everything is fine.
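The scenario is easy to check at the REPL; here `g` gets its method only after `f` is defined, and the call still succeeds:

```julia
f(x) = g(x) + 1       # g is undefined at this point; no error yet

g(x::Float64) = 2x    # the method arrives before f is ever called

f(1.0)                # resolves g(::Float64) at call time
```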
Are we talking about
? It has some minor issues, but is usable at the moment.
All code running on Julia 1.0 will run on all Julia 1.x versions, but until then backwards compatibility will be broken one more time for 0.7.
The whole point of 1.0 is to say “this is where we stop doing so many breaking changes, and any 1.x is backwards compatible with 1.0”.
I love the idea of getting most breaking changes done before 1.0, but I don’t think always insisting on backwards compatibility is very good for a language in the long run. Julia is young and should still be able to develop creatively, and tools like Compat and FemtoCleaner really do a lot to ameliorate version issues.
There’s so much work to be done that isn’t breaking though that it will be nice to have those prioritized for a bit.
I’d imagine the Python 3 developers said something along these lines at some point.
I’d really like to agree with you, but I think we need to be much more careful after 1.0. Granted, macros do give us a big advantage over python when it comes to compatibility.
for sure!
I don’t see why Julia or other software projects would by necessity suffer the same fate. You don’t hear about developers who’ve refused to give up on Python 1. The problem, IMHO, was that there was too big an existing codebase in Python 2 that no one was actively touching, and that the advantages of Python 3 weren’t that great.