Compiler work priorities


#21

I had another one of those really painful “I’m trying to impress someone with how cool julia is” moments, and it ended (again, as so many times before) in a long discussion of how the performance one sees initially is really not representative, JIT times, you know, really it is fast, just don’t believe what you are seeing right now… I’ve been through that experience sooo many times now, and in my mind it really is a major, major barrier to wider julia adoption. I can’t even count how many folks clearly lost all interest in those first five minutes of exposure…

In any case, from my point of view, while I cannot wait to see all the PARTR stuff, the compile-time latency issues that are listed as priority 3 in the original post would go way up toward the top of my list. I have at least a handful of projects that would really, really benefit from better multi-threading, but in pretty much everything I do, the compile latency issues are way, way, way more problematic. In particular, they make it hard to effectively advocate for julia, and many of my projects (not the stuff that the community here knows about around Queryverse.jl, but my actual scientific work) really need buy-in from other, currently non-julia users.

So, my hope would be that things like this and this might make it onto a julia 1.2 milestone :slight_smile:

Now, I can of course follow the rationale for the current ranking that @StefanKarpinski wrote down, but at least for my situation (as a user) that is not the right trade-off. I completely understand that I might be in a minority, and that other users’ needs are different (and more important), but I wanted to make sure this point of view also gets voiced.


#22

I know open-source development isn’t a democracy and definitely folks should work on what’s important to them, but it would make me very happy to see improvements in the time-to-first-X experience. For me it’s not really about evangelism, but pure selfishness. Despite using Juno (and sometimes Revise) I still find myself restarting Julia pretty often (switching environments, or changing types or const values). Whenever I use python I’m always reminded of how fast it feels, because most things happen instantly. I think over the lifetime of my using Julia I’ve probably spent more time waiting for things to JIT than I’ve saved in faster execution for computationally-intense work.


#23

Whether you’re in the minority or not, I don’t know, but I have the same preference. Lower latency/faster compilation is really the only feature I wish for at the moment.


#24

Just a quick counterpoint to the way this thread is heading: I couldn’t care less about compile time. My work consists of very demanding long-running simulations. Coming from C++, I’d be happy to wait an hour for compilation if it gave 1% better performance. And multithreading is essential.

My personal priorities would be 1,2,4,3


#25

I agree entirely. I’ve started running some large parallel computations that look likely to take days to run, but improvements over the current tools will be minimal for me, and probably for the majority of users, compared to improvements in compile time.

Waiting for precompile is a running joke with the people I work with. The worst part is the break in focus and the multitasking that it encourages. Compile time for No. 1!!

Ordered: 3, 1, 4, 2


#26

Points 4 and 3 overlap, since “Caching more things” and “PackageCompiler” are basically the same thing.

For me the latency issue, especially for type-unstable code, is a major one, so it would be really great if this could improve. On the other hand, PARTR seems to be close to done, and it often makes sense to finish one project before starting a new one.

Point 1 is kind of a meta issue. It’s clear that compiler bugs are always high priority. Hopefully, the number of compiler bugs is a monotonically decreasing sequence converging to zero, which will free up more time for 2, 3, and 4.


#27

IMO points 3 and 4 are similar, but not exactly the same thing. Point 3 is more focused on the problems of compiler latency and the ability to cache generated native code, whereas point 4 is targeted at making the entire development experience faster and easier (which of course benefits from progress on point 3). Also, PackageCompiler is not a panacea: you still have to spend the CPU cycles to generate the static code, make space for the (very large) binaries, and then accept that much of that native code becomes useless once you make even the slightest update to the code it was built from.

Correct me if I’m wrong, but isn’t type-unstable code better for latency? You have less that can be explicitly compiled down to non-dynamic native code, less union splitting to deal with (assuming you aren’t including small unions as type-unstable), and a lot of the latency should thus end up in the dynamic dispatches that occur, which are only a few hundred nanoseconds at most (from my limited benchmarking of dynamic dispatch, I could be very wrong here). All of that ensuing dynamic dispatch should only show up as general code slowdown; the latency is probably too tiny to be noticed.
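As a rough illustration of the trade-off described above (a sketch, not a benchmark; the function here is made up for the example), you can watch the one-time compile cost versus the ongoing dynamic-dispatch cost with `@time`:

```julia
# Hypothetical sketch: a value-dependent return type makes this
# function type-unstable from the compiler's point of view.
unstable(x) = x > 0 ? 1 : 1.0

xs = Any[1, -1, 2]        # an untyped container forces dynamic dispatch

@time sum(unstable, xs)   # first call: includes one-time compilation
@time sum(unstable, xs)   # later calls: only the (small) dispatch overhead
```

The first `@time` reports mostly compilation; the second reports only execution, with the dynamic dispatch showing up as runtime overhead rather than latency.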

I personally would like to see the most focus on any critical bugs per point 1, an equal interest in points 2 and 4, and then finally work on point 3. I put point 3 last, just because I feel like it’s really hard to get native code caching working properly (which could get rid of the most “easily” avoidable latency), while PARTR (multithreading) is making amazing progress, and a lot of the tooling around point 4 already exists, mostly just needing some updates for 1.0 and general reliability.


#28

I worry that this thread is not the best place for posting wish-lists, as I think there’s a danger it can be interpreted as us telling the compiler devs that we think they should be spending their time differently. If one isn’t careful, that can come off as ungrateful, given that we are here enjoying the fruits of the compiler devs’ labour for free.

While I’m sure many of the compiler devs want to see the language grow and would be very happy to know that a feature they worked on helped someone, I doubt that’s the primary motivator for what they’re doing. I’m sure most have other reasons much higher on their lists than making Julia users happy, so I worry that exclaiming in this thread that one wants compiler latency improvements and PackageCompiler.jl development is unproductive.

I think that if we want certain things to happen and we are unwilling or unable to do them ourselves, it would be nice if there were a somewhat standardized way to contribute money to specific Julia compiler projects, i.e., perhaps we should think of a way to compensate people in order to convince them to work on compiler latency and static compilation.

I know personally that there are a few choice Julia organizations and projects that I’d sponsor if there was a clear way to do it.


Edit:

The first two paragraphs came out much more harshly than I intended. I’ve changed the wording to something hopefully less unnecessarily inflammatory and more accurate.


#29

I did not mean to be rude or tiresome with this. In any case, I think this topic is mostly about communicating the compiler team’s plans, which is great since it manages expectations.


#30

This is one option:
https://www.bountysource.com/search?query=Julia


#31

I didn’t mean to imply any of that. I just think you often do a good job of explaining how open software is different from purchased software and I thought those points were relevant here as the thread was in danger of turning into personal wish-lists.

My phrasing there could use a lot of work, sorry.


#32

I do also think wish-lists are a lot less obnoxious than demands. Even the original post references a “pain point” for users (i.e., a widespread wish). Feature requests are the same way. Some projects even have voting on feature requests, to see what’s actually wished for. The “don’t care about you” part I guess can be chalked up to phrasing, as you say, but I would think most project contributors do care about their users. There are degrees, of course, but this thread hasn’t seemed that bad to me, so far? Anyway, I’m not under the illusion that these priorities will change much anyway, so I guess it’s all a bit moot – which is perfectly fine with me. I’m sure it’ll all turn out awesome.


#33

Personally I don’t find people giving their views on these priorities or their own preferred orderings problematic—it’s useful feedback. It happens that there is actual funding and therefore some schedule for multithreading whereas no one has decided to fund latency work. Threading work is also very close to being ready so it gets a bit of a push to get over the finish line. Of course, we’re also all painfully aware that compilation latency is an issue; so we think about it, discuss it, and work on it regardless of funding specifically for that.


#34

I tell everyone who asks me about Julia that, despite the 1.0 tag, it’s not ready for general consumption because of the compilation latency and the lack of a debugger.


#35

It appears to me that you are combining facts with your personal judgment. That there are compilation latency issues and no debugger are objective facts. Whether that makes Julia “not ready for general consumption” is subjective and fairly debatable given how many people at this point clearly consider it to be so—while others disagree. That’s up to each individual to decide. It would seem better to present the facts to people instead of making a judgment for them.


#36

I completely understand your perspective, as I think those features are necessary for broad-scale adoption to displace MATLAB… but as a general statement it isn’t accurate, as not everyone needs those features to make a compelling case for using Julia.

A better approach is to segment people into certain groups and say “if you are in XXX group, then you should wait to use Julia”. And “if you are in YYY group then it is time to start learning and using Julia”. What the groups are depend on the sociology of your field.

In economics at least, there is a huge segment of users who primarily write scripts, need a responsive interactive environment, and live inside the code-stepping exploration tool (which is poorly named a “debugger”). I am frequently in that category myself. Many of them are better off waiting, although I have told entering PhD students they should learn Julia in preparation for the future.

But other groups need fast code, write packages, use specialized packages that Julia does well, have projects large enough that writing them in MATLAB would be a software engineering disaster, or simply have a strong enough software background to work around any warts in the current environment. For them, Julia has been ready for quite a while, and it gets better every day.


#37

Very true, this. I happen to be in the second group, and for me Julia has opened so many doors that were previously closed. The key for me is the performance-to-code-complexity ratio that can be achieved, which is incredibly high. But I also feel and understand your point of view, @bjarthur. Latency in particular is an important caveat to bear in mind with Julia, particularly as solving it completely is likely going to be difficult (just my opinion/guess). See, the Julia model is quite revolutionary as far as I can tell, but it does have its drawbacks. It’s not really possible to precompile code for all possible type combinations, as these are effectively infinite, and furthermore call paths can change when newly added code specialises methods. But AFAIU it is this very peculiarity (the possibility to completely infer and specialise a whole call tree) that is one of the reasons for Julia’s potential to pull ahead of even C/Fortran, performance-wise, in some cases.

I would suggest that, despite the above, you keep in mind the potential of this new thing. It’s early still, but doesn’t it feel very strongly like the future of scientific computing to you? It sure does to me.


#38

This really depends on your personal preferences and needs.
In more standard compiled languages, I used the debugger all the time. In dynamic languages like Julia or R, I interact with the code more closely and execute a few lines, check intermediate values, modify something then execute a few more lines etc until everything is just how it should be. This is in an industrial process engineering environment, modelling reactors etc, but should work the same in many if not most application areas.
Sure, setting a conditional breakpoint would be useful every now and then in a long loop, but I can get along without it when I have to. The benefits of using Julia far outweigh any longing for a debugger at this time.
As for JIT lag, our models run hundreds of thousands of times per session and again the benefits of the execution speed gains far outweigh any issues with first run delays. We simply keep the session active as long as possible.
So, while everyone has different wants and needs, my personal preferences would always be anything that shaves a millisecond off execution time here and there as a first priority. Everything else is gravy, although being able to compile binaries for distribution would be pretty awesome too. And is already possible, if not single-button-click simple just yet.
From my perspective, Julia is not only production ready, it is kicking the competition’s backsides. The Julia version of our main model is a third shorter in lines of code, hugely more readable/maintainable and runs more than 100x faster than the compiled code it replaced. Yes, 100x. This was not a typo.
My team and I have been actively pushing for wider adoption in our company, to the point of compiling and presenting in-house courses on Julia. No one so far who has seen the ease of learning the language and the speed at which their models run has even asked about a debugger. They all naturally adopted the interactive development style instead.


#39

What I have found recently is that a lot of people who think JIT lag is an impossible barrier have a big misunderstanding, so I would like to clear that up just in case. JIT time is additive, not multiplicative. It is a fixed cost dependent on the code, not on the runtime. Mixing compilation and the run doesn’t make the run any slower. A common example I have seen of this misunderstanding is people calling a function once before the real call “to compile it”: that doesn’t save any total time, because either way you pay compilation just once!
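The “pay once” point can be made concrete with a minimal sketch (the function here is made up for illustration):

```julia
f(x) = x^2 + 1      # any plain function

@time f(2)          # first call: one-time compilation + execution
@time f(3)          # later calls: execution only; the JIT cost is not paid again

# "Warming up" with a throwaway call just moves the same one-time
# cost earlier in the session; the total time spent compiling is unchanged.
```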

Knowing that, compilation only gets in the way when, for a given piece of code, compilation takes much longer than the actual run. Since runtime scales with values like the length of the array while compile time doesn’t, compilation is small enough not to be a factor in things like big-data calculations, ML training, and production differential equation solving. For example, we had an independent audit of our pharmaceutical modeling differential equation platform, and the auditors’ timings included compilation time, yet it was still multiple times faster than the industry-standard MATLAB toolbox (for QSP), showing that when solving real problems compilation time is simply not a factor. So that is not the worry.

That said, there is still an issue. Opening a REPL and waiting 2 seconds for a simple command can really suck. Does this affect big scripts? No. Is it annoying when doing interactive work? Yes, it can be. And that is precisely the issue: it is annoying, but it doesn’t actually interfere with Julia’s usage on the large scientific problems that Julia does so well! In fact, there is no reason to use the future tense there: compilation time does not affect the large DiffEq solves, the complicated JuMP models, or the training of Flux neural nets that have become so popular. This is the realm that is the bread and butter of “I need Julia”, so in some sense it should be prioritized.

But what it does do is present a barrier to playing with small problems. It is a small one, but an annoying one. Is that worth more than focusing on Julia’s main mission? PARTR multithreading and increased optimizations will affect the large models and packages, so in some sense that is the bigger deal for Julia’s mission. So it really is a trade-off, and the only true answer is to work on both ends. That said, I can see the ability to do layered multithreading driving a lot of heavily funded large applications, which could then be used to help with the latency issues, and it seems this is one of the reasons for prioritizing in this manner.

So as a non-compiler guy looking in, I get it, and this makes a lot of sense to me. But I would like to see the interpreter completed for small interactive use. Maybe the key to Julia is one language, multiple optimization levels. Julia is easy to optimize, but applying all of the optimizations can be overkill, and making it easier to make the right choice might be the way to really address compile time. The future is interesting.
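For what it’s worth, a small piece of that “multiple optimization levels” idea is already exposed today via command-line flags (this is just one knob, not the completed interpreter wished for above):

```shell
# Trade runtime speed for lower startup latency in short interactive sessions:
# --compile=min avoids most JIT compilation, -O0 disables LLVM optimizations.
julia --compile=min -O0
```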


#40

I totally agree with you.
I think there is also one big factor that is missing: the time schedule.
Namely, if you are in the group that must have the debugger and the compilation time solved, how long should you wait?
Is it 1–3 months or is it 1–3 years?

Another interesting point: when you set such priorities, is the current group of active Julia users a good representation of the audience Julia is aiming for?
I think it is a little biased, and the current users have much better programming skills than the average MATLAB / Python / R user Julia is aiming at (my own assumption about what Julia is aiming at).
This might cause a shift in priorities.