Fantastic progress in the master branch!

It blows my mind that JavaScript is this fast. Can you imagine the amount of time and money that must have gone into achieving this number? And all that effort for such a horrible, horrible language.

I’m very excited about constant propagation. Any comments on the effect this might have on compile times?
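For readers who haven't followed the PR, here is a minimal sketch of the kind of thing constant propagation enables (the function names are made up for illustration):

```julia
# Hypothetical example: with interprocedural constant propagation, the
# compiler can push the constant 2 through the call to pow4 and fold the
# whole computation down to a constant.
pow4(x) = x^2 * x^2
f() = pow4(2)

@code_typed f()  # with const-prop, the body of f is effectively `return 16`
```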

2 Likes

Compile times are already a problem that we need to address more generally. The way forward to better compile times is to implement several larger solutions, including the following:

  1. Interpreting more code rather than compiling everything;
  2. More thorough pre-compilation and caching of generated code at various levels;
  3. Better specialization heuristics so that we can more often compile a single generic version of code when specialization isn’t necessary.

All these improvements should more than compensate for spending a little more time on compilation when it can really improve performance. Fortunately, the nice thing about optimizations is that they don’t (by definition) change the behavior of the code, so they can happen at any time.
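As a minimal sketch of point 3, here is the kind of control that already exists today if you want to opt a particular argument out of specialization by hand (the function below is hypothetical):

```julia
# @nospecialize asks the compiler to compile one generic version of this
# method instead of a fresh specialization per concrete argument type.
function describe(@nospecialize(x))
    println("got a value of type ", typeof(x))
end

describe(1); describe("hi"); describe(rand(3))  # all share one compiled body
```

This trades a little run-time dispatch for a lot less generated code, which is exactly the kind of decision point 3 is about automating with better heuristics.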

5 Likes

There is a platform without Perl? What is the world coming to?

Those are fightin’ words around these parts :wink:

Me too. I am doing data science/analytics after 8 years of theoretical/computational physics (I have a bunch of papers in the pipeline; I hope I can get them out!). I use Julia, but I am learning and using Python for some things now because I have to work with other people, and because we are pretty sure what the Python API and ecosystem will look like in two years. (The depth and breadth of Python is great, but I immediately and sorely miss the ability to “talk about types” at the core of the language.)

Exact diagonalizations of QM models like @mason is doing are a bread-and-butter task for which Julia is well suited. In fact, for a very large number of physical-science projects, the code is shared by one or a few people, its lifetime is measured in months or maybe a couple of years, and there is usually no PHB vetoing your language choice. This makes physical science a great vector for Julia.

In the past two years I did get one new postdoc to try Julia for a project; I offered basically unlimited support. I don’t think it was language partisanship that prevented uptake, rather perceived practicality. I gave a Julia talk at the Barcelona supercomputer center too. There is obviously great interest: the room was full, which doesn’t happen often; I hope someone gave it a try. My contacts there have not had time to try it for anything yet, but have a genuine interest. My guess is that language adoption has some features of the dynamic growth of a scale-free structure: there won’t be a critical-mass event, but adoption will still be, in a sense, fast.

A possible reason is given in this post. The author claims that being dynamic doesn’t, in itself, make a language hard to optimize. Python is more difficult to make fast than JavaScript because Python has a “rich object model”. Python’s developers implemented a lot of cool features that were relatively easy to do in an interpreted language, but they were not thinking about the implications for building an optimizing compiler in the future.
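For contrast, a minimal Julia sketch of the same design point: Julia objects have a layout that is fixed at definition time, which gives an optimizing compiler something firm to work with.

```julia
# A Julia struct has a fixed set of typed fields known to the compiler,
# unlike a Python object, whose attributes live in a mutable dict.
struct Point
    x::Float64
    y::Float64
end

p = Point(1.0, 2.0)
p.x + p.y    # compiles to two loads and an add; no dictionary lookups
# p.z = 3.0  # error: fields cannot be added at runtime
```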

Another possible reason is that the JavaScript JIT compilers are very, very good at this stage, as a lot of effort (and money) went into them, and they really shine where

  1. the “same” calculation is performed repeatedly,
  2. and this “sameness” is cheap for the JIT to check.

Julia was designed to allow the compiler to reason about point 2 (among other things).
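A minimal sketch of that “sameness” in practice (the function is illustrative):

```julia
# The "sameness" Julia keys on is the tuple of concrete argument types.
# The first call compiles a Float64-specialized method body; later calls
# with a Vector{Float64} reuse it after a cheap type lookup.
sum_sq(xs) = sum(x -> x^2, xs)

xs = rand(10^6)
sum_sq(xs)              # triggers compilation of sum_sq(::Vector{Float64})
@code_typed sum_sq(xs)  # inspect the inferred, concretely typed IR
```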

An implied criticism of microbenchmarks with JS-style JIT compilers is that a small number of repeatedly executed hot code paths are not very informative about language performance for more complex code.

2 Likes

20 posts were split to a new topic: Adding VB to benchmark comparison

In my situation, my code is 1.3x faster when I switch from v0.6 to v0.7. However, Manjaro’s “Task Manager” showed that the new version uses more memory. That is confusing.

Speed versus memory is a standard tradeoff in computer science. In this case, the Julia compiler is probably specializing your code more aggressively, which means that more code is generated but that code runs faster.
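A minimal sketch of that tradeoff (toy function, purely illustrative):

```julia
# Each distinct argument type gets its own compiled specialization:
# faster execution, but more generated code resident in memory.
double(x) = 2x

double(1)        # compiles double(::Int64)
double(1.0)      # compiles double(::Float64)
double(1 + 2im)  # compiles double(::Complex{Int64})

@code_native double(1.0)  # inspect one of the specialized native bodies
```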

2 Likes

Some benchmarks about a new inlining algorithm were posted here in graphical format. The new algorithm is again one of those optimizations that can both improve performance and cost more in terms of compile time. This is a difficult tradeoff, but as @StefanKarpinski points out there are several paths forward.
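For a sense of the tradeoff, here is a minimal sketch of the inlining knobs users already have; the functions are made up, and the automatic heuristics the PR tunes operate without any annotations:

```julia
# Inlining small functions removes call overhead and unlocks further
# optimization, but inlining big ones bloats callers and compile time.
@inline bump(x) = x + 1            # tiny: almost always worth inlining

@noinline function heavy_loop(x)   # big body: inlining would bloat callers
    s = 0.0
    for i in 1:100
        s += x * i
    end
    return s
end

f(x) = bump(x) + heavy_loop(x)     # bump is inlined; heavy_loop stays a call
```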

But if we can AOT-compile some package code, then a large chunk of compile time is cut off for most users, so that is a nice tradeoff.
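A minimal sketch of what already exists in this direction, assuming the 0.6/0.7-era precompilation machinery (the module and function are hypothetical):

```julia
# Packages can shift some compile work out of the user's session with
# precompile statements; note this caches type-inferred code in the
# package's .ji file, not native code (which is the missing AOT piece).
__precompile__(true)

module MyPkg

heavy(x::Float64) = x^2 + 1.0

# Infer and cache this specialization at precompile time instead of at
# the user's first call:
precompile(heavy, (Float64,))

end
```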

1 Like

> But if we can AOT-compile some package code, then a large chunk of compile time is cut off for most users, so that is a nice tradeoff.

But we don’t have that currently. And compile time is currently a major issue for real-life software. I have a GUI that starts up in 3 minutes, and pressing certain buttons for the first time freezes the system for more than 20 seconds. I don’t want to blame anyone here, just to make clear that compile time is a real issue right now. AOT compilation of packages is a long-term project that will certainly not be usable before 1.0. I hope that I will be proven wrong.

10 Likes

Compile time is a top priority after getting 1.0 out, which we’ve hopefully made abundantly clear at this point – it seems like it gets brought up about once a week.

13 Likes

It seems the IPO constant-propagation PR (https://github.com/JuliaLang/julia/pull/24362) is about to drop (although only partially enabled for the moment, pending future optimisations). The nanosoldier benchmarks seem really good in most tests, but I am often puzzled to see apparent regressions in some others. So please educate me: in this particular nanosoldier run for the #24362 PR (https://github.com/JuliaCI/BaseBenchmarkReports/blob/25ad2f659d82ca7f65fe912ab9149d4e41c2aa4d/3c40a45_vs_eace3e9/report.md), should I understand that the apparent regressions in things like sparse-dense multiplications are actually real, or are they noise? How can one tell?

2 Likes

There’s something funny going on with nanosoldier (our benchmarking infrastructure) that’s causing more noise than usual and we haven’t gotten to the bottom of it yet. So unfortunately, the answer is “it’s hard to tell”. Fortunately, we have a lot of people working on core Julia who are pretty meticulous about performance testing on their own and nanosoldier is mostly just an additional safety net for catching unexpected regressions.
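For judging noise locally, a minimal sketch with BenchmarkTools (the workload is just a stand-in):

```julia
using BenchmarkTools

# Run the *same* workload twice; any difference between the trials is noise.
b = @benchmarkable sum(rand(1000))
tune!(b)
t_old = run(b)
t_new = run(b)

# judge() applies a tolerance before declaring :regression or :improvement;
# :invariant means the difference is within the noise band.
judge(median(t_new), median(t_old))
```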

Right! Actually, the last run you launched yesterday came out quite different.

https://github.com/JuliaCI/BaseBenchmarkReports/blob/c1a853ca11520522bafa859ebe7e82a612de4098/766e64c_vs_b8ee561/report.md

Spectacular…

1 Like

I wonder: how does it compare to the current 0.7?

2 Likes