I stumbled over something very strange. I’m updating from 1.6 to 1.8. The only problem is that all the scripts use much more CPU, even to the point where everything is maxed out. The scripts are CPU intensive and use lots of threads. Any ideas why that could happen?
Some more information would probably be needed to answer.
Do you have a reproducible Project.toml file so we know what versions and what packages we are talking about? Just having a nice clean Project.toml file will avoid a ton of pitfalls.
Do you see the slowdown in a brand new project environment or only in the old project environment inherited from 1.6?
Do you have any small examples of code experiencing slowdown? It would be pretty difficult to track the issue without that. You can use @time and BenchmarkTools.@benchmark to see where the slowdown is.
Is runtime slow? Or precompilation? Or installation itself? As in, is julia slow to compile your code or to execute the compiled code?
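For example, a minimal timing comparison you could run on both 1.6 and 1.8 might look like this (`process_chunk` is a hypothetical stand-in for one unit of your real workload):

```julia
using BenchmarkTools

# Hypothetical stand-in for one unit of the real workload.
process_chunk(data) = sum(abs2, data)

data = rand(10_000)

@time process_chunk(data)        # first call: includes compilation time
@time process_chunk(data)        # second call: pure runtime

@benchmark process_chunk($data)  # statistically robust runtime measurement
```

If the first `@time` blew up but the second didn’t, the regression is in compilation; if both did, it’s in runtime.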
It’s not so easy. The scripts run for the whole day transforming live data. There are multiple threads and communication between different services.
I tried to minimize everything that could affect it: clean install, deleting the .julia folder, etc. It didn’t work. After a while the 1.8 version just spins up.
I don’t know if I started Julia and the scripts the right way. I wrote a C# API which starts a Julia process with the built-in Process class. Then I just pass the absolute path to julia.exe, followed by the script path. Maybe there was a change from 1.6 to 1.8?
Thank you, this looks nice for a different use case. My scripts need to run for about 8 h, so compile time shouldn’t be a problem. They compile in the morning and should be good for the day. It was this way until I tried to update to 1.7/1.8.
I use more than 10 threads. One streams data and deserializes it, a second one accumulates, a third one waits for events, and so on. Real-time data analysis…
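A rough sketch of that kind of pipeline (all names here are hypothetical, just to illustrate the thread layout, not the actual code) could be:

```julia
# Hypothetical sketch: a streaming pipeline with Channels between tasks.
raw = Channel{String}(100)       # serialized messages arrive here
parsed = Channel{Float64}(100)   # deserialized values

# Thread 1: stream and deserialize.
deserializer = Threads.@spawn for msg in raw
    put!(parsed, parse(Float64, msg))
end

# Thread 2: accumulate.
accumulator = Threads.@spawn begin
    total = 0.0
    for x in parsed
        total += x
    end
    total
end

# Feed some fake data, then shut the pipeline down cleanly.
foreach(i -> put!(raw, string(i)), 1:10)
close(raw)
wait(deserializer); close(parsed)
@show fetch(accumulator)   # sum of 1..10, i.e. 55.0
```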
I’m sorry, your use case seems a little complicated and I might not be able to help you.
Are you saying that each of these threads runs in some kind of ‘server’ mode, running and waiting for inputs to arrive all day long? Or do you re-launch Julia and the scripts several times per day?
In the second case, DaemonMode.jl might help. In the first case, I cannot help you without deeper and more precise information about what exactly got slower than 1.6: which packages, which operations, etc., e.g. with an MWE.
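If you are re-launching, the basic DaemonMode.jl workflow looks roughly like this (a sketch of its client/server usage; the script name is hypothetical):

```
# Terminal 1: start a persistent Julia daemon that keeps compiled code warm.
julia --startup-file=no -e 'using DaemonMode; serve()'

# Terminal 2: run scripts against the daemon instead of a fresh process.
julia --startup-file=no -e 'using DaemonMode; runargs()' myscript.jl
```

Each client invocation then skips the startup and recompilation cost, since the daemon process stays alive between runs.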
@Impressium, you could also try the nightly 1.9-DEV. Of course we don’t want regressions, but if it’s fixed there, that’s good to know. You might want to bisect or narrow it down, e.g. test on 1.7 too (at least if it’s not fixed on master).
Yes and no. It’s more designed for long-running (HPC-type) code, but as you point out you can run scripts with that package, or there’s simply also this option:
julia -O0
Have you tried that, and is 1.8 faster (than 1.6) that way? Of course we don’t want a regression for any code; I suppose 1.8 is faster for most code.
Yes, julia -O3 should have the longest compilation time, i.e. a higher number means more optimization, thus slower to compile (and potentially faster at runtime). It’s of course worth trying that too, or going lower with -O1 or -O0. I was answering for scripts (short-running, where you want to minimize optimization time), but it’s still good to try all the possibilities in your case. And:
(@v1.8) pkg> status --outdated
and make sure you have the same versions of your packages (and their dependencies). Julia tends to downgrade packages for me (or for people, like me, who misuse it…).
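One way to keep versions identical between the 1.6 and 1.8 environments is to pin them (the package name below is hypothetical):

```julia
using Pkg
Pkg.activate(".")      # work in the project-local environment
Pkg.pin("Example")     # lock "Example" at its currently resolved version
# Pkg.free("Example")  # undo the pin later if needed
```

With everything pinned, any remaining slowdown is attributable to Julia itself rather than to a resolver-driven version change.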
I assume you are multithreading? And maybe also calling into LinearAlgebra?
Possibly your BLAS is using more threads than before, and this is interfering with your Julia threads? Try adjusting the number of BLAS threads with BLAS.set_num_threads.
Try setting it to some number less than the number of physical cores.
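For example (4 is just an illustrative value, not a recommendation for your machine):

```julia
using LinearAlgebra

@show BLAS.get_num_threads()  # check what the current default is

# Leave some physical cores free for your own Julia threads.
BLAS.set_num_threads(4)
```

If the BLAS thread count times your Julia thread count exceeds the physical core count, the threads can oversubscribe the CPU and fight each other.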
It seems to me you need to use ProfileView.jl on a small chunk of your problem and see where the time is being spent. You could also compare that with 1.6. That should point you towards a fix.
Anything else is speculation, as we can’t read your code and the context seems quite complicated.
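A minimal profiling session could look like this (`work` is a hypothetical stand-in for a small chunk of your problem; ProfileView.jl must be installed):

```julia
using ProfileView   # provides @profview

# Hypothetical stand-in for a small chunk of the real workload.
work() = sum(sqrt(i) for i in 1:10^7)

work()              # warm up first so compilation isn't profiled
@profview work()    # opens a flame graph of where time is spent
```

Running the same snippet on 1.6 and 1.8 and comparing the flame graphs should show which calls got more expensive.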
I would try 1.9 as I mentioned (then you don’t need to compile your own 1.8.3). 1.8.3 also seems close; it’s getting very stable (1 regression left).
I’ve got one more observation.
I’m running multiple scripts on this server. One script runs just fine (1% CPU load). I started a second one and both of them use 100%, split 50/50. I stop the second script and the first one goes back to 1%. I can do it again and the same thing happens.
Multiple scripts of, I assume, this same script (though it would be bad either way).
I assume you do:
$ julia -t auto
julia> Threads.nthreads()
16
so try halving:
$ julia -t 8
Then run just two scripts like this, or 4 after halving yet again, etc. The auto setting is there to use the highest number of practically usable threads; by going over that edge for one (or many) scripts, bad performance is perhaps to be expected.
It’s possible, though, that it shouldn’t fall off a cliff, if that’s what it did. Do you get better (or the same) overall performance with some such settings? Why are you running many scripts (of the same kind?) when you’re using threads anyway? If this is a regression in Julia (and not expected behavior), you could file an issue.
I have recently encountered a significant performance hit when running my scripts at the maximum thread count. Try reducing the nthreads value as suggested above by Palli.