Julia has four optimization levels, `-O, --optimize={0,1,2*,3}`, but I've never seen documented exactly what each of them does, e.g. -O1, nor, until today, what -O3 does or whether it's recommended.
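For anyone following along, this is how the levels are selected on the command line (the flag syntax is from `julia --help`; `script.jl` is a placeholder):

```shell
# The default level is -O2 (the starred one), so these are equivalent:
julia script.jl
julia -O2 script.jl
julia --optimize=2 script.jl

# Lower or raise it explicitly:
julia -O0 script.jl   # least optimization (but, as noted below, not zero)
julia -O1 script.jl
julia -O3 script.jl   # most optimization
```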
Despite what you might think, -O0 doesn't mean no optimization, nor is it always fastest for short-running scripts.
@elrod is the expert; an "alias analysis pass" seems worthwhile, even if "basic", and -O3 at least isn't hazardous(?!)
So why don't we default to a higher (or the highest) optimization level? Compiler optimization is a tradeoff (even using a compiler at all is one; interpreters are sometimes better). More optimization means longer compilation time, and that used to be an issue; now, with precompiled packages, maybe we could go all in with -O3 as the new default?
But I'm thinking that only for packages. For non-precompiled code, i.e. code not in a package, I think maybe we should lower the default, possibly to -O1, if not -O0.
A lot of code isn't speed-critical (something like 90%), and while people may not know this, we already have Base.Experimental.@optlevel to set the optimization level in packages. In practice it's used to lower it (I've never seen it used to opt into -O3); this was done in Plots.jl, for example, to good effect.
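For anyone who hasn't seen it, usage looks roughly like this (`MyPkg` and its function are made-up names; the macro goes at the top level of a module):

```julia
module MyPkg

# Request a lower optimization level for code defined in this module
# (and its submodules): faster to compile, possibly slower to run.
Base.Experimental.@optlevel 1

greet() = println("hello")

end
```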
Since that use in Plots and other packages, we got precompilation for packages in 1.9, and maybe the need for this strictly Experimental option is less now.
But only packages are precompiled, so if you lower optimization, your scripts potentially run faster overall. I would cautiously argue for -O1: it's still some optimization, and I'm not clear what I'd be missing out on, but it should compile faster, otherwise there's no point. In C/C++, -O0 means absolutely no optimization, at least historically, i.e. a developer/debug mode. I could argue for that too, maybe only in the REPL; in Julia, -O0 actually does perform one optimization. [And there's also the interpreter option, --compile=min, which "optimizes" even less, since it doesn't even compile, only interprets.]
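To illustrate the distinction between low optimization and the interpreter (again with `script.jl` as a placeholder):

```shell
# Compile, but with the least optimization:
julia -O0 script.jl

# Mostly skip compilation altogether and interpret instead,
# which can be combined with -O0 for the fastest startup:
julia --compile=min -O0 script.jl
```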
I intentionally didn't put this into the Performance sub-category; I think that's meant for questions about optimizing specific code. I want to reach all users, and to learn what (regular) users think, and when they use -O3 (or -O1).
If the default were changed to -O1, you could always opt back into the current default with -O2.
Note there is also `--min-optlevel={0*,1,2,3}` ("Set a lower bound on the optimization level").
I believe it's there to override the Experimental opt-down in packages, i.e. it's not already a way of doing what I'm proposing…
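If I understand it correctly, it's used like this:

```shell
# Suppose some package lowers itself with Base.Experimental.@optlevel 0;
# this forces every module to be compiled at -O2 or higher anyway:
julia --min-optlevel=2 script.jl
```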
Python already has the system I'm proposing, in effect. It's slow, even slower than our -O0 at runtime, while it compiles faster than any of Julia's options: it compiles to bytecode, doesn't try to optimize much at all, and doesn't use LLVM (which is known to be slow). But Python is in effect as fast as Julia for many things, since it calls C code, e.g. NumPy. That C code is the equivalent of the -O3 (or even current -O2) I'm proposing for precompiled packages, and it has served Python well.
I realize people may worry, thinking "why optimize less when I can do more?", and say people can instead just opt into -O1 themselves (for their scripts). But I believe (I'm not sure) that would also affect the precompiled packages you have, i.e. undo the precompilation and trigger recompilation at a lower level, which serves no good purpose (and Python certainly does nothing similar).
What I'm proposing would be a relatively simple change, with no bad effects, and all effects undoable, if people support it. But a Phase II might be smaller precompiled binaries. They are already large because LLVM, the machinery for optimization, is distributed with them. When you precompile all your packages (with LLVM) and distribute that code, then you can skip shipping LLVM (already possible if everything is compiled and no compilation is needed at runtime) and still have runtime compilation available, or maybe distribute a smaller LLVM for the apps that need light compilation.