I’m not sure if the
@inbounds macro handles top-level blocks like that. It may need to operate on a function body.
So the pattern I usually use when writing code like this is something like the following.
```julia
function user_facing_function(args...)
    checkbounds(args...)
    return _nobody_call_this_function_but_me(args...)
end

@inbounds function _nobody_call_this_function_but_me(args...)
    ...
end
```
But I’m sure there are other (and better) ways of doing this.
Have you tried the @avx macro? See [ANN] LoopVectorization.
@inbounds doesn’t work on a function definition. It needs to be inside the function, as in:
```julia
function _nobody_call_this_function_but_me(args...)
    @inbounds begin
        ...
    end
end
```
What about this?
Sorry, I meant in a function body.
Would it be relatively feasible (and relatively easy) to have a macro that goes through and does standard tricks? e.g.
```julia
@makefaster function f(x)
    # ... macro adds @inbounds to all loops
    # ... also does standard passes that are weakly better
end
```
And it could potentially have some extra arguments for various passes. For example
```julia
@makefaster function f(x), settings = (inbounds=true, avx=true, views=true)
    # ... macro adds @inbounds to all loops
    # ... also does standard passes that are weakly better
end
```
Or whatever, which would add in
@view, etc. everywhere. Of course, these things are not always faster so being able to toggle is worth the trouble.
This sort of thing would not be a replacement for doing things right, but it might be a good way to tell people with minimal Julia experience how to get started - a decent heuristic. It can’t fix issues such as bad type inference, but it could add in the mindless annotations.
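A minimal sketch of what such a macro could look like, limited to the @inbounds pass. The name @makefaster and the implementation below are hypothetical; a real version of the avx/views passes would need much more careful expression rewriting.

```julia
# Hypothetical sketch: wrap a function's body in `@inbounds begin ... end`.
macro makefaster(fdef)
    fdef.head === :function || error("@makefaster expects a function definition")
    body = fdef.args[2]
    # Replace the body with an @inbounds-annotated version of itself.
    fdef.args[2] = :(@inbounds begin
        $body
    end)
    return esc(fdef)
end

@makefaster function sumvec(x)
    s = zero(eltype(x))
    for i in eachindex(x)
        s += x[i]
    end
    return s
end

sumvec([1.0, 2.0, 3.0])  # 6.0
```

Note that this only wraps the top-level body; since @inbounds applies to everything inside the annotated block, the nested loops are covered too.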
As discussed with the global scoping issue, it seems like a way to declare a “script” - telling the compiler it can compile the whole file as one big
`let` block - would alleviate this common performance issue. Then the heuristic solution to the problem could be to tell people, when running a
`.jl` file, to flag it as a script… Remembering to write scripts without globals is pretty tough for people coming from scripting languages.
If such a thing could exist trivially - if those optimizations were always doable - don’t you think they would be done already, and such a macro wouldn’t be necessary?
The “ugly” truth of
@inbounds and friends is that they are not trivial and always allowed - in fact, the docs even discourage it since you’re basically telling the compiler “don’t check for safety here, I know better than you”. The toggle you’re asking for is here, in the form of asking for the dragons and not in the form of making them vanish.
Exactly - the name of such a macro might as well be
@makecrashy.
Even in its present form, macros like @inbounds are convenient for the inexperienced like myself, as they provide a quick way to get some performance benefits from scrappy code. I suppose the trick is to avoid the temptation of letting that be a crutch rather than learning the proper approach.
You might have been joking, but I think that is a great name for that kind of macro. Makes people think twice before trying it. The old
@fastmath gave the wrong message entirely.
No, I wouldn’t think that at all. The compiler could never know when to take off the safety wheels on its own. Those sorts of decisions can never be done automatically.
In effect, though, people are comparing code with safety wheels in Julia against code without them in other languages (either directly in C, or sometimes using unsafe C behind a Python interface)… We tell people to look at the performance guide, but scripters have trouble following it for some reason.
So is it really that bad to have a macro called
@makecrashy which we can tell people to use to get a sense of whether the safety wheels are slowing them down? Potentially trying a few toggles to see what helps? The intended users are lower-skilled programmers with big scripts (rarely organized into functions), who are unlikely to be able to scan the code looking for appropriate function and vector annotations.
There already are flags to turn off various things globally,
--check-bounds for example.
However, lower skilled programmers don’t need these macros. Their performance problems will come from type instabilities, unnecessary allocations, bad usage of CPU cache, suboptimal algorithms etc. When they are at the point where they write code where
@inbounds would matter (basically only when a bounds check would prevent SIMD), they are at the level where they can be properly taught about these macros.
There is no programmer level where a
@random_unsafe_operations macro is valuable. If anything, such a macro would give a wrong impression of how to program.
I fail to see why this couldn’t be done in cases where you’re iterating over the indices of a collection that is not modified at all during the loop, e.g.
```julia
function compileme(x)
    q = 0
    for i in eachindex(x)
        q += x[i]
    end
    return q
end

compileme((1, 2, 3, 4))
```
x is never reassigned in the loop and has a fixed length, so the compiler should know that
x[i] will always be in bounds. I’m pretty sure it already does this optimization.
In the above tables, note that he gets a significant speedup between examples E and G. But I agree that the others are more important.
Beginners don’t know this (e.g. I am not even sure how to do that with Jupyter), but more generally, just because I am willing to believe a function is safe doesn’t mean I want everything to be run unsafe.
For sure. That stuff can never be automatic, and it is tough for non-programmers to handle. But if you can still get a 2x speedup with a simple macro on some functions, it is worth it.
That could be true, but most people in my field (economics) don’t want to learn to program. So it is one thing if such a macro (giving ways to blindly flip annotations inside a code block) wouldn’t be helpful.
From what you say about SIMD, it may not.
But the argument that it would end up as a crutch, preventing people from learning to program properly, only makes sense if your intended user wants to learn to program properly. I don’t think purposeful inconvenience is a good strategy for teaching better patterns… But all of that is moot if blindly adding the inbounds, avx, view, etc. annotations is rarely helpful.
I am not sure about this. Most economists I know who are into computational work actually want to learn to program very well, or already have. Papers that use numerical techniques (for nonlinear solutions, estimation, etc.) can easily span years and comprise many tens of thousands of lines of code, and become unmanageable without at least intermediate coding skills. Consequently, most projects have at least one coauthor who programs rather well.
Instead, I wish there were less emphasis on micro-optimizations like
@inbounds and friends on this forum. Seeing discussions about optimizing code as a newbie, it may be very easy to get the impression that this is the magic where performance comes from, because the people participating in these discussions already instinctively apply all the usual performance tips to their code, so they are wringing out the last 20%.
But for most users, ignoring
@simd etc. at first, and just writing compiler-friendly code with reasonable memory traversal and allocation patterns, is best: it will get you very far, with robust performance across Julia versions and the underlying hardware.
Sure, they are there, but not too many of them, and between the two of us we probably know most of them personally. They largely prefer Fortran and C because it is easy and requires no software training to write reasonably fast Fortran code, due to its simplicity and aliasing rules for arrays.
Furthermore, most of the computational code is written by RAs who are mostly self-taught - especially the big projects. The researchers seem to appreciate that if they use languages other than Fortran, they can more easily experiment with better algorithms (which is where the real benefits appear) but have trouble moving past the transition of everything suddenly getting slower.
Sure, some are that size, but software engineering and training are rare. This is a tiny proportion of the number of people I wish would use Julia… and most of them are on Fortran (and occasionally a largely-C subset of C++). Since these people are using Fortran/C for speed, it is tough to convince them to change to Julia if they port something and it is orders of magnitude slower than Matlab (let alone Fortran or C). Helping them past that hurdle more easily will help them get addicted to Julia.
We 100% agree on this. In fact, my proposal of the
@makecrashy macro has this as its goal. I.e., instead of making people think they need to learn a whole bunch of new rules for when to annotate with various macros, we train them to: (1) get everything out of globals; (2) apply some basic heuristics for ensuring type stability; and (3) if it is still “slow” relative to non-Julia code, try
@makecrashy to see whether micro-optimizations might help in their circumstances… and only learn more or tweak if they need to. The global way of skipping bounds checks isn’t a good fit here…
But, as I said before, if a
@makecrashy can’t be written to actually help, then this is all moot.
Or, is there a way for the
@inbounds macro to extend to a whole function and recursively apply to its blocks? That would go a long way…
Judging from @code_llvm, there seem to be no bounds checks in your example, and SIMD is used.
However, when x is a list instead of a tuple, bounds checks are still done (preventing SIMD), even though as far as I can see they are in principle not needed. Adding
@inbounds gives a significant speedup (and brings SIMD back).
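For reference, here is a self-contained version of that comparison (the function names are mine; whether the Vector version gets its checks elided automatically may differ across Julia versions, as noted below for 1.2):

```julia
# Sum over eachindex: for a tuple the compiler can prove the index is in
# bounds; for a Vector (at least on older Julia versions) the checks may
# remain unless annotated with @inbounds.
function sumup(x)
    q = zero(eltype(x))
    for i in eachindex(x)
        q += x[i]
    end
    return q
end

function sumup_inbounds(x)
    q = zero(eltype(x))
    @inbounds for i in eachindex(x)
        q += x[i]
    end
    return q
end

sumup((1, 2, 3, 4))           # 10
sumup_inbounds([1, 2, 3, 4])  # 10

# Compare the generated code yourself with:
#   @code_llvm sumup([1, 2, 3, 4])
#   @code_llvm sumup_inbounds([1, 2, 3, 4])
```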
Disclaimer: I tested on Julia 1.2 (the Jupyter Datascience Docker image has not been updated yet).
Is there any way to get the desired performance from this code without using @inbounds? Someone mentioned earlier that loops using eachindex should compile to equivalent behavior, since the compiler should know that the index can’t go out of bounds.
However, it looks like the fact that the index goes to “size-1” instead of “size” would be a problem if you were trying to avoid using @inbounds.
Also, is the scope of @inbounds the entire block, or just the loop over pp? I’m surprised you don’t need to use @inbounds on each loop.
```julia
@inbounds for pp = 1:SizeZ-1
    for nn = 1:SizeY-1
        for mm = 1:SizeX
            Hx[mm,nn,pp] = (Chxh[mm,nn,pp] * Hx[mm,nn,pp] +
                            Chxe[mm,nn,pp] * ((Ey[mm,nn,pp+1] - Ey[mm,nn,pp]) -
                                              (Ez[mm,nn+1,pp] - Ez[mm,nn,pp])))
        end
    end
end
```
This is a terrific thread. I’m trying to write some FD code, and now all my questions about such code have been answered.
Edit: I was just thinking that @inbounds probably tells the compiler: everywhere inside this block of code where you think you need to check bounds, don’t. Nothing like realizing the obvious after you’ve posted.
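To check the scope question concretely: a single @inbounds on the outer loop covers the nested inner loops too. A small sketch (the function and arrays are made up for illustration):

```julia
# One @inbounds on the outer loop removes bounds checks for everything
# inside the annotated block, including the nested inner loop.
function add!(out, a, b)
    @inbounds for j in axes(out, 2)
        for i in axes(out, 1)   # covered by the outer @inbounds as well
            out[i, j] = a[i, j] + b[i, j]
        end
    end
    return out
end

add!(zeros(2, 2), ones(2, 2), ones(2, 2))  # all entries are 2.0
```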
It would be great if you could add the suggested code modifications, and a new benchmark column, to the first message.
I’m not familiar with a “list” in Julia.