In C++, there is no performance penalty for writing exception handlers provided that exceptions aren’t actually thrown (which should be “rare”). I was wondering if handling exceptions in Julia adds any overhead in the non-exceptional case.
Julia, like C++, has low-cost (but not quite zero-cost) exceptions. Note that C++ exceptions are not zero-overhead either, since you need at least one instruction to check.
At least some time ago, throwing exceptions and catching them in try-catch-finally was expensive. Has this changed?
No, nothing has changed; throwing and catching is still expensive.
Also, zero cost would not need “one instruction to check”.
Does this mean that statements like expr && error("throw error") are costly to have? I have quite a few of these in my scripts.
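To be concrete, I mean guard checks like this (checked_sqrt is just a made-up example):

# The comparison and branch run on every call, but the error(...) path
# is only taken when the condition is actually true.
function checked_sqrt(x::Real)
    x < 0 && error("negative input: $x")
    return sqrt(x)
end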
@Oscar_Smith C++ exceptions are zero cost (at least in most compiler implementations). In the non-exceptional case, there is no additional overhead. See the 2006 Technical Report on C++ Performance (section 5.4.1.2) for a more nuanced explanation.
I was hoping that Julia had similar optimizations, so that it wouldn’t matter where a try-catch is placed. But apparently putting it in an inner loop can be costly (in a toy problem, it results in a 2x slowdown with zero thrown exceptions). I wonder if exception optimization is on the roadmap for the compiler team. Otherwise, this is a language construct that has to be used cautiously.
I’m no expert on this, but I think the general strategy is to use if-else blocks in performant loops. You can always throw an exception in an outer function if you really need to.
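Roughly something like this (the names are made up, just to sketch the pattern): keep the hot loop free of try-catch and let it report failure through its return value, then throw only in the outer function.

# Hot inner loop: no try/catch, failure is signalled via the return value.
function sum_until(n::Int, bad::Int)
    t = 0
    for i in 1:n
        i == bad && return (t, false)
        t += i
    end
    return (t, true)
end

# Outer function: the only place that turns the failure into an exception.
function checked_sum(n::Int, bad::Int)
    t, ok = sum_until(n, bad)
    ok || error("hit the bad value $bad")
    return t
end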
EDIT:
Can you give an example of a case where you’d like to use a try-catch block in an inner loop?
It would be interesting to see what you tried. The test is, of course, not zero-cost: you still have to execute the condition to see if the exception should be triggered. I’m not seeing any difference, although there is some inconsistency in the @btime results:
function f(n::Int)
    x = rand(Int)   # a sentinel the loop will (almost surely) never hit
    t = 0
    try
        for i = 1:n
            i == x && error("bad i value")
            t += i
        end
    catch
    end
    return t
end

function g(n::Int)
    x = rand(Int)
    t = 0
    for i = 1:n
        i == x && break   # same check, but break instead of throwing
        t += i
    end
    return t
end
and timings:
julia> using BenchmarkTools
julia> @btime f(10_000)
4.541 μs (0 allocations: 0 bytes)
50005000
julia> @btime g(10_000)
2.296 μs (0 allocations: 0 bytes)
50005000
julia> @btime g(10_000)
4.517 μs (0 allocations: 0 bytes)
50005000
julia> @btime f(10_000)
4.540 μs (0 allocations: 0 bytes)
50005000
julia> @btime g(10_000)
4.516 μs (0 allocations: 0 bytes)
50005000
julia> @btime f(10_000)
4.540 μs (0 allocations: 0 bytes)
50005000
There’s that one @btime for g(10_000) that’s 2.296 μs, but I’ve seen that for f(10_000) as well, although far less often. Could be something to do with the branch predictor.
For instance, suppose you have a while loop and expect an interruption (in the REPL) through Ctrl-C. As far as I know, one can do that with a try…catch:
try
    while true
        processing()   # waiting for a user interrupt
    end
catch exception
    # handle the interruption correctly
end
Is there any way to do this differently that would lead to a performance increase?
No, not as long as the error isn’t thrown. Zero cost is about the try-catch: it makes actually throwing an error more expensive, but none of this has any effect on the cost of having an error-throwing expression that isn’t executed in the normal case. The main issue is that we (currently) support throwing the error across C frames. That is doable if and only if the C code is compiled with all the unwind info.
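For the Ctrl-C case specifically, you can also keep the try block but handle only InterruptException and rethrow anything else (a sketch; processing() stands in for the actual work):

try
    while true
        processing()   # waiting for a user interrupt
    end
catch e
    # Only handle Ctrl-C; let every other exception propagate.
    e isa InterruptException || rethrow()
    # handle the interruption here
end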
Thanks to everyone who contributed. Didn’t expect this level of activity for this question.
My question arose from this exercism.io solution (I commented as “shmiggles”). Moving the try-catch to the outer loop (or removing it entirely) gave a 2.5x performance boost. I verified the benchmarks on my machine (AMD Ryzen 7, if that matters). Relative benchmarks were pretty consistent across multiple runs with varying input sizes.
“Zero-cost” exceptions are coming even in Python 3.11 (are they really zero-cost, or just much faster, i.e. “minimal”?):
https://bugs.python.org/issue40222
Even before that, exceptions were used a lot in Python, as Python users aren’t that performance-obsessed. C++ people are, and if I recall correctly the cost there can be zero at runtime, i.e. no extra instructions if nothing is thrown, but there might be indirect costs: extra code, and effects on code generation and thus on the placement of code in the cache, so maybe not truly zero. For some reason Microsoft’s docs rather use the term “minimal” cost.
“Divergent error handling has fractured the C++ community into incompatible dialects”
“Exceptions are required to use central C++ standard language features (e.g., constructors) and the C++ standard library. Yet in [SC++F 2018], over half of C++ developers report that exceptions are banned in part (32%) or all (20%) of their code, which means they are using a divergent language dialect with different idioms”
“C++ is the only major language without a uniform error handling mechanism that is recommendable for all code”
The main thing about exceptions IMHO is not performance, but rather that they introduce an implicit control-flow primitive that circumvents the type system. Control can leave a function at any point and show up elsewhere in the call stack. This makes it harder for humans and tools to make a system reliable.
On the other hand, the Rust syntax of using foo()? to propagate errors seems lightweight, readable, and reliable. I wonder how that might go in Julia.
https://github.com/iamed2/ResultTypes.jl reports a performance boost from avoiding exceptions.
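For comparison, here is a minimal hand-rolled version of the errors-as-values idea in plain Julia (this is not the ResultTypes.jl API; Failure, maybe_parse, and total are made-up names):

# Errors travel back as ordinary values instead of being thrown.
struct Failure
    msg::String
end

function maybe_parse(s::AbstractString)
    v = tryparse(Int, s)
    return v === nothing ? Failure("not an integer: $s") : v
end

function total(strings)
    t = 0
    for s in strings
        v = maybe_parse(s)
        v isa Failure && return v   # propagate the failure as a value
        t += v
    end
    return t
end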