I could not find a way to make function calls like sqrt(-2) return NaN instead of raising an error. Is that possible?

It would be quite useful, at least sometimes, to just let the NaN propagate to the final result. This would also save the comparison with zero that is always present in sqrt code now.

If your inputs are floats, then you can use @fastmath, but it will not work with other types:

julia> @fastmath sqrt(-1.0)
NaN
julia> @fastmath sqrt(-1)
ERROR: DomainError with -1.0:
sqrt will only return a complex result if called with a complex argument. Try sqrt(Complex(x)).
Stacktrace:
[1] throw_complex_domainerror(::Symbol, ::Float64) at ./math.jl:31
[2] sqrt at ./math.jl:479 [inlined]
[3] sqrt at ./math.jl:505 [inlined]
[4] sqrt_fast(::Int64) at ./fastmath.jl:361
[5] top-level scope at none:0
julia> @fastmath sqrt(-1//1)
ERROR: DomainError with -1.0:
sqrt will only return a complex result if called with a complex argument. Try sqrt(Complex(x)).
Stacktrace:
[1] throw_complex_domainerror(::Symbol, ::Float64) at ./math.jl:31
[2] sqrt at ./math.jl:479 [inlined]
[3] sqrt at ./math.jl:505 [inlined]
[4] sqrt_fast(::Rational{Int64}) at ./fastmath.jl:361
[5] top-level scope at none:0

The NaNMath package looks like it does what's needed. And even though they wrap the Base.sqrt function, the compiler is probably smart enough not to perform the comparison twice. By the way, their trig functions differ from the Base ones not only in NaN handling: they just call libm functions, while Base has Julia implementations. Not sure why the case for sqrt is different.
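For reference, a quick sketch of what the NaNMath calls look like (assuming the NaNMath package is installed):

```julia
using NaNMath

NaNMath.sqrt(-1.0)   # NaN instead of a DomainError
NaNMath.log(-2.0)    # NaN
NaNMath.sqrt(4.0)    # 2.0, same as Base.sqrt on valid input
```

Once the NaN appears, it propagates through ordinary Base arithmetic by the usual IEEE rules, so only the domain-restricted calls need to be swapped out.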

Good to know that.
Just to be sure I understand you correctly: you are saying that using NaN values as input to a @fastmath function will lead to undefined behavior, right? Not that @fastmath sqrt(-1.0) leads to undefined behavior, correct?

No, I'm saying that what you suggested is undefined behavior, i.e. @fastmath sqrt(-1.0) is undefined behavior. @fastmath assumes NaN doesn't exist anywhere. It's a mere implementation detail that @fastmath sqrt(-1.0) returns NaN. It could throw an error or even crash.

The suggested solution with NaNMath package is ok and works, but:
Is there any way to make all math functions return NaN in a large block of code? Like I have a block like this:

for i in ...
    a[i] = sqrt(f(i))
    ...
    if ...
        arr = [sqrt(f(j)) for j in ...]
        b[i] = log(f(arr))
    end
    ...
end

And I would really like all functions like sqrt and log here to just produce NaNs. The only way I see is to manually find the offending functions and replace them with NaNMath implementations. Oh, and what to do if f(...) - some external function - also contains sqrt and others, and thus throws a DomainError instead of just returning NaN? And if I forget some function, which just happens not to throw on test data, it will explode at some unexpected time later.
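For concreteness, the manual replacement on a toy stand-in for f would look something like this (f here is hypothetical; a real 3rd-party f could still throw from its own internal sqrt/log calls, which this does not fix):

```julia
using NaNMath

f(i) = i - 5.0                   # hypothetical stand-in for the external function

a = zeros(10)
for i in 1:10
    a[i] = NaNMath.sqrt(f(i))    # NaN where f(i) < 0, instead of a DomainError
end
a                                # entries 1-4 are NaN, the rest finite
```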

Note that in this example it's not easily possible to put catches in the relevant locations. Even for simpler examples, this looks far from an expected quick solution…

Bumping this thread - any new thoughts on NaN propagation, in light of almost a year of Julia and ecosystem evolution? The main issue is that it's very painful to just propagate all the NaNs to the end, compared to basically any other commonly used language. Actually, it's almost impossible to do reliably on the first try with many functions calling each other. Something like a global/local option for that would be really useful. If it worked like

propagate_nan() do
    # code called here silently propagates NaNs, no matter what functions are called and how nested they are
end

this would be even better than other languages, because in other parts of the code unexpected NaNs would still error.

Oh, there are so many things going bad or wrong with this kind of partial solution. A few examples below, where we assume that the function f is 3rd-party, from some other project, and we definitely don't want to edit its code.

f(x) = sqrt(log(x))
g(x) = try f(x) catch; NaN end
a = rand(10_000);
@btime f.($(2 .+ a));
93.934 μs (2 allocations: 78.20 KiB)
@btime g.($(0.5 .+ a));
483.374 μs (7 allocations: 78.31 KiB)
# 5 times slower!
@btime g.($(0.5 .+ a));
221.003 ms (65249 allocations: 3.06 MiB)
# 2000 times slower!
g(2f0) |> typeof
Float32
g(0f0) |> typeof
Float64
# wrong type

f(x) = (sqrt(log(x)), sin(x))
f(2)
(0.8325546111576977, 0.9092974268256817)
g(2)
(0.8325546111576977, 0.9092974268256817)
# so far so good
g(0.5)
NaN
# how to get (NaN, 0.909)?

This gets only more complicated when called functions are more and more nested or modular.
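At least the wrong-type part of the first example is patchable by making the fallback match the input's float type (a sketch; the try/catch overhead in the failing case remains, and the tuple problem above is untouched):

```julia
f(x) = sqrt(log(x))

# oftype(float(x), NaN) returns NaN in the same float type as the input,
# so g(0f0) now gives a Float32 NaN rather than a Float64 one.
g(x) = try f(x) catch; oftype(float(x), NaN) end
```

With this, `g(2f0)` and `g(0f0)` both return `Float32`, removing the type instability.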

I am not sure how this would make sense for generic code. NaN propagation is for floating point computations; you can stretch it to <:Real conceptually in some cases, but why would e.g. getproperty "propagate" NaN when in normal use scenarios neither argument is a NaN?

This seems like the best way suggested in this thread so far! The wrapper type should cover more cases and hopefully (?) be close to zero-cost in performance. Not sure if it will SIMD, for example.
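A minimal sketch of what such a wrapper type could look like (NaNNumber is an invented name, not a real package; a real implementation would need many more methods):

```julia
# Hypothetical wrapper that turns domain errors into NaN at its boundary.
struct NaNNumber <: Real
    x::Float64
end

# Domain-restricted functions check the argument instead of throwing:
Base.sqrt(a::NaNNumber) = NaNNumber(a.x < 0 ? NaN : sqrt(a.x))
Base.log(a::NaNNumber)  = NaNNumber(a.x < 0 ? NaN : log(a.x))

# Plain arithmetic just forwards; NaN then propagates by IEEE rules:
Base.:+(a::NaNNumber, b::NaNNumber) = NaNNumber(a.x + b.x)
Base.:*(a::NaNNumber, b::NaNNumber) = NaNNumber(a.x * b.x)

sqrt(NaNNumber(-2.0))   # NaNNumber(NaN) instead of a DomainError
```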

One obvious drawback is that it requires tracking and defining all mathematical functions, and also stuff like show, parse, convert, and so on for all float-related functions used in the code. Sounds like a complicated and error-prone task.
Another practical limitation is that if a 3rd-party library calls a C (or other) function or an external program and performs some calculations on its output, one cannot inject the wrapper.

I don't think so; you can just follow the examples of similar packages that redefine arithmetic. IMO this is something you can hammer out in an afternoon.

Frankly, I find it amazing that this is even possible, let alone that it is so easy.

This implies modifying a 3rd-party library, unfortunately.

It also feels very strange to me that a performance-oriented language like Julia doesn't give the option (at least) to just silently propagate NaNs without any exceptions. This is one of the really few things that are much more convenient to handle in Python, and for me in particular it means that the corresponding code cannot be easily and performantly ported to Julia.