Return NaN instead of DomainError

I could not find a way to make function calls like sqrt(-2) return NaN instead of raising an error. Is that possible?

It would be quite useful, at least sometimes, to just let the NaN propagate to the final result. It would also save the comparison with zero that is always present in the sqrt code now.

2 Likes

If your inputs are floats, then you can use @fastmath, but it will not work with other types:

julia> @fastmath sqrt(-1.0)
NaN

julia> @fastmath sqrt(-1)
ERROR: DomainError with -1.0:
sqrt will only return a complex result if called with a complex argument. Try sqrt(Complex(x)).
Stacktrace:
 [1] throw_complex_domainerror(::Symbol, ::Float64) at ./math.jl:31
 [2] sqrt at ./math.jl:479 [inlined]
 [3] sqrt at ./math.jl:505 [inlined]
 [4] sqrt_fast(::Int64) at ./fastmath.jl:361
 [5] top-level scope at none:0

julia> @fastmath sqrt(-1//1)
ERROR: DomainError with -1.0:
sqrt will only return a complex result if called with a complex argument. Try sqrt(Complex(x)).
Stacktrace:
 [1] throw_complex_domainerror(::Symbol, ::Float64) at ./math.jl:31
 [2] sqrt at ./math.jl:479 [inlined]
 [3] sqrt at ./math.jl:505 [inlined]
 [4] sqrt_fast(::Rational{Int64}) at ./fastmath.jl:361
 [5] top-level scope at none:0

Wow, I would never have guessed that @fastmath changes the visible behaviour that much. At least the docs give the impression that it only enables lower precision.

Edit: the docs indeed contain the info about NaNs, a few pages below.

julia> isnan(x) = @fastmath isnan(x)
isnan (generic function with 1 method)

julia> isnan(NaN)
false

No. Don’t do that!!! It’s undefined behavior when you have NaN in @fastmath.

1 Like

The NaNMath package looks like it does what’s needed. And even though they wrap the Base.sqrt function, the compiler is probably smart enough not to perform the comparison twice. By the way, their trig functions differ from the Base ones not only in NaN handling: they just call libm functions, while Base has Julia implementations. Not sure why the case for sqrt is different.
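For intuition, here is a minimal self-contained sketch of that wrapping approach (`nansqrt` is a hypothetical name; NaNMath's actual implementation may differ):

```julia
# Hypothetical NaNMath-style wrapper: guard the argument and return a NaN
# of the matching type instead of throwing a DomainError.
nansqrt(x::T) where {T<:AbstractFloat} = x < zero(T) ? T(NaN) : sqrt(x)

nansqrt(4.0)    # 2.0, same as Base.sqrt
nansqrt(-2.0)   # NaN instead of an error
nansqrt(-1f0)   # NaN32, the input type is preserved
```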

Good to know that.
Just to be sure I understand you correctly: you are saying that using NaN values as input to a @fastmath function leads to undefined behavior, right? Not that @fastmath sqrt(-1.0) leads to undefined behavior, correct?

No, I’m saying that what you suggested is undefined behavior, i.e. @fastmath sqrt(-1.0) is undefined behavior. @fastmath assumes NaN doesn’t exist anywhere. It’s a mere implementation detail that @fastmath sqrt(-1.0) returns NaN. It could throw an error or even crash.

3 Likes

The suggested solution with the NaNMath package is OK and works, but:
Is there any way to make all math functions return NaN in a large block of code? Like I have a block like this:

for i in ...
    a[i] = sqrt(f(i))
    ...
    if ...
        arr = [sqrt(f(j)) for j in ...]
        b[i] = log(f(arr))
    end
    ...
end

And I would really like all functions like sqrt and log here to just produce NaNs. The only way I see is to manually find the offending functions and replace them with NaNMath implementations. Oh, and what to do if f(...), some external function, also contains sqrt and the others, and thus throws a DomainError instead of just returning NaN? And if I forget some function, which just happens not to throw when run on test data, it will explode at some unexpected time later.

Note that in this example it’s not easy to put catches in the relevant locations. Even for simpler examples, this is far from the quick solution one would expect…
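To illustrate the manual-replacement route with a runnable toy (`f` is a stand-in for the external function, and the hypothetical `nansqrt` mimics NaNMath.sqrt; only the calls one remembers to rewrite are protected):

```julia
# Hypothetical NaN-returning wrapper, standing in for NaNMath.sqrt.
nansqrt(x::T) where {T<:AbstractFloat} = x < zero(T) ? T(NaN) : sqrt(x)

f(i) = 1.0 - i                      # toy stand-in for the external function
a = [nansqrt(f(i)) for i in 1:4]    # [0.0, NaN, NaN, NaN]: NaNs just flow through
```

If f itself called Base.sqrt internally, it would still throw, which is exactly the problem described above.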

Bumping this thread: any new thoughts on NaN propagation, in light of almost a year of Julia and ecosystem evolution? The main issue is that it’s very painful to just propagate all the NaNs to the end, compared to basically any other commonly used language. Actually, it’s almost impossible to do reliably on the first try with many functions calling each other. Something like a global/local option for that would be really useful. If it worked like

propagate_nan() do
  # code called here silently propagates nans, no matter what functions are called and how nested they are
end

this would be even better than other languages, because in other parts of the code unexpected NaNs would still error.

Maybe this would work:

try
    a = sqrt(-2)
catch
    a = 0/0
end

You also can do something like this:

julia> sqrt(-2+0im)
0.0 + 1.4142135623730951im

Oh, there are so many things going bad or wrong with this kind of partial solution :slight_smile: A few examples below, where we assume that the function f is 3rd-party, from some other project, and we definitely don’t want to edit its code.

f(x) = sqrt(log(x))
g(x) = try f(x) catch; NaN end
a = rand(10_000);

@btime f.($(2 .+ a));
  93.934 μs (2 allocations: 78.20 KiB)

@btime g.($(2 .+ a));
  483.374 μs (7 allocations: 78.31 KiB)
# 5 times slower!

@btime g.($(0.5 .+ a));
  221.003 ms (65249 allocations: 3.06 MiB)
# 2000 times slower!

g(2f0) |> typeof
Float32

g(0f0) |> typeof
Float64
# wrong type
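The type instability can at least be patched by matching the NaN to the argument's float type (`g2` is a hypothetical variant; the try/catch overhead remains):

```julia
f(x) = sqrt(log(x))
# Return a NaN of the input's own type, so Float32 in gives Float32 out.
g2(x::T) where {T<:AbstractFloat} = try f(x) catch; T(NaN) end

g2(2f0) |> typeof   # Float32
g2(0f0) |> typeof   # Float32, no longer promoted to Float64
```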

f(x) = (sqrt(log(x)), sin(x))

f(2)
(0.8325546111576977, 0.9092974268256817)

g(2)
(0.8325546111576977, 0.9092974268256817)
# so far so good

g(0.5)
NaN
# how to get (NaN, 0.479)?
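For comparison, if one could rewrite f with per-call NaN-returning wrappers (hypothetical `nansqrt`/`nanlog` mimicking the NaNMath functions), only the offending element would become NaN:

```julia
# Hypothetical guarded wrappers, standing in for NaNMath.sqrt / NaNMath.log.
nansqrt(x::T) where {T<:AbstractFloat} = x < zero(T) ? T(NaN) : sqrt(x)
nanlog(x::T) where {T<:AbstractFloat} = x < zero(T) ? T(NaN) : log(x)

f2(x) = (nansqrt(nanlog(x)), sin(x))

f2(0.5)   # (NaN, 0.479...): the second element survives
```

This requires editing f, though, which is exactly what we want to avoid for 3rd-party code.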

This gets only more complicated when called functions are more and more nested or modular.

I am not sure how this would make sense for generic code. NaN propagation is for floating-point computations; you can stretch it to <:Real conceptually in some cases, but why would e.g. getproperty “propagate” NaN when in normal use scenarios neither argument is a NaN?

I’m talking about floating-point code only, and all the examples in this thread involve simple floating-point math functions.

I think you just have to use NaNMath functions everywhere. Alternatively, you could define a wrapper type

using NaNMath

struct NaNProp{T}
    value::T
end
# and define all methods you need, e.g.
Base.log(x::NaNProp) = NaNProp(NaNMath.log(x.value))

where defining the methods would of course be easier with a macro (look for @eval blocks like this one as examples).

Then for black box f functions you can just wrap the input and have it propagate.
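A runnable sketch of this idea, with the NaN guards inlined instead of calling NaNMath so it is self-contained; the names and the function list are illustrative:

```julia
struct NaNProp{T<:AbstractFloat}
    value::T
end

# Guarded methods: return NaN instead of throwing on bad domains.
Base.sqrt(x::NaNProp) = NaNProp(x.value < 0 ? oftype(x.value, NaN) : sqrt(x.value))
Base.log(x::NaNProp)  = NaNProp(x.value < 0 ? oftype(x.value, NaN) : log(x.value))

# Total functions need no guard; an @eval loop avoids the boilerplate.
for fn in (:sin, :cos, :exp)
    @eval Base.$fn(x::NaNProp) = NaNProp($fn(x.value))
end

blackbox(x) = sqrt(log(x))        # generic "3rd-party" code, untouched

blackbox(NaNProp(0.5)).value      # NaN instead of a DomainError
blackbox(NaNProp(2.0)).value      # same result as blackbox(2.0)
```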

This seems like the best way suggested in this thread so far! The wrapper type should cover more cases and hopefully (?) be close to zero-cost in performance. Not sure if it will SIMD, for example.

One obvious drawback is that it requires tracking and defining all mathematical functions, and also stuff like show, parse, convert, and so on for all float-related functions used in the code. Sounds like a complicated and error-prone task.
Another practical limitation: if a 3rd-party library calls a C (or other external) function or an external program and performs some calculations on its output, one cannot inject the wrapper.

1 Like

I don’t think so, you can just follow the examples of similar packages that redefine arithmetic. IMO this is something you can hammer out in an afternoon.

Frankly, I find it amazing that this is even possible, let alone that it is so easy.

Of course, you have to unwrap things, but you can automate this to a large extent. You just have to rewrap once done.

1 Like

This implies modifying a 3rd-party library, unfortunately.

It also feels very strange to me that a performance-oriented language like Julia doesn’t even give the option to just silently propagate NaNs without any exceptions. This is one of the really few things that are much more convenient to handle in Python, and for me in particular it means that the corresponding code cannot be easily and performantly ported to Julia.

I am not sure it does, but I don’t know your context. If you are looking for specific solutions, an MWE would help.