Getting this error while inferring using Turing.jl. What could be the issue?

```
nested task error: TaskFailedException
nested task error: DomainError with -0.00024058756172958402:
Exponentiation yielding a complex result requires a complex argument.
Replace x^y with (x+0im)^y, Complex(x)^y, or similar.
Stacktrace:
[1] throw_exp_domainerror(x::Float64)
@ Base.Math ./math.jl:37
[2] ^(x::Float64, y::Float64)
@ Base.Math ./math.jl:1123
[3] ^
@ ~/.julia/packages/ForwardDiff/QdStj/src/dual.jl:524 [inlined]
[4] CTL(du::Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 11}}, u::Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 11}}, p::Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 11}}, t::Float64)
@ Main ./In[7]:23
[5] Void
@ ~/.julia/packages/SciMLBase/hLrpl/src/utils.jl:468 [inlined]
[6] (::FunctionWrappers.CallWrapper{Nothing})(f::SciMLBase.Void{typeof(CTL)}, arg1::Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 11}}, arg2::Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 11}}, arg3::Vector{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 11}}, arg4::Float64)
@ FunctionWrappers ~/.julia/packages/FunctionWrappers/Q5cBx/src/FunctionWrappers.jl:65
[7] macro expansion
@ ~/.julia/packages/FunctionWrappers/Q5cBx/src/FunctionWrappers.jl:137 [inlined]
[8] do_ccall
@ ~/.julia/packages/FunctionWrappers/Q5cBx/src/FunctionWrappers.jl:125 [inlined]
[9] FunctionWrapper
...
@ ~/.julia/packages/Turing/HwTAU/src/inference/Inference.jl:197 [inlined]
[15] #sample#5
@ ~/.julia/packages/Turing/HwTAU/src/inference/Inference.jl:193 [inlined]
[16] top-level scope
@ In[18]:1
```

*Disclaimer: I know nothing about Turing.jl.*

I don't know what x or y are for you, but you appear to be raising a negative number to a non-integer power. This may make more sense to you if you represent your x value in polar form and perform the operation. I have reproduced a minimal example that gives the same error:

```
julia> (-3)^(1.5)
ERROR: DomainError with -3.0:
Exponentiation yielding a complex result requires a complex argument.
Replace x^y with (x+0im)^y, Complex(x)^y, or similar.
Stacktrace:
[1] throw_exp_domainerror(x::Float64)
@ Base.Math ./math.jl:37
[2] ^(x::Float64, y::Float64)
@ Base.Math ./math.jl:1123
[3] ^(x::Int64, y::Float64)
@ Base ./promotion.jl:444
[4] top-level scope
@ REPL[6]:1
```

Some reference material may also help — in general, raising negative (or complex) numbers to non-integer powers yields multiple possible values, and there are standard ways to find all of them.
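For instance (a sketch, not specific to Turing.jl): using a `Complex` base gives you the principal value, and the polar form gives all q distinct values of a rational power p/q:

```
# Real exponentiation of a negative base throws, but a complex base works:
z = Complex(-3.0)^1.5          # principal value, ≈ -3^1.5 * im

# In polar form x = r*cis(θ), the power x^(p/q) has q distinct values:
#   r^(p/q) * cis((p/q) * (θ + 2πk))   for k = 0, …, q-1
function rational_power_values(x::Real, p::Integer, q::Integer)
    r, θ = abs(x), angle(Complex(x))
    return [r^(p / q) * cis((p / q) * (θ + 2π * k)) for k in 0:(q - 1)]
end

rational_power_values(-3, 3, 2)   # two values, ≈ -3^1.5*im and +3^1.5*im
```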


Thank you:)

But I believe the issue is due to very large and very small values in the states of my model. Due to overflow it may be producing a negative value, and since I have x^y in my model (where x is a state), this results in the domain error.

If this is a result of an issue with numerical precision, but you know that whatever x you're using here *should always be nonnegative*, then you might try using `max(x,0)^y` rather than `x^y` wherever this problematic exponentiation is happening.
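As a minimal sketch of what that looks like (the values here are made up for illustration, not taken from the model in the thread):

```
y = 0.75
x = -0.00024058756172958402   # tiny negative value from roundoff

# x^y                         # would throw DomainError: negative base, non-integer power
clamped = max(x, 0.0)^y       # clamp to zero first → 0.0, no error
```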

Although there are limits to these sorts of corrections. For example, if you know that x should instead always be *strictly positive*, then you're in a tougher spot, because what value should you set it to? You might need to refactor your calculation of x in an attempt to make it more numerically stable. It's probably worth trying to improve the calculation in any case.
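As a generic illustration of that kind of refactoring (not specific to the model in the thread): `sqrt(1 + t) - 1` for tiny `t > 0` cancels catastrophically and can round to exactly zero, while the algebraically equivalent `t / (sqrt(1 + t) + 1)` stays accurate and strictly positive:

```
t = 1e-16
naive  = sqrt(1 + t) - 1        # 0.0: the addition rounds 1 + t to 1.0
stable = t / (sqrt(1 + t) + 1)  # ≈ 5.0e-17, correct and strictly positive
```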


Out of curiosity, what kind of model is it? And which one is always supposed to be positive, x or y?

It's more like a network of 3 different models with connected inputs and outputs, though this error comes from only one of them. x needs to be positive; otherwise it goes into complex numbers.

What does it mean to refactor my calculation? Can you give an example?

I tried your suggestion `max(x,0)^y`; it solves that problem, but now I'm getting an unstable-solution warning.

The floating point numbers used by computers (not just by Julia) are *not* a perfect implementation of the real numbers we use on paper. In particular, they only have so much precision. For "double-precision" `Float64` values, this is a little under 16 digits. However, one dangerous thing that can happen is catastrophic cancellation, where a calculation results in many fewer digits of accuracy.

For a very basic example, suppose we want to compute `(x + 1)^2 - 1`. Of course, we could equivalently have written this as `x^2 + 2x` or `x * (x + 2)`. But compare these methods with `x = 4 * 10.0^-12`:

```
julia> x = 4e-12
4.0e-12
julia> (x + 1)^2 - 1
7.999823026239028e-12
julia> x*(x + 2)
8.000000000015999e-12
```

The exact result should be `8.000000000016e-12`. Notice that the second version is accurate to about 16 digits while the first is only accurate to about 4 (i.e., the first is about 10^12 times *more wrong* than the second).

The first version has trouble because `(x + 1)^2 == 1.0000000000079998`. Note that this, too, is accurate to almost 16 digits, but then we subtract `1` from it and our final result becomes much less precise. Smaller `x` will make the issue even worse (try `1e-16`).
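For example, with `x = 1e-16` the addition `x + 1` rounds to exactly `1.0` (since `x` is below half of `eps(1.0)`), so the first form loses every digit:

```
x = 1e-16
(x + 1)^2 - 1   # 0.0: catastrophic cancellation wipes out the answer
x * (x + 2)     # 2.0e-16: essentially exact
```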

The best way to structure your computations depends on what exactly your computation is. The above example was just a simple demonstration of the concept and was resolved with some trivial algebra, but in general this can be much more challenging. Catastrophic cancellation is mostly a problem of subtracting numbers that are very close to each other (or adding numbers that are very nearly opposite each other) while multiplication and division are largely immune to this phenomenon (but have their own failure modes).
