The following two expressions generate different results:

```
Complex((-0.5) * 0.5^2)^(-0.2)
((-0.5) * Complex(0.5)^2)^(-0.2)
```

I ran the code in Julia 1.1; the results are as follows:

I think the point is how (-1)*(0.0im) is defined.
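The suspected sign flip can be checked directly. Under IEEE 754 rules (which, as noted later in this thread, are shared by C, Python, and essentially every language doing hardware arithmetic), multiplying a positive zero by a negative number flips its sign bit. A minimal check, sketched in Python:

```python
import math

# IEEE 754: (-1.0) * 0.0 produces a *negative* zero.
neg_zero = -1.0 * 0.0
print(neg_zero)                      # -0.0
print(neg_zero == 0.0)               # True: the two zeros compare equal...
print(math.copysign(1.0, neg_zero))  # -1.0: ...but the sign bit is set
```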

Zeros are signed: 0.0 is different from -0.0 (in Julia, `0.0 == -0.0` but `!isequal(0.0, -0.0)`). In the first example you have x + 0.0im. In the second example you have (-0.5) * (x + 0.0im) = -0.5x - 0.0im. The complex power relies on the log, which has a branch cut along the negative real axis:

```
julia> log(-1 + 0.0im)
0.0 + 3.141592653589793im
julia> log(-1 - 0.0im)
0.0 - 3.141592653589793im
```

So everything is consistent. Floating-point arithmetic is annoying…
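The original discrepancy can be reproduced outside Julia as well. A sketch using Python's `cmath` (the operands are built with explicit `complex(re, im)` constructors here, because Python promotes a real factor to complex before multiplying and the `-0.0` would be lost in the product):

```python
import cmath

# The two inputs differ only in the sign of the imaginary zero:
z_plus  = complex(-0.125, 0.0)   # like Complex((-0.5) * 0.5^2) in Julia
z_minus = complex(-0.125, -0.0)  # like (-0.5) * Complex(0.5)^2 in Julia

# z^p is computed as exp(p * log(z)); log's branch cut lies on the
# negative real axis, and the sign of the zero selects the side:
print(cmath.log(z_plus).imag)    # +pi
print(cmath.log(z_minus).imag)   # -pi

# Consequently the two powers are complex conjugates of each other:
print(z_plus ** -0.2)
print(z_minus ** -0.2)
```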

Yeah, the difference is that `Complex((-0.5) * 0.5^2) == -0.125 + 0.0im`, but `((-0.5) * Complex(0.5)^2) == -0.125 - 0.0im`. This is, as you suspected, because `-1 * 0.0im == -0.0im`, as floating-point arithmetic dictates.

So the problem is in the `Complex` function. Perhaps I can define a new complex function such that

`Complexnew(x::Float64) = x + 0im`

instead of

`Complex(x::Float64) = x + 0.0im`

That won't fix anything. The problem is that everything here is working as designed. It's just IEEE arithmetic being annoying.

But in mathematics, we know 0 = -0. Complexifying a real number x should generate x + 0im.

It does; the problem is that `Complex(0.5)^2 == 0.25 + 0.0im`. Multiplying that by `-0.5` gives `-0.125 - 0.0im`.

In floating-point arithmetic, `0.0 == -0.0`, but they are **not** the same number. For example, `1.0/0.0 == Inf`, but `1.0/-0.0 == -Inf`.
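The "equal but not the same number" point can be probed from Python's standard library too (with the caveat that Python raises `ZeroDivisionError` for `1.0/0.0` instead of returning `Inf`, so the direction of the zero is exposed with `copysign` and `atan2` here):

```python
import math

print(0.0 == -0.0)                # True: the zeros compare equal...
print(math.copysign(1.0, 0.0))    #  1.0
print(math.copysign(1.0, -0.0))   # -1.0: ...but they are distinguishable

# atan2 exposes the same directional information as 1/(+/-0.0) = +/-Inf:
print(math.atan2(0.0, -1.0))      # +pi
print(math.atan2(-0.0, -1.0))     # -pi
```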

Of course, the problem is that you're evaluating a holomorphic function *on* its branch cut, or dividing by zero, or so on. Don't do that and everything's fine.

AFAIK there is a rationale for signed zeros, especially for complex functions. William Kahan has some papers about the reasons (e.g. this one).

I agree with this point. In math, we usually need an infinitesimally small number in complex algebra. It seems that the signed zero in Julia serves this end; that is why 0.0 ≠ -0.0 does make sense. But complexifying a real number should generate a number with an exactly vanishing imaginary part. That is why I think complex(x) should be x + 0im. Otherwise, it will generate "wrong" results in scientific computing.

Both of the answers are correct; they are just choosing different sides of the branch cut. Which side of the branch cut is selected depends on the sign of the imaginary part, i.e. the sign of the zero.

Not being able to choose the branch cut would be even worse. Consider:

```
julia> z = Complex(-1)
-1 + 0im
julia> sqrt(z)
0.0 + 1.0im
```

Now, when we compute `1/z`, the arithmetic rules are smart enough to flip the sign of the zero in the imaginary part:

```
julia> 1/z
-1.0 - 0.0im
```

This gives the nice property that `sqrt(1/z)` and `1/sqrt(z)` return the same thing, -i:

```
julia> sqrt(1/z)
0.0 - 1.0im
julia> 1/sqrt(z)
0.0 - 1.0im
```

On the other hand, suppose that `sqrt(1/z)` ignored the sign of the zero in the imaginary part (or there were no signed zero). Since `1/z == -1`, such a naive rule would return \sqrt{1/z} = +i \ne 1/\sqrt{z} = 1/i = -i. (Returning either +i or -i from \sqrt{-1} is perfectly correct in the sense that (\pm i)^2 = -1, but it is important to be *consistent* about the choice of branch cut.)
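This consistency argument can be replayed in any IEEE-conforming language; a sketch in Python's `cmath`:

```python
import cmath
import math

z = complex(-1.0, 0.0)

# Division flips the sign of the imaginary zero: 1/z is -1 - 0.0j.
w = 1 / z
print(math.copysign(1.0, w.imag))   # -1.0

# sqrt follows the sign of that zero, so both expressions agree on -i:
print(cmath.sqrt(w))        # -1j
print(1 / cmath.sqrt(z))    # -1j
```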

If you are dealing with multi-valued functions of complex numbers, like exponentiation, you need to be aware of branch cuts and understand their consequences for your problem. But rest assured that a *lot* of thought went into the IEEE rules for complex arithmetic (which pre-date Julia by decades and are nearly universally followed in computer programming languages).

Agreed, both results are correct. But in practice, there is a canonical way to choose the branch. Mathematica does make such a choice, and its choice keeps the equation 0*anything = 0 holding. If Julia could make such a choice of branch automatically, it would be better.

The problem with that is that according to C, Python, IEEE, your processor, and basically everyone in computer science, that is wrong. If you really want this behavior, you can define

```
function smul(x, y)
    # If either factor is zero, return a positive zero of the
    # promoted result type, ignoring the signs of the inputs.
    if x == zero(x) || y == zero(y)
        return zero(x) * zero(y)
    else
        return x * y
    end
end
```

which will do what you want. It will often be much slower though.
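For reference, the same idea in Python (a hypothetical helper, not a library function); here the positive zero is manufactured as `z - z`, which is `+0` for any finite `z` under IEEE round-to-nearest:

```python
import math

def smul(x, y):
    """Multiply, but force 0 * anything == +0, discarding zero signs."""
    z = x * y
    if x == 0 or y == 0:
        # z is some signed zero here; z - z is always a positive zero
        # of the same type, restoring the 0 * anything = 0 convention.
        return z - z
    return z

print(math.copysign(1.0, -1.0 * 0.0))       # -1.0: plain IEEE product
print(math.copysign(1.0, smul(-1.0, 0.0)))  #  1.0: sign filtered out
```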

Thanks for the code. Sorry, I think the word "wrong" was not the proper one in my last reply. As @stevengj mentioned, both are correct. But in practice, especially in complicated scientific computing, the results would be easier to justify or compare between different methods if Julia could fix the branch automatically.

Julia will not change which branch is taken, since it currently takes the correct branches. It will also not change what multiplication of `0.0im` by negatives gives, since the current behavior is not only correct, but also the only fast behavior, due to processor floating-point units.

Then Julia will perhaps lose some people in scientific computing. Mathematica actually makes the natural branch choice from either a math or a physics point of view. It also keeps the fast behavior. BTW, I am not talking about 0.0im, but 0im.

Oh, I just re-read what you say you want. The problem with `Complex(x::Float64) = x + 0im` is that your complex object would then have different real and imaginary types. `0` is an integer, so your number would do everything really weirdly. For example, `(1.0 + 0im)*1.0 == 1.0 + 0.0im` due to promotion. In short, this would not be a good idea at all.

I agree; it should not be like this. I need to think about other tricks to make the choice of branch efficient.

As we keep emphasizing, these rules don't come from Julia. They are the same in Fortran, C, C++, NumPy, … any language that needs to do high-performance computation uses hardware arithmetic and follows IEEE rules. When you use "machine numbers" in Mathematica, they also use IEEE arithmetic (and hence have signed zero etc.).

That is, virtually everyone doing practical scientific computation has already accepted these rules.

> If Julia can make such a choice of the branch automatically, it will be better.

Julia (really, IEEE rules) *does* choose the branch automatically. If you have a number right on the branch cut, it uses the sign of zero to choose the side. This is the most self-consistent choice in finite-precision floating-point arithmetic ("machine numbers" in Mathematica-speak).

I know that. I don't mean to change the current `complex` or power functions. That choice of the branch does make sense and keeps things efficient in lots of science and engineering problems. However, the IEEE rule is not so convenient for some problems, because it makes the equation 0*anything = 0 seem not to hold. In my current problem, the output of the function can have millions of branches. If two different methods give their results in different branches, even though both are correct, it still takes a lot of time to check the results. As far as I know, Julia shares many patterns with Mathematica in defining functions. So if Mathematica can make such a branch choice naturally, Julia should also be able to in principle.