Strange inconsistency in complex algebra

I just tested this in Python 3. It seems there is no such inconsistency:

>>> complex((-0.5) * 0.5**2)**(-0.2)
(1.2262404609625575-0.8909158444501956j)

>>> ((-0.5) * complex(0.5)**2)**(-0.2)
(1.2262404609625575-0.8909158444501956j)

The reason is that in Python the complex constructor always seems to use a positive zero in the imaginary part, and it stays positive even under multiplication with a float. The power operation then behaves consistently with that:

>>> complex((-0.5) * 0.5**2)
(-0.125+0j)

>>> ((-0.5) * complex(0.5)**2)
(-0.125+0j)
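The repr hides the sign of a zero; `math.copysign` makes it visible. A small check (assuming CPython's promotion of floats to complex):

```python
import math

x = complex((-0.5) * 0.5**2)   # complex() of a float: imaginary part is +0.0
y = (-0.5) * complex(0.5)**2   # -0.5 is promoted to (-0.5+0j) before the multiply
# copysign(1.0, z) is -1.0 for a negative zero and 1.0 for a positive zero
print(math.copysign(1.0, x.imag), math.copysign(1.0, y.imag))
```

Both come out as 1.0 here, matching the `+0j` shown above.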

P.S. I am not using any external packages.

AFAICT that's just printing.

2 Likes

It's not just printing. Python does some weird things here:

In [13]: (-0.0j).imag
Out[13]: -0.0

In [14]: (1.0-0.0j).imag
Out[14]: 0.0

so I'm not actually sure how to check what it does for branch cuts… It behaves as if they've done away with signed zeros for complex numbers.
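One way to inspect the sign of a zero directly is `math.copysign`, which cannot be fooled by printing:

```python
import math

# Unary minus on 0.0j negates both the real and imaginary parts,
# while 1.0 - 0.0j computes 0.0 - 0.0 = +0.0 in the imaginary part.
print(math.copysign(1.0, (-0.0j).imag))       # negative zero -> -1.0
print(math.copysign(1.0, (1.0 - 0.0j).imag))  # positive zero -> 1.0
```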

2 Likes

OK no, it's weirdness in parsing: Issue 22548: Bogus parsing of negative zeros in complex literals - Python tracker

It behaves correctly wrt branch cuts:

In [24]: cmath.log(complex(-.2, -0.0j))
Out[24]: (-1.6094379124341003-3.141592653589793j)

In [25]: cmath.log(complex(-.2, 0.0j))
Out[25]: (-1.6094379124341003+3.141592653589793j)
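Note that these calls pass a complex literal (`-0.0j`) as the imag argument of `complex(real, imag)`, which goes through the constructor's complex-argument rules. Passing plain floats shows the branch-cut behavior more directly (a sketch, assuming IEEE signed zeros and CPython's `cmath`):

```python
import cmath

# cmath.log uses the sign of the zero imaginary part to pick the side
# of the branch cut along the negative real axis.
below = cmath.log(complex(-0.2, -0.0))  # approach from below the cut
above = cmath.log(complex(-0.2, 0.0))   # approach from above the cut
print(below.imag)   # -pi
print(above.imag)   # +pi
```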

I'm not sure I would call the behavior more sensible though:

In [31]: -complex(.25)
Out[31]: (-0.25-0j)

In [32]: (-1)*complex(.25)
Out[32]: (-0.25+0j)

(and therefore a different sign in the imaginary part of the log). I'm guessing this is because (-1)*complex(.25) promotes the -1 to a complex number, which possibly changes the sign of the zero. In any case: signed zeros are a mess, and it looks like Julia is more consistent here.
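That guess can be checked directly (sketch, CPython semantics assumed): the promoted multiply computes its imaginary part as (-1.0)*0.0 + 0.0*0.25 = -0.0 + 0.0 = +0.0, while unary minus negates both components:

```python
import cmath
import math

a = -complex(0.25)        # unary minus: imag becomes -0.0
b = (-1) * complex(0.25)  # promotion + full complex multiply: imag becomes +0.0
print(math.copysign(1.0, a.imag))  # -1.0
print(math.copysign(1.0, b.imag))  # 1.0
# ...and the two zeros land on opposite sides of the log branch cut:
print(cmath.log(a).imag)  # -pi
print(cmath.log(b).imag)  # +pi
```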

2 Likes

Python 3.8:

In [25]: z = complex(1.0)                                                       

In [26]: z                                                                      
Out[26]: (1+0j)

In [27]: z.imag                                                                 
Out[27]: 0.0

Yup.

2 Likes

Maybe you could define a single root choice for this zero.

Historically, it took a long time to invent the number zero.

Maybe zero should be its own thing, like I for matrices.

The "zero ring" is unfortunately not an integral domain. (I was very disappointed when I found out.) For example, there 0 / 0 = 0, and 0 * x = x. Thus ZeroElem would be a subtype of Number, but not a subtype of Integer.

One could define a type Int0, an integer type with 0 bits, similar to how Julia treats Bool as very close to Int. But that would be much the same as defining a constant trueZero that gets promoted to the zeros of the various Number types. I don't think that would be useful as part of the language standard.

1 Like