The same holds for `isequal(+0.0im, -0.0im)`.
I discovered this during a painful debugging session.
I really want to understand the philosophy behind this design, and I would like some advice on numerical stability. Thank you @everyone!
If you want `+0.0` and `-0.0` to be equal, use `==`, which implements the IEEE floating-point equality rules. `isequal` is used more for hash tables and other purposes that require finer-grained distinctions.
(In Julia, there are three different equality predicates: `==`, `===`, and `isequal`, to express slightly different notions of equality.)
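A minimal sketch of how the three predicates treat signed zeros (assuming standard IEEE-754 `Float64` semantics):

```julia
# `==` follows IEEE 754: the two signed zeros compare equal.
@assert 0.0 == -0.0
@assert 0.0im == -0.0im

# `isequal` distinguishes them, which is what hash-based containers rely on:
@assert !isequal(0.0, -0.0)
@assert length(Dict(0.0 => 1, -0.0 => 2)) == 2   # two distinct keys

# `===` is programmatic identity (egal): different bit patterns, not identical.
@assert !(0.0 === -0.0)
```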
As an aside, I learned recently from Valentin that `-0.0` is the additive identity, while `0.0` is not:
julia> mymulv1(a,b) = muladd(a,b,0.0)
mymulv1 (generic function with 1 method)
julia> mymulv2(a,b) = muladd(a,b,-0.0)
mymulv2 (generic function with 1 method)
julia> @code_native debuginfo=:none mymulv1(1.2, 3.4)
.text
vxorpd %xmm2, %xmm2, %xmm2
vfmadd213sd %xmm2, %xmm1, %xmm0 # xmm0 = (xmm1 * xmm0) + xmm2
retq
nopw (%rax,%rax)
julia> @code_native debuginfo=:none mymulv2(1.2, 3.4)
.text
vmulsd %xmm1, %xmm0, %xmm0
retq
nopw %cs:(%rax,%rax)
nop
So the compiler eliminates `-0.0`, but not `0.0`.
It would be nicer if it were the other way around, because `0.0` is faster to create (via `xor` instructions), making it the better choice if you can’t trust it to be eliminated. It’s also what you get from `zero(::Float64)` or `zero(::Type{Float64})`.
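The underlying IEEE rule is about how sums of zeros round: in the default round-to-nearest mode, `(-0.0) + (+0.0)` is `+0.0`, so adding `+0.0` can flip the sign bit of a zero, while adding `-0.0` never changes any value. A quick check (using `===` to compare bit-for-bit):

```julia
# Adding +0.0 can change a value: the sign bit of -0.0 is lost.
@assert (-0.0) + 0.0 === 0.0

# Adding -0.0 leaves every value unchanged, including -0.0 itself:
@assert (-0.0) + (-0.0) === -0.0
@assert 0.0 + (-0.0) === 0.0
@assert 1.5 + (-0.0) === 1.5

# Hence muladd(a, b, -0.0) may legally be simplified to a*b, but
# muladd(a, b, 0.0) may not, since a*b could be -0.0:
@assert muladd(-1.0, 0.0, 0.0) === 0.0    # the +0.0 erased the -0.0 product
@assert muladd(-1.0, 0.0, -0.0) === -0.0  # same result as -1.0 * 0.0
```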
Then again, the people who came up with the IEEE rules thought about this a lot more than I have, so there’s likely a good reason I don’t know about.
Thanks Steven!
In fact, I was struggling with a version of `unique()` with a certain tolerance, like `1e-16`. I tracked the Julia implementation of `unique()` to `base.jl` line 1403:
unique(A::AbstractArray; dims::Union{Colon,Integer} = :) = _unique_dims(A, dims)
In the lines below that, I saw the actual code of `_unique_dims()`. It seems to me that the implementation of `_unique_dims()` is hash-based.
My version of `unique()` with a tolerance is
@inline uniquen(v,n) = unique( v .|> (x->round(x,digits=n)) )
I simply round away the digits beyond a certain precision.
Do you think this would be stable enough?
PS: I just realized that `unique([-0.0,+0.0])` is a `Vector{Float64}` with TWO elements, `[-0.0, 0.0]`.
are you from Mars?
See the discussion in JuliaLang/julia issue #19147: `unique()` with `isapprox()` instead of `isequal()`.
A fundamental issue is that approximate equality is not an equivalence relation, so a `unique` function with a tolerance would have to give implementation-dependent results; or rather, its exact behavior would be so complicated to describe that it would effectively be defined by its implementation.
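The non-transitivity is easy to see with an absolute tolerance (a toy sketch with `atol = 0.1`):

```julia
a, b, c = 0.0, 0.09, 0.18

# Each neighbor is within tolerance, but the endpoints are not:
@assert isapprox(a, b; atol=0.1)
@assert isapprox(b, c; atol=0.1)
@assert !isapprox(a, c; atol=0.1)

# So "unique within tolerance" depends on scan order: a left-to-right
# scan that compares against already-kept elements would keep [a, c],
# but other orderings of the same data can keep different representatives.
```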
Thank you, I’ve now learnt this way of thinking!
What you might do is something like:
unique(x -> round(x, sigdigits=13), myarray)
which would perform the `unique` operation based on the first 13 significant digits. (You could use `digits` instead of `sigdigits` if you want an absolute tolerance.)
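To illustrate the difference (a quick sketch): `sigdigits` counts from the leading significant digit, so it acts like a relative tolerance, while `digits` counts decimal places after the point, an absolute tolerance:

```julia
# Relative: keep 2 significant digits, wherever the magnitude lies.
@assert round(123.456, sigdigits=2) == 120.0
@assert round(0.000123456, sigdigits=2) == 0.00012

# Absolute: keep 2 digits after the decimal point.
@assert round(123.456, digits=2) == 123.46
@assert round(0.000123456, digits=2) == 0.0
```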
But this might not necessarily be what you want, e.g. `unique(x -> round(x, sigdigits=13), [1.000000000000501, 1.0000000000005])` gives `[1.000000000000501, 1.0000000000005]` because those two values round differently in the 13th significant digit. (Again, the fact that approximate equality is not an equivalence relation (it’s not transitive) means that any implementation of `unique` with a tolerance will have some odd behaviors.)
I see … I will investigate further with actual examples. I am dealing with degenerate eigenvalues, whose behaviour depends largely on the input matrix.
To get rid of the `unique([-0.0,+0.0])` trouble, I am currently using the following implementation:
@inline correct_minus_00(x) = (real(x)==0.0 ? +0.0+imag(x)*im : x)
@inline uniquen(myarray,n) = unique(x -> correct_minus_00.(round(x, digits=n)), myarray)
which adds a sign correction to your suggestion.
Hi Elrod, do you have a more elegant solution to the `-0.0` problem? I mean, `-0.0` is practically equal to `0.0` in my code, so I use this
@inline correct_minus_00(x) = (real(x)==0.0 ? +0.0+imag(x)*im : x)
to correct the sign of `0.0`. But it looks silly …
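One slightly cleaner trick (assuming the default round-to-nearest mode): adding `+0.0` to a real number, or `0.0 + 0.0im` to a complex one, normalizes the sign of every zero component while leaving all other values untouched; unlike `correct_minus_00`, it also fixes a `-0.0` in the imaginary part. The helper name `fixzero` here is just for illustration:

```julia
# Adding a positive zero flips -0.0 to +0.0 and changes nothing else:
fixzero(x::Real) = x + 0.0
fixzero(z::Complex) = z + (0.0 + 0.0im)  # normalizes both components

@assert fixzero(-0.0) === 0.0
@assert fixzero(-0.0 - 0.0im) === 0.0 + 0.0im
@assert fixzero(1.5) === 1.5
@assert fixzero(-1.0 - 0.0im) === -1.0 + 0.0im
```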
If the sign of zero doesn’t matter in your code, why do you need to “correct” it?
I am dealing with degenerate eigenvalues, whose behaviour depends largely on the input matrix.
I would very strongly recommend that you re-think your algorithms to not rely on detecting exactly degenerate eigenvalues … sounds numerically unstable. What are you actually trying to compute?
I am trying to identify degenerate eigenvectors within a certain tolerance. These degenerate eigenvectors represent quantum states of the same energy, which are physically very interesting. I just need to identify and collect them, but my current algorithm is not very stable and generates wrong collections. I will mull over all our discussions …
I have the following list of vectors
testv = [ Complex{Float64}[-1.0+1.11022e-15im, -1.0-1.04777e-15im],
Complex{Float64}[-1.0+1.22125e-15im, -1.0-1.35308e-15im],
Complex{Float64}[1.0+1.55431e-15im, 1.0-1.30451e-15im],
Complex{Float64}[0.0+1.0im, -0.0+1.0im, 0.0-1.0im, -0.0-1.0im],
Complex{Float64}[-1.0+1.11022e-15im, -1.0-1.04777e-15im, -1.0-4.79e-16im],
Complex{Float64}[-1.0+0.0im, -1.0-0.0im],
Complex{Float64}[-1.0+1.11022e-15im, -1.0-1.04777e-15im],
Complex{Float64}[-1.0+0.0im, -1.0-0.0im],
Complex{Float64}[-1.0+1.22125e-15im, -1.0-1.35308e-15im],
Complex{Float64}[-1.0+0.0im, -1.0-0.0im],
Complex{Float64}[-1.0+1.22125e-15im, -1.0-1.35308e-15im],
Complex{Float64}[-1.0+0.0im, -1.0-0.0im], ]
Within a tolerance of `1e-14`, I hope to reduce them to distinct values. Because `unique([-0.0,0.0])` gives two elements, I couldn’t use it in an intuitive manner.
But I’ve got my solution now:
@inline uniquen(myarray,n) = unique(myarray .|> (x->(round(x,digits=n)+(0.0+0.0im))))
uniquen.(testv,12)
yields the desired results.
BTW,
f1(myarray) = unique(x->round(x,digits=12), myarray)
and
f2(myarray) = unique(myarray .|> (x->round(x,digits=12)))
behave differently. For example:
julia> tv = Complex{Float64}[-1.0+1.11022e-15im, -1.0-1.04777e-15im, -1.0-4.79e-16im] ;
julia> f1(tv)
2-element Array{Complex{Float64},1}:
-1.0 + 1.11022e-15im
-1.0 - 1.04777e-15im
julia> f2(tv)
2-element Array{Complex{Float64},1}:
-1.0 + 0.0im
-1.0 - 0.0im
I wished that `f1` could give me the result of `f2` (shown above), but it didn’t.
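That is the documented behavior of the two forms: `unique(f, itr)` uses `f(x)` only to decide which elements count as duplicates and returns the original, untransformed elements, whereas transforming first and then calling `unique` returns the rounded values. A small sketch:

```julia
v = [1.01, 1.02, 2.3]

# by-function form: compares round.(v, digits=1) but keeps the originals
@assert unique(x -> round(x, digits=1), v) == [1.01, 2.3]

# transform-then-unique: returns the rounded values themselves
@assert unique(round.(v, digits=1)) == [1.0, 2.3]
```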
If you are looking for non-accidental degeneracies (for a Hamiltonian that commutes with some symmetry group), then a more reliable way is to project the eigenvector onto irreps of the symmetry group.
yes I am doing that in the afternoon
Here’s my obligatory mention of one of my favorite-titled papers of all time, Branch cuts for complex elementary functions, or much ado about nothing’s sign bit, by the estimable William Kahan.
Thanks for the paper, I realized that handling zeros is a highly non-trivial task indeed.