After reading a few threads here, I’ve come to understand that for subtypes of `Number`, construction and conversion should have nearly identical behavior (with one exception: it is expected that construction always returns a new object – relevant for types that are not pure bits).
I’ve been writing packages that implement new `Number` subtypes, and I’ve come across some cases where it may be desirable for conversion and construction behavior to differ.
## Case 1: construction as grade projection in CliffordNumbers.jl
My package, CliffordNumbers.jl, implements several types representing Clifford numbers (multivectors), elements of a Clifford algebra (geometric algebra). A Clifford algebra admits a basis spanned by `2^D` `k`-blades (wedge products of `k` 1-blades), where the grade `k` ranges from zero to `D`, the dimension of the space.
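To make the counting concrete, the number of grade-`k` basis blades is `binomial(D, k)`, and these sum to `2^D`. A quick check in plain Base Julia for `D = 3`:

```julia
# Basis blade counts by grade for a 3-dimensional algebra:
# binomial(D, k) blades of grade k, summing to 2^D.
D = 3
counts = [binomial(D, k) for k in 0:D]
println(counts)       # [1, 3, 3, 1]: one scalar, three vectors, three bivectors, one trivector
println(sum(counts))  # 8 == 2^3 basis blades in total
```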
The `CliffordNumber` type is a dense representation of all coefficients associated with each basis blade of a multivector. But in practice, you can get away with sparser representations that only represent blades of even grade (`EvenCliffordNumber`), odd grade (`OddCliffordNumber`), or a single grade `K` (`KVector{K}`).
Construction of a `KVector{K,Q}` from an `EvenCliffordNumber{Q}` is interpreted as grade projection: drop all coefficients for basis blades that are not grade `K`. This operation always succeeds, even if the result does not represent the same value as the input:
```julia
julia> x = EvenCliffordNumber{VGA(3)}(1, 2, 3, 4)
4-element EvenCliffordNumber{VGA(3), Int64}:
1 + 2e₁e₂ + 3e₁e₃ + 4e₂e₃

julia> KVector{2}(x)  # Grade 0 (scalar) portion is dropped
3-element KVector{2, VGA(3), Int64}:
2e₁e₂ + 3e₁e₃ + 4e₂e₃

julia> ans == x
false
```
Construction always succeeds, unlike conversion, which throws an `InexactError` if the value is not representable as a `KVector{K,Q}`:
```julia
julia> convert(KVector{2}, x)
ERROR: InexactError: convert(KVector{2}, EvenCliffordNumber{VGA(3), Int64}(1, 2, 3, 4))
Stacktrace:
 [1] convert(T::Type{KVector{2}}, x::EvenCliffordNumber{VGA(3), Int64, 4})
   @ CliffordNumbers ~/.julia/packages/CliffordNumbers/hka3J/src/convert.jl:4
 [2] top-level scope
   @ REPL[49]:1
```
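To see the pattern in isolation, here is a self-contained toy sketch (the `Full` and `Bivec` types are hypothetical stand-ins, not the actual CliffordNumbers.jl source): the constructor projects unconditionally, while `convert` reuses it and then checks that nothing was lost.

```julia
struct Full   # stand-in for EvenCliffordNumber: a scalar part plus a bivector part
    scalar::Int
    bivec::NTuple{3,Int}
end

struct Bivec  # stand-in for KVector{2}: the bivector part only
    bivec::NTuple{3,Int}
end

# Construction as grade projection: always succeeds, may discard the scalar part.
Bivec(x::Full) = Bivec(x.bivec)

# Equality compares the represented value, not the representation.
Base.:(==)(a::Bivec, b::Full) = b.scalar == 0 && a.bivec == b.bivec
Base.:(==)(a::Full, b::Bivec) = b == a

# Conversion must preserve the value, so it throws when projection would lose data.
function Base.convert(::Type{Bivec}, x::Full)
    x.scalar == 0 || throw(InexactError(:convert, Bivec, x))
    return Bivec(x)
end

x = Full(1, (2, 3, 4))
Bivec(x)              # succeeds: projection drops the scalar part
# convert(Bivec, x)   # throws InexactError: the scalar part is nonzero
```

The point of the sketch is that `convert` can be a thin wrapper over the constructor plus a representability check, so the two stay consistent wherever they overlap.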
## Case 2: the `Sign` type
My unregistered package SignType.jl implements a `Sign` type: an abstraction of the sign bit of a signed integer, float, or other real number – logically identical to a `Bool` but arithmetically distinct. (Perhaps I could also call it `Int1`, since it subtypes `Signed`.)
In general, `Sign(x::Real)` is identical to `reinterpret(Sign, signbit(x))`. This provides reasonable defaults for inputs that are zero:
```julia
julia> Sign(0)
Sign(+)

julia> Sign(-0.0)
Sign(-)
```
However, conversion of a zero element to a `Sign` throws an `InexactError`, even if the input type can represent signed zero:
```julia
julia> convert(Sign, -0.0)
ERROR: InexactError: convert(Sign, -0.0)
Stacktrace:
 [1] convert(::Type{Sign}, x::Float64)
   @ SignType ~/git/SignType.jl/src/SignType.jl:147
 [2] top-level scope
   @ REPL[68]:1
```
The reason for this is that `Sign` can only represent the values +1 and -1. Even if the input type has a signed representation of zero, `Sign` cannot represent 0.
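Sketched with a hypothetical stand-in type (not the real SignType.jl implementation), that rule could look like this: construction keys off the sign bit alone, while conversion additionally requires the input value to be representable.

```julia
# MySign is a toy stand-in for SignType.jl's Sign: it can only represent ±1.
struct MySign
    neg::Bool  # true means -1, false means +1
end

# Construction: defined for any real input via its sign bit, so zeros get a
# reasonable default rather than an error.
MySign(x::Real) = MySign(signbit(x))

# Conversion: only values MySign can faithfully represent (+1 and -1) succeed.
function Base.convert(::Type{MySign}, x::Real)
    abs(x) == 1 || throw(InexactError(:convert, MySign, x))
    return MySign(x)
end

MySign(-0.0)             # succeeds: the sign bit is set, so this is -1
# convert(MySign, -0.0)  # throws InexactError: MySign cannot represent 0
```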
(Note: right now, `convert(Sign, x)` does not throw an error if `x` represents something other than +1 or -1, and just calls `Sign(x)`. The behavior may change to reflect the logic above, so that only representations of +1 and -1 can be converted to `Sign`. I still haven’t finalized the design.)
## The questions
However, there is a key semantic difference: since `convert` can be called implicitly, its methods are restricted to cases that are considered “safe” or “unsurprising”.

Offhand, I anticipate it would be surprising in some circumstances if 0 supplied to a constructor for a type containing a `Sign` field were implicitly converted to +1 or -1. And while CliffordNumbers.jl is a niche library, I know there are definitely times when implicit grade projection done by the constructor could cause serious issues.
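As a concrete reminder of where implicit conversion kicks in: assignment to a typed field or a typed container slot calls `convert` behind the scenes, so a surprising `convert` method leaks into ordinary-looking code.

```julia
mutable struct Holder
    v::Float64
end

h = Holder(0.0)
h.v = 1       # implicitly calls convert(Float64, 1); h.v is now 1.0

a = Float64[0.0]
a[1] = 2      # implicitly calls convert(Float64, 2); a[1] is now 2.0

# If convert(Sign, 0) silently picked a sign, then assigning 0 to a
# Sign-typed field would silently do the same, which is the surprise
# described above.
```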
From my interpretation, in general:

- the conversion `convert(T, x)` means “represent the value `x` using the type `T`”. While there may be some loss of precision, you’d expect its result to be equal (`==` or `≈`) to `x`.
- the constructor `T(x)` means “produce an instance of `T` using information from `x`.” This does not imply that `T(x)` needs to be equal to `x` in any sense.
Although the default implementation for a `Number` treats them as the same operation, I think I’ve found cases where it makes sense for them to differ. However, unlike iterators and arrays, there isn’t a documented interface for numbers as far as I know. So my questions are:

- Is my interpretation of the semantics of construction vs. conversion correct?
- Are these semantics necessarily more narrow for `Number`?
- What, if anything, can break if construction and conversion differ for a subtype of `Number`?