What is the best way to cast an Int64 to an UInt64, preserving the binary representation?
For instance, given -1 as input I would like to get 0xffffffffffffffff. Given -256 as input I would like to get 0xffffffffffffff00.
Simply doing UInt64(-1) throws an error because UInt64 expects nonnegative input. Naively doing UInt64(-1 + (1<<64)) throws the same error, since 1<<64 overflows 64-bit arithmetic and evaluates to zero.
One possibility is to define f(x) = UInt64(mod(Int128(x), Int128(1) << 64)), widening to a signed 128-bit intermediate and reducing modulo 2^64. However, it seems a bit weird to have to resort to 128-bit arithmetic just to perform a type cast.
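For reference, here is a sketch of that 128-bit detour, phrased with a signed Int128 intermediate and mod (my variation, so that a negative x converts without an InexactError and nonnegative inputs also pass through unchanged):

```julia
# Widen to a signed 128-bit intermediate, reduce modulo 2^64, then narrow.
# (Int128 rather than UInt128, because converting a negative Int64 to an
# unsigned type throws an InexactError under Julia's promotion rules.)
f(x::Int64) = UInt64(mod(Int128(x), Int128(1) << 64))

f(-1)    # 0xffffffffffffffff
f(-256)  # 0xffffffffffffff00
f(5)     # 0x0000000000000005
```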
julia> reinterpret(UInt64, Int64(-1))
0xffffffffffffffff

julia> reinterpret(UInt64, Int64(-256))
0xffffffffffffff00
x % T is shorter and more forgiving, unless you want to make sure that only Int64 is allowed as the input type.
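Concretely, x % T performs the same wraparound conversion, and it accepts any integer type rather than only Int64 (the method signature below restricting the input type is just an illustration):

```julia
# x % T converts x modulo 2^64, which matches the two's-complement bit pattern.
(-1) % UInt64     # 0xffffffffffffffff
(-256) % UInt64   # 0xffffffffffffff00

# "More forgiving": any integer width is accepted, not just Int64.
Int8(-1) % UInt64   # 0xffffffffffffffff

# If you do want to restrict the input type, wrap it in a typed method:
tobits(x::Int64) = x % UInt64
```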
I would strongly recommend the x % T version of this over the reinterpret version. The reason is that x % T is a well-defined operation independent of the representation of the values; it just happens to be entirely free for UInt64. It's better to express the behavior you want than to rely on the details of the representation.
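One concrete consequence of that difference (my illustration, not from the reply above): reinterpret requires the source and target to have the same bit width, while x % T is defined by modular arithmetic across any widths:

```julia
# Modular conversion is well-defined regardless of the input width:
Int8(-1) % UInt64   # 0xffffffffffffffff

# reinterpret, by contrast, demands matching bit widths;
# reinterpret(UInt64, Int8(-1)) throws an error because the sizes differ.
```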
Thanks for adding the rationale.