I’ve got a function that currently accepts a 64-bit integer and masks it with some other 64-bit integer, e.g.
```julia
x & 0x3333333333333333
```
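For context, the current method looks something like this (`pair_mask` is just a placeholder name for my real function):

```julia
# Roughly the current version: tied to UInt64 because the mask is
# written out as a full 64-bit hex literal.
function pair_mask(x::UInt64)
    return x & 0x3333333333333333
end
```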
In such code, the hex literal is written out in full. But if I want to generalise this to other sizes of UInt, how do I specify 0x33 (for UInt8), 0x3333 (for UInt16), and so on, without writing a lot of separate methods with different hex literals and a fair amount of code duplication? Choosing a hex literal during compilation with a generated function seems like overkill?
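For reference, the generated-function approach I have in mind (and would rather avoid) would look roughly like this; `mask33` is a made-up name and this is only an untested sketch:

```julia
# Sketch of the @generated approach: build the right-width mask at
# compile time by repeating the 0x33 byte across all sizeof(T) bytes.
@generated function mask33(x::T) where {T<:Unsigned}
    mask = zero(T)
    for _ in 1:sizeof(T)
        mask = (mask << 8) | T(0x33)
    end
    return :(x & $mask)   # splice the computed constant into the body
end
```

So `mask33(x::UInt8)` would effectively compile to `x & 0x33`, `mask33(x::UInt16)` to `x & 0x3333`, and so on, but it feels like a heavyweight tool for such a simple constant.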