The Stack Overflow post you linked is pretty good already.
Because there’s no hardware support and no standard for it.
Is it? What problems do you have in mind? What real-number problems never involve negation or subtraction (see below)?
There are many properties that the unsigned integer format has but that are not possible for an “unsigned floating point format”:
1. Unsigned int and signed int have the same representation. That’s not possible with the existing signed floating-point types, since they aren’t stored as two’s complement. In fact, signed int is a special interpretation of unsigned int, not the other way around. I’m not aware of a way to build a signed floating-point format on top of an unsigned version with easy-to-understand semantics.
2. Unsigned int has well-defined wrapping behavior (related to 1). Not possible for a floating-point format (at least not in any simple way I can come up with).
3. Negation and subtraction are well defined for/between any unsigned int(s), a direct consequence of 2. Not possible with floating point.
4. Unsigned int is used to represent addresses and sizes, so computers naturally need it. There is no comparable need for floating point.
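The first three properties are easy to check in Julia at the REPL:

```julia
# Property 1: signed is just a reinterpretation of the unsigned bits.
reinterpret(Int8, 0xff)  # -1 (0xff::UInt8 and -1::Int8 share a bit pattern)

# Property 2: well-defined wrapping behavior.
0x00 - 0x01              # 0xff (UInt8 arithmetic wraps modulo 2^8)

# Property 3: negation is defined for any unsigned integer.
-0x01                    # 0xff (wraps: -x is 2^8 - x mod 2^8)
```

There is no analogue of `reinterpret(Int8, ...)` that turns a hypothetical unsigned float into today’s sign-bit IEEE floats, which is the point being made above.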
Also note that the OP in the Stack Overflow question isn’t really asking for a different format; he is really asking for a range check, which is a somewhat useful feature to have in languages and is possible in Julia with custom types. (And there was a very recent thread about it here.)
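For a rough idea of what such a range-checked custom type could look like, here is a minimal sketch; the name `UFloat64` is made up for illustration and not from any registered package:

```julia
# Hypothetical nonnegative-float wrapper that enforces the range
# check at construction time; any operation producing a negative
# value throws instead of silently succeeding.
struct UFloat64 <: Real
    x::Float64
    function UFloat64(x::Real)
        x >= 0 || throw(DomainError(x, "UFloat64 must be nonnegative"))
        new(Float64(x))
    end
end

Base.:+(a::UFloat64, b::UFloat64) = UFloat64(a.x + b.x)
Base.:-(a::UFloat64, b::UFloat64) = UFloat64(a.x - b.x)  # throws if result is negative
Base.show(io::IO, a::UFloat64) = print(io, a.x)

UFloat64(3.0) - UFloat64(1.0)    # fine: 2.0
# UFloat64(1.0) - UFloat64(3.0)  # throws DomainError
```

A full implementation would need promotion rules and more arithmetic methods, but this shows the idea: the storage format is unchanged, only the allowed range is.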
That used to be the case, and while there’s still no standard (yet), I believe there’s Chinese hardware with UE8M0 already (it’s also supported by Nvidia in PTX; not sure if it exists in actual hardware as a type, or only as intermediate code):
> DeepSeek’s UE8M0 FP8 represents a breakthrough in efficient AI training, enabling massive language models to run on alternative hardware while maintaining competitive performance. This specialized 8-bit floating-point format trades precision for unprecedented efficiency, allowing models like DeepSeek-V3.1 to be trained without relying entirely on expensive Nvidia hardware.
So the concept is out there; it’s just a question of how Julia should support or adapt to it… is it a trait now? Real and UReal abstract types?
The type, as used for AI, is I think never used on its own, but it could be…
Hmph, isn’t UE8M0 just rebranding integer arithmetic? As in, couldn’t they take the same Int8 circuits from a MOS 6502, and call them the new floating point?
No, they’re not trolling, it’s literally supported by NVIDIA and used by DeepSeek! Ironic because I could have been a BASIC programmer in the 70s, when you had to emulate floating point by hand. (Also embedded people had to do it until fairly recently.) Having been laid off since the 80s, I could be rehired as the expert in this new-fangled AI technique!
More seriously, I guess hardware now does fast math at almost any precision, at the cost of complicating the software. Except hopefully you still use high-level PyTorch and it gets compiled into appropriate kernels with UE8M0 that nobody has to read. It’s just weird because floating point units were such an advance precisely because they handled everything.
I thought I couldn’t keep up with modernity, but maybe I can. Neural networks were antithetical to AI, now they define AI. Logistic regression was hopelessly out of date, now it’s deep learning (especially if you use SGD). Integer is the new floating point, slide rules are the new calculators. I just wish I had saved my flared jeans from the 70s.
No, not exactly. Note that Julia’s (Signed and) Unsigned abstract types assume integers, and all integers in a range (with two’s complement for Signed), so they can’t be used here. UE8M0 is unsigned, yes (i.e. it represents half the number line, or rather up to its very limited typemax), but its values are still floating-point reals, e.g. 1, 2, 4, 8, etc., though none in between, since there is no mantissa at all, which is also unusual. I assume, but haven’t confirmed, that you can represent 0; having floats without 0 would be interesting, so I would also like such a type. It most likely has an Inf, maybe with only one bit pattern for it, shared with NaN and -Inf.
It’s not too hard to emulate with integers, yes (even easier than the old-style 8-bit micro/Microsoft floats), since it seems to be close to a log2 logarithmic number system, if not exactly the same.
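A minimal sketch of such an emulation, assuming an exponent-only encoding with bias 127 (consistent with how the format is usually described; special bit patterns like NaN are ignored here, so this is not a complete or authoritative implementation):

```julia
# Assumed E8M0-style decode: the byte is the biased exponent,
# value = 2^(e - 127). No sign bit, no mantissa.
decode(e::UInt8) = 2.0^(Int(e) - 127)

# Multiplying two such numbers is just integer addition of the
# stored exponents (minus one bias) -- log2-domain arithmetic,
# i.e. exactly the "integer circuits" suspicion from above.
mul(a::UInt8, b::UInt8) = UInt8(Int(a) + Int(b) - 127)

decode(0x80)                  # 2.0
decode(0x82)                  # 8.0
decode(mul(0x80, 0x82))       # 16.0, matching 2.0 * 8.0
```

Note that addition does not come this cheap: the sum of two powers of two is generally not a power of two, so it has to be rounded back into the format, which is one reason these scale factors are paired with other formats rather than used alone.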