At a fine enough scale, every floating point calculation is wrong. If Float32 is inadequate, Float64 is 536,870,912 times more precise (sometimes 2^29 just doesn’t get the point across). “rand(Float32) isn’t accurate enough” is a strawman when Float64 is the default.
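For concreteness, that factor is just the ratio of the two formats’ relative spacings, which is easy to check:

```julia
precision(Float32), precision(Float64)   # (24, 53) significand bits
eps(Float32) / eps(Float64)              # 2.0^29 == 536870912
```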
That said, there has been extremely little pushback against extending Float32 generation from 24 to 32 bits of randomness and Float64 from 53 to 64. We have a number of proposed and mostly-performant ways of producing such “full-width” values correctly. What’s missing for those is optimization and a PR.
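Not one of the thread’s actual proposals, but for anyone joining late, here is a minimal sketch of what “full-width” means here (the name and structure are mine, purely illustrative): choose the binade geometrically, then fill the significand uniformly within it.

```julia
using Random

# Illustrative sketch only, not a proposal from this thread: a Float32 on [0, 1)
# whose small values keep genuine significand bits instead of being forced onto
# the 2^-24 grid of the current generator.
function rand_fullwidth_f32(rng::AbstractRNG = Random.default_rng())
    u = rand(rng, UInt64)
    u == 0 && return 0.0f0            # probability 2^-64
    k = leading_zeros(u)              # P(k) = 2^-(k+1): selects the binade [2^-(k+1), 2^-k)
    frac = rand(rng, UInt32) >> 9     # 23 uniform significand bits
    return ldexp(1.0f0 + ldexp(Float32(frac), -23), -(k + 1))
end
```

Every output lands exactly where an ideal uniform real in [0,1) would round down to, but the cost (two RNG draws and a branch per value) is exactly the part that still needs optimizing.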
The only reason I can imagine this thread is still going is that we’re debating whether we should go further. We’d all like to have a bazillion bits of precision, but any actual solution must be weighed against its performance tradeoffs. Some of us (even those of us more comfortable with the status quo) have been trying to produce more bits in a way that does not blow up the run time. This is somewhat more challenging since we must compete with vectorized generation.
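And “blow up the run time” has to be measured against the right baseline: the bulk, vectorized rand! path, not a scalar loop. A rough harness (BenchmarkTools assumed; rand_fullwidth_f32 is the illustrative sketch above) looks like:

```julia
using BenchmarkTools, Random

buf = Vector{Float32}(undef, 2^16)
@btime rand!($buf)                                   # vectorized baseline to beat
@btime map!(_ -> rand_fullwidth_f32(), $buf, $buf)   # naive scalar loop over the sketch
```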
The time for philosophical discussion of “what’s right” ended a few hundred posts ago. The time for debating whether rand should produce some range other than [0,1) ended five years ago when breaking changes were outlawed.
Any discussion of a solution without runtime considerations will only continue this interminable debate, one that has done little to move people’s opinions. The task at hand isn’t to convince people that more bits are better; it’s to convince people that more can be cheap. What we need are implementations.