Suppose we sample a float v \in [0,1). Define the smallest nonzero value we may observe as \eta. Afterward, we dutifully verify that v \notin \{0\} so that v \in (0,1). We might now reasonably expect that 1-v \in (0,1). But if 1-\eta rounds to 1 (e.g., because we apply “extra” precision to small v), we instead get 1-v \in (0,1]. Obviously, this is now the domain of floating point pitfalls more than random number generation, but it’s a sharp edge that a casual user might reasonably trip on. I imagine there are a number of related inconsistencies, possibly more insidious than this.
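To make the sharp edge concrete, here is a minimal Julia sketch; the value 1.0e-20 merely stands in for a hypothetical tiny v that an extended-precision sampler could emit (the current 2^{-53}-granular rand never produces it):

```julia
v = 1.0e-20     # tiny but strictly positive, so v ∈ (0, 1)
v > 0           # true: the "v is not zero" check passes
1.0 - v == 1.0  # true: 1 - v rounds back up to exactly 1.0
log(1.0 - v)    # 0.0, even though the exact answer is ≈ -1.0e-20
```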
It’s usually not difficult to construct failure modes with floating point math, and a careless implementation will often run afoul of at least one. My example doesn’t seem intrinsically more or less of an issue than the “generate small-probability Bernoulli random variables” issue raised above, yet I don’t see how the two can be simultaneously resolved by any adjustment to the set of values returned by rand. Neither resolution avoids pitfalls for all possible uses.
This reminds me of a snippet of code I had in a recent project. I needed to compute \log(1-\exp(x)) for x \le 0. My implementation was `x > -1 ? log(-expm1(x)) : log1p(-exp(x))`. Note that the first branch is unsuitable when x \ll -1 and the second is unsuitable when x \approx 0. No single code path was suitable for the entire domain (at least not with standalone log/exp-like functions), so extra care was necessary.
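For reference, a self-contained version of that snippet (the name `log1mexp` and the demo inputs are mine, not from the original project):

```julia
# Sketch of the branching above; the helper name `log1mexp` is mine.
function log1mexp(x::Real)
    @assert x <= 0 "log(1 - exp(x)) requires x ≤ 0"
    # x near 0:      1 - exp(x) cancels badly, but -expm1(x) keeps full precision.
    # x far below 0: -expm1(x) lands so close to 1.0 that log loses most
    #                (eventually all) of its digits; log1p(-exp(x)) is safe there.
    return x > -1 ? log(-expm1(x)) : log1p(-exp(x))
end

# Each branch fails outside its regime:
log(-expm1(-40.0))    # 0.0, although the true value is ≈ -4.2e-18
log1p(-exp(-1e-10))   # cancellation: only about 8 of 16 digits are correct
```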
If you truly want accurate Bernoulli(1f-40) random variables, applying a few bits of extra precision won’t fix a `rand() < p` implementation: with the current 53-bit fixed-point output, `rand() < 1f-40` fires only when rand returns exactly zero, i.e. with probability 2^{-53} rather than 10^{-40}, and a handful of extra bits barely changes that. The sampler itself must find a way to produce a reasonable value (or an error) for such a challenging request. This probably means switching to a different algorithm in certain regimes, as sketched below.
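As one illustration of what “a different algorithm” could look like (the helper name and the block-wise lazy comparison are my own sketch, not a proposal from this thread, and they assume p is supplied exactly as a BigFloat): compare a lazily generated uniform bit stream against the binary expansion of p, so an arbitrarily small p never has to be squeezed into the granularity of a single float.

```julia
using Random

# Sketch only: `bernoulli_tiny_p` is a hypothetical helper, not an existing API.
function bernoulli_tiny_p(rng::AbstractRNG, p::BigFloat)
    @assert 0 <= p <= 1
    while true
        p *= BigFloat(2)^64              # shift the next 64 bits of p into the integer part
        pblock = BigInt(floor(p))        # those 64 bits of p, as an integer
        p -= pblock                      # keep only the remaining fractional bits
        ublock = rand(rng, UInt64)       # 64 fresh uniform bits
        ublock < pblock && return true   # uniform stream is already strictly below p
        ublock > pblock && return false  # uniform stream is already strictly above p
        # blocks are equal: undecided, consume the next 64 bits
    end
end

# A 10^-40 event: hopeless for `rand() < p` at 53-bit granularity.
bernoulli_tiny_p(Random.default_rng(), big"1e-40")   # almost always false, but calibrated
```

Each call is decided after the first 64-bit block except with probability 2^{-64}, so the laziness costs essentially nothing on average.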
By the reasoning I’ve discussed here, and by status-quo bias, I’m marginally more sympathetic to the current “fixed-point” output precision, though I’m willing to shift my stance given further examples.