Hello, this is my first post.
I started tinkering with Julia and wanted to understand better how random floating-point numbers are generated. Usually I just call rand(Float32) and I'm happy. My claim is the following:
If a floating-point number x is uniformly distributed in [0,1], then mantissa(x) (read as a bitstring) is uniformly distributed in [0, 2^m), where m is the number of mantissa bits.
I'm pretty sure this holds: if x is uniform in [0,1], it must also be uniform conditionally on landing in each binade [2^i, 2^(i+1)), and being uniform within such an interval means the mantissa bits are uniform.
I have produced a notebook (which I cannot upload for some reason) in which I test the distribution of the mantissa bits for different generation policies.
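Since I can't attach the notebook, here is a minimal sketch of the kind of check it contains (the helper name and the sample size are just illustrative): extract the 23 explicit mantissa bits of each Float32 sample and record how often each bit is set; under the claim above every frequency should be close to 0.5.

# Draw many Float32 samples, keep only the 23 explicit mantissa bits,
# and count how often each bit is set.
function mantissa_bit_frequencies(gen; n = 10^6)
    counts = zeros(Int, 23)
    for _ in 1:n
        bits = reinterpret(UInt32, gen()) & 0x007fffff  # mantissa field only
        for k in 1:23
            counts[k] += (bits >> (k - 1)) & 0x1
        end
    end
    return counts ./ n
end

freqs = mantissa_bit_frequencies(() -> rand(Float32))
println("lowest bit: ", round(freqs[1]; digits = 3),
        "  highest bit: ", round(freqs[23]; digits = 3))

The same check can be pointed at any other generator just by swapping the closure.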
Why have the language designers chosen this particular generation style? Isn't it better to have (mostly) uniform mantissa bits? By the way, here is the alternative generation scheme, adapted from the C++ standard library (there may be errors):
using Random

struct CppRNG <: Random.AbstractRNG
    rng::Random.AbstractRNG
    CppRNG(rng::Random.AbstractRNG) = new(rng)
end

function Random.rand(
    rng::CppRNG,
    ::Type{T}
) where {T <: Union{Float16, Float32, Float64}}
    U = Base.uinttype(T)
    x_int = rand(rng.rng, U)
    # divide by 2^(number of bits of U); exp2 returns a Float64,
    # avoiding the Int overflow that 2^64 would cause
    max_int = exp2(8 * sizeof(U))
    # widen to Float64 first so the intermediate value never overflows to Inf
    converted = Float64(x_int)
    normalized = T(converted / max_int)
    # note: with this scheme the division can round up to exactly 1.0
    # for the largest draws, so this assertion can occasionally fire
    @assert 0.0 <= normalized < 1.0
    @assert typeof(normalized) == T
    return normalized
end
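To compare the two policies you can then wrap any ordinary generator, for example (Xoshiro needs Julia ≥ 1.7; the seed is arbitrary):

crng = CppRNG(Xoshiro(42))   # wrap a standard generator
x = rand(crng, Float32)      # dispatches to the method above
@show x typeof(x)

Feeding it to the bit-frequency sketch above is then just mantissa_bit_frequencies(() -> rand(crng, Float32)).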
I'm mainly interested in the motivation behind the choice. I understand that changing it now would break existing programs, so that's out of scope.