Not quite. As I remember, the bits-to-float transformation used in Base is slightly more sophisticated and preserves one more bit of entropy. It's equivalent to sampling a random number in [0.5, 1) and then flipping a coin to decide whether to subtract 0.5.
I think CUDA.jl's device RNG library may be using the [1, 2) − 1 variant (build a float in [1, 2) from 23 random mantissa bits, then subtract 1).
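To make the difference concrete, here is a rough Python sketch of the two transformations described above, operating on float32 bit patterns. This is an illustration of the idea, not Julia's or CUDA.jl's actual implementation; the function names are mine:

```python
import struct

def _bits_to_f32(u32: int) -> float:
    # Reinterpret a 32-bit pattern as an IEEE 754 binary32 value.
    return struct.unpack("<f", struct.pack("<I", u32))[0]

def rand_one_two_variant(bits: int) -> float:
    # [1, 2) - 1 variant: 23 random mantissa bits under the exponent of 1.0
    # give a float in [1, 2); subtracting 1 yields a multiple of 2^-23 in [0, 1).
    mant = bits & 0x7FFFFF
    return _bits_to_f32((127 << 23) | mant) - 1.0

def rand_half_one_variant(bits: int) -> float:
    # Described Base-style variant: 23 mantissa bits under the exponent of 0.5
    # give a float in [0.5, 1), spaced 2^-24 apart; a 24th random bit (the
    # "coin") decides whether to subtract 0.5, mapping half the draws to
    # [0, 0.5). Outputs are multiples of 2^-24: one extra bit of entropy.
    mant = bits & 0x7FFFFF
    coin = (bits >> 23) & 1
    f = _bits_to_f32((126 << 23) | mant)
    return f - 0.5 if coin else f

# Smallest positive outputs show the difference in granularity:
print(rand_one_two_variant(1))              # 2^-23
print(rand_half_one_variant((1 << 23) | 1)) # 2^-24
```

The coin flip costs one extra random bit but halves the spacing between representable outputs near zero.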