Set the state of the GLOBAL_RNG



Hello, I would like to know whether it is possible to set Random.GLOBAL_RNG to a specific state.

I know that I can set the seed, but this is not exactly the same. The goal is to save a state so that I can potentially restart from that state later.

I also know about the possibility of providing the rng object to rand, etc. calls, but this would require going through all the code and changing it manually.



So there are a couple of interesting observations to be made here.

The first one is born from personal experience: never depend on the
GLOBAL_RNG. You can never know when some other piece of code will
sneak in and take several random numbers from your random stream.
This was a bigger issue a while ago, when many of the functions in
Distributions did not take an rng argument and they all used the
GLOBAL_RNG.

So, having said that, you should always provide an rng, your rng,
to any function that needs one. If it means rewriting code, so
be it; it is just best practice.
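To make the point concrete, here is a minimal sketch (the function name noisy_sum is made up for illustration) of threading an explicit rng through your own code:

```julia
using Random

# Every draw comes from the rng you pass in, never from the global stream.
function noisy_sum(rng::AbstractRNG, n::Integer)
    s = 0.0
    for _ in 1:n
        s += rand(rng)
    end
    return s
end

rng = MersenneTwister(42)
a = noisy_sum(rng, 10)

# The same seed reproduces the same stream, no matter what any other
# code did to the global RNG in between.
b = noisy_sum(MersenneTwister(42), 10)
a == b   # true
```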

However, your question about getting back to a known state that
may not have been the result of a (re)seeding operation is a good one.
The answer is that you can simply copy the rng.

my_rng = MersenneTwister(1)

rand(my_rng, ...)
# sample others dists using my_rng

Now you want to save the state at some point

cached_rng = copy(my_rng)

# continue on with current random stream
rand(my_rng, ...)

Now you want to go back to a saved state

new_rng = copy(cached_rng)

rand(new_rng, ...)
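Putting those pieces together, here is a self-contained sketch showing that the copied state replays exactly the same stream:

```julia
using Random

my_rng = MersenneTwister(1)
rand(my_rng, 5)              # advance the stream a bit

cached_rng = copy(my_rng)    # snapshot the full internal state

later = rand(my_rng, 3)      # continue with the live stream

new_rng = copy(cached_rng)   # rewind to the snapshot
replayed = rand(new_rng, 3)

later == replayed            # true: identical values, element for element
```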

The other thing is that it is much faster to copy a cached
MersenneTwister object than it is to reseed one, even if you
know the seed.
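A quick (and unscientific) way to check this yourself, using only the stdlib timing macro; the iteration count is arbitrary and the actual ratio will depend on your machine and Julia version:

```julia
using Random

template = MersenneTwister(1234)

# Time copying an existing state many times.
t_copy = @elapsed for _ in 1:1_000
    copy(template)
end

# Time constructing/seeding a fresh RNG from a known seed each time.
t_seed = @elapsed for _ in 1:1_000
    MersenneTwister(1234)
end

# Seeding has to rebuild the whole internal state from the seed,
# while copy is essentially a memcpy of that state.
```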

Finally, if you really, really want to change the GLOBAL_RNG (I can
already see the villagers with torches coming for me…) you can do
the following (no guarantees/warranties…):

import Random

mrng = Random.MersenneTwister(2718)

Random.eval(:(GLOBAL_RNG = $mrng))


It’s slightly annoying at first to have to thread an explicit RNG argument through your function calls, but it’s really the right way to go and will make things better in the long run. We had a similar issue a long time ago when we had to convert all of Julia’s I/O functions from using implicit dynamically scoped global “current input/output” objects to threading the io parameter through all of the I/O functions, which felt pretty painful at the time, but in retrospect it was clearly the right move. The performance is much better and it’s the only way to build even mildly complex software where there can be more than one I/O object in play. The same reasoning applies to RNGs (which, if you squint, is a funny kind of input stream).


In addition to the copy method for MersenneTwister already mentioned, copy! is also available, which can improve efficiency a bit in some scenarios.
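For instance (a minimal sketch), copy! lets you reuse one preallocated snapshot object instead of allocating a new RNG on every save:

```julia
using Random

rng = MersenneTwister(1)
snapshot = MersenneTwister(0)   # allocated once; initial contents irrelevant

rand(rng, 4)
copy!(snapshot, rng)            # save the state in place, no new allocation

a = rand(rng, 3)                # keep drawing from the live stream

copy!(rng, snapshot)            # restore the saved state in place
b = rand(rng, 3)

a == b                          # true: the restored stream replays the draws
```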


Why? From my perspective, IOContext is an ad hoc reimplementation of dynamic scope / special variables, because threading all the relevant variables (:compact, :limit_output, etc.) through calls is painful. And performance-wise, it’s a Dict, right? So the type of :limit_output is lost.


True, but implementing dynamic scoping by passing a context object around is the simple, obvious way to do it. It turns out that the cases where IOContext is used are not performance-sensitive, which is not true of I/O in general. So the current design strikes a practical balance between performance and flexibility.