Lux.jl QuickStart: why does `Lux.apply` return the state?

In the QuickStart section of the Lux.jl documentation, there is the line

```julia
y, st = Lux.apply(model, x, ps, st)
```

Why does `Lux.apply` return not only `y` but also `st`? How can an inference computation change the state of the neural network? In fact, for the QuickStart example, the returned `st` is identical to the old `st`, as you can verify with `===`.
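For reference, here is a minimal sketch of what I mean, assuming a recent Lux.jl; the small `Chain` of `Dense` layers is just a stand-in for the QuickStart model:

```julia
using Lux, Random

rng = Xoshiro(0)

# A purely "stateless" model: Dense layers keep everything in ps,
# so their state entries are empty NamedTuples
model = Chain(Dense(2 => 8, tanh), Dense(8 => 1))
ps, st = Lux.setup(rng, model)

x = rand(rng, Float32, 2, 16)
y, st_new = Lux.apply(model, x, ps, st)

# Empty NamedTuples are isbits, so the "new" state compares egal to the old one
st_new === st   # true
```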

OK, reading the docs a bit more, it looks like layers such as Dropout and BatchNorm do carry useful state that gets updated during the forward pass, like the tracked running statistics.
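To illustrate (again just a sketch, assuming a recent Lux.jl; this particular model and the `trainmode`/`testmode` calls are my own illustration, not from the QuickStart):

```julia
using Lux, Random

rng = Xoshiro(0)

# BatchNorm keeps running statistics in st; Dropout keeps its RNG there
model = Chain(Dense(2 => 4), BatchNorm(4), Dropout(0.5))
ps, st = Lux.setup(rng, model)

x = rand(rng, Float32, 2, 16)

# In training mode the running mean/var are updated and the dropout RNG advances,
# so the returned state is no longer identical to the input state
y, st_train = Lux.apply(model, x, ps, Lux.trainmode(st))
st_train === st   # false

# In test mode BatchNorm uses the stored statistics and Dropout is a no-op
y, st_test = Lux.apply(model, x, ps, Lux.testmode(st))
```

So even for a fully stateless model the state is still threaded through and handed back, presumably to keep the call signature uniform across stateful and stateless layers.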