Hi guys,

I’m wondering if there is a way to include a variable that is a random walk in a model. In particular, if my model contains the lines of code shown below, is there a workaround to do this more efficiently? I’m also not sure how it will be interpreted.

```
δ[1] = 0                         # pin the first state
σ_δ ~ Exponential()              # innovation scale
n = length(δ)
for i in 2:n
    δ[i] ~ Normal(δ[i-1], σ_δ)   # random-walk step
end
β = logistic.(δ)                 # map states to (0, 1)
```

Any help would be appreciated.


This works fine in Turing at the moment. We currently don’t have a mechanism to make it more efficient, though. But we discussed the topic internally yesterday, and it’s very likely there will be additional API functionality for expressing random-walk-like constructs in a Turing model, which will in turn speed up inference.

A Gaussian (slightly mean-reverting) random walk has a tridiagonal precision matrix. You can just write the joint density of `δ` as a multivariate Normal with a tridiagonal precision matrix. Perhaps Turing can deal with that already?
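To see why the precision matrix is tridiagonal, here is a small check in plain Julia (no Turing needed; the values of `n` and `σ` are made up for illustration). A pure random walk started at zero has covariance `Σ[i, j] = σ^2 * min(i, j)`, and inverting it numerically recovers the classic tridiagonal pattern:

```julia
using LinearAlgebra

# Random walk δ[i] = δ[i-1] + ε[i], ε ~ Normal(0, σ), pinned at δ[0] = 0,
# has covariance Σ[i, j] = σ^2 * min(i, j).
n, σ = 6, 1.3
Σ = [σ^2 * min(i, j) for i in 1:n, j in 1:n]
Q = inv(Σ)   # precision matrix

# Entries more than one step off the diagonal vanish (up to roundoff).
off = maximum(abs(Q[i, j]) for i in 1:n, j in 1:n if abs(i - j) > 1)
@assert off < 1e-8

# The nonzero pattern is the standard random-walk precision:
# 2/σ^2 on the diagonal (1/σ^2 in the last entry), -1/σ^2 off it.
Qe = Matrix(SymTridiagonal(fill(2.0, n), fill(-1.0, n - 1))) / σ^2
Qe[n, n] = 1 / σ^2
@assert isapprox(Q, Qe; atol=1e-8)
```

The sparsity is what would make a canonical (precision) parameterization cheap: evaluating the log density only touches the O(n) nonzero entries instead of an n × n covariance.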


This currently only works if the precision matrix is dense (see this issue). For short time series, that might still yield a speedup over the explicit loop, but ultimately kind of defeats the purpose of using a precision matrix.

I’ve been working on Gaussian Markov random fields, which can be seen as a generalization of the random walk process. Like a random walk, they can be described by a multivariate normal distribution with a sparse precision matrix. However, there are currently issues with autodiff and the sparse matrix libraries: since Julia calls out to SuiteSparse for sparse matrix operations, it can’t handle dual numbers, so you can’t currently use any gradient-based MCMC methods if your model has sparse `MvNormalCanon` variables.

Anyway, @Camilo1704, you can try this dense version and see if it gives you a speedup:

```
@model function RandomWalk(x)
    n = length(x)
    σ_δ ~ Exponential()
    # Dense tridiagonal precision matrix for the random walk
    Q = diagm(-1 => -0.5/σ_δ^2 * ones(n-1),
               0 =>  1/σ_δ^2   * ones(n),
               1 => -0.5/σ_δ^2 * ones(n-1))
    x ~ MvNormalCanon(Q)
end
```
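As a quick sanity check on the `Q` above, you can verify outside of Turing that it really is a valid precision matrix, i.e. symmetric and positive definite (plain `LinearAlgebra`; the values of `n` and `σ_δ` here are arbitrary):

```julia
using LinearAlgebra

n, σ_δ = 8, 0.7
Q = diagm(-1 => -0.5/σ_δ^2 * ones(n-1),
           0 =>  1/σ_δ^2   * ones(n),
           1 => -0.5/σ_δ^2 * ones(n-1))

# Same matrix in structured form: (1/σ_δ^2) * tridiag(-1/2, 1, -1/2).
Q_tri = SymTridiagonal(fill(1/σ_δ^2, n), fill(-0.5/σ_δ^2, n - 1))

@assert issymmetric(Q)
@assert Q ≈ Matrix(Q_tri)
@assert isposdef(Q)   # required for MvNormalCanon
```

The `SymTridiagonal` form also makes the structure explicit, though as noted above Turing currently needs the dense matrix.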


> I’ve been working on Gaussian Markov random fields, which can be seen as a generalization of the random walk process.

That’s close to my interests as well; e.g., in https://twitter.com/MoritzSchauer/status/1234726235051302912 we do data assimilation in a state-space model with the latent state modelled by a Gaussian Markov random field.
