Observation is sum of random variables in Turing

I’m playing around with Turing.jl for the first time and am trying to infer my weight from a sequence of noisy observations. The reason I’m looking at MCMC is that I want to experiment with different noise models rather than rely only on a Kalman filter.

My problem is that I can’t figure out how to tell Turing that my observation is
obs = true_weight + gaussian + exponential.
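
Concretely, the data-generating process I have in mind is roughly the following (just a sketch; the 0.2 and 0.5 magnitudes simply match the values I use in the model below):

using Distributions
true_weight = 75 .+ cumsum(0.2 .* randn(20))                        # underlying weight, drifting as a random walk
obs = true_weight .+ 0.2 .* randn(20) .+ rand(Exponential(0.5), 20) # scale reading: weight + Gaussian + exponential noise

I tried the code below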

using Turing
weight = 75 .+ cumsum(0.2randn(20)) # Generate test data: a slowly drifting true weight
N = length(weight)

@model weightfilter(y) = begin
    N = length(y)
    shock = Vector{Real}(undef, N-1) # Exponential shock terms (one per transition)
    state = Vector{Real}(undef, N)   # The true (latent) weight state
    for i = 1:N-1
        shock[i] ~ Exponential(0.5) # Positive shock: ate before weighing, etc.
        # noise[i] ~ Normal(0, 0.2) # Hydration level etc.
    end
    state[1] ~ Normal(77, 4) # Initial state
    for n in 2:N
        state[n] ~ Normal(state[n-1], 0.2) # The state drifts as a Gaussian random walk
        stateshock = state[n] + shock[n-1] # Measurement mean: the state plus the exponential shock
        y[n] ~ Normal(stateshock, 0.2)     # Measurement is also corrupted by Gaussian noise
    end
end;

iterations = 500
chain = sample(weightfilter(weight), SMC(iterations)); 

but it does not seem to produce any reasonable results. I would ideally like to set the initial condition of state to the observations to make the sampling converge faster, but that does not seem to be an available option.
Any input on how to accomplish this would be greatly appreciated :slight_smile:


According to https://github.com/TuringLang/Turing.jl/issues/574, this example should work on Linux and Mac once the plain Julia vectors are replaced with Turing’s TArray type.
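
If I’m reading that issue right, the change is just to allocate the latent vectors as TArrays, e.g. with tzeros (which Turing exported around the time of that issue, as far as I can tell), leaving the rest of the model as in the question. Untested sketch:

using Turing

@model weightfilter(y) = begin
    N = length(y)
    shock = tzeros(Float64, N-1) # TArray instead of Vector{Real}: plays well with particle samplers
    state = tzeros(Float64, N)   # TArray for the latent weight state
    for i = 1:N-1
        shock[i] ~ Exponential(0.5)
    end
    state[1] ~ Normal(77, 4)
    for n in 2:N
        state[n] ~ Normal(state[n-1], 0.2)
        y[n] ~ Normal(state[n] + shock[n-1], 0.2) # state + exponential shock + Gaussian noise
    end
end;

chain = sample(weightfilter(weight), SMC(500));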