Two equivalent conditioning syntaxes giving different likelihood values?

I’m new to Turing.jl and probabilistic programming in general.
While playing around with small examples, I ran into a behaviour of the conditioning syntax I don’t understand:

If I understand well, these two model definitions are equivalent:

@model function model1(x)
    μ ~ Uniform(0, 2)
    x ~ LogNormal(μ, 1)
end

@model function model2()
    μ ~ Uniform(0, 2)
    x ~ LogNormal(μ, 1)
end

model2(x) = model2() | (; x)

However, when evaluating the negative log likelihood for these two models (using syntax I found in the source code here and there), I obtain different results:

v = 1.0
w = [1.0]

ctx = Turing.OptimizationContext(DynamicPPL.LikelihoodContext())
Turing.OptimLogDensity(model1(v), ctx)(w)  # = 1.4189385332046727
Turing.OptimLogDensity(model2(v), ctx)(w)  # = 2.112085713764618

Computing it by hand suggests that the first model gives the correct value:

nll(μ) = -logpdf(LogNormal(μ, 1), v)
nll(w[1])  # = 1.4189385332046727
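As a sanity check independent of Turing.jl, the LogNormal negative log density can be evaluated directly from its closed form, −log p(x) = log x + log σ + ½ log(2π) + (log x − μ)²/(2σ²). The sketch below does this with only the standard library (shown in Python for portability; the function name `lognormal_nll` is just an illustrative label):

```python
import math

def lognormal_nll(x, mu, sigma=1.0):
    # Negative log density of LogNormal(mu, sigma) evaluated at x > 0.
    return (
        math.log(x)
        + math.log(sigma)
        + 0.5 * math.log(2 * math.pi)
        + (math.log(x) - mu) ** 2 / (2 * sigma ** 2)
    )

# x = 1.0, mu = 1.0 matches v and w[1] in the snippet above.
print(lognormal_nll(1.0, 1.0))  # ≈ 1.4189385332046727
```

This agrees with `model1`'s value, which supports the suspicion that `model2`'s result is wrong.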

Am I missing something?


This is a bug with OptimizationContext :grimacing: I’ll get on it asap!


Should be resolved once this goes through: Bugfix for `condition` + optimization by torfjelde · Pull Request #2016 · TuringLang/Turing.jl · GitHub


Thank you!

This should now be fixed in 0.16.2 :))