# Is there a Turing alternative to pm.Deterministic from PyMC3?

Hey all,

I am looking for an alternative to `pm.Deterministic` so that I can store and return a computed value.

Thanks


Hi,
You can do something like this:

```julia
using Turing, Distributions, Random

# Define a "deterministic" distribution: sampling always returns the stored
# value, and it contributes nothing to the log-density.
struct Determin{T<:Real} <: ContinuousUnivariateDistribution
    val::T
end

Distributions.rand(rng::AbstractRNG, d::Determin) = d.val
Distributions.logpdf(d::Determin, x::T) where {T<:Real} = zero(x)

@model function testmodel(x)
    y ~ Normal()
    m ~ Determin(y * 2)
    for i in eachindex(x)
        x[i] ~ Normal(m)
    end
end

# Instantiate a model
model = testmodel(randn(10))

# Inference
chain = sample(model, SMC(), 1000)
```
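If you then want the computed values back, `m` is stored in the chain like any other parameter. A quick sketch, assuming the usual MCMCChains indexing:

```julia
# `m` is recorded alongside `y` in the chain
m_samples = chain[:m]  # per-sample values of the derived quantity m = y * 2
describe(chain)        # the summary statistics include `m` as well
```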

And if I could recall how to properly define the bijector for a custom distribution, you could also use HMC or NUTS. @torfjelde, could you jump in?


As @trappmartin says, you need to define a transformation from the unconstrained space to the constrained space, but in this case that’s just the identity operation, so you only need one additional bit of code:

```julia
Bijectors.bijector(d::Determin) = Identity{0}() # <= `0` indicates 0-dimensional, i.e. univariate
```

and you’ll be able to use HMC and NUTS :)
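For example, reusing the `testmodel` from above (the sampler settings here are just illustrative):

```julia
# With the bijector defined, gradient-based samplers work
chain = sample(model, NUTS(0.65), 1000)  # or, e.g., HMC(0.05, 10)
```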


I’m trying to use this solution in my model, but so far it’s going very poorly. Here is a simplified version:

```julia
using LinearAlgebra
using Turing
using Distributions
using Bijectors
using Random

a1 = [0.05, 0.03, 0.02, 0.03, 0.04, 0.09, 0.13, 0.16]

# Define a deterministic distribution
struct Determin{T<:Real} <: ContinuousUnivariateDistribution
    val::T
end

Distributions.rand(rng::AbstractRNG, d::Determin) = d.val
Distributions.logpdf(d::Determin, x::T) where {T<:Real} = zero(x)

Bijectors.bijector(d::Determin) = Identity{0}() # <= `0` indicates 0-dimensional, i.e. univariate

@model model1(a, ::Type{T} = Float64) where {T} = begin
    ma ~ Normal(0.0, 0.5)
    for i in 1:length(a)
        a[i] ~ Normal(ma, 0.05)
    end

    b ~ Determin(-exp(-2 * ma))
    return ma, b
end

function test_model1()
    # sampler = HMC(0.001, 10)
    # sampler = HMCDA(1000, 0.75, 0.05)
    sampler = NUTS(0.65)
    c1 = sample(model1(a1), sampler, 2000)

    println(c1)
end
```

Results typically look like this:

```
julia> test_model1()
┌ Info: Found initial step size
└   ϵ = 0.025
Object of type Chains, with data of type 1000×14×1 Array{Float64,3}

Iterations        = 1:1000
Thinning interval = 1
Chains            = 1
Samples per chain = 1000
internals         = acceptance_rate, hamiltonian_energy, hamiltonian_energy_error, is_accept, log_density, lp, max_hamiltonian_energy_error, n_steps, nom_step_size, numerical_error, step_size, tree_depth
parameters        = b, ma

2-element Array{ChainDataFrame,1}

Summary Statistics
parameters                       mean                       std                 naive_se                      mcse       ess   r_hat
──────────  ─────────────────────────  ────────────────────────  ───────────────────────  ────────────────────────  ────────  ──────
         b  13816403112283807744.0000  7133491167269512192.0000  225580797572648256.0000  2172001283591027712.0000    4.8969  1.3982
        ma                     0.0690                    0.0179                   0.0006                    0.0007  830.2122  0.9998

Quantiles
parameters                       2.5%                      25.0%                      50.0%                      75.0%                      97.5%
──────────  ─────────────────────────  ─────────────────────────  ─────────────────────────  ─────────────────────────  ─────────────────────────
         b  -5026632495806943232.0000  12090906044107700224.0000  15653120955249981440.0000  18317459011682740224.0000  22252367610115112960.0000
        ma                     0.0349                     0.0565                     0.0694                     0.0811                     0.1047
```

The HMC and HMCDA samplers give similarly bad results. Did I do something stupid here, or is this a bug?

EDIT: I just tried SMC and it actually works.

I have reported this as an issue in the Turing repository: https://github.com/TuringLang/Turing.jl/issues/1335.
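EDIT 2: Since `model1` already returns `ma, b`, another route is to recompute the returned values per posterior sample after the fact, which avoids putting `b` in the chain at all. A sketch, assuming your Turing version exports `generated_quantities`:

```julia
# Re-run the model body for each stored sample and collect the return values
c1 = sample(model1(a1), NUTS(0.65), 2000)
gq = generated_quantities(model1(a1), c1)  # one (ma, b) tuple per sample
```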