Constrained parameter in Turing

Hi there,

I get “ERROR: ArgumentError: Bernoulli: the condition zero(p) <= p <= one(p) is not satisfied.” from the code below, when using Turing to fit parameters under the constraint α_1 + β_1 * t <= 0. The error suggests θ < 0 or θ > 1, but that seems impossible to me: I define θ = exp(α_1 + β_1 * t) and constrain the parameters so that α_1 + β_1 * t <= 0. Since t is a vector of positive elements and β_1 is negative, β_1 * t is largest at t_min = minimum(t), so α_1 <= -β_1 * t_min should be enough to guarantee α_1 + β_1 * t <= 0 for every element of t. Could someone please help me figure out the issue?

using Turing, Random 
using MCMCChains, Distributions

t_test = [146.0:156.0;]
y_test = sample([0,1], 11)

@model function test_fit(y, t, ::Type{T} = Real) where {T}

    n_1 = length(t)

    β_1 ~ truncated(Normal(-0.01), -0.5, 0)
 
    t_min = minimum(t)
    α_1 ~ truncated(Normal(0), -2.0, -β_1 * t_min)

    θ = Vector{T}(undef, n_1)

    for i = 1:n_1
        θ[i] = exp(α_1 + β_1 * t[i])
        y[i] ~ Bernoulli(θ[i])
    end

end

model_test = test_fit(y_test, t_test)
chain_test = sample(model_test, NUTS(.65), 100)

Thanks,
Chuan

Hmm, interesting.
Mathematically I can’t see why it wouldn’t work either, so I tried to debug. Sadly, println statements were no help for me here: they only printed weird objects, and I couldn’t figure out which values corresponded to what.

However, I did test it, and the error is definitely caused by θ going above 1.0, and therefore by α_1 + β_1 * t[i] somehow going above 0.0.

Again, mathematically I can’t figure out why, but maybe there are some indeterminacies with really small numbers causing underflow or overflow.

@model function test_fit(y, t, ::Type{T} = Real) where {T}

    n_1 = length(t)

    β_1 ~ truncated(Normal(-0.01), -0.5, -0.01)

    t_min = minimum(t)
    α_1 ~ truncated(Normal(0), -2.0, -β_1 * t_min)

    θ = Vector{T}(undef, n_1)

    for i = 1:n_1

        θ[i] = exp(α_1 + β_1 * t[i])
        if θ[i] >= 1.0
            θ[i] = 1.0
        end

        y[i] ~ Bernoulli(θ[i])
    end
end

To confirm that this was indeed the cause, I added the clamp above, and then it gave no errors.

But I would definitely like to hear from more experienced people about what could be causing this. I tried taking exp of really small and big numbers in Julia, and exp itself wasn’t causing any issues; besides, none of the numbers that extreme can theoretically occur in the case above anyway.
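
Along the same lines, here is a quick sanity check of my own (illustrative values, not taken from the model run): exp itself is well-behaved near 0; the trouble only starts once its argument creeps a hair above 0.

exp(0.0)     # 1.0 exactly, a valid Bernoulli probability
exp(-1e-15)  # just below 1.0, still fine
exp(1e-15)   # just above 1.0, so Bernoulli(exp(1e-15)) throws the ArgumentError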

This looks like a problem with floating point accuracy. The following worked for me (adding in log space rather than multiplying), but check that you’re convinced the maths is still doing what you think it should be doing.


using Turing, Random 
using Distributions

t_test = [146.0:156.0;]
y_test = sample([0,1], 11)

@model function test_fit(y, t, ::Type{T} = Real) where {T}

    n_1 = length(t)

    β_1 ~ truncated(Normal(0.01), 0.0, 0.5)
 
    t_min = minimum(t)
    α_1 ~ truncated(Normal(0), -2.0, exp(log(β_1 + t_min)))

    y .~ Bernoulli.(exp.(α_1 .- exp.(log.(β_1 .+ t))))

end

model_test = test_fit(y_test, t_test)
chain_test = sample(model_test, NUTS(.65), 100)

println statements can be used here; when you see something like DynamicPPL.DefaultContext}, Float64}}(153.80677098150497,0.1117035752021545,8.86975921916185), the first value, 153.80677098150497, is the one you would be interested in in this instance. The errors occurred when t[i] == t_min: there α_1 + β_1 * t[i] should be at most zero, which exponentiates to at most 1, but floating point rounding can leave it slightly above zero.
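
To make that concrete, here is a minimal sketch (the numbers are illustrative assumptions, not the actual draws from the chain): if rounding leaves α_1 even one ulp above the bound -β_1 * t_min, then α_1 + β_1 * t_min is a tiny positive number, and exp of it exceeds 1.

β_1 = -0.1
t_min = 146.0
ub = -β_1 * t_min      # upper truncation bound for α_1
α_1 = nextfloat(ub)    # simulate a one-ulp overshoot from rounding

s = α_1 + β_1 * t_min  # mathematically ≤ 0, but here a tiny positive number
θ = exp(s)             # θ > 1.0
# Bernoulli(θ)         # would throw the ArgumentError from the question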

Thanks, Michiel! The step function solves the problem. I was afraid it would ruin differentiability, but Turing takes care of it :slight_smile: I have marked this as the solution.


Thanks for your insight, Arthur! The solution you proposed is interesting. However, exp(log(β_1 + t_min)) equals β_1 + t_min, not β_1 * t_min. I tried whether exp(log(β_1 * t_min)) would do the trick, but still got the same error.

Doh! Silly me. It doesn’t help even when done properly, i.e., exp(log(x) + log(y)).
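
On reflection, that makes sense: log and exp each introduce their own rounding, so the log-space product generally doesn’t reproduce x * y to the last bit. A quick illustrative check of my own (positive values assumed, since log needs them):

x, y = 0.1, 146.0
a = x * y                 # direct product
b = exp(log(x) + log(y))  # log-space product
a == b                    # often false; the two can differ by an ulp or so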

I’ve done this for models in the past as well, but I’d also like to know if there’s a cleaner solution: the fact that this error occurs implies there may be problems with other calculations in the model, and if only one parameter gets corrected, this could introduce further bias. That may not be hugely important in this specific example, but we need to get it right in these simple cases so we can have confidence in more complicated models.


Thanks for your insight, Arthur. I agree that a cleaner solution would be preferable. I am running the model on real data with the step function solution Michiel suggested, and it is still running. I suspect efficiency could be improved if we could drop the step function.