# Turing.jl: DomainError with -960.1841700397322:

Running the code below results in the following error:

```julia
using Turing, ReverseDiff, Memoization
Turing.setrdcache(true)

y1 = rand(10)
A1 = rand(10, 10)

function logsumexp(mat)
    max_ = maximum(mat, dims=1)
    exp_mat = exp.(mat .- max_) .- (mat .== max_)
    sum_exp_ = sum(exp_mat, dims=1)
    res = log1p.(sum_exp_) .+ max_
end

@model function mwe(y, A, ::Type{T} = Vector{Float64}) where {T}
    n, m = size(A)
    scale = rand(10)
    vec_raw = T(undef, n)
    for i in eachindex(vec_raw)
        vec_raw[i] ~ truncated(Laplace(0, 1), ((10-30)./scale[i]), ((100-30)./scale[i]))
    end
    vec = vec_raw .* scale .+ 30
    μ = logsumexp(A .- vec_raw)[1, :]
    y .~ Normal.(μ, 1)
    return vec
end

model = mwe(y1, A1);
chain = sample(model, NUTS(.65), 300);
```

Error:

```
DomainError with -9.892202109140928:
log will only return a complex result if called with a complex argument. Try log(Complex(x)).
```

Full Stacktrace: https://pastebin.com/GzfY3Qfu

If I change

`vec_raw[i] ~ truncated(Laplace(0, 1), ((10-30)./scale[i]), ((100-30)./scale[i]))`

to

`vec_raw[i] ~ truncated(Laplace(0, 1), 5, 6) # any numeric bounds are fine`

the model starts sampling, but this is not how I want to parameterize the model.

How can I fix this?

Sampling works if I set `scale = ones(10) * any integer`. The error returns if I change it to `scale = rand(10:15, 10)`. This seems like a bug to me.

Just a guess, but try it without the caching. The Laplace distribution's logpdf uses the absolute value, so I suspect it is not compatible with ReverseDiff's tape caching.
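Concretely, that would mean something like this (a sketch; the model definition itself stays unchanged):

```julia
using Turing, ReverseDiff

# Keep the ReverseDiff backend but disable the compiled-tape cache.
# A cached tape records one fixed branch through abs() in the Laplace
# logpdf, which can go stale when the argument's sign changes between
# evaluations.
Turing.setadbackend(:reversediff)
Turing.setrdcache(false)
```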

Edit: I tried it and it works. Use more adaptation steps though to ensure it finds a good starting point.
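For example, something like this (the adaptation count of 1_000 is an arbitrary choice on my part, not a recommendation from the docs):

```julia
# NUTS(n_adapts, target_accept): run 1_000 adaptation steps before
# collecting the 300 posterior draws, instead of the default.
chain = sample(model, NUTS(1_000, 0.65), 300)
```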

I think your implementation of `logsumexp` is incorrect (see my comment on GitHub: https://github.com/TuringLang/Turing.jl/issues/1374#issuecomment-673375285).
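For reference, a standard numerically stable version over `dims=1` might look like the sketch below (mine, not taken from the linked comment):

```julia
# Numerically stable log-sum-exp along the first dimension, using
# log(sum(exp, x)) == max + log(sum(exp.(x .- max))).
# This plain form also avoids the (mat .== max_) pitfall in the original,
# which subtracts 1 at *every* position that ties for the maximum,
# not just at one.
function logsumexp_stable(mat)
    max_ = maximum(mat, dims=1)
    log.(sum(exp.(mat .- max_), dims=1)) .+ max_
end
```

(In practice you could also just use `logsumexp` from LogExpFunctions.jl instead of rolling your own.)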

BTW, I'm a bit confused about where to reply; I'm posting here as well in case you missed my reply on GitHub.