Running the following code results in an error:

```julia
using Turing, ReverseDiff, Memoization
Turing.setadbackend(:reversediff)
Turing.setrdcache(true)

y1 = rand(10)
A1 = rand(10, 10)

function logsumexp(mat)
    max_ = maximum(mat, dims=1)
    exp_mat = exp.(mat .- max_) .- (mat .== max_)
    sum_exp_ = sum(exp_mat, dims=1)
    res = log1p.(sum_exp_) .+ max_
end

@model function mwe(y, A, ::Type{T} = Vector{Float64}) where {T}
    n, m = size(A)
    scale = rand(10)
    vec_raw = T(undef, n)
    for i in eachindex(vec_raw)
        vec_raw[i] ~ truncated(Laplace(0, 1), ((10-30)./scale[i]), ((100-30)./scale[i]))
    end
    vec = vec_raw .* scale .+ 30
    μ = logsumexp(A .- vec_raw)[1, :]
    y .~ Normal.(μ, 1)
    return vec
end

model = mwe(y1, A1);
chain = sample(model, NUTS(.65), 300);
```
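For what it's worth, the custom `logsumexp` itself appears numerically sound on plain `Float64` inputs, so the `DomainError` seems to come from the AD pass rather than the math. Here is a standalone sanity check (my own addition, not part of the MWE) comparing it against the naive formula:

```julia
# Check that the log1p-based logsumexp matches log(sum(exp(x))) column-wise.
mat = randn(5, 3)
max_ = maximum(mat, dims=1)
# Subtracting the (mat .== max_) indicator zeroes out the exp(0) = 1 term,
# which log1p then adds back, so the result equals log(sum(exp.(mat), dims=1)).
custom = log1p.(sum(exp.(mat .- max_) .- (mat .== max_), dims=1)) .+ max_
naive = log.(sum(exp.(mat), dims=1))
@assert all(isapprox.(custom, naive; atol=1e-10))
```

So on ordinary floats the two agree to machine precision; the error only shows up under the ReverseDiff backend.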

Error:

```
DomainError with -9.892202109140928:
log will only return a complex result if called with a complex argument. Try log(Complex(x)).
```

Full Stacktrace: https://pastebin.com/GzfY3Qfu

If I change

`vec_raw[i] ~ truncated(Laplace(0, 1), ((10-30)./scale[i]), ((100-30)./scale[i]))`

to

`vec_raw[i] ~ truncated(Laplace(0, 1), 5, 6) # any constant numeric bounds work`

the model starts sampling, but this is not how I want to parameterize it.

How can I fix this?