As others have mentioned, Turing is very hackable, so if you really want to do this now, you can hack a Turing model with a customized distribution to achieve it.

```julia
using Turing, Distributions

# A "dummy" prior: it contributes nothing to the log density; it only
# gives Turing a parameter vector to book-keep and initialize.
struct DummyPrior <: ContinuousMultivariateDistribution
    d::Int  # the dimension of your problem
end
Base.length(dp::DummyPrior) = dp.d
Distributions.logpdf(::DummyPrior, ::AbstractVector) = 0.0
Turing.init(::DummyPrior) = [0.0, 1.0]  # your initialization
Distributions.rand(dp::DummyPrior) = randn(dp.d)

# Wraps the hand-written log target so that observing from it adds
# our_log_target(p) to the model's log density.
struct Target <: ContinuousMultivariateDistribution
    p
end
Distributions.logpdf(t::Target, _) = our_log_target(t.p)

@model mymodel() = begin
    d = 1  # the actual dimension of your problem
    p ~ DummyPrior(d)
    1.0 ~ Target(p)
end

mf = mymodel()
alg = MH(1_000)  # MH for 1_000 samples; you can also customize your proposal
chn = sample(mf, alg)
# If our_log_target is differentiable, you can also use HMC or NUTS, e.g.
# alg = HMC(1_000, 0.2, 3)  # 1_000 samples, step size 0.2, 3 leapfrog steps
```

The `DummyPrior` just plays a role in initialization and allows Turing to book-keep the parameter space, while `Target` triggers the computation of `our_log_target`. However, the drawback of working with `our_log_target` directly is that you need to take care of the transformed space yourself. That said, I think in general any model that you can implement by hand as `our_log_target` could also be implemented using `@model` with Turing.
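To illustrate what "taking care of the transformed space yourself" means, here is a minimal sketch (the names `our_log_target_constrained` and `our_log_target` are hypothetical, and the Gamma-like density is just an assumed example): if your target is defined only for `p > 0`, you can sample an unconstrained variable `q`, set `p = exp(q)`, and add the log-Jacobian of the transform to the log density yourself.

```julia
# Hypothetical target defined only on p > 0,
# e.g. a Gamma(2, 1) density up to an additive constant.
our_log_target_constrained(p) = -p + log(p)

# Unconstrained version that samplers like MH/HMC can explore freely:
# p = exp(q), so the change of variables adds log|dp/dq| = q.
function our_log_target(q)
    p = exp(q)
    return our_log_target_constrained(p) + q  # + log-Jacobian term
end
```

This is exactly the kind of bookkeeping Turing's `@model` does for you automatically when you declare a constrained prior such as `p ~ Gamma(2, 1)`.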