Using Turing as a replacement for Klara

I used to use Klara for MCMC sampling, where I could just pass a callback for the log density of the target function, and it worked the way I wanted. Now that I want to return to that code I've run into problems, since I can't use Klara anymore. So I looked for a package with a similar interface; among the actively maintained options I found Turing (and several others), and I have already read the post about the MCMC landscape. For the moment I decided to go with Turing to sample arrays using the MH sampler, as you can see below:

using Random
using Distributions
using StatsPlots
using QuantumRelay  # a module containing all the functions called in this code
using DataFrames
using Turing
using LinearAlgebra
function qrelay(alpha, delta, name)
    n = 6
    chi = fill(sqrt(0.06), n)
    phi = [im * tanh(chi[i]) for i=1:n]
    omega = [1.0 / prod(cosh(chi[j]))^2 for j=1:n]
    syms, op = qrelay_op(n, phi, alpha, delta)      # exported from the QuantumRelay package
    op_a, op_ab, mat, coef = op_mat(op)             # exported from the QuantumRelay package

    op_q2 = [syms.apH[1], syms.apV[1], syms.bpH[end], syms.bpV[end]]
    op_q1 = [syms.apH[2:end]..., syms.apV[2:end]..., syms.bpH[1:end-1]..., syms.bpV[1:end-1]...]
    mask_q1 = [op in op_q1 for op in op_a]
    mask_q2 = [op in op_q2 for op in op_a]
    qq = [x in syms.apH || x in syms.bpV ? 1 : 0 for x in op_a]
    
    pdet0 = pdet_maker(0.04, 1e-5)  # builds the detection probability for detectors with transmittance eta (first argument) and dark-count probability (second argument)
    qrs = QRelaySampler(mat, coef, omega, pdet0)  

    targetcache = Dict{Vector{Int}, Float64}()
    
    @model plogtarget(na) = begin
        log(qrs.prob(qq, na, mask_q1) * qrs.prob(na))   # prob is the distribution function with two methods inside QRelaySampler
    end
    c1 = sample(plogtarget(zeros(qq)), MH(Dict{Symbol, Any}(:p => qrs.psetproposal)), 1000)  # sampling with the Metropolis-Hastings sampler
    

    funcQ(v) = qrs.prob(qq, v, mask_q2)
    return qrs, c1, funcQ
end


dataname = "data3"
results = []
for i = 0:12
    beta = i*pi/12
    name = string(i)
    mkpath("$dataname/$name")
    qrs, c1, funcQ = qrelay(pi/4, beta, name)
    println("beta:", beta)

    
    push!(results, (qrs, c1, funcQ))
end

Now when I try to run the code, it shows me:

MethodError: no method matching zeros(::Array{Int64,1})
Closest candidates are:
  zeros(!Matched::Union{Integer, AbstractUnitRange}...) at array.jl:440
  zeros(!Matched::Type{StaticArrays.SArray{Tuple{N},T,1,N} where T}) where N at /Users/midow/.julia/packages/StaticArrays/1g9bq/src/SVector.jl:29
  zeros(!Matched::Type{StaticArrays.MArray{Tuple{N},T,1,N} where T}) where N at /Users/midow/.julia/packages/StaticArrays/1g9bq/src/MVector.jl:27
  ...

Stacktrace:
 [1] qrelay(::Float64, ::Float64, ::String) at ./In[7]:31
 [2] top-level scope at ./In[9]:4774

I know the error is in zeros(qq), which is the initial value, since I am sampling arrays.
Here is the GitHub link to the same code using Klara, to see how it was working:
https://github.com/marouanehanhasse/quantum_relay_sampler/blob/master/quantumopticssampler(1).ipynb
I apologize if this post is long, but I couldn't reduce or simplify it, since everything is related. If anything in this post is unclear, please ask.

You need to use zero(qq). If you are transitioning your code from pre-1.0 Julia to 1.0, you should run it through Julia 0.7 first, which would have told you:

julia> zeros([1])
┌ Warning: `zeros(a::AbstractArray)` is deprecated, consider `zero(a)`, `fill(0, size(a))`, `fill!(copy(a), 0)`, or `fill!(similar(a), 0)`. Where necessary, use `fill!(similar(a), zero(eltype(a)))`.
│   caller = top-level scope at none:0
└ @ Core none:0
1-element Array{Int64,1}:
 0
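
Applied to your code, the fix is just the initial value; here is a sketch that keeps the rest of your MH call as you wrote it:

    # zero(qq) returns an all-zero array with qq's shape and element type.
    init = zero(qq)   # or fill(0, size(qq)), or fill!(similar(qq), 0)
    c1 = sample(plogtarget(init), MH(Dict{Symbol, Any}(:p => qrs.psetproposal)), 1000)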

I guess you are right, but I still get this error:

KeyError: key 1 not found

Stacktrace:
 [1] getindex at ./dict.jl:477 [inlined]
 [2] MH(::Dict{Symbol,Any}) at /Users/midow/.julia/packages/Turing/LQFio/src/inference/mh.jl:55
 [3] qrelay(::Float64, ::Float64, ::String) at ./In[10]:31
 [4] top-level scope at ./In[11]:7 

It is run in a Jupyter notebook.

Looks like you’re not passing in the right type of arguments to the sampler. But I don’t know Turing, so I cannot help.


What if I want to use KissMCMC for this one? Will it work? I ask because I have to do discrete sampling.

You mean that the parameters which are sampled are discrete? If so, I don’t think KissMCMC works.

I guess the error I'm getting is in the Dict now.

@marouane1994 I don’t think you are using Turing correctly. Please read the Getting Started section of the docs.


I already did, and I'm trying to understand it, but I don't know why this doesn't work. I'm trying to use it with a customized distribution created in a module called QuantumRelay: I want to sample arrays using the Metropolis-Hastings sampler with the distribution function qrs.prob(qq, na, mask_q1) * qrs.prob(na), calling back the log density of this function. I read the docs, but I don't think there is a section about anything like that.

The issue here is your model definition:

    @model plogtarget(na) = begin
        log(qrs.prob(qq, na, mask_q1) * qrs.prob(na))   # prob is the distribution function with two methods inside QRelaySampler
    end

What you have here won’t do anything in Turing’s @model macro. You’ve written a simple log density function, which is far simpler than what Turing is meant for. If you want to generate the posterior for just a likelihood like this, I’d recommend one of Turing’s satellite packages that work with the likelihood function directly.
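
For contrast, a Turing model declares its random variables with ~ statements. A minimal sketch (the classic coin-flip example from the Turing docs, with y as hypothetical observed data):

    @model coinflip(y) = begin
        # Prior on the heads probability.
        p ~ Beta(1, 1)
        # Each observation is a Bernoulli draw with that probability.
        for i in 1:length(y)
            y[i] ~ Bernoulli(p)
        end
    end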

AdvancedMH.jl, which is currently unreleased, is built to do this with Metropolis-Hastings algorithms. You can take a look over there if you want to use a simple log density function.


Actually, that's exactly what I want: to work with just a simple likelihood for this distribution function. As for AdvancedMH.jl, do you have any information on when it will be released? And is there any way to do it with Turing?

Alternatively, if you have coded up a posterior already,

https://github.com/tpapp/DynamicHMC.jl/

may also work for you. The docs have a worked example.


I'll check it; hopefully it will work for this case.

No, not for your case at the moment. Turing is for much more complex models where you can’t or don’t want to write up the likelihood.

Probably soon, but it doesn’t need to be released for you to use it. You can call

] dev https://github.com/TuringLang/AdvancedMH.jl

to install the package.
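
Equivalently, from a notebook cell or script, the Pkg API form of the same command is:

    # Equivalent to the REPL command: ] dev <url>
    using Pkg
    Pkg.develop(PackageSpec(url="https://github.com/TuringLang/AdvancedMH.jl"))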

Does it support discrete sampling?

I initially thought so, but I think I’ve made some design decisions that made it so discrete sampling won’t work. I can fix that though. Thanks for asking, I wouldn’t have noticed that until it was too late!

EDIT: It does support discrete sampling, but the functionality is somewhat limited by your ability to define a proposal distribution. If your model has both discrete and continuous variables, there's not really (as far as I know) a good way to describe them both with the same multivariate distribution.

@torfjelde does Bijectors have some way of allowing multivariate draws from something like a [Categorical(5), Normal(0,1)] distribution?


Because I am using a struct: struct OrthoNNDist <: DiscreteMultivariateDistribution.
The sampling I'm trying to perform is discrete and long, with a lot of measurements, and I need a package that can handle discrete sampling.

Distributions.Product should be able to support this, imo. It currently doesn't, but I think that's a bug.
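
For what it's worth, an all-discrete Product already works; it is the mixed discrete/continuous case above that doesn't. A small sketch:

    using Distributions

    # Three independent categorical components; rand returns a Vector{Int}.
    d = Product([Categorical(5), Categorical(3), Categorical(2)])
    rand(d)  # e.g. [2, 1, 2]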

In AdvancedMH, you can do discrete sampling as long as you have a proposal distribution that returns something you can add to your last sample. I'm assuming the results of rand(::OrthoNNDist) are integer-valued? If so, you can basically just set it up with something like this:

# The first argument is a vector of initial parameterizations,
# and the second is the proposal distribution.
spl = MetropolisHastings(zeros(n_params), Categorical(5)) 

You have to catch cases where your samples are out of bounds in your model. From the README, here’s a quick way to do this for a discrete model:

# Define the components of a basic model.
insupport(θ) = θ[2] >= 0
dist(θ) = Normal(θ[1], θ[2])

# This is the function we use as our log density.
density(θ) = insupport(θ) ? sum(logpdf.(dist(θ), data)) : -Inf
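
Putting it together, a runnable sketch (with synthetic data standing in for your measurements, and following the DensityModel API from the README):

    using AdvancedMH, Distributions

    # Synthetic observations; in your case these would come from your experiment.
    data = rand(Normal(0, 1), 30)

    # Reject parameter vectors with a negative standard deviation.
    insupport(θ) = θ[2] >= 0
    dist(θ) = Normal(θ[1], θ[2])
    density(θ) = insupport(θ) ? sum(logpdf.(dist(θ), data)) : -Inf

    # Wrap the log density, build the sampler, and draw a chain.
    model = DensityModel(density)
    spl = MetropolisHastings([0.0, 1.0], MvNormal(2, 1))
    chain = sample(model, spl, 10_000)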

Of course, I have created my own customized proposal distribution, and yes, you're right, rand(::OrthoNNDist) gives integer values. But what is data here? I know it is the generation of the proposal; in my case, how can I generate it? What you have suggested works, but I want to put everything in a chain.