Solving an inference problem in quantum error correction with Turing.jl

Hi everyone,

I am trying to use Turing.jl to solve an inference problem in quantum error correction. Here is a simple problem.

Problem Statement:
Consider a parity check bit c \in \{0,1\} computed from four binary variables x_1, x_2, x_3, x_4 \in \{0,1\} as follows:

c = x_1 \oplus x_2 \oplus x_3 \oplus x_4,

where \oplus denotes the XOR operation. Equivalently, c represents the parity (even or odd) of the sum x_1 + x_2 + x_3 + x_4.

Each bit x_i is an independent Bernoulli random variable with success probability p_i, i.e.,

P(x_i = 1) = p_i \quad \text{and} \quad P(x_i = 0) = 1 - p_i.

Given the observed parity c = 1:

  1. Determine the most probable configuration of (x_1, x_2, x_3, x_4).
  2. Find the most probable value of the parity of the partial sum x_1 + x_2.
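
In other words, the posterior I want to maximize is

P(x_1, x_2, x_3, x_4 \mid c = 1) \propto \mathbf{1}\left[x_1 \oplus x_2 \oplus x_3 \oplus x_4 = 1\right] \prod_{i=1}^{4} p_i^{x_i} (1 - p_i)^{1 - x_i},

and question 2 asks for the most probable value of x_1 \oplus x_2 under this posterior.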

This is the decoding problem, which refers to the task of identifying the most likely error affecting a quantum state based on the observed error syndrome. The goal is to reverse the inferred error to recover the original logical state. Current approaches to solving this problem include belief propagation, integer programming, perfect matching, tensor networks, and other methods.

Here is my attempt.

using Turing

@model function one_check(y)
    p1 ~ Bernoulli(0.15)
    p2 ~ Bernoulli(0.05)
    p3 ~ Bernoulli(0.05)
    p4 ~ Bernoulli(0.05)
    for i in eachindex(y)
        y[i] = p1 ⊻ p2 ⊻ p3 ⊻ p4
    end
end;

model = one_check([true])
map_estimate = maximum_a_posteriori(model)

mle_estimate = maximum_likelihood(model)

Both of the last two calls throw an error like

InexactError: Bool(NaN)

I don’t know how to solve this problem properly with Turing.jl.

Thanks!

Welcome! I can’t comment on the problem itself, but three general things:

  1. You need a likelihood for the observation, say y != c && Turing.@addlogprob!(-Inf), which says that observing a mismatch is impossible. Right now the model just writes numbers into the input vector instead of conditioning on it.
  2. You can’t use any optimization or sampling methods that require continuous parameter spaces. For sampling, look at Gibbs, Metropolis-Hastings, etc.
  3. Bernoulli uses 1 and 0 instead of true and false, so watch out for that.
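
A minimal sketch of 1) and 2) together could look like the following (untested; parity_check is just a name I made up, and I’m assuming MH’s default prior proposal copes with the discrete bits):

using Turing

@model function parity_check(c)
    x1 ~ Bernoulli(0.15)
    x2 ~ Bernoulli(0.05)
    x3 ~ Bernoulli(0.05)
    x4 ~ Bernoulli(0.05)
    # A parity that disagrees with the sampled bits is impossible to observe.
    if c != (x1 ⊻ x2 ⊻ x3 ⊻ x4)
        Turing.@addlogprob! -Inf
    end
end;

# MH() proposes from the priors by default, so it can handle discrete parameters.
chain = sample(parity_check(true), MH(), 10_000)

The posterior marginal of each x_i is then just the mean of its column in the chain.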

Thank you for your advice! I tried applying 1) and 3), but I didn’t quite get your point in 2). Do you mean that maximum_a_posteriori and maximum_likelihood require continuous parameter spaces, and that the Bernoulli distribution is not a continuous distribution?

The following is my second attempt. The error is similar.

using Turing

@model function one_check()
    p1 ~ Bernoulli(0.15)
    p2 ~ Bernoulli(0.05)
    p3 ~ Bernoulli(0.05)
    p4 ~ Bernoulli(0.05)

    y ~ Bernoulli(0.5)
    if y != p1 ⊻ p2 ⊻ p3 ⊻ p4
        Turing.@addlogprob! -Inf
    end
    return nothing
end;

model = one_check() | (; y = 1)

map_estimate = maximum_a_posteriori(model)

This problem doesn’t seem like the best fit for Turing, and definitely not for the MAP optimizer. The parameters you’re trying to infer are discrete, so none of the continuous or gradient-based optimizers will be able to do anything with them.

That’s the bad news. The good news is that the parameter space is finite and quite small (at least for n=4), so you can just do an exhaustive search:

using Distributions

function one_check(y, x)
    d1 = Bernoulli(0.15)
    d2 = Bernoulli(0.05)
    d3 = Bernoulli(0.05)
    d4 = Bernoulli(0.05)

    lp = sum(logpdf.([d1, d2, d3, d4], x))
    parity = isodd(sum(x))

    if any(yi -> isodd(yi) != parity, y)
        return -Inf
    else
        return lp
    end
end

y = [true]
tf = [true, false]
# generate all possible configurations of x
xx = collect(Iterators.product(tf, tf, tf, tf))
logprobs = [one_check(y, x) for x in xx]
lp, i = findmax(logprobs)
xmax = xx[i]

Here xmax gives your MAP estimate, and exp.(logprobs), normalized to sum to one, is your full posterior distribution.
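
For the second part of your question (the parity of x_1 + x_2), you can normalize that table and marginalize it, something along these lines (p_odd is just a name I made up):

# Normalize over the consistent configurations to get the posterior itself.
probs = exp.(logprobs)
probs ./= sum(probs)

# Marginal posterior probability that x1 + x2 has odd parity (question 2).
p_odd = sum(probs[i] for i in eachindex(xx) if isodd(xx[i][1] + xx[i][2]))
# The most probable parity of x1 + x2 is odd if p_odd > 0.5, otherwise even.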

For higher-dimensional x, where the exhaustive search might become impractical, this problem would be a good fit for approximate Bayesian computation (ABC).


Thanks for the detailed explanation! The code example makes it very clear. I’ll try this approach and go on to learn more about approximate Bayesian computation (ABC). Appreciate the guidance!