Bayesian regression with parametrized basis functions in Turing.jl

I’m trying to migrate a simple Bayesian model handwritten in Python to Turing.jl for a more modern approach.

The model is a classical parametrized basis-function expansion: a 1D (speech) signal f(t) is expanded into parametrized basis functions, i.e. f(t) = \sum_k w_k \phi_k(t; \theta), where the w_k are the expansion coefficients and \theta are the model parameters. The number of parameters \theta of these basis functions varies from dozens to hundreds, depending on the signal's complexity and the computing budget.

The weights w_k have a multivariate normal prior and f(t) is assumed to be corrupted by white Gaussian noise. Therefore, the weights can be analytically integrated out of the posterior for \theta and the only parameters that need to be inferred are the \theta.
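
Concretely (assuming a weight prior w \sim \mathcal{N}(m_w, \Sigma_w) and noise variance \sigma^2), integrating out the weights gives the standard marginal likelihood

p(f \mid \theta) = \mathcal{N}\!\left( f ;\; \Phi(\theta)\, m_w,\; \Phi(\theta)\, \Sigma_w\, \Phi(\theta)^\top + \sigma^2 I \right),

where \Phi(\theta) is the design matrix with entries \Phi_{ik} = \phi_k(t_i; \theta); this is the quantity that the inference over \theta has to evaluate.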

The catch is that the basis functions \phi_k(t; \theta) are not known in closed form: they depend on \theta in a rather complicated way. (They arise from the Karhunen-Loève expansion of a Gaussian process kernel, which in my case requires diagonalizing a 2D matrix of Fourier coefficients obtained from the kernel.) I am not sure whether automatic differentiation would work in this case.

My aim is to perform inference both with optimization (MAP, annealing) and sampling (nested sampling, HMC) approaches. Given my description, do you think that Turing.jl is the appropriate tool for the job?


It sounds like you will have to perform a K-L expansion at every step. This is doable, but it may be more computationally expensive than just using a fixed basis set and letting the coefficients be parameters. I think Turing should handle this for you, but you might need to try a variety of models to see what is actually efficient.

The \theta parameters are kind of essential (they have physical interpretations), so I unfortunately cannot use fixed basis functions. The expansion order of K-L is small compared to the number of data points, so it’s not that bad 🙂.

Could you confirm that Turing will automatically recognize that the \theta are parameters to be inferred, even when their likelihood is a complicated, nondifferentiable function, given that the @model contains the necessary statements such as θ₁ ~ Normal(0, 1)?

Roughly speaking, anything on the left-hand side of a Turing ~ statement is a parameter, unless it appears in the arguments to the model function.

@model function foo(a, b, c)
    theta ~ Normal(0.0, 1.0) # theta is a parameter because it doesn't appear in the arguments to foo
    a ~ Beta(thingy(theta), thingy(b)) # a is data because it's in the arguments to foo, so this is a likelihood term
    c ~ CustomDistribution(theta, a, b) # more likelihood
end

So Turing should do the right thing.
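
And on the optimization/sampling side of your question, here is a minimal self-contained sketch (toy basis and made-up data standing in for your K-L expansion) of what the calls look like. optimize(model, MAP()) needs Optim loaded next to Turing (recent Turing versions also offer maximum_a_posteriori); NUTS needs automatic differentiation to get through the model, while MH() is gradient-free, which may matter if your basis construction turns out not to be differentiable.

using Turing, Optim, LinearAlgebra

# Placeholder for the parametrized basis ϕ_k(t; θ); in reality this would be the K-L expansion.
toy_basis(θ, t) = [sin(θ * k * t) for k in 1:4]

@model function marginal_model(t, y)
    θ ~ Normal(0.0, 1.0)                          # basis-function parameter (inferred)
    σ ~ truncated(Normal(0.0, 1.0); lower=0)      # noise standard deviation (inferred)
    Φ = reduce(hcat, toy_basis.(θ, t))'           # n × K design matrix
    C = Φ * Φ' + σ^2 * I                          # marginal covariance for a unit-variance weight prior
    y ~ MvNormal(zeros(length(t)), Symmetric(C))  # weights integrated out analytically
end

t = collect(range(0, 1; length=50))
y = randn(50)                                     # stand-in data
m = marginal_model(t, y)

map_estimate = optimize(m, MAP())                 # MAP point estimate via Optim
chain_mh     = sample(m, MH(), 2_000)             # gradient-free Metropolis-Hastings
chain_nuts   = sample(m, NUTS(), 1_000)           # HMC/NUTS, needs AD through the model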


I am currently using Turing to model acoustic scattering in the ocean, which involves calculating parametrized basis functions (in my case, frequency-dependent acoustic target strengths). I don’t have as many parameters as you do, but it’s a structurally similar problem and it’s working for me. A simplified hypothetical example:

using Turing, LinearAlgebra

@model function BackscatterExample(freqs, backscatter)
    n ~ Exponential(10)                  # prior for numerical density of bubbles
    bubble_radius ~ Uniform(0.001, 0.01) # prior for bubble radius
    noise ~ Exponential(1.0)             # observation noise standard deviation
    backscattering_cross_section = map(f -> complicated_scattering_function(bubble_radius, f), freqs)
    pred_backscatter = n * backscattering_cross_section
    backscatter ~ MvNormal(pred_backscatter, noise^2 * I) # isotropic Gaussian observation noise
end
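
For instance, with a toy stand-in for complicated_scattering_function and some hypothetical observations, conditioning the model on data and sampling looks roughly like this:

# Toy stand-in for the scattering physics, just so the example runs end to end.
complicated_scattering_function(r, f) = r^2 / (1 + (1e3 / (r * f))^2)

freqs = [18e3, 38e3, 70e3, 120e3, 200e3]      # hypothetical frequencies in Hz
backscatter = [2e-7, 4e-7, 5e-7, 6e-7, 6e-7]  # made-up observations

model = BackscatterExample(freqs, backscatter)
chain = sample(model, NUTS(), 1_000)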

With a more complicated model, performance may become an issue, but at least in principle you should be able to get it to work.


Thank you both very much for your answers. I will start implementing today.

Cheers!

A bit late to the party, but you might be interested in https://github.com/JuliaGaussianProcesses/BayesianLinearRegressors.jl

Yes, I am using that library :). Thanks for the suggestion!