Spike-and-Slab Implementation using Turing.jl?

So I was looking for an implementation of a spike-and-slab prior using the Turing.jl package. Say I have the following model:

y_{ig} \sim N(\theta_g, \sigma^2), where g = 1, \cdots, G and i = 1, \cdots, n_g, independently.
I would like the priors on \theta_g to be independent spike-and-slab priors, \pi\delta_0 + (1-\pi)t_\nu, i.e. a mixture of a degenerate point mass at 0 and a central t distribution with \nu degrees of freedom.

Can somebody give me some guidance as to how I can do this?

Distributions.jl has a Dirac distribution, but it didn’t seem to work here. One option could be to approximate the spike with a very narrow normal, something like this:

```julia
using Turing, LinearAlgebra   # LinearAlgebra provides the identity scaling I

@model function M₁(y, g; G = maximum(g))
    ν ~ Exponential(10)
    π ~ Beta(1, 1)
    # Approximate the point mass δ₀ with a very narrow normal so the
    # mixture density stays continuous
    θ ~ filldist(MixtureModel([Normal(0, 0.001), TDist(ν)], [π, 1 - π]), G)
    σ² ~ Exponential(1)
    y ~ MvNormal(θ[g], σ² * I)   # y is passed in as data, so it is observed
end
```

While you can probably define a spike-and-slab prior somehow, it will be hard to sample from, due to the non-differentiable density at the location of the point mass. There are modern differentiable alternatives and approximations available, such as the horseshoe prior. Betancourt has a very detailed discussion of sparsity priors and their properties.
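For reference, here is a rough sketch of what a horseshoe prior for this model could look like in Turing.jl. This is only illustrative and not from the original post: the model name, the half-Cauchy scales, and the hyperparameter choices are all my assumptions. The idea is that each θ_g gets a local scale λ_g and all share a global scale τ, so most θ_g shrink strongly toward 0 while a few can escape the shrinkage.

```julia
using Turing, LinearAlgebra

@model function horseshoe(y, g; G = maximum(g))
    # Global and local shrinkage scales, both half-Cauchy
    τ ~ truncated(Cauchy(0, 1); lower = 0)
    λ ~ filldist(truncated(Cauchy(0, 1); lower = 0), G)
    # Group means: θ_g ~ N(0, (τ λ_g)²)
    θ ~ MvNormal(zeros(G), Diagonal((τ .* λ) .^ 2))
    σ² ~ Exponential(1)
    y ~ MvNormal(θ[g], σ² * I)
end
```

Sampling then works with the standard machinery, e.g. `sample(horseshoe(y, g), NUTS(), 1_000)`. Since the density is smooth everywhere, NUTS can handle it, unlike an exact point-mass mixture.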