Does Julia have fast expectations over multivariate Gaussians?

The following approach using HCubature.jl seems rather slow. Is there an easy way to make this faster in Julia?

```julia
using BenchmarkTools, HCubature, Distributions

softplus(x) = log(exp(x) + 1)

function integrand(r1, r2, u; f = softplus)
    let n = MvNormal(zeros(2), [1. u
                                u 1.]),
        f = f, r1 = r1, r2 = r2
        x -> pdf(n, x)*f(r1 * x[1])*f(r2 * x[2])
    end
end

integral(f) = hcubature(f, (-15, -15), (15, 15), atol = 1e-9)
f = integrand(1., 1., .3)

julia> @btime integral(f)
  17.225 ms (441288 allocations: 21.25 MiB)
(0.7266538753041055, 9.999992783800257e-10)
```

I guess this specialized Fortran code would be faster, but I didn't try (my Fortran is a bit rusty).

I don't have an optimal solution, but the code below, using StaticArrays, is approximately 2.5x faster, though with less of an effect than I anticipated.

Why did you put a `let` block inside the `integrand` function, btw?

```julia
using StaticArrays

function myintegrand(r1, r2, u; f = softplus)
    p = MvNormal(SA[0.0, 0.0], SA[1.0 u; u 1.0])
    return x -> pdf(p, x) * f(r1 * x[1]) * f(r2 * x[2])
end

g = myintegrand(1.0, 1.0, 0.3)

julia> @btime hcubature($g, (-15, -15), (15, 15); atol = 1e-9)
```
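(An aside on the `let` question above: wrapping captured variables in `let` is a common workaround for Julia's captured-variable boxing issue, where a closure that captures a variable assigned more than once can lose type stability. A minimal, hypothetical illustration of the pattern, not taken from this thread:

```julia
# A variable that is reassigned and then captured may be boxed,
# making the closure type-unstable.
function make_closure_boxed(n)
    s = 0
    for i in 1:n
        s += i          # `s` is assigned many times...
    end
    return () -> s      # ...so the captured `s` may be boxed
end

# `let s = s` introduces a fresh binding that is assigned exactly once,
# so the closure captures a plain, concretely typed value.
function make_closure_let(n)
    s = 0
    for i in 1:n
        s += i
    end
    return let s = s
        () -> s
    end
end
```

Both return 55 for `n = 10`; the difference only shows up in `@code_warntype` and in tight loops. In `integrand` above the captured names are function arguments that are never reassigned, so the `let` is likely just defensive.)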

Thanks, @DNF.

With some modifications and Cuba.jl I get something pretty fast (but it's not ForwardDiffable…)

```julia
using Cuba, BenchmarkTools, HCubature

softplus(x) = ifelse(x < 34, log(exp(x) + 1), x)

# map (0, 1) onto the real line, returning the image and the Jacobian
chofv(x) = ((2x - 1)/((1 - x)*x), (2x^2 - 2x + 1)/((1 - x)^2*x^2))
mvnormal(c, x, sinv) = c*exp(-1/2*(sinv[1, 1]*x[1]^2 + sinv[2, 2]*x[2]^2 + 2*sinv[1, 2]*x[1]*x[2]))

function mvnex_integrand(f, u)
    c = 1/(2π*sqrt(1 - u^2))
    sinv = inv([1 u
                u 1])
    ret = [0.]
    function(x, ret = ret)
        y1, dy1 = chofv(x[1])
        y2, dy2 = chofv(x[2])
        if isinf(y1) || isinf(y2)
            ret[1] = 0.
        else
            ret[1] = mvnormal(c, (y1, y2), sinv)*f(y1, y2) * dy1 * dy2
        end
    end
end

mvnex(f, u; kwargs...) = cuhre(mvnex_integrand(f, u); kwargs...)[1][]
mvnex_hcub(f, u; kwargs...) = hcubature(mvnex_integrand(f, u), (0., 0.), (1., 1.); kwargs...)[1]
f = (x, y) -> softplus(x)*softplus(y)

julia> @btime mvnex(f, .3, atol = 1e-9, rtol = 1e-9)
  957.658 μs (45521 allocations: 1.39 MiB)
0.7266538752913418

julia> @btime mvnex_hcub(f, .3, atol = 1e-9, rtol = 1e-9)
  5.126 ms (139850 allocations: 5.26 MiB)
0.7266538752458989
```
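For context, `chofv` implements the rational change of variables y = (2x − 1)/((1 − x)x), which maps the open interval (0, 1) onto the whole real line, so the doubly infinite integral becomes an integral over the unit square; the second return value is dy/dx, the Jacobian factor. A standalone sanity check (redefining `chofv` so the snippet runs on its own):

```julia
chofv(x) = ((2x - 1)/((1 - x)*x), (2x^2 - 2x + 1)/((1 - x)^2*x^2))

# the midpoint of (0, 1) maps to 0, with slope 8
y, dy = chofv(0.5)
@assert y == 0.0 && dy == 8.0

# the analytic derivative matches a central finite difference
h = 1e-6
numeric = (chofv(0.7 + h)[1] - chofv(0.7 - h)[1]) / (2h)
@assert isapprox(numeric, chofv(0.7)[2]; rtol = 1e-6)
```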

Can't you do a similar rewrite for HCubature? The current code looks pretty suboptimal, with a single-element vector and no StaticArrays.

In fact, I did the same rewrite with HCubature (see updated example above) and got something slightly faster than your example. However, Cuba is still more than 5 times faster.

EDIT: fixed rtol in the example above.

What's the deal with the single-element array, `ret`? That seems like something that would have a negative performance impact.

Oh, that's because Cuba expects an integrand of this form (see the Cuba.jl documentation). It doesn't really have an impact on HCubature, i.e. I get basically the same timings with

```julia
function mvnex_integrand_hcub(f, u)
    c = 1/(2π*sqrt(1 - u^2))
    sinv = inv([1 u
                u 1])
    function(x)
        y1, dy1 = chofv(x[1])
        y2, dy2 = chofv(x[2])
        if isinf(y1) || isinf(y2)
            0.
        else
            mvnormal(c, (y1, y2), sinv)*f(y1, y2) * dy1 * dy2
        end
    end
end

mvnex_hcub(f, u; kwargs...) = hcubature(mvnex_integrand_hcub(f, u), (0., 0.), (1., 1.); kwargs...)[1]
```
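A further option, not benchmarked in this thread: for smooth integrands like this one, tensor-product Gauss–Hermite quadrature exploits the Gaussian weight directly and avoids adaptive subdivision entirely. A minimal sketch, assuming FastGaussQuadrature.jl (`mvn_expectation` is an illustrative name, not an existing API); accuracy for a given `n` should be checked against the adaptive results above:

```julia
using FastGaussQuadrature, LinearAlgebra

# E[f(X1, X2)] for X ~ N(0, [1 u; u 1]) via tensor-product Gauss–Hermite.
# gausshermite(n) gives nodes/weights for ∫ e^{-t²} g(t) dt, so with the
# substitution x = √2·L·t (where Σ = L·L') the Gaussian weight is absorbed
# exactly; dividing by π normalizes the 2-D weight.
function mvn_expectation(f, u; n = 50)
    t, w = gausshermite(n)
    L = cholesky([1.0 u; u 1.0]).L
    s = 0.0
    for i in 1:n, j in 1:n
        x1 = sqrt(2) * L[1, 1] * t[i]
        x2 = sqrt(2) * (L[2, 1] * t[i] + L[2, 2] * t[j])
        s += w[i] * w[j] * f(x1, x2)
    end
    return s / π
end

# e.g. mvn_expectation((x, y) -> softplus(x) * softplus(y), 0.3)
```

The rule is exact for polynomial integrands up to degree 2n − 1 in each variable, so `mvn_expectation((x, y) -> x * y, u)` recovers the covariance `u` to machine precision; for non-polynomial `f` convergence is fast but worth verifying.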