Is there a package with an FFT-based implementation of autocovariances? With many lags, it should be much faster than the direct approach (e.g. in StatsBase.autocov).

If there is none (which appears to be the case, as far as I can tell), I will code one up and submit it to a package, but I wanted to check first to save effort.

I assume you want to use it in the context of MCMC convergence diagnostics? It would be lovely to have a package with a fast FFT-based autocov for that, instead of picking just a few lag values.

FWIW, I think even the most naive autocorrelation implementation has an insignificant cost compared to the MCMC itself. And you only need a few lags anyway; after a while, they are just noise.

True. Still, with an FFT it will be fast and there is no need to decide on lags up front. Also, it could be a good basis for estimating the effective number of samples, right?
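For concreteness, here is a minimal sketch of the usual effective-sample-size estimator, N_eff = N / (1 + 2 Σ ρ_k), truncating the sum at the first nonpositive autocorrelation (one common convention). The function name and truncation rule are illustrative, not from any particular package:

```julia
using Statistics

# Effective sample size from autocorrelations:
#   N_eff = N / (1 + 2 * sum of rho_k),
# truncating where the autocorrelation estimate first turns nonpositive.
function ess_naive(x::AbstractVector{<:Real})
    n = length(x)
    xc = x .- mean(x)
    c0 = sum(abs2, xc) / n          # lag-0 autocovariance (biased estimator)
    s = 0.0
    for k in 1:(n - 1)
        rho = sum(@view(xc[1:n-k]) .* @view(xc[1+k:n])) / (n * c0)
        rho <= 0 && break           # beyond this, estimates are mostly noise
        s += rho
    end
    return n / (1 + 2s)
end
```

For i.i.d. draws this returns roughly N; for a strongly autocorrelated chain (e.g. AR(1) with coefficient 0.9, whose true autocorrelation time is 19) it returns roughly N/19.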

That’s right. I have been needing it to approximate the autocorrelation time for exactly that purpose: the effective number of samples, and also to inform how many samples to discard before equilibrium. So I have many lags and long time series. I have been getting by with this little function, which uses the conv function from DSP.jl; conv uses an FFT internally.

using Statistics: mean
using DSP: conv

function autocov_con(x::AbstractVector{<:Real}, lags::UnitRange{Int})
    lx = length(x)
    x = x .- mean(x)                 # demean a copy; don't mutate the caller's vector
    A = conv(x, reverse(x)) ./ lx    # full autocorrelation sequence via FFT
    return A[lags .+ lx]             # lag k sits at index k + lx
end

I have found it to be much faster than autocov from StatsBase.jl when there are many lags. As you can see, in this naive implementation I compute the autocorrelation for all lags and then truncate. And @Tamas_Papp, thank you for your Dec '17 post about inserting code; that was really helpful.
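One can also skip conv and call the FFT directly, zero-padding to avoid circular wrap-around. A sketch assuming FFTW.jl (equivalent in spirit to the conv-based version above; the function name is illustrative):

```julia
using FFTW, Statistics

# Autocovariance via FFT: pad to at least 2n so the circular correlation
# computed by the FFT equals the linear one, then keep the requested lags.
function autocov_fft(x::AbstractVector{<:Real}, lags::AbstractVector{Int})
    n = length(x)
    xc = x .- mean(x)                      # demean a copy
    np = nextpow(2, 2n)                    # padding kills wrap-around
    f = rfft(vcat(xc, zeros(np - n)))
    acov = irfft(abs2.(f), np)[1:n] ./ n   # biased estimator: divide by n
    return acov[lags .+ 1]                 # lag k is at index k + 1
end
```

For a length-n series this is O(n log n) for all n lags at once, versus O(n·k) for k lags computed directly.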

I very recently ported Dan Foreman-Mackey’s implementation of an FFT-based integrated autocorrelation time estimator from the emcee Python package to Julia.
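For readers unfamiliar with that estimator: emcee computes the normalized autocorrelation function by FFT, accumulates τ(M) = -1 + 2 Σ_{k=0}^{M} ρ_k, and picks the window M automatically using Sokal's criterion M ≥ c·τ(M) (c = 5 by default). The following is a hedged paraphrase of that scheme, not the actual ported code:

```julia
using FFTW, Statistics

# Normalized autocorrelation function via FFT (zero-padded).
function autocorr_func(x::AbstractVector{<:Real})
    n = length(x)
    xc = x .- mean(x)
    np = nextpow(2, 2n)
    f = rfft(vcat(xc, zeros(np - n)))
    ac = irfft(abs2.(f), np)[1:n]
    return ac ./ ac[1]                # normalize so the lag-0 value is 1
end

# Integrated autocorrelation time with Sokal-style auto-windowing:
# stop at the first window M with M >= c * tau(M).
function integrated_time(x::AbstractVector{<:Real}; c::Real = 5)
    rho = autocorr_func(x)
    taus = 2 .* cumsum(rho) .- 1      # tau(M) = -1 + 2 * sum_{k=0}^{M} rho_k
    for m in eachindex(taus)
        if (m - 1) >= c * taus[m]     # m - 1 is the 0-based window M
            return taus[m]
        end
    end
    return taus[end]
end
```

On an AR(1) chain with coefficient φ the true integrated time is (1 + φ)/(1 - φ), so φ = 0.9 should give an estimate near 19, while i.i.d. draws should give an estimate near 1.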

MCMCChains’s upcoming 4.0 release (which is breaking) has FFT-based autocovariance calculations for the effective sample size. Currently it is not part of the exposed API.