I'm working with the KissABC.jl package's smc() function. From reading its source code, it doesn't look like a classic SMC algorithm but rather a kind of hybrid.

There isn't much reference material for the package's algorithm. If anyone here is familiar with this package, can you explain what's going on in the smc() function specifically?

I'm particularly interested in the weight calculation for accepted particles.

This is an experiment; the answer is generated by AI (Sonnet 3.5)!

Thank you for sharing the code and your question about the KissABC.jl package’s smc() function. I’ll try to explain the weight calculation part for accepted particles in this algorithm.

The smc() function implements an adaptive Sequential Monte Carlo (SMC) algorithm with an Affine-invariant proposal mechanism. It’s not a classic SMC algorithm but rather a hybrid approach. Here’s an overview of how the weight calculation works:

The algorithm starts with a set of particles drawn from the prior distribution.

In each iteration, it adapts the threshold (ϵ) based on the quantile of the current costs:

ϵ = quantile(Xs[alive], α)

Particles are considered “alive” if their cost is below the current threshold:

alive = Xs .< ϵ
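
As a hedged illustration, the threshold adaptation and the alive mask amount to the following (a Python/NumPy sketch with names mirroring the Julia snippets above, not the package's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Xs: ABC costs (distances to the data) of the current particle population
Xs = rng.exponential(size=100)
alpha = 0.95  # quantile level controlling how fast the threshold shrinks

# Adapt the threshold: the alpha-quantile of the current costs
eps = np.quantile(Xs, alpha)

# Particles whose cost falls below the new threshold stay "alive"
alive = Xs < eps
```

Because `eps` is a quantile of the costs themselves, roughly a fraction `alpha` of the particles survives each iteration regardless of the scale of the costs.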

The algorithm doesn’t explicitly calculate weights for particles. Instead, it uses the “alive” status as an implicit weight. Particles that are “alive” (i.e., have a cost below the threshold) are kept, while others are discarded.

Resampling is performed when the effective sample size (ESS) drops below a certain threshold:

if α * ESS <= nparticles * min_r_ess
    # Resampling logic
end

This resampling step essentially assigns equal weights to all surviving particles.
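
To make the ESS logic concrete, here is a hedged Python sketch (not the package's code): with 0/1 weighting the usual ESS formula, 1 / Σ wᵢ², reduces to the number of alive particles, and a generic multinomial resampling step resets all weights to uniform:

```python
import numpy as np

rng = np.random.default_rng(1)

def ess(w):
    """Effective sample size of normalized weights: 1 / sum(w_i^2)."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

# Implicit 0/1 weighting: alive particles share equal weight, dead get zero,
# so the ESS is simply the number of alive particles (here 40).
alive = np.array([True] * 40 + [False] * 60)
w = alive / alive.sum()
nparticles = len(w)

min_r_ess = 0.5  # resample when ESS falls below half the population
if ess(w) <= nparticles * min_r_ess:
    # Multinomial resampling: draw indices in proportion to the weights,
    # then reset every weight to 1/nparticles (equal weighting)
    idx = rng.choice(nparticles, size=nparticles, p=w)
    w = np.full(nparticles, 1.0 / nparticles)
```

After resampling, only previously alive particles remain in the population (dead particles had zero probability of being drawn), and all weights are equal again.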

The MCMC step uses an affine-invariant proposal mechanism (similar to the stretch move in emcee). The acceptance probability for a proposed move is based on the prior probabilities and the cost function:

lM = min(lπp - lπs[i] + logcorr, 0.0)
if lprob < lM
    # Accept the move if the cost is below the threshold
end
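
For intuition, here is a hedged Python sketch of a stretch move in the style of Goodman & Weare (2010) / emcee: particle i is moved along the line through itself and a randomly chosen companion j, and a Jacobian term (d−1)·log z enters the log acceptance ratio (the `logcorr` above). Note that in the snippet above the move must additionally keep the cost below the threshold; details of the package's actual proposal may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

def stretch_proposal(x_i, x_j, a=2.0):
    """Affine-invariant 'stretch' move: propose a point on the line through
    x_i and x_j, with z drawn from g(z) proportional to 1/sqrt(z) on [1/a, a]."""
    z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a  # inverse-CDF sample of z
    prop = x_j + z * (x_i - x_j)
    logcorr = (len(x_i) - 1) * np.log(z)  # Jacobian correction for dimension d
    return prop, logcorr

def accept(logprior_prop, logprior_cur, logcorr):
    # Mirrors lM = min(lπp - lπs[i] + logcorr, 0.0): accept if log(u) < lM
    lM = min(logprior_prop - logprior_cur + logcorr, 0.0)
    return np.log(rng.random()) < lM

x_i = np.array([0.0, 0.0])
x_j = np.array([1.0, 1.0])
prop, logcorr = stretch_proposal(x_i, x_j)
ok = accept(0.0, 0.0, logcorr)  # flat-prior case: only the Jacobian term matters
```

The affine invariance means the proposal automatically adapts to the scale and correlation structure of the current particle cloud, with no tuning of a step size.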

In summary, this algorithm doesn’t use explicit weight calculations like traditional SMC methods. Instead, it uses a combination of adaptive thresholding, implicit weighting through the “alive” status, and occasional resampling to maintain a set of particles that represent the posterior distribution.

The main idea is that particles with costs below the adaptive threshold are considered equally weighted, while those above the threshold are effectively given zero weight. This approach, combined with the MCMC moves, allows the algorithm to efficiently explore the parameter space and approximate the posterior distribution.
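
Putting the pieces together, a toy end-to-end version of this kind of loop might look as follows (purely illustrative Python on a trivial 1-D problem; a plain random-walk move stands in for the stretch move for brevity, and none of this is the package's actual code):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy problem: infer the location of a Gaussian; "cost" is distance to the data
obs = 1.5
def cost(theta):
    return abs(theta - obs)

def log_prior(theta):
    return 0.0 if -10 <= theta <= 10 else -np.inf  # Uniform(-10, 10) prior

n = 200
alpha = 0.9
particles = rng.uniform(-10, 10, size=n)          # start from the prior
costs = np.array([cost(t) for t in particles])

for _ in range(30):
    eps = np.quantile(costs, alpha)               # adapt the threshold
    alive = costs < eps                           # implicit 0/1 weighting
    # Replace dead particles with copies of random alive ones (resampling)...
    dead = np.where(~alive)[0]
    src = rng.choice(np.where(alive)[0], size=len(dead))
    particles[dead] = particles[src]
    costs[dead] = costs[src]
    # ...then MCMC-move every particle so the duplicates decorrelate,
    # accepting only moves that stay inside the prior and below eps
    for i in range(n):
        prop = particles[i] + 0.5 * rng.standard_normal()
        if np.isfinite(log_prior(prop)) and cost(prop) < eps:
            particles[i] = prop
            costs[i] = cost(prop)

print(particles.mean())  # should end up close to obs = 1.5
```

This also hints at the answer to "why MCMC at all": resampling alone only duplicates existing particles, so without the MCMC moves the population would collapse onto a few copies as the threshold tightens.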

I don't understand the need for this MCMC part. If in the first part we already have samples that satisfy some epsilon, why do we need MCMC?

Also, I'm confused. Doesn't SMC normally just pick particles at random and then check the cost of the simulated output for those particles? If it's just random picking of particles, what's the need for the MCMC? How does it make things better?

Please make sure that your answers always improve the discussion. If people want to directly prompt a chatbot and copy the output, they can do it by themselves (and maybe they already have).

I tried this because this kind of comprehensive context for the LLM isn't actually available to everyone.

With this, we could pull information from about 10,000 packages and from the internal code of Julia itself.

So if a question relates to any package, or can be answered from any package's documentation or code in the ecosystem, then I can answer it; that wouldn't be possible without this, since the 10-million-token search is highly accurate and directed.

Also, sorry for experimenting. I just saw that this question hadn't been answered in the last hour, so I thought it could be helpful.

Note, I really did want to improve the discussion, tbh.

Thank you for your contribution, I didn’t mean to imply that it was worthless. But if a new account just copy-pasted LLM output to answer a question, they would at the very least get a warning.
I don’t think the moderation team has reached a consensus on LLM policy for this Discourse, but it might be worth discussing. In the meantime, whenever you go for a chatbot answer, be sure to explain why you did and why it is useful.