Fast/Approximate Sampling for HMMs/HDP-HMMs

Hi All,

First post here. I’m interested in building some tools based on Bayesian hidden Markov models. What I would really like as a personal project is a Julia equivalent of the PYHSMM library, which is fantastic but has some bugs and lacks support.

After following the tutorial on estimating an HMM’s parameters with Turing, I wanted to work with longer sequences (think 100-1000 observations). For an EM-based HMM this is peanuts. However, the fully Bayesian approach from the tutorial yields impossibly long sampling times.

I can only imagine the bottleneck is sampling the state sequence. Are there approximate approaches to doing this for HMMs?

Cheers


What is the state space and its dimension?

It was a trivial HMM: 2 states (essentially modeling an “on” vs. “off” signal), ergodic, with univariate Gaussian emissions. I’ll copy in some code when I get on my personal laptop.
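In the meantime, here’s roughly what I mean (a minimal sketch modeled on the Turing tutorial’s approach of sampling the discrete state sequence explicitly, not my actual code; all names are illustrative, and it assumes `tzeros` and the particle-Gibbs setup the tutorial uses):

```julia
using Turing, Distributions

# 2 states, ergodic, univariate Gaussian emissions; the latent state
# sequence s is sampled explicitly, as in the Turing tutorial.
@model function hmm_explicit(y, K)
    N = length(y)

    μ ~ filldist(Normal(0, 10), K)                     # emission means
    σ ~ filldist(truncated(Normal(0, 2); lower=0), K)  # emission stds
    A ~ filldist(Dirichlet(ones(K)), K)                # A[:, j] = transition probs out of state j
    π0 ~ Dirichlet(ones(K))                            # initial state distribution

    # Explicit discrete state sequence -- one latent variable per
    # observation, which is what makes sampling so slow.
    s = tzeros(Int, N)
    s[1] ~ Categorical(π0)
    y[1] ~ Normal(μ[s[1]], σ[s[1]])
    for t in 2:N
        s[t] ~ Categorical(A[:, s[t - 1]])
        y[t] ~ Normal(μ[s[t]], σ[s[t]])
    end
end

# Discrete s rules out NUTS alone; the tutorial combines particle Gibbs
# for s with HMC for the continuous parameters (exact Gibbs syntax
# depends on your Turing version), e.g.
# sample(hmm_explicit(y, 2), Gibbs(PG(20, :s), HMC(0.01, 10, :μ, :σ, :A, :π0)), 1_000)
```

Even with K = 2, a few hundred observations already takes a very long time, since the sampler has to deal with one discrete latent variable per observation.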

I’d say look around for an HMM package that actually computes the (marginal) likelihood via the forward algorithm, and run Turing on top of that.

Oh interesting. That seems like it might be the issue: the Bayesian HMM in the tutorial doesn’t compute the likelihood with the forward (DP) algorithm, so the sampling quickly becomes intractable.

I’ll try that
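For reference, I’m picturing something like this (a rough, untested sketch: a hand-written log-space forward pass that marginalizes out the states, plugged into Turing via `@addlogprob!`; the function and variable names are mine):

```julia
using Turing, Distributions
using LogExpFunctions: logsumexp

# Log-space forward algorithm: returns log p(y | μ, σ, A, π0) with the
# state sequence summed out, so it never has to be sampled.
function forward_logp(y, μ, σ, A, π0)
    K, N = length(π0), length(y)
    logα = [log(π0[k]) + logpdf(Normal(μ[k], σ[k]), y[1]) for k in 1:K]
    for t in 2:N
        # A[k, j] = P(next state = k | current state = j)
        logα = [logsumexp(logα .+ log.(A[k, :])) + logpdf(Normal(μ[k], σ[k]), y[t])
                for k in 1:K]
    end
    return logsumexp(logα)
end

@model function hmm_marginal(y, K)
    μ ~ filldist(Normal(0, 10), K)                     # emission means
    σ ~ filldist(truncated(Normal(0, 2); lower=0), K)  # emission stds
    A ~ filldist(Dirichlet(ones(K)), K)                # A[:, j] = transition probs out of state j
    π0 ~ Dirichlet(ones(K))                            # initial state distribution

    # Add the marginal log-likelihood directly; no discrete latent
    # variables remain in the model.
    @addlogprob! forward_logp(y, μ, σ, A, π0)
end

# With the states marginalized out, NUTS alone can sample the model:
# chain = sample(hmm_marginal(y, 2), NUTS(), 1_000)
```

The likelihood cost should then be O(NK²) per evaluation. One caveat I’m aware of: with symmetric priors the state labels aren’t identified (label switching), so an ordering constraint on μ may be needed.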

Did you manage to do it?
Could you post an example?
Best
