Hi everyone! I’m really happy to announce StateSpaceDynamics.jl, a new Julia package for modelling multivariate time series data using probabilistic state-space models!
State-space models are a core tool for analyzing high-dimensional time series, especially in modern neuroscience where recordings from hundreds or thousands of neurons are common. While the Python ecosystem offers mature tools (e.g. ssm, Dynamax), Julia has lacked a general-purpose SSM library that supports non-Gaussian observations, mixed discrete–continuous latent structures, and efficient EM-based inference.
These models are incredibly popular in computational neuroscience for unsupervised analysis of neural dynamics across diverse recording modalities (e.g., electrophysiology, calcium imaging, fNIRS, LFP). Specifically, rather than fitting potentially hard-to-interpret black-box models, many researchers now opt for models like the switching Linear Dynamical System (SLDS) and its variants.
Our package’s primary goal is to let users fit these kinds of models to neural data efficiently, so that neuroscientists working in Julia have a counterpart to the Python packages mentioned above.
Inference in StateSpaceDynamics
Given that LDS-style models are latent variable models, one needs a way to perform tractable inference over the posterior distribution of the latent states.
Our package uses a direct optimization approach that has previously been advocated in the neuroscience literature (e.g. Paninski et al., 2010).
For continuous latent-variable models (e.g. LDS), the package performs inference by directly maximizing the complete-data log posterior with respect to the latent state trajectory. By exploiting the block-tridiagonal structure of the Hessian, inference scales linearly in the number of time steps and recovers the standard Kalman filter and RTS smoother exactly in the Gaussian case.
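Concretely, the objective being maximized is the complete-data log posterior

$$
\mathcal{L}(x_{1:T}) = \log p(x_1) + \sum_{t=2}^{T} \log p(x_t \mid x_{t-1}) + \sum_{t=1}^{T} \log p(y_t \mid x_t),
$$

and because each term couples at most two consecutive latent states, the Hessian of $\mathcal{L}$ with respect to $x_{1:T}$ is block-tridiagonal. That structure is what lets a Newton solve run in $O(T)$ rather than $O(T^3)$ time.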
Importantly, this framework generalizes naturally to non-Gaussian observation models (e.g. Poisson and Bernoulli), requiring only the gradient and Hessian of the observation log-likelihood. For these models, StateSpaceDynamics computes an exact MAP latent trajectory and performs approximate EM using a Laplace approximation to the latent posterior, while maintaining computational efficiency via fast block-tridiagonal solvers.
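To make this concrete, here is a minimal, self-contained Julia sketch of Newton-based MAP inference for a Poisson LDS with observations $y_t \sim \mathrm{Poisson}(\exp(C x_t + d))$. This illustrates the technique rather than the package’s actual API; every name below is ours, and for brevity the Newton iteration has no line search or damping. The sparse factorization behind `\` is what exploits the banded structure.

```julia
# Sketch: MAP latent trajectory for a Poisson LDS via Newton's method.
# Model: x₁ ~ N(m0, Q0), x_{t+1} = A x_t + w, w ~ N(0, Q),
#        y_t ~ Poisson(exp(C x_t + d)).
using LinearAlgebra, SparseArrays

function map_trajectory(A, Q, Q0, m0, C, d, Y; iters=20, tol=1e-8)
    p, T = size(Y)             # neurons × time steps
    n = size(A, 1)             # latent dimension
    Qi, Q0i = inv(Q), inv(Q0)

    # Prior precision J (block-tridiagonal) and linear term h; both constant.
    J = spzeros(n * T, n * T)
    h = zeros(n * T)
    idx(t) = (t - 1) * n + 1 : t * n
    J[idx(1), idx(1)] += Q0i
    h[idx(1)] += Q0i * m0
    for t in 1:T-1
        J[idx(t),   idx(t)]   += A' * Qi * A
        J[idx(t),   idx(t+1)] -= A' * Qi
        J[idx(t+1), idx(t)]   -= Qi * A
        J[idx(t+1), idx(t+1)] += Qi
    end

    x = zeros(n * T)                       # flattened trajectory x_{1:T}
    for _ in 1:iters
        g = h - J * x                      # gradient of the log prior
        H = copy(J)                        # Hessian of the negative log posterior
        for t in 1:T
            η = C * x[idx(t)] + d          # log-rate at time t
            λ = exp.(η)
            g[idx(t)] += C' * (Y[:, t] - λ)            # Poisson likelihood gradient
            H[idx(t), idx(t)] += C' * Diagonal(λ) * C  # likelihood curvature
        end
        δ = H \ g                          # banded sparse solve: O(T), not O(T³)
        x += δ
        norm(δ) < tol && break
    end
    reshape(x, n, T)                       # MAP trajectory; H is the Laplace precision
end
```

Note that in the fully Gaussian case the likelihood curvature is constant and the objective is exactly quadratic, so this iteration converges in a single Newton step, recovering the Kalman/RTS smoother solution noted above.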
Learning in StateSpaceDynamics
The package currently supports EM-based parameter learning for all LDS-style models. For conjugate Gaussian models, learning reduces exactly to the standard EM updates for linear dynamical systems. For non-conjugate observation models (e.g. Poisson LDS), we use Laplace EM, where the latent posterior is approximated locally by a Gaussian around the MAP trajectory.
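As a sketch of the quantities involved (this is standard Laplace-EM bookkeeping, not package-specific detail): the E-step produces $q(x_{1:T}) = \mathcal{N}(\hat{x}_{1:T}, H^{-1})$, where $\hat{x}$ is the MAP trajectory and $H$ is the block-tridiagonal Hessian at the mode. The M-step then maximizes the expected complete-data log-likelihood under $q$; for the Gaussian/linear parts this is closed form, e.g. the usual update for the dynamics matrix is

$$
\hat{A} = \Big( \sum_{t=2}^{T} \mathbb{E}_q\big[x_t x_{t-1}^\top\big] \Big) \Big( \sum_{t=2}^{T} \mathbb{E}_q\big[x_{t-1} x_{t-1}^\top\big] \Big)^{-1}.
$$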
For models that mix discrete and continuous latent variables, such as the SLDS, the package implements Variational Laplace EM (vLEM). This approach, sketched after the list below, alternates between:
- variational inference over discrete state sequences, and
- Laplace-approximated inference over continuous latent trajectories,
allowing efficient and scalable learning for models that would otherwise be computationally intractable with generic sampling-based methods.
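Schematically, vLEM posits a factorized approximate posterior over the discrete states $z_{1:T}$ and continuous states $x_{1:T}$,

$$
p(z_{1:T}, x_{1:T} \mid y_{1:T}) \approx q(z_{1:T})\, q(x_{1:T}),
$$

and performs coordinate ascent: $q(z_{1:T})$ is updated by forward–backward-style message passing given the current $q(x_{1:T})$, while $q(x_{1:T})$ is a Laplace approximation around the MAP continuous trajectory given the current $q(z_{1:T})$. (This is the general vLEM recipe at a high level; see the package documentation for the exact updates.)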
What’s next?
StateSpaceDynamics is still in active development. Over the coming months, we plan to focus on:
- Interoperability with HiddenMarkovModels.jl: Development of StateSpaceDynamics began around the same time as HiddenMarkovModels.jl, and we were not initially aware of how fast and robust that package would become. We are now working toward deprecating our internal HMM implementations in favor of its backend routines.
- Additional observation models: We currently support Gaussian and Poisson observations (the most common for neural data). We plan to add further model classes, including Bernoulli, GP-based observations, and others.
- Greater reliance on automatic differentiation: We aim to support AD-based workflows so that new model classes with difficult or intractable derivatives are easy to prototype and integrate.
- Community-driven development: We’re very happy to support features and extensions that the community finds useful.
Final thoughts
Lastly, a huge thanks to the reviewers of our JOSS publication. They made this a really great experience and overall really made the package better. This was my first major Julia project and I learned a lot from their guidance. And a big thanks to the whole Julia community. This really is a great place to learn and become a better programmer. Cheers everyone!