Expectation-Maximization for Kalman Filter with Continuous State Variables

I would like to use a Kalman filter together with the EM algorithm to fit the parameters of a hidden Markov model with continuous hidden state variables (i.e. a linear-Gaussian state-space model), and then use the Kalman filter with the fitted parameters for prediction. I’ve implemented the KF and tried using Optim.optimize to do MLE of the parameters that generated some synthetic data I created, but it fails miserably, so I’d like to try EM instead. It may be that my likelihood is too flat as a function of the parameters, i.e. that a change in one parameter can be approximately (but not exactly) offset by a change in another.

At any rate, does anyone have a concrete example of how to do this? Or is there, perhaps, a package that does both KF and EM?

Here are a couple of examples

I don’t use EM in any of them, but I find that MLE works rather well in most cases.

Which optimizer were you trying? In cases like this, it can help to use second-order information, i.e., a Newton-type algorithm. If you’re minimizing prediction errors (which is equivalent to MLE unless you’re also estimating the noise properties), you can use a Gauss-Newton algorithm to get near-quadratic convergence close to the optimum without computing the exact Hessian. This is also demonstrated at the link above.
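To make the prediction-error idea concrete, here is a hedged sketch in Python/NumPy (not the linked Julia example): run the Kalman filter, collect the one-step-ahead innovations e_t(θ), and take Gauss-Newton steps using J(θ)ᵀJ(θ) in place of the Hessian, with a finite-difference Jacobian. The model is the same hypothetical scalar one as above, with the noise variances held fixed so that minimizing Σ e_t² coincides with MLE; only the transition coefficient `a` is estimated.

```python
# Prediction-error method for x_t = a*x_{t-1} + w_t, y_t = x_t + v_t,
# estimating only `a` with the noise variances q, r held fixed.
import numpy as np

def innovations(y, a, q=0.1, r=0.5, x0=0.0, p0=1.0):
    """One-step-ahead prediction errors e_t(a) from a Kalman filter pass."""
    xf, pf = x0, p0
    e = np.empty(len(y))
    for t in range(len(y)):
        xp, pp = a * xf, a**2 * pf + q        # predict
        e[t] = y[t] - xp                      # innovation
        k = pp / (pp + r)
        xf, pf = xp + k * e[t], (1 - k) * pp  # update
    return e

def gauss_newton(y, a0, iters=20, h=1e-6):
    """Minimize sum(e_t^2) over `a` with J^T J approximating the Hessian."""
    a = a0
    for _ in range(iters):
        e = innovations(y, a)
        J = (innovations(y, a + h) - e) / h   # de/da by finite differences
        a -= (J @ e) / (J @ J)                # Gauss-Newton step
    return a
```

With more parameters, `J` becomes a T-by-p Jacobian and the step solves the least-squares system (JᵀJ)Δθ = Jᵀe; a damped (Levenberg-Marquardt) variant is more robust far from the optimum.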


Thank you. I will look.

Did you manage to make any progress?