Yes, if you have a regression y_t = f(x_t, \theta[i_t]) whose parameters \theta depend on the hidden state i_t of a Markov model, then in my package docs the “controls” are the regression inputs x_t and the “observations” are the regression outputs y_t.
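Concretely, the data would be laid out as two parallel sequences; here is a minimal sketch of that mapping (the variable names and the toy regression are mine, not from the package):

```julia
# Illustrative data layout: the regression inputs x_t become the control
# sequence and the regression outputs y_t become the observation sequence.
T = 100
control_seq = randn(T)                          # inputs x_1, …, x_T
obs_seq = 2 .* control_seq .+ 0.1 .* randn(T)   # outputs y_t = f(x_t, θ) + noise
```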
Yes, but unfortunately for every new controlled model you have to code the estimation procedure yourself. The “learning” section of the tutorial demonstrates this:
- the first part of the `fit!` function is standard transition estimation for HMMs (see the sketch after this list),
- the second part of the `fit!` function estimates \theta[i] for each possible value of i.
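For the first part, here is a minimal self-contained sketch of the transition update, assuming the forward-backward pass has produced posterior transition probabilities \xi_t (one N \times N matrix per time step); the function name and data layout are mine, not the package's:

```julia
# Sketch of the transition M-step: ξ[t][i, j] ≈ P(i_t = i, i_{t+1} = j | y_{1:T}).
# Summing over time gives expected transition counts; row-normalizing
# turns them into a stochastic matrix.
function estimate_transitions(ξ::Vector{<:AbstractMatrix})
    counts = sum(ξ)                       # expected number of i → j transitions
    return counts ./ sum(counts; dims=2)  # each row sums to one
end
```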
The key here is that to recover \theta[i], you use every pair (x_t, y_t), but you weight each pair by the posterior probability of being in state i at time t, which is stored in \gamma_{i,t}. In the case of linear regression there is an explicit formula (which I hope I got right); for logistic regression you have to use numerical optimization with a slightly adjusted (posterior-weighted) loss function. Does that make sense?
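For the linear case, the explicit formula is just weighted least squares with weights \gamma_{i,t}. A hedged sketch (the helper name and the row-wise layout of X are my own conventions):

```julia
using LinearAlgebra

# Weighted least squares for state i: θ[i] = (XᵀWX)⁻¹ XᵀWy with W = Diagonal(w).
# X is a T×d matrix whose rows are the inputs x_t, y the length-T output vector,
# and w the posterior weights γ_{i,1:T} for state i.
function weighted_linear_fit(X::AbstractMatrix, y::AbstractVector, w::AbstractVector)
    W = Diagonal(w)
    return (X' * W * X) \ (X' * W * y)
end
```

For logistic regression, the same weights would multiply each term of the negative log-likelihood before handing it to the optimizer.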
Yes, this is alluded to in the “model” section of the tutorial. By overloading `HMMs.transition_matrix(hmm, control)`, you can impose this additional dependency. But then the transition estimation in the `fit!` function must also be adapted manually.
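As an illustration, a control-dependent transition matrix could look like the following; the struct, its fields, and the logistic link are hypothetical, only the `HMMs.transition_matrix(hmm, control)` overload point comes from the package docs:

```julia
import HiddenMarkovModels as HMMs

# Hypothetical two-state HMM where the probability of staying in each state
# depends on a scalar control through a logistic link. Field names and the
# σ helper are illustrative.
struct MyControlledHMM{T} <: HMMs.AbstractHMM
    a::Vector{T}  # per-state intercepts of the staying probability
    b::Vector{T}  # per-state slopes with respect to the control
end

σ(x) = 1 / (1 + exp(-x))

function HMMs.transition_matrix(hmm::MyControlledHMM, control)
    p1 = σ(hmm.a[1] + hmm.b[1] * control)  # P(stay in state 1 | control)
    p2 = σ(hmm.a[2] + hmm.b[2] * control)  # P(stay in state 2 | control)
    return [p1 (1 - p1); (1 - p2) p2]
end
```

In `fit!`, the coefficients a and b would then have to be re-estimated from the posterior transition probabilities (e.g. by weighted logistic regression), which is the manual adaptation I mentioned.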