Robust resampling/interpolation method

If you’re not estimating the covariance matrices, you can set a very small value for the dynamics covariance; the behavior of the filter will then approach that of simulation-error estimation (less influence of the measurements on the filter estimate).

I have not included estimation of covariance matrices yet, but I do plan on doing so.

Not sure I’m following entirely. Why would you do that in this case?

The behavior of a Kalman filter is such that as the dynamics noise covariance approaches zero, the filter approaches the pure simulator (zero Kalman gain). You can use this as a continuous tradeoff between filtering and simulation to see if you get better estimation of the parameters.
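
A minimal sketch of the idea (with made-up matrices, not the model from this thread): as R1 shrinks, the Kalman gain shrinks and the filter estimate follows the model prediction rather than the measurements.

using LowLevelParticleFilters, LinearAlgebra

# Hypothetical second-order system, for illustration only
A  = [1.0 0.1; 0.0 1.0]
B  = reshape([0.0, 0.1], 2, 1)
C  = [1.0 0.0]
D  = zeros(1, 1)
R1 = 1e-10 * Matrix(1.0I, 2, 2)  # tiny dynamics covariance -> Kalman gain ≈ 0
R2 = Matrix(1.0I, 1, 1)          # measurement covariance
kf = KalmanFilter(A, B, C, D, R1, R2)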


Thanks, I’ll experiment with it.

Do I understand correctly that if I also want to simultaneously estimate covariance matrices within the same framework, I have to use -LowLevelParticleFilters.loglik?

I found this example that uses this:

I’ve found some small issues in the code that might impact the performance.

You need a -1 in the indexing of u here to align with how lsim works and with the fact that you call predict! before correct!:

predict!(kf, u[:, t_idx-1], p, t_idx * Ts)

The same applies in

u_future = [u[:, t_idx+k-1] for k in 1:n_steps]

You don’t use x_noisy, but if you did, the way you apply the noise does not correspond to the model used by the Kalman filter: the noise has to be added as each new x is being generated; it cannot be added to a trajectory afterwards.
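
For instance, a hypothetical rollout where the process noise w ~ N(0, R1) is drawn inside the loop rather than added to a finished trajectory:

using Distributions

function rollout(A, B, u, x0, R1)
    x = copy(x0)
    X = [copy(x)]
    w = MvNormal(zeros(size(A, 1)), Matrix(R1))
    for t in 1:size(u, 2)
        x = A * x + B * u[:, t] + rand(w)  # noise enters as each new x is generated
        push!(X, copy(x))
    end
    reduce(hcat, X)  # state trajectory, one column per time step
end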

Simulating from the statistical model implied by the Kalman filter can be done with

  x,u,y = simulate(kf::KalmanFilter, u, p=parameters(f); dynamics_noise=true, measurement_noise=true)

  Simulate dynamical system forward in time T steps, or for the duration of u. Returns state sequence, inputs and measurements.

    •  u is an input-signal trajectory

  A simulation can be considered a draw from the prior distribution over the evolution of the system implied by the selected noise
  models. Such a simulation is useful in order to evaluate whether or not the noise models are reasonable.
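
For example (assuming a KalmanFilter kf and an input trajectory u are already defined):

x_sim, u_sim, y_sim = simulate(kf, u; dynamics_noise = true, measurement_noise = true)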

There is a strong correlation between the insolation input and the external temperature. This correlation is of course present in the real world as well, but not as perfectly. Correlation between inputs reduces identifiability and may make the problem harder.
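
This is easy to check; e.g., with the two input signals stored as rows of a matrix u (a hypothetical layout):

using Statistics
cor(u[1, :], u[2, :])  # values near ±1 indicate strongly correlated inputs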


Not an issue, but there is a method

KalmanFilter(sys::StateSpace{Discrete}, R1, R2, d0 = MvNormal(Matrix(R1)); kwargs...)

that could be useful.
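
For example (a made-up scalar system, just to show the call):

using ControlSystemsBase, LowLevelParticleFilters

sys = ss([0.9;;], [1.0;;], [1.0;;], 0, 1.0)  # hypothetical discrete-time system, Ts = 1.0
kf  = KalmanFilter(sys, [0.01;;], [1.0;;])   # R1, R2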


I’d suggest simplifying the problem until you find a good solution and, in doing so, identifying what aspect of the problem is the biggest source of difficulty. With zero noise and the true parameters, the cost function should be 0 (within numerical tolerance); if it isn’t, there’s a bug in the cost function computation.

I’d also consider narrowing down the bounds lb, ub and using a global estimator, like OptimizationBBO.BBO_adaptive_de_rand_1_bin_radiuslimited().
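
Something along these lines (a sketch; cost_fn, p0, lb and ub are assumed to come from your existing script):

using Optimization, OptimizationBBO

optf = OptimizationFunction((p, _) -> cost_fn(p))
prob = OptimizationProblem(optf, p0; lb, ub)  # tighter, physically motivated bounds
sol  = solve(prob, OptimizationBBO.BBO_adaptive_de_rand_1_bin_radiuslimited(); maxiters = 10_000)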

Lastly, I’d look at the cost Hessian at the optimum to see if it is well conditioned. If it isn’t, identifiability is poor with the combination of model and data used.


You don’t have to, but it makes sense to do so. The log likelihood is the squared prediction error, weighted by the covariance of the error. This automatically takes into account things like

  • Different reliability of different sensors
  • Scaling to handle potentially different units
  • Penalization of overconfidence in the estimate
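
Concretely (the standard Gaussian innovation likelihood; a general identity, not something specific to this package): with prediction error e_t = y_t - ŷ_{t|t-1} and innovation covariance S_t, each time step contributes

$$\ell_t = -\tfrac{1}{2}\left( e_t^\top S_t^{-1} e_t + \log\det S_t + n_y \log 2\pi \right)$$

The S_t^{-1} weighting accounts for the first two points and penalizes overconfidence (a small S_t makes errors expensive), while the log det S_t term prevents the opposite trick of inflating the covariance to make errors look cheap.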

With zero noise, I get

using ForwardDiff, LinearAlgebra

H = ForwardDiff.hessian(cost_fn, p_true)

julia> eigvals(H)
9-element Vector{Float64}:
   6.41545681495365e-8
   2.5500836483083085e-6
   0.0011214244240807224
   0.0048218081658013895
   0.06717248684588828
   0.28944335461622084
   1.888730646071487
   9.631005269450567
 167.36954454247558

julia> param_names[last.(findmax.(abs, eachcol(eigvecs(H))))][1:2]
2-element Vector{String}:
 "R_ea"
 "C_e"

indicating that the two parameters printed above are practically unidentifiable (despite the fact that I have added extra Gaussian excitation to the input before computing this).
