This could be done for any estimated model, linear or otherwise? It does not have to have the special VARX structure?
I don’t want to prevent you from using these models if those are what you are familiar with, but I cannot see the benefit of using them over something strictly more general.
For convenience, you could construct a “VARX” function that just estimates a more general model under the hood and then converts it to the VARX structure.
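As an aside on what that conversion target looks like: since a VARX model is linear in its parameters, its coefficient matrices can also be estimated directly by ordinary least squares. The sketch below is a minimal illustration of this (not part of any package); `varx_ls`, `na`, and `nb` are hypothetical names.

```julia
using LinearAlgebra

# Hypothetical sketch: estimate the VARX(na, nb) coefficient matrices
# Θ = [A1 … A_na B1 … B_nb] by ordinary least squares, assuming
# y (ny×N) and u (nu×N) hold the measured output and input signals.
function varx_ls(y, u, na, nb)
    p = max(na, nb)
    N = size(y, 2)
    # One regressor column per time step: lagged outputs, then lagged inputs
    Φ = reduce(hcat, [vcat((y[:, t-k] for k in 1:na)..., (u[:, t-k] for k in 1:nb)...) for t in p+1:N])
    Θ = y[:, p+1:N] / Φ # Solve min ‖ΘΦ − Y‖ in the least-squares sense
    return Θ
end
```

On noise-free data generated by a true VARX model, this recovers the coefficient matrices up to numerical precision.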
Here’s a script that simulates a VARX(2,1) model and then estimates a standard linear state-space model using two different approaches; the simulation fit is close to 100%:
```julia
using ControlSystemIdentification, Plots

A1 = 0.3randn(2,2) # True VARX parameters
A2 = 0.3randn(2,2)
B1 = randn(2,2)
N = 300                  # Number of time steps to simulate
Y = [randn(2), randn(2)] # Initial conditions
u = randn(2, N)          # Input signal
for i = 3:N # Simulate the VARX(2,1) system forward in time
    yi = A1*Y[i-1] + A2*Y[i-2] + B1*u[:,i-1]
    push!(Y, yi)
end
y = hcat(Y...) # Stack the outputs into a 2×N matrix

d = iddata(y, u, 1)      # Identification data with sample time 1
model = subspaceid(d, 4) # Estimate using subspace-based identification
model2, _ = newpem(d, 4) # Estimate using the prediction-error method

simplot(d, model, title="Simulation performance subspaceid")
simplot(d, model2, title="Simulation performance PEM")
```
These coefficients could then be used as your features for the machine-learning classifier.
Alternatively, if you want a minimal set of parameters, convert to a polynomial model (transfer function) and extract those coefficients instead:
```julia
julia> numvec(tf(model2))
2×2 Matrix{Vector{Float64}}:
 [1.64374, 0.744194, 0.728624, -0.117102]   [-0.0783438, 0.174749, 0.0175149, -0.0055857]
 [-0.0929754, 0.813401, -1.1772, 0.146595]  [-1.22746, -0.271548, -0.0117105, 0.0069934]

julia> denvec(tf(model2))
2×2 Matrix{Vector{Float64}}:
 [1.0, 0.768187, 0.770503, 0.0302766, -0.00970963]  [1.0, 0.768187, 0.770503, 0.0302766, -0.00970963]
```
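To turn these per-channel coefficient vectors into a single feature vector for the classifier, the `Matrix{Vector{Float64}}` returned by `numvec`/`denvec` can simply be concatenated. A minimal sketch, where `flatten_coeffs` is a hypothetical helper name and the coefficient matrix is hand-written for illustration:

```julia
# Hypothetical helper: flatten a matrix of coefficient vectors, as returned
# by numvec/denvec, into one feature vector (column-major order).
flatten_coeffs(m) = reduce(vcat, vec(m))

# Small hand-written example in the same Matrix{Vector{Float64}} shape
num = reshape([[1.0, 2.0], [0.3, 0.4], [0.5], [0.6, 0.7]], 2, 2)
flatten_coeffs(num) # -> [1.0, 2.0, 0.3, 0.4, 0.5, 0.6, 0.7]
```

Applied to the model above, the full feature vector would be something like `vcat(flatten_coeffs(numvec(tf(model2))), flatten_coeffs(denvec(tf(model2))))`.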