# How to create JuMP expressions without adding them to a model?

I’m trying to build a GARCH(1, 1) model, and this requires recursively constructing the variance:

\sigma_t^2 = \alpha + \beta \epsilon_{t-1}^2 + \gamma \sigma_{t-1}^2

Since the variances are unobserved (only the epsilons are), I have to pull \sigma_1^2 out of thin air to initialize the model. Thus, I can’t do:

```julia
@NLexpression(
    model,
    variances[t=2:T], α + β * series[t - 1]^2 + γ * variances[t - 1]
)
```


…because the variable variances doesn’t exist here. Doing:

```julia
@NLexpressions(
    model,
    begin
        variances[1], initial_variance
        variances[t=2:T], α + β * series[t - 1]^2 + γ * variances[t - 1]
    end
)
```


…doesn’t work because I can’t redefine variances in the second line. The error message suggests using the anonymous construction syntax, but I specifically need to reference variances by name.

This doesn’t work either, because variances is undefined on the RHS:

```julia
@NLexpression(
    model,
    variances[t=2:5], α + β * series[t - 1]^2 + γ * (t - 1 < 1 ? initial_variance : variances[t - 1])
)
```


Currently I’m creating a lot of expressions in a loop:

```julia
variances = Vector{JuMP.NonlinearExpression}(undef, T)

for t ∈ eachindex(series)
    variances[t] = if t ≤ 1
        @NLexpression(model, initial_variance)
    else
        @NLexpression(model, α + β * series[t - 1]^2 + γ * variances[t - 1])
    end
end
```


`variances` is used only once to construct the log-likelihood, but inspecting `model.nlp_data.nlexpr` reveals that the intermediate expressions created in the loop are stored in the model (and there are thousands of them), even though I don’t need them anymore.

Is it possible to create expressions without storing them in the model? Are there better ways of computing such recursive expressions like the one for \sigma_t^2?

> Currently I’m creating a lot of expressions in a loop:

Yes. This is the right way to do it.

> even though I don’t need them anymore.

Is this a bottleneck in your code? Is there a problem with doing it? Why do you want them removed?

> Is it possible to create expressions without storing them in the model?

No.

> Are there better ways of computing such recursive expressions

If you only need a particular t, e.g., T, then you could write out the expression as a sum, rather than in recursive form.
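Concretely, repeatedly substituting \sigma_{t-1}^2 into the recursion (with \sigma_1^2 = initial_variance) gives, if I have the indices right:

\sigma_t^2 = \gamma^{t-1} \sigma_1^2 + \sum_{i=1}^{t-1} \gamma^{t-1-i} (\alpha + \beta \epsilon_i^2)

so each \sigma_t^2 depends only on the data and the three parameters, not on any previously built expression.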

> Why do you want them removed?

I have a struct GARCH that contains a JuMP model, and a fit! method, like in scikit-learn. The fit! method calculates the variances and adds all these expressions to the same model on each call. This leads to thousands upon thousands of expressions that are only used once, but stick around until the model object is garbage collected (I fit many models using the same GARCH object).

So, I don’t want the old expressions to keep piling up like this.

I need \sigma_t^2 for all t \in \{1, 2, \dots, T\}, and the expression for \sigma_3^2 already looks pretty complicated. I’m not sure how to write out \sigma_t^2 as a sum for arbitrary t.

EDIT: I guess the simplest solution is to create a new model on each call to fit!

The expressions have to stick around for us to compute derivatives using them.

> not sure how to write out \sigma_t^2 as a sum for any t …

Something like this? I’m not sure if I got it right, because I don’t know which symbols are variables and which are data, but it should point you in the right direction.

```julia
@NLexpression(
    model,
    expr[t=1:T],
    γ^(t - 1) * initial_variance + sum(γ^(t - 1 - i) * (α + β * series[i]^2) for i in 1:t-1),
)
```
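As a sanity check, here is a plain-Julia (no JuMP) comparison of the recursion against a closed form consistent with it; α, β, γ and the series values below are made-up numbers, purely for illustration:

```julia
# Sanity check: closed-form variance vs. the recursive definition,
# with all symbols as plain numbers (no JuMP involved).
α, β, γ = 0.1, 0.2, 0.7
initial_variance = 0.5
series = [0.3, -0.1, 0.4, 0.2, -0.5]
T = length(series)

# Recursive definition: σ_t² = α + β ε_{t-1}² + γ σ_{t-1}²
recursive = Vector{Float64}(undef, T)
recursive[1] = initial_variance
for t in 2:T
    recursive[t] = α + β * series[t - 1]^2 + γ * recursive[t - 1]
end

# Closed form: σ_t² = γ^(t-1) σ_1² + Σ_{i=1}^{t-1} γ^(t-1-i) (α + β ε_i²)
# (init=0.0 handles the empty sum at t = 1)
closed = [
    γ^(t - 1) * initial_variance +
    sum(γ^(t - 1 - i) * (α + β * series[i]^2) for i in 1:t-1; init=0.0)
    for t in 1:T
]

@assert recursive ≈ closed
```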


Unless this is a learning exercise, I would use ARCHModels.jl, which is a brilliant package.


I’m trying to implement MN-GARCH (Mixed Normal GARCH), and I’m sort of testing the waters with GARCH(1, 1) for educational purposes. I know there’s the R package MSGARCH that can fit MN-GARCH, but it doesn’t seem to support models for means of the mixture components - only for variances. So, I’m going to try and implement this…

For anyone who stumbles upon this in the future - I ended up redefining the model on each call to fit!, like this:

```julia
function fit!(garch::MyGARCH, series::AbstractVector{<:Real}, ϵ::Real=1e-10)
    # Redefine the model
    garch.model = Model(Ipopt.Optimizer)
    @variables(garch.model, begin
        α0 ≥ ϵ
        α ≥ ϵ
        β ≥ ϵ
    end)

    # Compute variances in a loop, as shown in the original post
    # variances = ...

    register(garch.model, :normal_pdf, 3, normal_pdf, autodiff=true)

    @NLobjective(
        garch.model, Max,
        sum(
            log(normal_pdf(ϵ, 0, sqrt(σ_sq)))
            for (ϵ, σ_sq) ∈ zip(series, variances)
        )
    )

    optimize!(garch.model)

    garch
end
```
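For completeness, a minimal sketch of the container this fit! method assumes; MyGARCH is never defined in the thread, so the field name here is my guess — the only requirement is a mutable `model` field that fit! can reassign:

```julia
using JuMP, Ipopt

# Hypothetical container matching the fit! method above.
# `mutable` is what lets fit! replace garch.model wholesale,
# discarding all expressions from previous fits.
mutable struct MyGARCH
    model::Model
end

MyGARCH() = MyGARCH(Model(Ipopt.Optimizer))
```

With this, something like `garch = MyGARCH(); fit!(garch, series)` rebuilds the inner model on every call, so the old expressions go away together with the previous model object instead of piling up.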