Bizarre d(variable) syntax encountered when reading a model written in a .mdl file

I was wondering if anyone has familiarity with something called the MoDeL language. It seems to be some AML-type (algebraic modeling language) optimization language. I found bizarre syntax in which constraints are written with d(variable), meaning “change in” variable. Here is an example:

d(log(F_n[f, s])) = d(log(Y[s])) - d(log(PROG[f, s])) + d(SUBST_F[f, s])

This is an economics-based model; this constraint comes from the Lagrangian solution to minimizing expenditure given a production function. It describes the change in demand for a specific factor f in a specific sector s, which equals the change in output Y minus the change in the productivity of factor f in sector s plus the change in the substitution for factor f in sector s.

The variables are indexed by factor f (e.g. labor, capital, land, etc.) and sector s (e.g. agriculture, wholesale & retail trade, etc.).

I have never heard of the MoDeL language nor of .mdl files, and I am unsure how this d(variable) syntax could possibly work. The constraint from the documentation is written as:

\Delta \log F_{n,f,s}
= \Delta (\log Y_{s}) - \Delta (\log PROG_{f,s}) + \Delta (SUBST\_F_{f,s})

I am not an optimization guru per se, so I may be unfamiliar with the various AMLs that exist and their syntax. But I have never seen something like this before.

I am hoping to write future models of this type in JuMP with Julia, a more open-source, well-documented, and openly available language. However, not being an optimization expert, transferring syntax to JuMP is difficult when I encounter bizarre syntax like the above. Are there equivalencies in JuMP, and do optimization experts have any guidance regarding the .mdl language and documentation on its syntax?

Would appreciate any guidance

Disclaimer: never heard about MoDeL and have zero experience with economics.

The \Delta notation looks like deviation variables. They are usually used to simplify/linearize differential equations (see: Linearization - Wikipedia) and work around zero instead of actual values. Something along these lines:

Let’s say that we know that for some initial values y, a, b, c this equality is satisfied:

f_1(y) = f_2(a) + f_3(b) - f_4(c) (1)

We are now interested in what happens for y*, a*, b*, c*, i.e. we want:

f_1(y*) = f_2(a*) + f_3(b*) - f_4(c*) (2)

With some abuse of mathematics, we can approximate every term in (2) as f_1(y*) = f_1(y) + \Delta f_1(y), f_2(a*) = f_2(a) + \Delta f_2(a), f_3(b*) = f_3(b) + \Delta f_3(b), f_4(c*) = f_4(c) + \Delta f_4(c). When we substitute all of these into (2), we get:

f_1(y)+\Delta f_1(y)=f_2(a)+\Delta f_2(a)+f_3(b)+\Delta f_3(b)-f_4(c)-\Delta f_4(c) (3)

which we can rewrite as:
f_1(y)-f_2(a)-f_3(b)+f_4(c)+\Delta f_1(y)-\Delta f_2(a)-\Delta f_3(b)+\Delta f_4(c)=0

From (1), we see that the first four elements are zero, hence we end up with:
\Delta f_1(y)=\Delta f_2(a)+\Delta f_3(b)-\Delta f_4(c)

Now we can solve our problem in terms of the variables with \Delta, called deviation variables. After we solve the problem, we go back to our known values y, a, b, c to recover (2) using (3). Possibly this link explains it better than I do: http://inside.mines.edu/~jjechura/ProcessDynamics/03_SpecialTechniques.pdf
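To make the deviation idea concrete with the log terms from your constraint: for small changes, the deviation of a log is approximately the relative (percentage) change of the underlying variable. A quick numeric check in Julia (toy numbers, just for illustration):

```julia
Y0 = 100.0   # base-period output
Y1 = 103.0   # new-period output (3% growth)

dY = Y1 - Y0                  # deviation of the level
dlogY = log(Y1) - log(Y0)     # deviation of the log, i.e. "d(log(Y))"

dY / Y0   # 0.03, the growth rate
dlogY     # ≈ 0.0296, log(1.03), close to the growth rate
```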

Going back to your constraint: if this is the case and the d() terms are deviation variables, then d(log(F_n[f, s])) would most likely mean that the solution returned by MoDeL plus optimization is some known base value of log(F_n[f, s]) plus the deviation d(log(F_n[f, s])). Then possibly the way to implement the constraint in JuMP would be:

@variable(model, dlogF[f=1:F, s=1:S])
@variable(model, dlogY[ s=1:S])
@variable(model, dlogPROG[f=1:F, s=1:S])
@variable(model, dSUBST_F[f=1:F, s=1:S])

@NLconstraint(model, [f=1:F, s=1:S], dlogF[f, s] == dlogY[s] - dlogPROG[f, s] + dSUBST_F[f, s])

This should work if the whole model is written in terms of these deviation variables. But I feel like I am making far too many assumptions here, so I will stop :slight_smile:
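If the model is indeed posed purely in deviations, then after solving you would recover the level of each variable by adding the solved deviation back onto its known base value. A sketch with hypothetical numbers (`logF_base` and `dlogF_sol` are made-up placeholders, not part of the model above):

```julia
# Hypothetical known base-year value of log(F_n[f, s]):
logF_base = log(250.0)

# Hypothetical solved deviation, e.g. value(dlogF[f, s]) after optimize!:
dlogF_sol = 0.02

# Level implied by the solution: exp(base log + deviation)
F_new = exp(logF_base + dlogF_sol)   # ≈ 255.05
```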

Yes, this much I’ve found, I think. Thank you for the response, it’s helpful in understanding the approach here. In fact, in reviewing further, I’ve noticed that the syntax comes from EViews, which is proprietary modeling software used by economists. Sigh, they don’t seem to have gotten up to speed with open-source tools.

Nevertheless the documentation lists the following:

The d operator may be used to specify integer differences of series. To specify first differencing, simply include the series name in parentheses after d. For example, d(gdp) specifies the first difference of GDP, or GDP–GDP(–1).

Higher-order and seasonal differencing may be specified using the two optional parameters, n and s. d(x,n) specifies the n-th order difference of the series X:

d(x, n) = (1 - L)^n * x

where L is the lag operator. For example, d(gdp,2) specifies the second order difference of
GDP:

d(gdp,2) = gdp – 2*gdp(–1) + gdp(–2)

d(x,n,s) specifies n-th order ordinary differencing of X with a multiplicative seasonal difference at lag s:

d(x, n, s) = (1 - L)^n * (1 - L^s) * x

I suppose I can then create this function in Julia as such:

function integer_difference(x, n, s, t)
    # (1 - L)^n: apply the first difference recursively, n times
    d(n, t) = n == 0 ? x[t] : d(n - 1, t) - d(n - 1, t - 1)
    # (1 - L^s): multiplicative seasonal difference at lag s
    return d(n, t) - d(n, t - s)
end
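Whatever the implementation, it can be sanity-checked against the expansions quoted from the documentation, e.g. d(gdp,2) = gdp – 2*gdp(–1) + gdp(–2). A self-contained check on a toy series, using a simple vector-based first difference:

```julia
x = [1.0, 4.0, 9.0, 16.0, 25.0]   # toy series

# First difference (1 - L): each pass drops one observation
first_diff(v) = v[2:end] .- v[1:end-1]

# Second-order difference via two applications of (1 - L):
d2 = first_diff(first_diff(x))

# Direct expansion from the documentation: x[t] - 2*x[t-1] + x[t-2]
d2_direct = [x[t] - 2x[t-1] + x[t-2] for t in 3:length(x)]

d2 == d2_direct   # true
```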

The final step is to somehow use this in conjunction with the natural log in a constraint whose variables are the full values. Imagine a variable Y[s]. How could I build a similar constraint? Of course, I would now have to index by time as well: Y[s, t].

This is a follow-up from our conversation on stack overflow: optimization - Correct way to enter constraints using derivative values in JuMP? - Stack Overflow

I still don’t really understand what the derivative is with respect to. Is Y[s] a single number or a function of time?

This might help:

using JuMP
model = Model()
@variable(model, y[t=1:10])
@variable(model, dydt[t=1:10])
@constraint(model, dydt[1] == 0)
@constraint(model, [t=2:10], dydt[t] == y[t] - y[t-1])

Hi odow,

Y is output, meaning production of goods, or more specifically the total value of goods produced. It’s indexed by [s], which is a sector of the economy. It’s a function of time; in this case the time steps are years, with the base year indexed at 0, the year after at 1, then 2, 3, and so on. The way these models are programmed, they solve the equilibrium for a single year, use those values to initialize the next year, solve that year, and so on.

This is referred to as a recursive dynamic structure (at least in econ, so take it with a grain of salt; economists are not optimization experts, of course).

The d operator in this other language (from EViews) is a function that, I suppose, enables you to take the difference from the lagged variable (the previous year’s Y) and use that difference in a constraint.

In this case d(Y[s]) is equal to the difference between the current year’s output and last year’s output: d(Y[s]) = Y[s] - Y[s]{-1}, where {-1} means last year’s value of Y[s].

It’s a function of time, in this case the time-steps are years

So then you need to add an explicit time index, like I showed above.

I guess your model and example would be something like

model = Model()
@variable(model, F_n[f = 1:F, s = 1:S, t = 1:T])
@variable(model, Y[s = 1:S, t = 1:T])
@variable(model, PROG[f = 1:F, s = 1:S, t = 1:T])
@variable(model, SUBST_F[f = 1:F, s = 1:S, t = 1:T])
@NLconstraint(
    model,
    [f = 1:F, s = 1:S, t = 2:T],
    (log(F_n[f, s, t]) - log(F_n[f, s, t-1])) ==
        (log(Y[s, t]) - log(Y[s, t-1])) -
        (log(PROG[f, s, t]) - log(PROG[f, s, t-1])) +
        (SUBST_F[f, s, t] - SUBST_F[f, s, t-1])
)
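One possible simplification, assuming the first-difference reading of d() is right: since log(x[t]) - log(x[t-1]) = log(x[t] / x[t-1]), each d(log(...)) term is just the log of a year-on-year growth ratio, which may read more naturally in an economics model. A quick numeric check of that identity:

```julia
x_t, x_tm1 = 112.5, 108.0   # toy current and lagged values

lhs = log(x_t) - log(x_tm1)   # the d(log(x)) form
rhs = log(x_t / x_tm1)        # the growth-ratio form

isapprox(lhs, rhs)   # true
```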