Advice on using ModelingToolkit to solve Lagrange Problems

How would I use ModelingToolkit.jl to solve Lagrange problems?

I know of the @variables macro and the @derivatives macro, but I’m not sure of the purpose of the @parameters macro.

to solve Lagrange problems

Can you be more specific?

Say in economics I have a utility function -log(x) - y = 0 and I want to find the cost optimum at 5x - y = 0.

You want the solution to a system of nonlinear equations?

https://github.com/JuliaNLSolvers/NLsolve.jl
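
For the system above, a minimal NLsolve sketch might look like this (names are mine, using the standard in-place residual pattern):

using NLsolve

# residuals of the two equations from the question
function f!(F, u)
    x, y = u
    F[1] = -log(x) - y
    F[2] = 5x - y
end

result = nlsolve(f!, [1.0, 1.0])
result.zero  # should be ≈ [0.2653, 1.3267]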

Otherwise, you can use JuMP

julia> using JuMP, Ipopt

julia> model = Model(Ipopt.Optimizer)
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: Ipopt

julia> @variable(model, x >= 0.1, start = 1)
x

julia> @variable(model, y)
y

julia> @NLconstraint(model, -log(x) - y == 0)
(-(log(x)) - y) - 0.0 = 0

julia> @constraint(model, 5x - y == 0)
5 x - y = 0.0

julia> optimize!(model)
This is Ipopt version 3.13.4, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).

Number of nonzeros in equality constraint Jacobian...:        4
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:        1

Total number of variables............................:        2
                     variables with only lower bounds:        1
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        2
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  0.0000000e+00 5.00e+00 1.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  0.0000000e+00 9.58e-01 8.64e-01  -1.0 8.33e-01    -  1.00e+00 1.00e+00h  1
   2  0.0000000e+00 1.02e-01 4.27e-01  -1.7 4.36e-01    -  4.91e-01 1.00e+00h  1
   3  0.0000000e+00 9.86e-04 1.06e-02  -1.7 5.72e-02    -  1.00e+00 1.00e+00h  1
   4  0.0000000e+00 8.98e-08 2.15e-05  -3.8 5.62e-04    -  1.00e+00 1.00e+00h  1
   5  0.0000000e+00 6.66e-16 4.64e-12  -5.7 5.12e-08    -  1.00e+00 1.00e+00h  1

Number of Iterations....: 5

                                   (scaled)                 (unscaled)
Objective...............:   0.0000000000000000e+00    0.0000000000000000e+00
Dual infeasibility......:   0.0000000000000000e+00    0.0000000000000000e+00
Constraint violation....:   6.6613381477509392e-16    6.6613381477509392e-16
Complementarity.........:   0.0000000000000000e+00    0.0000000000000000e+00
Overall NLP error.......:   6.6613381477509392e-16    6.6613381477509392e-16


Number of objective function evaluations             = 6
Number of objective gradient evaluations             = 6
Number of equality constraint evaluations            = 6
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 6
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 5
Total CPU secs in IPOPT (w/o function evaluations)   =      0.004
Total CPU secs in NLP function evaluations           =      0.000

EXIT: Optimal Solution Found.

julia> value(x), value(y)
(0.26534493304844, 1.3267246652421998)

Should work. I can try more examples later on.
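
As a quick sanity check, the reported solution satisfies both constraints (plain Julia, values copied from above):

x, y = 0.26534493304844, 1.3267246652421998
-log(x) - y  # ≈ 0, up to solver tolerance
5x - y       # ≈ 0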

And the symbolic version:

https://mtk.sciml.ai/dev/tutorials/nonlinear/

Using the symbolic version can be a lot faster because it can perform nonlinear tearing, which can greatly reduce the size of the nonlinear systems being solved. Take, for example:

@parameters t
@variables u1(t) u2(t) u3(t) u4(t) u5(t)
eqs = [
       0 ~ u1 - sin(u5),
       0 ~ u2 - cos(u1),
       0 ~ u3 - hypot(u1, u2),
       0 ~ u4 - hypot(u2, u3),
       0 ~ u5 - hypot(u4, u1),
]
@named sys = NonlinearSystem(eqs, [u1, u2, u3, u4, u5], [])

It can tear the system to write all variables except for one in terms of an explicit relationship of the others:

# u1 = f1(u5)
# u2 = f2(u1)
# u3 = f3(u1, u2)
# u4 = f4(u2, u3)
# u5 = f5(u4, u1)

which then yields a scalar nonlinear problem. Given the O(n^3) scaling of solving a dense n-variable system, that’s a pretty nice change :sweat_smile:. Of course that’s just a small example, but it shows how the cost savings can grow (depending on the sparsity pattern) as the problem grows.
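
To make that concrete, here is the torn system written out by hand as a single scalar equation in u5 and solved directly; this is a sketch of what tearing achieves, not the exact code ModelingToolkit generates:

using NonlinearSolve

# substitute u1..u4 forward so only u5 remains implicit
function residual(u5, p)
    u1 = sin(u5)
    u2 = cos(u1)
    u3 = hypot(u1, u2)
    u4 = hypot(u2, u3)
    return u5 - hypot(u4, u1)
end

prob = NonlinearProblem(residual, 1.0)
sol = solve(prob, NewtonRaphson())
sol.u  # ≈ 1.6069926958662222, matching sol[u5] below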

Solving with tearing looks like:

using ModelingToolkit, NonlinearSolve

@parameters t
@variables u1(t) u2(t) u3(t) u4(t) u5(t)
eqs = [
       0 ~ u1 - sin(u5),
       0 ~ u2 - cos(u1),
       0 ~ u3 - hypot(u1, u2),
       0 ~ u4 - hypot(u2, u3),
       0 ~ u5 - hypot(u4, u1),
]
@named sys = NonlinearSystem(eqs, [u1, u2, u3, u4, u5], [])
simplified_sys = structural_simplify(sys)
prob = NonlinearProblem(simplified_sys,[u1=>0.0,u2=>0.0,u3=>0.0,u4=>0.0,u5=>0.0])

sol = solve(prob,NewtonRaphson())
sol[u1] # 0.9993449829534078
sol[u5] # 1.6069926958662222

The numerical solver library here is then NonlinearSolve.jl.

It has both native solvers and wrappers for NLsolve.jl, MINPACK.jl, and Sundials.jl.

What are the constraints?

I’ve got a sort of similar problem, but it’s not at all symbolic. It’s basically: find a vector that maximizes the expected utility subject to some constraints, where the expected utility is approximated by a mean of utilities across the results of an MCMC run. Something like:

maximize:

mean(utility.(outcome(sample[i],decisionvars) for i in 1:nsamps))

subject to something like:

sum(abs.(decisionvars)) = fixedquantity

Can someone point me at how to do this using JuMP, ModelingToolkit, etc.? So far I’m doing it with NLopt, which works, but it’s a bit cumbersome.
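
Not ModelingToolkit, but here is a rough sketch of that pattern using JuMP’s @NL nonlinear interface (the same macros used earlier in the thread). Everything named here (samples, outcome, utility, n, fixedquantity) is a placeholder standing in for the machinery described above, and note that abs makes the constraint nonsmooth, which Ipopt may handle poorly:

using JuMP, Ipopt, Statistics

# placeholder data and functions standing in for the MCMC setup above
samples = [randn(3) for _ in 1:100]   # hypothetical posterior draws
outcome(s, dv) = sum(s .* dv)         # hypothetical outcome model
utility(o) = -exp(-o)                 # hypothetical concave utility
n = 3
fixedquantity = 1.0

# sample-average approximation of the expected utility
expected_utility(dv...) = mean(utility(outcome(s, collect(dv))) for s in samples)

model = Model(Ipopt.Optimizer)
@variable(model, dv[1:n])
register(model, :eu, n, expected_utility; autodiff = true)
@NLobjective(model, Max, eu(dv...))
# abs is nonsmooth; a smooth reformulation (e.g. dv = dplus - dminus
# with dplus, dminus >= 0) may behave better with Ipopt
@NLconstraint(model, sum(abs(dv[i]) for i in 1:n) == fixedquantity)
optimize!(model)
value.(dv)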

I’m not sure if I need it to be entirely symbolic, or even mostly numeric. I think I’m going to be doing a lot of stuff like this, where the mean changes with new samples (i.e. it watches the price of two variables to estimate the relative cost of two inputs, then optimizes based on the current price and the projected price).

Can NLsolve be set up to be self-learning, by replacing x and y with the medians of updating datasets?

So for the equation and constraints, I instead have multiple equations?

@parameters x
@variables x f(x)
eqs = [ 0 ~ 5x-f(x), 0 ~ -log(x)-f(x)]

@named sys = NonlinearSystem(eqs, [x, f(x)],[])

sol = solve(prob,NewtonRaphson())
sol[x] 
sol[f] 

I’m particularly confused by the @named macro.

That’s 2 variables and 2 equations. Just write it like the one above.

I thought I had; what did I do differently?

Literally do the simplest thing.

using ModelingToolkit, NonlinearSolve

@variables x f
eqs = [ 0 ~ 5x-f, 0 ~ -log(x)-f]
@named sys = NonlinearSystem(eqs, [x, f],[])
prob = NonlinearProblem(structural_simplify(sys),[x=>1.0,f=>1.0])
sol = solve(prob,NewtonRaphson())
sol[x] # 0.2653449228120895
sol[f] # 1.3267246140604474

OK, so f is a variable. That makes more sense.

It gives an error message saying it can’t show values because there is no value t, but

sol[x] # 0.2653449228120895
sol[f] # 1.3267246140604474

produce the correct values.

how do I plot the solution?

Can NLsolver be set up to be self learning

how do I plot the solution?

In general, JuMP and/or ModelingToolkit just do the modeling and solution of the optimization model.

If you want to have it update every time new data comes in, or plot the solution, you’ll need to write that part separately.

I thought that. So the price equation is:

price_x*x + price_y*y

I would set it up so that the prices of x and y are arrays, and have:

sum(price_x)*x + sum(price_y)*y

and then somewhere else I would define price_x and price_y as arrays?

What you should probably do instead is write a function that takes the prices as input, formulates and solves the problem, and then returns the x and y solutions:

using JuMP, Ipopt
function solve(price_x::Float64, price_y::Float64)
    model = Model(Ipopt.Optimizer)
    @variable(model, x >= 0.1, start = 1)
    @variable(model, y)
    @NLconstraint(model, -log(x) - y == 0)
    @constraint(model, price_x * x - price_y * y == 0)
    optimize!(model)
    return value(x), value(y)
end

price_x = [5.0]
price_y = [1.0]
solve(sum(price_x), sum(price_y))

Separate your data/manipulation/plotting code from the optimization. They’re independent things.
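
And for the plotting question above: once you have that function, plotting is just ordinary Julia. For example, sweeping price_x and plotting the optimal x (a sketch, assuming Plots.jl):

using Plots  # assuming Plots.jl for the visualization

prices = 1.0:0.5:10.0
xs = [solve(p, 1.0)[1] for p in prices]  # optimal x for each price_x, price_y fixed at 1.0
plot(prices, xs; xlabel = "price_x", ylabel = "optimal x", legend = false)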

Ok, thanks, but that is the next step.