How to modify JuMP model in-place to improve performance?

Hi all, I am having trouble improving the optimization speed of my JuMP model.
My optimization problem is defined as follows.
Below are some constants and helper functions:

using Distributions, LinearAlgebra
using Plots, DataFrames, CSV, ShiftedArrays
using Parameters, QuantEcon
using JuMP, Ipopt
@consts begin
    ℓ = 20
    Ncell = 5
    α_est, β_est = 0.1245, 4e-4
    R0 = 0.1803
    # for FC-DLC cycle
    # Imin, Inom, Imax = 0.04, 0.7, 2.0
    # RUL_min,  RUL_max = 500, 500
    Imin, Inom, Imax = 0.2, 0.7, 1
    RUL_min,  RUL_max = 100, 100
    pct_FT = 0.1
    # calculated by V_nom decreasing pct_FT percent
    RUL_nom = 1788
    FT = R0 + RUL_nom * α_est * β_est # equivalent FT in R
    α_0, β_0 = α_est/ℓ, β_est*ℓ
    Lmin_0 = 0.8036
    Lnom_0 = 2.38
    Lmax_0 = 3.08
    # parameters used in the parabola function for α
    a, b = 0.04225, 0.2129

end

abstract type FCS end
mutable struct FCstack <: FCS
    Rs::Float64
    Lmax::Float64
    done::Bool
end

FCS_aging_paras = @with_kw (K = R0*5.93e-7*30*5.39/(Lmax_0-Lmin_0), ΔR_ss=1.96e-5*2)
fc_params = FCS_aging_paras()

function rand_loads(cur_state, states, prob; step=2)
    mc = MarkovChain(prob, states)
    return simulate(mc, step, init=findfirst(==(cur_state), states))
end

The main optimization loop is defined within the following function:

# main optimization loop
function simulate_cycle(mode, runs)

    for i in 1:runs
        FC1 = FCstack(R0+0.01, Lmax_0, false)
        FC2 = FCstack(R0, Lmax_0, false)

        simulation_time = 0.0
        

        Ns = 2
        states = [4.2, 5.2] # 2.7, 5.2; 4.0,
        transition_matrix = [0.0 1.0
                            1.0 0.0]
        steps = [250, 350, 500]
        t_chances = Categorical([0.3, 0.3, 0.4])

        L0 = states[1]
        L11 = L0 / Ns
        L21 = L0 / Ns
        Lmax_fc1 = Lmax_0
        Lmax_fc2 = Lmax_0
        Lmax_fc3 = Lmax_0

        for _ in 1:1
            
            if mode[1] == "dec"
                d1 = 1 - FC1.done
                d2 = 1 - FC2.done
                # d3 = 1 - FC3.done
                # for random loads
                Ld = rand_loads(L0, states, transition_matrix)[end]
                op_durations = steps[rand(t_chances)]

                ## how to modify opt model inplace?
                model = Model(Ipopt.Optimizer)
                set_silent(model)
                
                lb = Lmin_0 / Ld
                ub = Lmax_0 / Ld
                
                @variable(model, lb<=x<=ub, start=0.5)
                @variable(model, lb<=y<=ub, start=0.5)
                # @variable(model, lb<=z<=ub, start=0.5)
                @NLconstraint(model, x + y ==1)
                # @NLconstraint(model, d1*u[1]*x + d2*u[2]*y==1)

                w1 = (FT - FC2.Rs) / (2FT - FC1.Rs - FC2.Rs)
                w2 = 1 - w1
                # defined aging parameters
                w_dL = 1 # weight term for load varying caused aging

                # 23/04 currently used obj
                @NLexpression(model, f0dL,
                (w1 * (x*Ld-L11)^2 + w2 * (y*Ld-L21)^2) * fc_params.K * 3600) #+ w3 * (z*Ld-L31)^2
                @NLexpression(model, α_x,  
                    ifelse(x * Ld < Lnom_0, (a * (x* Ld-Lnom_0)^2 + α_0), 
                    (b * (x* Ld-Lnom_0)^2 + α_0)))
                @NLexpression(model, α_y, 
                    ifelse(y * Ld < Lnom_0, (a * (y* Ld-Lnom_0)^2 + α_0), 
                    (b * (y* Ld-Lnom_0)^2 + α_0)))

                # @NLobjective(model, Min, obj(x, y)) ? how to include start-stop opt in this obj
                @NLobjective(model, Min, f0dL * w_dL + (w1*α_x + w2*α_y)*β_0*op_durations)
                optimize!(model)

                L12_ratio = value(x)
                L22_ratio = value(y)
                # L32 = value(z*Ld)

            elseif mode[1] == "ave"
                # L12 = Ld / Ns
                # L22 = Ld / Ns
                L12_ratio = 1 / Ns
                L22_ratio = 1 / Ns
                # L32 = Ld / Ns

            end
            # for random loads and random durations
            Ld = rand_loads(L0, states, transition_matrix)[end]
            op_durations = steps[rand(t_chances)]
            L12 = L12_ratio * Ld
            L22 = L22_ratio * Ld

            FC1.Rs += 1e-4
            FC2.Rs += 2e-4

            simulation_time += op_durations
            L11 = L12
            L21 = L22

            L0 = copy(Ld)

        end
        return simulation_time / 3600
    end

end

The optimization is run through:
simulate_cycle(["dec", "det"], 1)
I need to run this optimization repeatedly, as shown in the function above (it is controlled by the two for loops).
I found the optimization turns out to be very slow. From what I have seen, such issues in JuMP usually require modifying the optimization model in place, but I failed to do this.
Thanks for checking this problem!

How slow? Did you profile to see which parts are slow?

You can modify nonlinear problems in-place using @NLparameter: Nonlinear Modeling (Legacy) · JuMP
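
A minimal sketch of that pattern (dummy data, not your exact model): build the model once, wrap the data that changes between solves in @NLparameter, and call set_value before each optimize!. Variable bounds are not parameters, but they can be updated with set_lower_bound / set_upper_bound.

using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
set_silent(model)

@variable(model, 0.2 <= x <= 0.8, start = 0.5)
@variable(model, 0.2 <= y <= 0.8, start = 0.5)

# Data that changes between solves becomes a parameter, not a hard-coded constant
@NLparameter(model, Ld == 4.2)
@NLparameter(model, L11 == 2.1)
@NLparameter(model, L21 == 2.1)

@NLconstraint(model, x + y == 1)
@NLobjective(model, Min, (x * Ld - L11)^2 + (y * Ld - L21)^2)

for new_Ld in (4.2, 5.2)
    set_value(Ld, new_Ld)   # update the data in-place
    optimize!(model)        # re-solve the same model, no rebuild
    @show value(x), value(y)
end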

Thanks @odow. I am not familiar with checking which part of the program is slow; I will try to check this.
I come from Python, and compared with my previous Python implementation (using SciPy's SQP solver), the solution with the JuMP model is slower than in Python.


This package is useful for profiling: GitHub - KristofferC/TimerOutputs.jl: Formatted output of timed sections in Julia

Things to time:

  • How much is spent in rand_loads?
  • How much is spent in optimize!?

You can also get the run time from Ipopt with solve_time(model).
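
For example, a sketch with TimerOutputs (the @timeit lines are meant to go inside simulate_cycle, around the calls you already have):

using TimerOutputs

const to = TimerOutput()

# inside simulate_cycle, wrap the candidate hot spots:
Ld = @timeit to "rand_loads" rand_loads(L0, states, transition_matrix)[end]
@timeit to "optimize!" optimize!(model)

# after a run, print the accumulated timings:
show(to)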

the solution with the JuMP model is slower than in Python

What are the actual times? And how are you running the script? From the command line?

Many thanks for the remarks.
I have checked solve_time(model) for the Ipopt solver:
inside the optimization function, solve_time(model) = 0.00219 s (after the first run of the Julia program, i.e., once compilation is done).
With @time simulate_cycle(["dec", "det"], 1) the overall time is 0.002606 s, so most of the running time seems to be due to the Ipopt solver.
I run my program in the VS Code Julia REPL.
I will keep checking the TimerOutputs package.

the overall time is 0.002606 s

Is that very slow?


It is OK. I will recheck my original program; maybe it is the way I organize the simulation function.
In my problem, for each run (as described in the program above) I need to solve the optimization around 3K times.
And I need to repeat 600 runs.

I need to solve the optimization around 3K times. And I need to repeat 600 runs.

So ~8 seconds per run? 80 minutes all up?

How long does Python take?

Try doing more loops and check the timings. It might be that scipy’s sqp solver is faster than Ipopt for this problem. You could try setting different options: Ipopt: Ipopt Options
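
For example (a sketch; which option values actually help is problem-dependent):

using JuMP, Ipopt

model = Model(Ipopt.Optimizer)

# Loosen tolerances and cap iterations; the defaults are often stricter than needed
set_optimizer_attribute(model, "tol", 1e-6)
set_optimizer_attribute(model, "acceptable_tol", 1e-4)
set_optimizer_attribute(model, "max_iter", 100)
set_optimizer_attribute(model, "print_level", 0)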

Is it also the case that in your big problem you only have two variables, x and y, and they have to sum to 1? If so, you could replace y = 1 - x, and then you have an unconstrained problem in a single variable, so you could use something other than JuMP to solve this, e.g., Optim or NLopt. They have a bunch of algorithms that may be faster.
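
For instance, after substituting y = 1 - x, a sketch with Optim's bounded univariate solver (dummy coefficients, and the α_x / α_y terms are left out):

using Optim

w1, w2, Ld, L11, L21 = 0.8, 0.4, 4.2, 2.1, 2.1   # dummy data
f(x) = w1 * (x * Ld - L11)^2 + w2 * ((1 - x) * Ld - L21)^2

lb, ub = 0.2, 0.8                # bounds on x
res = optimize(f, lb, ub)        # Brent's method on [lb, ub]
x_star = Optim.minimizer(res)
y_star = 1 - x_star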


In my case, Python takes around 2 hours, and the JuMP model takes around 3 hours (extra calculations are needed due to some extra functions).

I agree that SQP is relatively faster here. Ipopt in Python was very slow, which is why I turned to JuMP.

Thanks for the two suggestions.
I will check the Ipopt options; currently I use the default parameters.
On the second remark: yes, in my case x + y = 1, so I will look into unconstrained optimization.

I would go even one step further than odow:

  1. As pointed out by odow, after you substitute out y = 1 - x, the problem is a one-dimensional unconstrained minimization problem
  2. As far as I can see, your objective function is piece-wise quadratic; the different pieces are identified from the definition of α_x and α_y.
  3. And while I have not checked all the coefficients, it also looks like the objective function is convex (that would have to be double-checked).

Bottom-line: you are trying to minimize a (piece-wise) quadratic function of one variable. That can be done in closed-form, without any optimization solver involved, by finding the critical points of the (quadratic) objective function.

You can use JuMP expressions to build the quadratic objective (no @NLexpression needed), then extract the coefficients and compute the critical points. That will likely be much, much faster than going through an optimization solver (whichever it is).

Note that this may also explain why the SQP algorithm in scipy is faster than ipopt: it uses a quadratic approximation… of a quadratic problem. I would not be surprised if it converged in a single iteration most of the time.


Thanks a lot, @mtanneau, for the inspiring remarks.
I think my objective function is convex, but it is piecewise-defined, as you pointed out in point 2.
Sorry, I am not clear on this:

You can use JuMP expressions to build the quadratic objective (no @NLexpression needed), then extract the coefficients and compute the critical points.

Can you help give a small demo or further description? Thank you!

Once you have the coefficients a, b, c of your quadratic function x \rightarrow ax^2 + bx + c, it is pretty straightforward to find the critical points.

The annoying part is computing those coefficients from the code you currently have. Luckily, you can get away with virtually no change, since JuMP actually does that automatically.

For instance:

using JuMP

# This is dummy data
w1 = 0.8
w2 = 0.4
Ld = 0.75
L11 = 1.1
L21 = 0.3

model = Model()
@variable(model, x)

# Build the quadratic expression
@expression(model, y, 1-x)
@expression(model, expr, 
    w1 * (x * Ld - L11)^2 + w2 * (y*Ld - L21)^2
)

# Extract coefficients in the form `ax^2 + bx + c`
a = expr.terms[UnorderedPair(x, x)]
b = expr.aff.terms[x]
c = expr.aff.constant
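
From there, finding the minimizer needs no solver (a continuation of the sketch above, with made-up bounds): for a > 0, the unconstrained minimizer of ax^2 + bx + c is x = -b / (2a); project it onto the bounds, and in the piecewise case repeat this once per piece and keep the best candidate.

lb, ub = 0.2, 0.8                    # dummy bounds on x
x_star = clamp(-b / (2a), lb, ub)    # critical point, projected onto [lb, ub]
f(x) = a * x^2 + b * x + c
f_star = f(x_star)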

Of course, nothing prevents you from skipping JuMP entirely, but then you have to manually compute all the coefficients. Going through JuMP also makes it easier to compare against a nonlinear solver like Ipopt.

For more background: JuMP docs on quadratic expressions.


Thank you very much, @mtanneau.
Your demo helps me a lot. I will check it further and try to apply it to my problem.

I just thought of a small problem.
Your proposal is based on the assumption that the optimization constraint is x + y = 1,
so that I can turn it into a single-variable unconstrained minimization.
If the constraint changed to, for instance, 1 <= x + y <= 1.5, then maybe I would have to
rely on a solver like Ipopt to solve it?

Thanks for checking @mtanneau.


Finally, I took the solution of replacing y = 1 - x and used Optim.jl to optimize my objective function.
I found that Optim provides the Box argument, which suits my problem well (it is used to define the lower and upper bounds of the decision variable x). Now I can solve the problem in 10 minutes rather than the original 3 hours.
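
For reference, a minimal sketch of what a box-constrained Optim call can look like (I am guessing at the exact setup; the objective, bounds, and starting point below are placeholders):

using Optim

g(v) = (v[1] * 4.2 - 2.1)^2        # placeholder objective in x = v[1]
lower, upper = [0.2], [0.8]        # box on the decision variable x
x0 = [0.5]

res = optimize(g, lower, upper, x0, Fminbox(LBFGS()))
x_star = Optim.minimizer(res)[1]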

Thanks all for the suggestions @odow @mtanneau.
