One objective instead of stage objectives with SDDP.jl

Hello,

I’m new to optimization problems. I find SDDP.jl close to what I want to do: there are state variables, control variables, and random variables. But with SDDP.jl, we need to define a stage objective for each node, which I would like to call a "local objective function". In my case, the objective is the sum of a certain function over all stages, so I have a "global objective function". I don’t know if this is supported by SDDP.jl. Is there any example similar to my case?

Besides, if a stage objective and a global objective exist at the same time, how can I specify that the global objective has priority, i.e. that the global objective is more important than the stage objectives?

Hope someone can help

Ru

Hi there, I’m glad to see a user of SDDP.jl :smile:

You can only define "local objective function"s in SDDP.jl, and we minimize (or maximize) the sum of the objectives for each node.
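For example, here is a minimal, purely illustrative sketch (the data, bounds, and the HiGHS solver are just placeholders, not part of your problem):

using SDDP, HiGHS

model = SDDP.LinearPolicyGraph(
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    # A toy reservoir: one state variable and one control
    @variable(sp, 0 <= storage <= 10, SDDP.State, initial_value = 5)
    @variable(sp, release >= 0)
    @constraint(sp, storage.out == storage.in - release)
    # One "local" objective per node; training minimizes their expected sum
    @stageobjective(sp, release)
end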

What’s your global objective function and how is it defined? There are a couple of tricks for cases like this, but it depends on what you want to do.

Thanks for your quick reply!

I am working on controlling 4 reservoirs during the flood season. Water drawn off from these reservoirs gathers at one point. We want to minimize the total flow at that point during a week by controlling the discharge of the 4 reservoirs. So the objective function is the total flow at that point over 7 days. Is there any example similar to my case?

What is the time horizon? Do you want to minimize the total flow during any 7-day period over a longer horizon? Or do you just want to minimize the total flow over the horizon while minimizing cost as well?

I just want to minimize the total flow for a week. I am thinking about dividing this objective into the flow for each day of the week, and then summing up the flow over all days. I don’t know whether this is reasonable.

Is the total time horizon 7 days?

If so, add a new state variable that collects the total flow, then penalize it in the final stage. You can add a penalty coefficient in the objective to change its priority relative to your other objectives.

SDDP.LinearPolicyGraph(stages = 7) do sp, t
    @variable(sp, total_flow, SDDP.State, initial_value = 0)
    @constraint(sp, total_flow.out == total_flow.in + ...)
    if t == 7
        @stageobjective(sp, total_flow.out + ...)
    else
        @stageobjective(sp, ...)
    end
end
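Once the ... are filled in and you pass the required lower_bound and an optimizer (e.g. HiGHS or GLPK) to SDDP.LinearPolicyGraph, training and simulating the policy looks roughly like this; a sketch assuming the policy graph above is assigned to model:

using SDDP, HiGHS

SDDP.train(model; iteration_limit = 100)

# Simulate the trained policy and record the accumulated flow
simulations = SDDP.simulate(model, 100, [:total_flow])
# total_flow is a state variable, so each record has .in and .out fields
week_totals = [sim[end][:total_flow].out for sim in simulations]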

If you want it to be any 7-day window, it’s also doable, but you need to introduce 7 new state variables, so it’s a little trickier. You’d need something like this:

SDDP.LinearPolicyGraph(stages = 365) do sp, t
    # Keep track of the last 7 daily flows
    @variable(sp, total_flow[1:7], SDDP.State, initial_value = 0)
    # Keep track of the worst 7-day flow
    @variable(sp, max_flow, SDDP.State, initial_value = 0)
    # State transitions for total_flow: today's flow enters slot 1,
    # older flows shift along
    @constraint(sp, total_flow[1].out == ...)
    @constraint(sp, [i = 2:7], total_flow[i].out == total_flow[i-1].in)
    # State transitions for max_flow
    @constraint(sp, max_flow.out >= max_flow.in)
    @constraint(sp, max_flow.out >= sum(f.out for f in total_flow))
    # On the last day, penalize max_flow
    if t == 365
        @stageobjective(sp, max_flow.out + ...)
    else
        @stageobjective(sp, ...)
    end
end

Yes, the total time horizon is 7 days, so the first case is more likely to help. How do you define the penalty coefficient to make sure that total_flow has the highest priority? Is this related to the if-else structure?

Do something like:

SDDP.LinearPolicyGraph(stages = 7) do sp, t
    @variable(sp, total_flow, SDDP.State, initial_value = 0)
    @constraint(sp, total_flow.out == total_flow.in + ...)
    if t == 7
        @stageobjective(sp, 100 * total_flow.out + ...)
    else
        @stageobjective(sp, ...)
    end
end

But you’ll need to experiment with values other than 100, because the right coefficient depends on the magnitude of the other terms in the optimal solution.
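One way to experiment is to wrap the model construction in a helper, say build_model(penalty) (hypothetical, not part of SDDP.jl), that uses penalty * total_flow.out in the stage-7 objective, and then compare the trained bounds:

# build_model is a hypothetical helper that builds the policy graph above
# with the given penalty coefficient in the final-stage objective.
for penalty in (1.0, 10.0, 100.0, 1_000.0)
    model = build_model(penalty)
    SDDP.train(model; iteration_limit = 100)
    println("penalty = ", penalty, "  bound = ", SDDP.calculate_bound(model))
end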

Thanks, I’ll try it
