Using anonymous variables and/or symbols to create dynamic variable names?

Howdy, so I asked a question on Stack Overflow: julia - Define several JuMP variables names in loop based on iteration - Stack Overflow, and Oscar replied. I thought a bit more, remembered this board, and thought I'd see if this is possible by rephrasing the question.

I tried two ways. The first is seen below: define anonymous variables and then try to use them in the objective (this did not work). I also tried @variable(m, Symbol("x$k")[1:5], Bin), and this didn't seem to work either. Are any of these ideas salvageable?

My final goal here was to be able to dynamically create variable names, assign properties to them like I usually would, and then use them as one would in JuMP - instead of hard-coding x1, x2, ..., xn, write a loop and do something like what I tried with Symbol("x$k").

Thank you.

using JuMP   # `using` rather than `import`, since Model, @variable, etc. are used unqualified
using Juniper
using Ipopt
using LinearAlgebra
using Base.Threads
using Cbc

function obj(list)
    total = 0.0  # don't name this `sum`, or it shadows the `sum` function used below
    for k = 1:length(list)
        println(list[k])
        total = total + sum(list[k])
    end
    return total
end

optimizer = Juniper.Optimizer
nl_solver= optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)
mip_solver = optimizer_with_attributes(Cbc.Optimizer, "logLevel" => 0, "threads"=>nthreads())
m = Model(optimizer_with_attributes(optimizer, "nl_solver"=>nl_solver, "mip_solver"=>mip_solver))
var_type = ["b","r","b"]
var_list=[]
for k = 1:length(var_type)
    if var_type[k] == "b"
        x_k = @variable(m, [1:5], Bin)
    end
    if var_type[k] == "r"
        x_k = @variable(m, [1:5])
    end
    append!(var_list, [x_k])
end

@objective(m, Min, obj(var_list))
optimize!(m)
for k = 1:length(var_list)
    optimal_solution = value.(var_list[k])
    println(optimal_solution)
end

From my answer on SO:

in no case can you dynamically create a binding like x_i

By this I really mean it. Don’t try interpolating expressions into the macros, don’t use Symbol(...), and don’t use eval (not that you have, but people have tried this in the past).

What you should do instead is create an appropriate data structure to hold the variables. That might be a Vector, a Dict, or some user-defined struct, but the choice is problem-dependent.

In your case, I would do something like this:

types = ["b", "r", "b"]

# Option 1: a plain Vector of anonymous variables
model = Model()
var_list = [
    @variable(model, [1:5], binary = t == "b") for t in types
]

# Option 2: a single registered container, indexed by stage and element
model = Model()
@variable(model, var_list[t = 1:3, 1:5], binary = types[t] == "b")
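Either way you end up with one container you can use directly. As a minimal sketch of the first option, assuming you just want to minimize the sum of all the variables (which is what your obj function seems to compute):

model = Model()
var_list = [@variable(model, [1:5], binary = t == "b") for t in types]
@objective(model, Min, sum(sum(x) for x in var_list))
# after optimize!(model), value.(var_list[k]) recovers the k-th block of the solution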

Edit: "I thought a bit more, and remembered this board" - Great :smile: I prefer Discourse for back-and-forth discussion. Conversations on SO can get shut down by the mods.


Hello,
I’m resurrecting this 2022 thread because I’d like to be sure what you mean by “I really mean it” :wink:

Indeed, I read the JuMP docs section Design patterns for larger models (which you mentioned in another question about variable name collisions, How to merge two `JuMP` models?), but also the very useful String names, symbolic names, and bindings section, which says at the end:

  • You can manually register names in the model via model[:key] = value

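For concreteness, my understanding of that registration pattern is simply (minimal sketch):

model = Model()
x = @variable(model, base_name = "x")  # anonymous variable carrying only a String name
model[:x] = x                          # manual registration in the object dictionary
model[:x] === x                        # true: the variable is now reachable by Symbol
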
So my question is the following: in the context of a multi-stage optimization problem where each stage contains roughly the same variables and constraints, I'd be tempted to dynamically forge variable names, like: variable x for stage 1 should be x1.

Using anonymous variables + Dict containers (with names as keys), I can get such variables defined (and the String name attribute can be freely set as a bonus).

However, I’ve also tested the idea of manual registration using a dynamically created Symbol(variable_name). I understand that it’s not needed since I can already access variables in my Dicts, but this dynamic registration does work, so that’s why I’m asking your advice: is the manual registration of dynamically created Symbols:

  • a bad practice (due to potential name clashes)?
  • or can it seriously break some lower-level JuMP layers?

Here is a code example to illustrate: I want to have variables x1 and x2 created dynamically:

using JuMP
using HiGHS

"add variable `name` to model `m` and to Dict `dat`, for optional `stage`"
function add_stage_variable!(m::Model, dat::Dict{String,Any}, name::String; stage="")
    name_suffix = "$name$stage"
    dat[name] = @variable(m, base_name=name_suffix) # here I can use name or name_suffix
    # Optional last line: finish the job by registering variable as name_suffix????
    m[Symbol(name_suffix)] = dat[name]
end

m = Model(HiGHS.Optimizer)
set_silent(m)
data1 = Dict{String,Any}() # container for 1st stage variables
data2 = Dict{String,Any}() # container for 2nd stage variables
add_stage_variable!(m, data1, "x"; stage=1)
add_stage_variable!(m, data2, "x"; stage=2)

At the end of this code, data1 prints as Dict{String, Any}("x" => x1). In fact, by modifying add_stage_variable!, it could have been "x" => x1, "x1" => x1, "x1" => x or "x" => x depending on taste, since there is no risk of a clash with the content of data2.

And the model m prints as:

A JuMP Model
Feasibility problem with:
Variables: 2
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: HiGHS
Names registered in the model: x1, x2

and that last line disappears if I comment out the last line of the add_stage_variable! function.
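
With the registration in place, the variables can also be fetched back from the model itself, for example (hypothetical usage on top of the code above):

x1 = m[:x1]              # same object as data1["x"]
x2 = m[Symbol("x", 2)]   # the Symbol can also be forged dynamically
@constraint(m, x1 + x2 <= 1)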


Hi @pierre-haessig,

My comment mainly applied to people trying some form of

for i in 1:3
    eval(Meta.parse("x_$i = $i"))
end

Trying to create x_1, x_2, etc. is almost always worse than a single x::Vector.
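
For example, instead of eval-ing the bindings above, a sketch of what I mean by x::Vector:

x = [i for i in 1:3]   # one binding `x` holding all the values
x[2]                   # instead of x_2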

Your case doesn’t really apply, because you’re not creating Julia bindings named x_1, but a key in JuMP’s object dictionary. This is okay.

Nothing you do in the object dictionary can break lower-level JuMP layers. Your add_stage_variable! looks okay.

Selfish plug: if you’re doing multistage stuff, have you considered SDDP.jl? https://sddp.dev


Thanks a lot for the feedback. I've assembled a toy optimization model to explore this idea further, which I'll post in a separate thread.

EDIT: Here it is: Modular implementation of optimization problems with JuMP!

Ah yes, I still have in mind that you used SDDP on a “convex cow growing” model :wink:

In the application I have in mind, there are only 2 stages, with only a few hand-picked scenarios for the 2nd stage, so building one optimization program is fine (no decomposition or other fancy solving trick needed).

In other applications (optimal energy management under uncertainty), SDDP is indeed a good tool to have in mind (if the problem is convex) although I haven’t practiced it yet.


if the problem is convex

We also support local solutions to non-convex problems :smile:, and there are tricks you can do, like training with a convex model and then simulating with a non-convex model: Alternative forward models · SDDP.jl

But anyway, you’re right, don’t use SDDP for two stages and a few scenarios.


Nice, I was not aware of this!

Now, to better get into SDDP.jl, I think I really need to read your 2020 article on policy graphs: https://onlinelibrary.wiley.com/doi/10.1002/net.21932
