Hello,
How does Nonconvex.jl differentiate between variables? To define variables you simply give lower and upper bound values with addvar!(model, lower, upper), so how does it tell the variables apart across multiple constraints?
Thanks
Variables are assigned an integer index in the order they are added, but it is up to you to keep track of this. There is also the DictModel interface, which is a bit more friendly.
julia> using Nonconvex
julia> Nonconvex.@load NLopt
[ Info: Attempting to load the package NonconvexNLopt.
[ Info: Loading succesful.
julia> begin
           f(x) = (@show x; sum(x))
           model = Model(f)
           addvar!(model, 3.0, 3.0)
           addvar!(model, [1.0, 2.0], [1.0, 2.0])
           optimize(model, NLoptAlg(:LD_MMA), [3.0, 1.0, 2.0]; options = NLoptOptions())
       end;
x = [3.0, 1.0, 2.0]
x = [3.0, 1.0, 2.0]
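To make the index bookkeeping concrete, here is a rough sketch (the objective, bounds, and constraints are invented for illustration): every function you register, the objective and each constraint, receives the same flat vector x, and you pick out the entries you need by position.

using Nonconvex
Nonconvex.@load NLopt

f(x) = (x[1] - 1)^2 + (x[2] - 2)^2 + (x[3] - 3)^2
model = Model(f)
addvar!(model, 0.0, 10.0)  # first call  -> x[1]
addvar!(model, 0.0, 10.0)  # second call -> x[2]
addvar!(model, 0.0, 10.0)  # third call  -> x[3]

# Every constraint sees the same vector; each one indexes the entries it cares about.
add_ineq_constraint!(model, x -> x[1] + x[2] - 5.0)  # x[1] + x[2] <= 5
add_ineq_constraint!(model, x -> x[3] - x[1] - 4.0)  # x[3] - x[1] <= 4

res = optimize(model, NLoptAlg(:LD_MMA), [1.0, 1.0, 1.0]; options = NLoptOptions())
res.minimizer  # optimal point, in the same order the variables were added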
Hi!
To add to Oscar’s answer, the relevant section in the documentation is Overview · Nonconvex.jl. There you will see that you can either treat all decision variables as elements in one long vector (the Model API) or give each variable an explicit name (the DictModel API). Also note that each decision variable in Nonconvex can itself be a collection, e.g. a NamedTuple or a Vector, so you can model your decision variables as a vector of NamedTuples or a vector of vectors if that makes your code more readable.
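To illustrate the named-variable style, here is a rough, untested sketch; the DictModel(obj) constructor, the name-first addvar! form, and the x[:a]-style access below are my reading of that Overview page rather than code copied from it.

obj(x) = (x[:a] - 1)^2 + sum(abs2, x[:b])        # variables looked up by name, not position
model = DictModel(obj)                           # assumed to work like Model(f), but with named variables
addvar!(model, :a, 0.0, 10.0)                    # scalar decision variable :a
addvar!(model, :b, [0.0, 0.0], [10.0, 10.0])     # vector-valued decision variable :b
add_ineq_constraint!(model, x -> x[:a] + sum(x[:b]) - 5.0)   # a + b1 + b2 <= 5

The optimize call then looks like the one in the Model example earlier in the thread; see the Overview page for the exact form a DictModel expects for the initial point.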
I noticed in the documentation that it’s possible to convert a JuMP model into a DictModel, but I get an error (“set type not supported”) when I try to convert the following:
### declare JuMP model
model = JuMP.Model()
### define JuMP variables
lbx = -Inf*ones(N+1); ubx = Inf*ones(N+1);
@variable(model, base_name = "x", x[i=1:N+1, j=1:3], lower_bound = lbx[i], upper_bound = ubx[i]);
@variable(model, base_name = "δ", δ[i=1:N], lower_bound = lbx[i], upper_bound = ubx[i]);
@constraint(model, dyn0, x[1,:] == xinit);
@constraint(model, dyn[i=1:N], x[i+1,:] - Ad*x[i,:] + Bd*δ[i] .== zeros(n))
Do you have a reproducible example and the full error message? This sounds like a missing feature in Nonconvex.
See the following:
using JuMP
using Nonconvex
A1 = [1.0 1.0 1.0; 1.0 2.0 0.0; 0.0 0.0 1.0]
B1 = [1,1,1]
N = 5;
xinit = [0.0, 0.0, 0.0];
n = 3;
### declare JuMP model
model = JuMP.Model()
### define JuMP variables
lbx = -Inf*ones(N+1); ubx = Inf*ones(N+1);
@variable(model, base_name = "x", x[i=1:N+1, j=1:3], lower_bound = lbx[i], upper_bound = ubx[i]);
@variable(model, base_name = "δ", δ[i=1:N], lower_bound = lbx[i], upper_bound = ubx[i]);
@constraint(model, dyn0, x[1,:] == xinit);
@constraint(model, dyn[i=1:N], x[i+1,:] - A1*x[i,:] + B1*δ[i] .== zeros(n))
dmodel = DictModel(model)
The output is a “set type not supported” error.
If I remove the variable and constraint names it then seems to be okay, although it does not keep the Greek letter δ as the variable name and instead replaces it with “d”:
@variable(model, x[i=1:N+1,j=1:3], lower_bound = lbx[i], upper_bound = ubx[i]);
@variable(model, δ[i=1:N], lower_bound = lbx[i], upper_bound = ubx[i]);
@constraint(model,x[1,:] == xinit);
[@constraint(model, x[i+1,:] - A1*x[i,:] + B1*δ[i] .== zeros(n)) for i=1:3]
Try .== xinit. JuMP has different constraint types for vector and scalar constraints.
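For reference, a minimal sketch of that change applied to the snippet above; the assumption here is that the unsupported set is the vector-in-MOI.Zeros constraint produced by the vector == form.

# @constraint(model, dyn0, x[1,:] == xinit)   # one vector constraint: x[1,:] - xinit in MOI.Zeros(3)
@constraint(model, dyn0, x[1,:] .== xinit)    # broadcasts to three scalar EqualTo constraints instead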
thanks!
Please open an issue in Nonconvex.jl with the original code example. I will try to fix it.
Please keep the names! See Nonconvex JuMP model to DictModel cannot see constraints - #3 by mohamed82008