# Specifying variable dimension in JuMP

This might not be JuMP specific, but I’m trying to learn how to use JuMP for LPs.

Do I need to specify the size of the variable x? It should be a 2-d vector. If so, how?

My goal is to solve:
$$\max_x \; c^\top x \quad \text{subject to} \quad Ax \le b,\ x \ge 0,$$
where $A$ is the identity matrix, $c = [1, 0]$, and $b = [1, 1]$.

Below is a screenshot of my code and the output of the `print(model)` command. The printed model shows some weird inequalities, which suggests something is wrong:

One of the inequalities is 0 < 1, for example. I was expecting it to print something like the following for the constraints:

``````
x_1 + 0*x_2 <= 1,
0*x_1 + x_2 <= 1,
x_1 >=0, x_2 >= 0 .
``````

OP edited - the following doesn’t apply anymore:
Replacing `@constraint(model, A*x <= b)` with `@constraint(model, A*x .<= b)` should do the trick.

OK, thanks, that runs now, but it doesn’t appear to be quite what I want. See the printed output below.
Maybe I’m not properly specifying the vectors `b` and `c` or the matrix `A`? Or do I need to somehow specify the type of `x`?

I am not sure what your goal is here, so it is hard to evaluate what actually happens in the code versus what you expected.

My goal is to solve
$$\max_x \; c^\top x \quad \text{subject to} \quad Ax \le b,\ x \ge 0,$$
where $A$ is the identity matrix, $c = [1, 0]$, and $b = [1, 1]$.

When it prints the model, the output is very weird (see above). One of the inequalities is 0 < 1, for example. I was expecting it to print something like the following for the constraints:

``````
x_1 + 0*x_2 <= 1,
0*x_1 + x_2 <= 1,
x_1 >=0, x_2 >= 0 .
``````

How about setting your variable in this way: `@variable(model, x[1:2] >= 0)` and reverting the constraint to `@constraint(model, A*x <= b)`?

I think this satisfies your constraints (`[x - 1, x - 1] ∈ MathOptInterface.Nonpositives(2)` is equivalent to your `x_1 + 0*x_2 <= 1, 0*x_1 + x_2 <= 1`).
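
To make the equivalence concrete, here is a minimal sketch (assuming HiGHS as the solver and the data from the original post); the commented-out line shows the explicit set form that the vector comparison corresponds to:

``````
using JuMP, HiGHS

A = [1 0; 0 1]
b = [1, 1]

model = Model(HiGHS.Optimizer)
@variable(model, x[1:2] >= 0)

# The vector comparison is stored as a single constraint "A*x - b in Nonpositives(2)";
# the commented line below is the explicit set form of the same constraint.
@constraint(model, A * x <= b)
# @constraint(model, A * x - b in MOI.Nonpositives(2))

print(model)   # the Ax <= b row prints as ... ∈ MathOptInterface.Nonpositives(2)
``````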

That gives the error:
`@constraint(model, $(Expr(:escape, :A)) * $(Expr(:escape, :x)) <= b)`: Unexpected vector in scalar constraint.

This is the version that works on my end:

``````
using JuMP, HiGHS

A = [1 0; 0 1]
b = [1;1]
c = [1,0]

model = Model(HiGHS.Optimizer)
set_silent(model)

@variable(model, x[1:2] >= 0)

@constraint(model, A*x <= b)

@objective(model, Max, sum(c .* x));

optimize!(model)

print(model)
#Subject to
# [x - 1, x - 1] ∈ MathOptInterface.Nonpositives(2)
# x ≥ 0
# x ≥ 0
``````
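
As a follow-up, a small hedged sketch of how one might inspect the result after `optimize!` (this reuses `model` and `x` from the block above, and the expected values assume that exact data):

``````
@show termination_status(model)   # expect OPTIMAL
@show objective_value(model)      # expect 1.0 for this data
@show value.(x)                   # e.g. [1.0, 0.0]
``````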

Are you sure you didn’t change anything elsewhere? Please try executing the code above exactly as written.

I just copied your code and I’m still getting the same error:
`@constraint(model, $(Expr(:escape, :A)) * $(Expr(:escape, :x)) <= b)`: Unexpected vector in scalar constraint. Did you mean to use the dot comparison operators like .==, .<=, and .>= instead?

This seems to work OK, I guess:

Thanks for all this help, but this seems awfully difficult. Are you an expert in JuMP? If not, maybe I should wait for someone who specializes in this package?

You seem to be working in a Jupyter notebook there - you can run into ordering issues with the cells: please make sure you execute `model = Model(HiGHS.Optimizer)` before running the other operations on the model.

Yeah, I’m in a Jupyter notebook. I executed that line first. I also tried restarting the kernel to make sure no other cell was interfering. I still get the same error with the matrix approach (i.e., without specifying the components of x explicitly).

Here is my naive execution in VS Code - it just works:

Anyway - to answer your question: I cannot call myself an expert on JuMP, so if you need a contribution at that level for your specific problem, I think it is best to leave this to other users who might be more qualified.

However, for reference, here is the solution I proposed (which you seem to have trouble executing without getting an error):

``````
using JuMP, HiGHS

A = [1 0; 0 1]
b = [1;1]
c = [1,0]

model = Model(HiGHS.Optimizer)
set_silent(model)

@variable(model, x[1:2] >= 0)

@constraint(model, A*x <= b)

@objective(model, Max, sum(c .* x));

optimize!(model)

print(model)
#Subject to
# [x - 1, x - 1] ∈ MathOptInterface.Nonpositives(2)
# x ≥ 0
# x ≥ 0
``````

I wish you luck.

P.S. I see that you actually added `[x - 1, x - 1] ∈ MathOptInterface.Nonpositives(2)` to your constraint: please note that I showed that as output satisfying your stated expectations, not as an instruction to build a constraint from it. The actual suggestion was above that paragraph:


Thank you! Maybe I’ll try it in VS Code and see how it goes. I’m probably just doing something stupid somewhere.

I was intrigued by what’s happening, so here is my two cents. First case:

``````
@variable(model, x >= 0) ### Creates a scalar variable x
@constraint(model, A*x .<= b)
``````

JuMP doesn’t know that `x` is supposed to be a vector. If `x` is a scalar, then `A*x = [x 0; 0 x]`. Using `A*x .<= b` gives column-wise comparisons: `[x; 0] .<= [1; 1]` and `[0; x] .<= [1; 1]`, which in turn give the element-wise scalar constraints `x <= 1`, `0 <= 1`, `0 <= 1`, `x <= 1`.
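
A minimal sketch reproducing this first case (assuming HiGHS as the optimizer); `print(model)` then lists the four scalar constraints, including the vacuous `0 <= 1` ones the original poster saw:

``````
using JuMP, HiGHS

A = [1 0; 0 1]
b = [1, 1]

model = Model(HiGHS.Optimizer)
@variable(model, x >= 0)          # scalar variable, not a vector

# A*x is a 2x2 matrix here, so broadcasting against the 2-element b
# produces four scalar constraints: x <= 1, 0 <= 1, 0 <= 1, x <= 1.
@constraint(model, A * x .<= b)

print(model)
``````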

Second case: telling JuMP that `x` is a vector:

``````
@variable(model, x[1:2] >= 0) ### Creates a vector variable x with components x[1] and x[2]
``````

If `x` is a vector of size 2x1, then `A*x` is also a vector of size 2x1. And while `A*x <= b` and `A*x .<= b` give mathematically the same constraints, they are represented differently and can behave differently numerically; see the Constraints page of the JuMP documentation.

Using `@constraint(model, A*x .<= b)`, we get four separate scalar constraints (two from the matrix multiplication and two from the definition of `x`). Using `@constraint(model, A*x <= b)`, we get one vector constraint from the matrix multiplication and two scalar constraints from the definition of `x`. The sketch below shows both forms side by side.
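
Here is a hedged side-by-side sketch of the two formulations (assuming HiGHS); the broadcast form prints two scalar `<=` rows, while the non-broadcast form prints a single vector constraint in `MathOptInterface.Nonpositives(2)` on a recent JuMP version:

``````
using JuMP, HiGHS

A = [1 0; 0 1]
b = [1, 1]

# Broadcast form: two scalar constraints from A*x .<= b.
m1 = Model(HiGHS.Optimizer)
@variable(m1, x[1:2] >= 0)
@constraint(m1, A * x .<= b)
print(m1)

# Vector form: one constraint of the form A*x - b ∈ Nonpositives(2).
m2 = Model(HiGHS.Optimizer)
@variable(m2, x[1:2] >= 0)
@constraint(m2, A * x <= b)
print(m2)
``````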

If you’re getting the escape error, it’s because you’re using an older version of JuMP.

Update your packages, or use the dot comparison syntax (`.<=` here).
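
A small sketch of either remedy (the exact JuMP release that introduced vector `<=` constraints isn’t pinned down here, so treat the version check as a rough guide):

``````
# Option 1: check the installed JuMP version and update it.
import Pkg
Pkg.status("JuMP")     # show which version is currently installed
Pkg.update("JuMP")     # a recent JuMP accepts @constraint(model, A*x <= b) with vectors

# Option 2: keep the installed version and broadcast the comparison instead:
# @constraint(model, A * x .<= b)
``````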