for j = 1:m
for i = 1:n
@constraint(model, [t[i, j], 1, x[i, j]] in MOI.ExponentialCone())
end
end
How can I do this without loops, for example with vectors? Will vectorized code be faster when n or m is large?
As a related question: I remember seeing cases where adding constraints with vectorized code was much faster than plain for loops, but I have also heard that for loops are generally considered more efficient than vectorized code. I'm a bit confused about which approach to follow in general.
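For reference, the double loop above can be collapsed into a single container-indexed `@constraint` call. A minimal sketch (the sizes `n`, `m` and the variable definitions are assumptions for illustration; no solver is attached):

```julia
using JuMP  # JuMP re-exports MathOptInterface as MOI

n, m = 3, 2  # example sizes
model = Model()
@variable(model, t[1:n, 1:m])
@variable(model, x[1:n, 1:m])

# Equivalent to the nested for loops: one container-indexed constraint
@constraint(model, [i = 1:n, j = 1:m],
    [t[i, j], 1, x[i, j]] in MOI.ExponentialCone())
```

Note that this builds exactly the same n * m scalar cone constraints as the loop; the container syntax is a convenience, not a different formulation.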
For background:
The general Julia language approach is discussed here:
In Julia, vectorized functions are not required for performance, and indeed it is often beneficial to write your own loops (see Performance Tips), but they can still be convenient.
I believe this advice is also relevant to constraint containers if you happen to use a filtering condition in the indexing, e.g. [i = 1:n, j = 1:m; a[i, j] > 0]:
Note that with many index dimensions and a large amount of sparsity, variable construction may be unnecessarily slow if the semi-colon syntax is naively applied. When using the semi-colon as a filter, JuMP iterates over all indices and evaluates the conditional for each combination. When this is undesired, the recommended work-around is to work directly with a list of tuples or create a dictionary.
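The recommended work-around can be sketched as follows (the data `a` and the sizes are assumptions for illustration): precompute the sparse index set once as a list of tuples, so JuMP does not evaluate the filter over the full Cartesian product.

```julia
using JuMP

n, m = 100, 100            # example sizes
a = rand(n, m) .- 0.5      # example data with mixed signs
model = Model()

# Instead of @variable(model, x[i = 1:n, j = 1:m; a[i, j] > 0]),
# build the index set of valid (i, j) pairs once ...
S = [(i, j) for i in 1:n, j in 1:m if a[i, j] > 0]

# ... and index the container by that list of tuples directly:
@variable(model, x[S])
```

With heavy sparsity this avoids re-evaluating the conditional for every (i, j) combination each time a container is built.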
Thanks! The form and coefficients of the objective do not change. Some of the constraints in the model do change; for example, the following (suppose t, x, y are the variables):
@constraint(model, A * y .== b)
@constraint(model, t .== C * y .+ d)
@constraint(model, [i = 1:n, j = 1:m], [t[i, j], 1, x[i, j]] in MOI.ExponentialCone())
Part of the coefficient matrix A is changing.
How do I rebuild the model faster?
Also, I use the solver Mosek.Optimizer. Or is there a faster solver for models with exponential cones?
Thanks! Would it be better to do this for all intermediate variables, like t2 here? That is, change equality constraints to expressions where possible? For example, given
@variable(model, t[1:dim1])
@variable(model, t2[1:dim2, 1:dim3])
@objective(model, Min, c' * t2 + dot(t2, t2))
@constraint(model, t2 .== A * t .+ b)
Then it would be better to change
@variable(model, t2[1:dim2, 1:dim3])
@constraint(model, t2 .== A * t .+ b)
to
@expression(model, t2, A * t .+ b)
right?
Also, I sometimes see code that moves dot(t2, t2) into the constraints by introducing another variable t3 >= dot(t2, t2). Wouldn't this simply add extra variables and constraints, and make things slower?
In most cases, yes. More variables means more work for the solver. (It’s possible to construct a case where more variables is faster, but a good rule of thumb is fewer variables = better.)
You’ll also end up with weird cases where abs(t2 - (A * t .+ b)) > 0 because of tolerances (i.e., the equality doesn’t hold exactly, only approximately).
Usually adding single epigraph variables like t3 is fine, and in some cases even encouraged.
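One common way to write such an epigraph constraint is via a rotated second-order cone, which JuMP supports directly. A sketch, with the dimensions chosen only for illustration:

```julia
using JuMP

dim2, dim3 = 2, 3          # example sizes
model = Model()
@variable(model, t2[1:dim2, 1:dim3])

# Epigraph variable t3 >= dot(t2, t2), modeled as a rotated SOC:
# (t, u, x) in RotatedSecondOrderCone means 2 * t * u >= ||x||^2,
# so with u = 0.5 this gives t3 >= ||vec(t2)||^2 = dot(t2, t2).
@variable(model, t3)
@constraint(model, [t3; 0.5; vec(t2)] in RotatedSecondOrderCone())
```

This adds only one scalar variable and one cone constraint, which is the kind of epigraph reformulation conic solvers such as Mosek handle natively.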