sati
Hi all,
I have the code below. How can we write this code in a vectorized way?
using JuMP
using SparseArrays
model = Model()
D = [1,2,3]
L = [1,2,3,4,5]
line2dem = sparse([1, 2, 3], [1, 2, 3], [1, 1, 1], 8, 3)
lineIReal = [1,2,1,1,2]
demIReal = [1,2,1]
@variable(model, demIV[D])
@variable(model, genIReal[D])
@variable(model, genIImag[D])
@objective(model, Min, sum(demIReal[d] * (genIReal[d]^2 + genIImag[d]^2)^2 + lineIReal[l] for d in D, l in L))
@constraint(model, eDemIR[d in D], demIV[d] + demIReal[d] + sum(line2dem[l, d] * lineIReal[l] for l in L) == 0)
Thanks.
odow
First answer: you don’t need to. The scalar version of JuMP will be just as fast (and potentially even a little faster).
Second answer: something like this:
using JuMP
using SparseArrays
lineIReal = [1, 2, 1, 1, 2]
demIReal = [1, 2, 1]
D = length(demIReal)
L = length(lineIReal)
line2dem = sparse([1, 2, 3], [1, 2, 3], [1, 1, 1], L, D)
model = Model()
@variable(model, demIV[1:D])
@variable(model, genIReal[1:D])
@variable(model, genIImag[1:D])
@objective(model, Min, demIReal' * (genIReal.^2 .+ genIImag.^2).^2 + sum(lineIReal))
@constraint(model, demIV .+ demIReal .+ line2dem' * lineIReal .== 0)
I changed the objective though. Did you mean to have the sum over both D and L? The terms were unrelated.
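If the double sum really was intentional, each d-term appears length(L) times and each l-term appears length(D) times, so a literal vectorized equivalent (a sketch, reusing the D and L lengths defined above) would be:
@objective(model, Min, L * (demIReal' * (genIReal.^2 .+ genIImag.^2).^2) + D * sum(lineIReal))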
Also, your final constraint just fixes each demIV[d] to a constant. You could instead do
fix.(demIV, -(demIReal .+ line2dem' * lineIReal))
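As a quick sanity check (a sketch, assuming the vectorized model above has been built):
is_fixed.(demIV)   # all true once the variables are fixed
fix_value.(demIV)  # the constant each demIV[d] is pinned to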