# Best JuMP modelling technique!

Hi all,

I came across two ways of defining the same constraint in JuMP. From the point of view of the time JuMP takes to build the constraint, are the two ways equivalent, or does one method lead to less modelling time than the other?

Method 1:

```julia
@constraint(acopf, [s = 1:nSc, t = 1:nTP, i = 1:nGens; t > 1],
    -ramp_rate <= Pg[s, t-1, i] - Pg[s, t, i] <= ramp_rate)
```

Method 2:

```julia
for c in 1:nCont
    for s in 1:nSc
        for t in 1:nTP
            for i in 1:nGens
                if t > 1
                    @constraint(acopf, -ramp_rate[i] <= Pg[s, t-1, i] - Pg[s, t, i] <= ramp_rate[i])
                end
            end
        end
    end
end
```

In my opinion, both methods should take the same amount of modelling time. However, my friend insists that Method 1 is superior to Method 2 in terms of modelling time. Any insight on this issue would be appreciated.

Furthermore, can someone tell me how to measure the total time JuMP takes to build the model? Please note that I am not talking about the solver time; the solver already reports the time it needs to solve the problem. I am talking about the model formulation time taken by JuMP. Thanks a lot!

I don’t know if one of these methods is theoretically faster, but you can build your models in functions and then use the `@time` macro. E.g., roughly something like:

```julia
function main()
    # create empty models
    model1 = Model()
    model2 = Model()

    function buildFirstModel!(m)
        # add vars and constraints to m
    end
    function buildSecondModel!(m)
        # add vars and constraints to m
    end

    @time buildFirstModel!(model1)
    @time buildSecondModel!(model2)
end
```

Or, to separate compilation time from the actual build time, define the functions at top level and time each one twice; the first `@time` call includes compilation, so use the second measurement:

```julia
function buildFirstModel!(m)
    # add vars and constraints to m
end
function buildSecondModel!(m)
    # add vars and constraints to m
end

function main()
    m = Model()
    @time buildFirstModel!(m)   # includes compilation
    m = Model()
    @time buildFirstModel!(m)   # actual build time
    m = Model()
    @time buildSecondModel!(m)  # includes compilation
    m = Model()
    @time buildSecondModel!(m)  # actual build time
end

main()
```

Also, the first way can theoretically create all the constraints in a single call to the C++ API, while the second can’t. They are either equivalent or the first one is faster; I think it is basically impossible for the second way to be faster.


The first is faster, because it doesn’t have the `1:nCont` loop.

Assuming that is just a typo, the two methods are equivalent.

The first is perhaps easier to read, which is one argument for using it.

@miles.lubin I hope you are doing well. Can you shed some light on this issue? I would highly appreciate it. Thanks a lot!

Hi @musman. The answers in this thread already point in the right direction. You have examples of how to measure the modeling time. What timings do you get in your case?


Hi Miles, thanks for the reply. Since you are a developer of JuMP and have more knowledge about it, I was curious whether you know which method (Method 1 or Method 2) is better by default in terms of modelling time. I have read the JuMP documentation and have not found anything that specifically says setting constraints through Method 1 is superior to Method 2. I will measure the time in the ways proposed in this thread, but I was curious whether you already know the answer and could guide me on this.

@musman, the best way to answer this question is to time it yourself.

Method 1 may be slightly slower because it returns a `SparseAxisArray` of constraints for future reference, whereas Method 2 just adds the constraints and does not return a reference. However, I doubt this difference will be material compared to the time it actually takes to solve the problem, so you should pick the one that is most readable to you.
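To make this difference concrete, here is a sketch reusing the names from the question (`acopf`, `Pg`, `nSc`, `nTP`, `nGens`, and `ramp_rate` are assumed to be defined as above):

```julia
# Sketch only: assumes acopf, Pg, nSc, nTP, nGens, ramp_rate exist as in the question.
ramp_con = @constraint(acopf, [s = 1:nSc, t = 1:nTP, i = 1:nGens; t > 1],
    -ramp_rate <= Pg[s, t-1, i] - Pg[s, t, i] <= ramp_rate)
# ramp_con is a SparseAxisArray of constraint references, so individual
# constraints can be looked up later, e.g. dual(ramp_con[1, 2, 1]) after solving.
# Method 2 adds the same constraints but keeps no references to them.
```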

However, if the `if` check is expensive, and you can lift it to higher loops (e.g., if it only uses `c` and `s`, it can go after `for s in 1:nSc`), then writing out the for-loop can be much faster.
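For example (a sketch; `expensive_check` is a hypothetical function that depends only on `s`), lifting the check out of the inner loops means it is evaluated `nSc` times instead of once per `(s, t, i)` triple:

```julia
for s in 1:nSc
    # Hypothetical expensive check depending only on s:
    # evaluated once per scenario, not once per constraint.
    if expensive_check(s)
        for t in 2:nTP, i in 1:nGens  # t starts at 2, so no `if t > 1` is needed
            @constraint(acopf, -ramp_rate[i] <= Pg[s, t-1, i] - Pg[s, t, i] <= ramp_rate[i])
        end
    end
end
```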


I agree with @odow, you should time this yourself.

On the other hand, I am not so sure Method 1 will be slower (even noticing now, with @odow’s comment, that the two do different things, i.e., Method 1 returns a structure and Method 2 does not). This is because, depending on the underlying solver, adding each constraint individually may perform better or worse than adding all constraints in a single call.