I have a part of my code that is much simpler (far fewer ifs and edge cases) if I add empty constraints first and then, once all the constraints I need are present in the model, change the coefficients of some variables inside both the newly added and the old constraints.
I have searched Discourse, but the topics I found are from 2017 (before the big JuMP rewrite, I think), and none really addresses the question.
Is there a way to do this? I will need to rework a lot of code if I cannot.
julia> model = Model()
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.
julia> @variable(model, x)
x
julia> c = @constraint(model, 0 <= 1)
0 ≤ 1.0
julia> set_normalized_coefficient(c, x, 1)
julia> c
x ≤ 1.0
Ah, it was my incompetence. I checked that section, but I thought I had no expression I could pass to @constraint to express an empty constraint. I had C++ and other languages in my head at the moment, and thought that if I did not pass at least one variable, then the <= operator would not be the one overloaded for VariableRef, so 0.0 <= 0.0 would be the same as passing a Bool as the second parameter. I forgot how macros work, and that no type wrapping is necessary for an empty constraint. My fault.
Thanks for the hint, but as far as I can see, there is no batch method for set_normalized_coefficient. Even if I create the constraints with the coefficients I already know at creation time, I still need to be able to set coefficients in existing constraints. What should I do then?
P.S.: I owe an enormous reply in that other post where I asked for help about resetting the LP basis. However, every time I change something I get deeper into the problem, so I will probably reply only when I am 100% sure of what the problem was, just so it can help others with similar problems.
set_normalized_coefficient is the way to go. My previous reply was just a side-comment.
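In the absence of a batch method, you can simply loop over the coefficients you want to set. A minimal sketch (the variable names and coefficient values here are made up for illustration):

```julia
using JuMP

model = Model()
@variable(model, x[1:3])

# Start from an empty constraint, as in the transcript above.
c = @constraint(model, 0 <= 1)

# "Batch" update: one set_normalized_coefficient call per
# (variable, coefficient) pair you want in the constraint.
for (xi, coef) in zip(x, [2.0, 3.0, 4.0])
    set_normalized_coefficient(c, xi, coef)
end
```

Each call rewrites one coefficient of the normalized constraint; the loop is just syntactic convenience, not a faster code path.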
There’s no real remedy for the performance hit, because it is related to the solvers’ internal problem representation.
In the LP case, think of the constraint matrix as being stored in CSC (compressed sparse column) form: adding columns is fast, but setting individual coefficients (even in a batch) is bound to be slower, because you have to shift things around in memory.
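You can see the same effect in Julia's own sparse matrices, which use the CSC layout. A small illustrative sketch (not a solver's actual internals):

```julia
using SparseArrays

# A 2x3 CSC matrix with two stored nonzeros.
A = sparse([1, 2], [1, 2], [1.0, 2.0], 2, 3)

# Inserting a coefficient at a structural zero forces the stored
# rowval/nzval arrays to be grown and shifted to keep columns
# contiguous -- the memory movement mentioned above.
A[1, 2] = 3.0

# Overwriting an existing nonzero, by contrast, is cheap.
A[2, 2] = 5.0
```

Appending a whole new column only touches the end of the storage, which is why column generation is the friendly operation for this layout.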
If you’re using JuMP in automatic mode, this may be less of an issue, depending on how MOI’s cache is implemented.
For instance, if you keep a dictionary of (row, column) => coeff pairs, then modifying a batch of individual coefficients should be as cheap as adding whole rows or columns.
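A toy sketch of such a cache (purely illustrative; this is not MOI's actual implementation):

```julia
# Hypothetical coefficient cache: (row, col) => coeff.
cache = Dict{Tuple{Int,Int},Float64}()

# Adding a whole new row is one hash insertion per nonzero...
for (col, coeff) in [(1, 2.0), (3, -1.0)]
    cache[(10, col)] = coeff
end

# ...and modifying an existing coefficient is a single hash update,
# with no shifting of memory as in a CSC matrix.
cache[(10, 3)] = 5.0
```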
I use direct mode because the last time I tried the MOI cache, deleting variables from it took more time than solving the problem itself.
Yes, we removed the special syntax for column generation, absent proper benchmarks showing a benefit. It’s not worth the extra code complexity for a marginal gain.
Moreover, no one has asked for it in 18 months, so that’s something.
It is kind of awkward that you are apologizing when you went as far as running local tests just to be sure you gave me the correct information. I am grateful for your help (not only in this case but in general); no need to overexert yourself.
If I understand correctly, an optimized implementation was not much faster, then? Well, I will finish implementing this “kind of column generation” tomorrow (or, more probably, on Monday), and if the changes take less time than solving the changed LP I will already be happy enough. The paper I am reproducing explained this process very little (especially the technical details of how the “growing” of the model was done through the solver API), and there is a limit to how much more I am willing to speculate.