JuMP : Vector version of set_lower_bound and set_upper_bound?

Hi,

I am working on a project that involves solving many (~10K) linear optimization problems.

JuMP is very expressive and convenient, and it allows working with different backend solvers: many thanks to the JuMP devs for their great work!

In my specific case, where the different LP problems are both small and numerically close to each other, I need to update a given model in place: constructing a new one for each LP would both cost too much time and prevent me from using warm starts (which are essential to the overall performance).

It took me quite some time to discover that a vector version of
set_normalized_rhs exists and that I didn’t need ParametricOptInterface to accelerate my updates.

It also took me some time (I am relatively new to mathematical optimization) to realize that using lower/upper bounds on variables, rather than additional constraints, could significantly accelerate my computations.

Hence, I have been a bit disappointed by the absence of a vector method (a form that can be applied to a vector of variables) for set_lower_bound and set_upper_bound. I use broadcasting instead, but the time penalty is significant.


So my question is: why is a vector form available for constraint RHS modifications but not for modifying variable bounds?

Do you have a reproducible example where the time penalty is significant?

Hi Oscar,

It will take me some time to extract an MWE from my code, so I want to be sure I understand why you are requesting it.

If you just want to convince yourself that the bounds-update time is significant, then the following profile could be sufficient:

where the update-related functions are:

function update_model_ucd_group_debit_constraints!(model::JuMP.Model, ucd_predicate, ub_func)
    debits_groups = model[:debits_groups]

    for u ∈ eachindex(debits_groups)

        !ucd_predicate(u) && continue

        udg = debits_groups[u]
        for g ∈ eachindex(udg)
            @inbounds all_postes_update_broadcast!(udg[g], t -> ub_func(g, u, t), JuMP.set_upper_bound, model)
        end
    end
    return nothing
end

where debits_groups[u][g] is a vector of JuMP variable refs containing 20 elements, debits_groups[u][g][t].

It calls the following function:

function all_postes_update_broadcast!(destination, poste_function, update_function, model::JuMP.Model)
    new_values::Vector{Float64} = model[:t_temp]
    map!(poste_function, new_values, eachindex(new_values))
    update_function.(destination, new_values)
    nothing
end
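The map!-then-broadcast pattern above can be illustrated without JuMP. In this Base-only sketch, a plain vector plays the role of the model's upper bounds, and the hypothetical set_value! stands in for the scalar JuMP.set_upper_bound:

```julia
# A Base-only sketch of the map!-then-broadcast pattern used above.
# `bounds` plays the role of the model's upper bounds, and `set_value!`
# is a hypothetical stand-in for the scalar JuMP.set_upper_bound.
bounds = zeros(5)
set_value!(dest, i, v) = (dest[i] = v)   # scalar "setter", like set_upper_bound

poste_function(t) = 2.0 * t              # computes the new bound for index t
new_values = similar(bounds)             # reusable scratch buffer (like model[:t_temp])
map!(poste_function, new_values, eachindex(new_values))  # fill buffer in place, no allocation
set_value!.(Ref(bounds), eachindex(bounds), new_values)  # broadcast the scalar setter
```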

Ideally I would like to replace:

update_function.(dest, src)

with

update_function(dest, src)

which is possible (and efficient) when update_function is JuMP.set_normalized_rhs, but does not work when update_function is JuMP.set_upper_bound.

I hope my question is clearer.

What percentage of the total runtime is spent updating the bounds? It seems like most of the time is spent in Highs_change..., which could be reasonable: HiGHS might not have the most efficient update method in C.

Have you addressed the usual Julia performance tips like type stability? I don’t think that model[:debits_groups] will infer. You could try something like:

function update_model_ucd_group_debit_constraints!(
    model::JuMP.Model,
    ucd_predicate, 
    ub_func,
)
    _fn_barrier_update_model_ucd_group_debit_constraints!(
        model,
        model[:debits_groups],
        ucd_predicate,
        ub_func,
    )
    return
end

function _fn_barrier_update_model_ucd_group_debit_constraints!(
    model::JuMP.Model,
    debits_groups,
    ucd_predicate::F,
    ub_func::G,
) where {F,G}
    for u in eachindex(debits_groups)
        if !ucd_predicate(u)
            continue
        end
        udg = debits_groups[u]
        for g in eachindex(udg)
            for (t, xi) in enumerate(udg[g])
                JuMP.set_upper_bound(xi, ub_func(g, u, t))
            end
        end
    end
    return
end
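For readers unfamiliar with the function-barrier idiom being applied here: model[:debits_groups] returns an Any-typed value, so the loop body in the outer function cannot be inferred; passing the value to a second function lets Julia specialize on its concrete type. A minimal Base-only illustration (the names are mine, not from the thread):

```julia
# Minimal function-barrier illustration. The Dict{Symbol,Any} mimics an
# untyped lookup like model[:debits_groups]: the outer function cannot
# infer the value's type, but the inner ("barrier") function is compiled
# for the concrete Vector{Float64} it receives.
untyped_store = Dict{Symbol,Any}(:coeffs => [1.0, 2.0, 3.0])

function update_all!(store)
    # Untyped lookup; the barrier call below specializes on Vector{Float64}.
    return _barrier_update!(store[:coeffs])
end

function _barrier_update!(coeffs::Vector{Float64})
    # Fully inferred hot loop.
    for i in eachindex(coeffs)
        coeffs[i] *= 10.0
    end
    return coeffs
end
```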

Thanks for the tip! The function barrier does not improve the time, which is, as you pointed out, largely (but not totally) spent in the HiGHS function:

Nevertheless, I have noticed that HiGHS.jl defines a function to modify multiple bounds at once (Highs_changeColsBoundsByRange). One could conceivably make this function reachable from JuMP.

Snippet from libhighs.jl:

function Highs_changeColBounds(highs, col, lower, upper)
    return ccall(
        (:Highs_changeColBounds, libhighs),
        HighsInt,
        (Ptr{Cvoid}, HighsInt, Cdouble, Cdouble),
        highs, col, lower, upper,
    )
end

"""
    Highs_changeColsBoundsByRange(highs, from_col, to_col, lower, upper)

Change the variable bounds of multiple adjacent columns.

### Parameters

* `highs`: A pointer to the Highs instance.
* `from_col`: The index of the first column whose bound changes.
* `to_col`: The index of the last column whose bound changes.
* `lower`: An array of length [to\\_col - from\\_col + 1] with the new lower bounds.
* `upper`: An array of length [to\\_col - from\\_col + 1] with the new upper bounds.

### Returns

A `kHighsStatus` constant indicating whether the call succeeded.
"""
function Highs_changeColsBoundsByRange(highs, from_col, to_col, lower, upper)
    return ccall(
        (:Highs_changeColsBoundsByRange, libhighs),
        HighsInt,
        (Ptr{Cvoid}, HighsInt, HighsInt, Ptr{Cdouble}, Ptr{Cdouble}),
        highs, from_col, to_col, lower, upper,
    )
end

We could conceivably hook this up, but I’m still not convinced it’s worth it.

What is the total runtime (in seconds) and how much time is spent in these update functions?


In my case it is about 30% of the total time (12 s).

I totally understand that my concern may be super specific and not general enough to justify extending the JuMP API.

I have the feeling that JuMP is a great tool for solving optimization problems due to its great expressiveness and generality. In some specific cases it may be useful, at the end of a study, to move to a direct use of the solver in order to reach maximal speed.

In this specific case, it looks feasible for me to call the range version directly. I can report back on the eventual acceleration.

Thank you for your help.

In case you are still interested:

I did try calling the Set version of HiGHS, and it reduces the execution time from 12 s to 8 s.

function set_lower_upper_bound(
    vars::AbstractArray{VariableRef},
    lbs::AbstractArray{<:Real},
    ubs::AbstractArray{<:Real},
)
    if length(vars) != length(lbs) || length(vars) != length(ubs)
        error("The variable and bound arrays must have the same length.")
    end
    isempty(vars) && return nothing

    model = owner_model(first(vars))
    highs_optimizer = JuMP.backend(model)

    # HiGHS column indices are 0-based; JuMP variable indices are 1-based.
    num_set_entries = Cint(length(vars))
    highs_indices = Vector{HiGHS.HighsInt}(undef, num_set_entries)
    for (i, var) in enumerate(vars)
        highs_indices[i] = index(var).value - 1
    end

    lower_bounds_cdouble = convert(Vector{Cdouble}, lbs)
    upper_bounds_cdouble = convert(Vector{Cdouble}, ubs)

    status = HiGHS.Highs_changeColsBoundsBySet(
        highs_optimizer.inner,
        num_set_entries,
        highs_indices,
        lower_bounds_cdouble,
        upper_bounds_cdouble,
    )
    if status != HiGHS.kHighsStatusOk
        error("Error in the call to Highs_changeColsBoundsBySet: $(status)")
    end
    return nothing
end
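One detail worth flagging in the snippet above: HiGHS column indices are 0-based, while index(var).value is 1-based, hence the subtraction of 1 in the loop. Note also that this only works if each MOI variable index maps directly onto the corresponding HiGHS column (e.g. a direct_model with no variable deletions). A Base-only check of the conversion, with plain Int32 values standing in for index(var).value:

```julia
# JuMP variable indices (1-based) → HiGHS column indices (0-based).
# The integers below stand in for `index(var).value`; HighsInt is Int32.
jump_indices = Int32[1, 4, 7]
highs_cols = jump_indices .- Int32(1)   # what the loop above computes, one entry per variable
```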

Most of the remaining “non-solving time” is now spent updating the coefficients of the objective function. I noticed that JuMP does not use the Set version of the HiGHS.jl interface but relies on a broadcasted call to the scalar version:

function _set_objective_coefficient(
    model::GenericModel{T},
    variables::AbstractVector{<:GenericVariableRef{T}},
    coeffs::AbstractVector{<:T},
    ::Type{F},
) where {T,F}
    MOI.modify(
        backend(model),
        MOI.ObjectiveFunction{moi_function_type(F)}(),
        MOI.ScalarCoefficientChange.(index.(variables), coeffs),
    )
    return
end
    
function Highs_changeColsCostBySet(highs, num_set_entries, set, cost)
    ccall((:Highs_changeColsCostBySet, libhighs), HighsInt, (Ptr{Cvoid}, HighsInt, Ptr{HighsInt}, Ptr{Cdouble}), highs, num_set_entries, set, cost)
end

The total time is 12 seconds? If so, we’re unlikely to fix this to save 4 seconds. I guess you have very simple problems AND you want to change many parameters between solves.

For something like SDDP.jl, we also have many trivial problems, but we change only a few parameters between solves.

Doing the vectorized calls also has an overhead, because we’d be allocating vectors for the variables and values in JuMP, MOI, and the C calls to HiGHS.

It’s not a no, but we need to think about this a bit more. I’ve added it to the agenda for tomorrow’s monthly developer call.


Precisely!

Actually, SDDP has been considered for this specific application (and there is ongoing work in this direction), but it is not trivial to adapt (non-convexities…).

Indeed. I just took a quick look at the underlying architecture for my quick-and-dirty experiment, and it is not a simple additional dispatch like I guessed it could be :slight_smile:

I did not investigate how the vector method for set_normalized_rhs is implemented, but I can tell that when I switched to it (from the broadcasted call to the scalar method) it saved a lot of time.

Thank you very much for considering this. Again, I perfectly understand that this is a niche problem that may not be worth the work. I would also be very interested in native JuMP handling of unitful variables :slight_smile:.


The monthly developer call agrees that this is something we will investigate in more detail and endeavor to support.

I’ve opened a feature request: Vectorized set_lower_bound and set_upper_bound · Issue #4030 · jump-dev/JuMP.jl · GitHub


This is great news! Thank you very much for considering this topic.

What about other APIs, e.g. fix, value, etc.?
I’m a bit confused.
And chances are the decisions can also be arranged in a matrix, or a 3-axis array…


Your questions are precisely why we haven’t supported this to date.