Performance drop from HiGHS 1.14.0 to HiGHS 1.15.0

Hi,

I am trying to reduce allocations when using JuMP and HiGHS to solve numerous small LPs, and I just noticed that these allocations have increased dramatically (roughly 2x) after updating HiGHS from 1.14.0 to 1.15.0:

1.14

[ Info: Launch solve_optimisation with 16 threads
Progress: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| Time: 0:00:11
  treating pdt:  1
 11.812555 seconds (32.35 M allocations: 1.304 GiB, 2.22% gc time)

1.15: 12.8 seconds and 2.8 GiB of allocations

I have not tried to build an MWE yet, but I wanted to know if this had already been noticed.

Hi Laurent, it hasn’t been noticed.

It may be because of the CTRL+C interrupt feature I added: Comparing v1.14.0...v1.15.0 · jump-dev/HiGHS.jl · GitHub

I’ll take a look.


Thank you very much !

I tried @profview_allocs and the profile graphs looked quite different (more MathOptInterface stuff with HiGHS 1.15).
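For reference, roughly how that allocation profile can be collected outside of VS Code (a sketch using the stdlib Profile.Allocs API; @profview_allocs is the VS Code wrapper around it, and PProf is only one possible viewer):

using Profile
# record every allocation while running the MWE once
Profile.Allocs.clear()
Profile.Allocs.@profile sample_rate=1 mwe()
allocs = Profile.Allocs.fetch()
# inspect allocs.allocs directly, or visualize e.g. with PProf.jl:
# using PProf; PProf.Allocs.pprof(allocs)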

👍 An MWE would be useful though, if you have the time.

I can reproduce a small increase in allocations, but nothing like this. Did any other package change versions? Looks like I’m going to need an MWE.


OK, thanks. I will try to find some time to build an MWE.

Hi,

here is an attempt at a (not so small) MWE, not too far from my real code:

MWE
using JuMP
using HiGHS

dr(t) = 1.0
rd(g) = 1.0-0.1g 
de(t) = sin(t)^2+1.0
iv(r) = 0.1
lb(r,t) = 0.0
ub(r,t) = cos(t)*r+50.0
dm(g) = 10.0
a(t) = sin(t)*1.

@inbounds function set_objective_function!(model, dg,dde)
    nT,nG = size(dg)
    T = 1:nT
    G = 1:nG
    @objective(model, Max, sum(sum(rd(g) * dg[t, g] for g ∈ G) * dr(t) * dde(t) for t ∈ T))
end

@inbounds function build_model()

    model = Model(HiGHS.Optimizer)
    set_silent(model)
    
    R = 1:2
    nT = 100
    T = 1:nT
    G = 1:3
    model[:ivs] = @variable(model, ivs[r ∈ R])
    for r ∈ R
        fix(ivs[r], iv(r))
    end
    
    model[:rv] = @variable(model, lb(r,t) <= rv[t ∈ T, r ∈ R] <= ub(r,t))
    model[:dg] = @variable(model, lower_bound = 0.0, upper_bound = dm(g), dg[t ∈ T, g ∈ G])
    model[:ds] = @variable(model, ds[t ∈ T])

    @constraint(model, [t ∈ 1:nT], ds[t] == sum(dg[t, g] for g ∈ G))

    df(r) = r==1 ? -1.0 : 1.0  
    ad(t, r) = dr(t) * (df(r) * ds[t]  + a(t))
    model[:ics] = @constraint(model, [r=R], rv[1, r] == ivs[r] + ad(1, r))
    model[:cvs] = @constraint(model, [t=2:nT, r=R], rv[t, r] == rv[t-1, r] + ad(t, r))

    set_objective_function!(model, dg,de)
    return model
end

function update_constraint_rhs!(model,i)
    cvs = model[:cvs]
    nT,nR = size(cvs)
    df(r) = r==1 ? -1.0 : 1.0  
    ad(t, r) = dr(t) * (i/100)*a(t)
    for t ∈ 2:nT
        for r ∈ 1:nR
            @inbounds set_normalized_rhs(cvs[t, r],ad(t,r))
        end
    end

end

@inbounds function update_model!(model, i,iv::Float64)
    iv1 = model[:ivs][1]
    fix(iv1, iv, force=true) 
    dde(t) = de(t+i)
    update_constraint_rhs!(model,i)
    set_objective_function!(model, model[:dg], dde)
    return nothing
end

function mwe()
    model = build_model()
    n_computations  = 1000

    results = Vector{Tuple{Float64,Float64}}(undef,n_computations)
    @time for i ∈ 1:n_computations
        update_model!(model, i, 10.0+20.0*i/n_computations)
        optimize!(model)
        assert_is_solved_and_feasible(model)
        @inbounds c1 = model[:ics][1]
        has_duals(model)
        dual = get_dual_value(c1)
        result = (objective_value(model),dual)
        results[i] = result  # store into the pre-allocated slot (push! would append past the undef entries)
    end

    @show results[end-10:end]
    return 
end

HiGHS 1.15.0

julia> Viviane.mwe()
  2.333662 seconds (2.82 M allocations: 146.630 MiB, 0.94% gc time)
results[end - 10:end] = [(45.14943422829527, -1.7999816441116678), (44.61201275651888, -1.7999816441116678), (45.00089601640482, -1.7999816441116678), (45.02738605754217, -1.7999816441116678), (44.30705007850558, -1.7999816441116678), (44.67384855889918, -1.7999816441116678), (44.90472480670809, -1.7999816441116678), (44.930990587329006, -1.7999816441116678), (45.27071418963132, -1.7999816441116678), (45.743776687659434, -1.7999816441116678), (45.77088802257778, -1.7999816441116678)]
julia> Pkg.add(Pkg.PackageSpec(;name="HiGHS",version="1.14.0"))
   Resolving package versions...
      Compat entries added for 
    Updating `~/Projects/Julia/Viviane.jl/Project.toml`
⌃ [87dc4568] ↓ HiGHS v1.15.0 ⇒ v1.14.0
    Updating `~/Projects/Julia/Viviane.jl/Manifest.toml`
⌃ [87dc4568] ↓ HiGHS v1.15.0 ⇒ v1.14.0

HiGHS 1.14.0

julia> Viviane.mwe()
  1.242171 seconds (2.46 M allocations: 133.529 MiB, 1.59% gc time)
results[end - 10:end] = [(45.14943422829527, -1.7999816441116678), (44.61201275651888, -1.7999816441116678), (45.00089601640482, -1.7999816441116678), (45.02738605754217, -1.7999816441116678), (44.30705007850558, -1.7999816441116678), (44.67384855889918, -1.7999816441116678), (44.90472480670809, -1.7999816441116678), (44.930990587329006, -1.7999816441116678), (45.27071418963132, -1.7999816441116678), (45.743776687659434, -1.7999816441116678), (45.77088802257778, -1.7999816441116678)]

What is versioninfo()? I can’t reproduce this. There is an allocation difference, but not a runtime difference.

Note that I had to change your code slightly

        # dual = get_dual_value(c1)
        dual = JuMP.dual(c1)
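
For completeness, the has_duals check that the MWE currently discards could guard this query, e.g. (a sketch):

        dual = has_duals(model) ? JuMP.dual(c1) : NaN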

With v1.15

julia> include("/tmp/hgh/main.jl")
  1.712105 seconds (2.72 M allocations: 151.881 MiB, 1.48% gc time, 4.17% compilation time)
results[end - 10:end] = [(45.14943422829527, -1.7999816441116678), (44.61201275651888, -1.7999816441116678), (45.00089601640482, -1.7999816441116678), (45.02738605754217, -1.7999816441116678), (44.30705007850558, -1.7999816441116678), (44.67384855889918, -1.7999816441116678), (44.90472480670809, -1.7999816441116678), (44.930990587329006, -1.7999816441116678), (45.27071418963132, -1.7999816441116678), (45.743776687659434, -1.7999816441116678), (45.77088802257778, -1.7999816441116678)]
  1.618252 seconds (2.69 M allocations: 150.069 MiB, 0.60% gc time)
results[end - 10:end] = [(45.14943422829527, -1.7999816441116678), (44.61201275651888, -1.7999816441116678), (45.00089601640482, -1.7999816441116678), (45.02738605754217, -1.7999816441116678), (44.30705007850558, -1.7999816441116678), (44.67384855889918, -1.7999816441116678), (44.90472480670809, -1.7999816441116678), (44.930990587329006, -1.7999816441116678), (45.27071418963132, -1.7999816441116678), (45.743776687659434, -1.7999816441116678), (45.77088802257778, -1.7999816441116678)]

(hgh) pkg> st
Status `/private/tmp/hgh/Project.toml`
  [87dc4568] HiGHS v1.15.0
  [4076af6c] JuMP v1.25.0

with v1.14

julia> include("/tmp/hgh/main.jl")
  1.678923 seconds (2.37 M allocations: 138.781 MiB, 1.06% gc time, 4.30% compilation time)
results[end - 10:end] = [(45.14943422829527, -1.7999816441116678), (44.61201275651888, -1.7999816441116678), (45.00089601640482, -1.7999816441116678), (45.02738605754217, -1.7999816441116678), (44.30705007850558, -1.7999816441116678), (44.67384855889918, -1.7999816441116678), (44.90472480670809, -1.7999816441116678), (44.930990587329006, -1.7999816441116678), (45.27071418963132, -1.7999816441116678), (45.743776687659434, -1.7999816441116678), (45.77088802257778, -1.7999816441116678)]
  1.614805 seconds (2.34 M allocations: 136.969 MiB, 0.86% gc time)
results[end - 10:end] = [(45.14943422829527, -1.7999816441116678), (44.61201275651888, -1.7999816441116678), (45.00089601640482, -1.7999816441116678), (45.02738605754217, -1.7999816441116678), (44.30705007850558, -1.7999816441116678), (44.67384855889918, -1.7999816441116678), (44.90472480670809, -1.7999816441116678), (44.930990587329006, -1.7999816441116678), (45.27071418963132, -1.7999816441116678), (45.743776687659434, -1.7999816441116678), (45.77088802257778, -1.7999816441116678)]

(hgh2) pkg> st
Status `/private/tmp/hgh2/Project.toml`
⌃ [87dc4568] HiGHS v1.14.0
  [4076af6c] JuMP v1.25.0
Info Packages marked with ⌃ have new versions available and may be upgradable.

The computing-time difference was not reproducible on my machine with this oversimplified example (it is with my real code): sorry about that.

Nonetheless, the difference in allocations is reproducible, and I suspect this kind of increase in memory allocation is responsible for the time increase in my real code, which is multithreaded (making GC a bottleneck).
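
One way I could check that hypothesis on the real code is something like this (a sketch; run_real_workload() is a placeholder for my threaded solve loop, and both measurements are per process rather than per thread):

stats = @timed run_real_workload()   # @timed reports time, bytes, and gctime for the wrapped call
@show stats.gctime / stats.time      # fraction of wall-clock time spent in GC
# On Julia >= 1.8, GC.enable_logging(true) prints every collection while the workload runs.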

I will try to find some time to build a better MWE.

P.S. Thanks for the JuMP.dual tip.

GC doesn’t appear to be a bottleneck with your example.