JuMP slows down with multithreading

Hi,
I have a question about JuMP multithreading performance that I'd like your help with. I wanted to solve a specific optimization problem 4 times, and I did it in 2 ways:

  1. Solve it 4 times in a row on a single thread.
  2. Solve it on 4 threads simultaneously, once on each thread (my laptop has 4 physical cores, and I set the number of threads to 4).

Here is the code I used in the two cases:

  • Case 1:
using JuMP, Ipopt

function model_solve()
    model = Model(Ipopt.Optimizer)
    set_silent(model)
    @variable(model, x)
    @variable(model, y)
    @NLobjective(model, Min, (1 - x)^2 + 100 * (y - x^2)^2)

    for h in 1:4
        time_run = 0
        for i in 1:50
            time_start = time_ns()
            optimize!(model)
            time_run += time_ns() - time_start
        end
        println(time_run)  # total time for 50 solves, in nanoseconds
    end
end
  • Case 2:
function model_solve311(i)
    model = Model(Ipopt.Optimizer)
    set_silent(model)
    @variable(model, x)
    @variable(model, y)
    @NLobjective(model, Min, (1 - x)^2 + 100 * (y - x^2)^2)
    return model
end

function model_solve312(model)
    time_run = 0
    for j in 1:50
        time_start = time_ns()
        optimize!(model)
        time_run += time_ns() - time_start
    end
    println(time_run)  # total time for 50 solves, in nanoseconds
end

function model_solve31(i)
    model = model_solve311(i)
    model_solve312(model)
end

function tmap(f)
    # spawn one build-and-solve task per problem and wait for all of them
    tasks = [Threads.@spawn f(i) for i in 1:4]
    out = [fetch(t) for t in tasks]
end
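
For reference, I call the threaded version like this (this is the run I time below):

tmap(model_solve31)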

It seems like the time it takes to solve the problem once in case 2 is significantly higher than in case 1: time_run in case 1 (model_solve()) is about 0.65 seconds, while in case 2 (tmap(model_solve31)) it is around 2 seconds. I would really appreciate it if you could help me explain this behavior and suggest some ways to reduce the multithreaded solving time.

Regards

I set the number of threads to 4

How did you do that? Did you check Threads.nthreads() in a Julia session to confirm there were 4?
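
If you run from a terminal, one way to be sure is to pass the thread count on the command line (or set the JULIA_NUM_THREADS environment variable) and then check it in that same session:

$ julia --threads 4

julia> Threads.nthreads()
4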

You could also try the distributed version instead of using threads:

using Distributed
Distributed.addprocs(4)

@everywhere begin
    # If using a Pkg environment, you need to activate it on every worker
    using Pkg
    Pkg.activate("/tmp/ipopt/Project.toml")
end

@everywhere begin
    using JuMP, Ipopt
end

@everywhere begin
    function model_solve311(i)
        model = Model(Ipopt.Optimizer)
        set_silent(model)
        @variable(model, x)
        @variable(model, y)
        @NLobjective(model, Min, (1 - x)^2 + 100 * (y - x^2)^2)
        return model
    end

    function model_solve312(model)
        time_run = 0.0
        for _ in 1:50
            time_start = time_ns()
            optimize!(model)
            time_run += time_ns() - time_start
        end
        return time_run
    end

    function model_solve31(i) 
        model = model_solve311(i)
        return model_solve312(model)
    end
end

@time pmap(model_solve31, 1:4)
@time map(model_solve31, 1:4)

Yields:

julia> @time pmap(model_solve31, 1:4)  # pmap = parallel map 
  0.458568 seconds (246 allocations: 9.344 KiB)
4-element Vector{Float64}:
 4.51427241e8
 4.56679585e8
 4.57432338e8
 4.52535787e8

julia> @time map(model_solve31, 1:4)
  1.689642 seconds (100.07 k allocations: 6.407 MiB)
4-element Vector{Float64}:
 4.28438724e8
 4.2933088e8
 4.13085296e8
 4.17927879e8

Hi, thanks for answering.
Since I use VS Code, I set the number of threads in the settings of the Julia extension (see the snippet at the end of this post). I did check with Threads.nthreads() afterwards, and it was 4.
I had also tried the distributed version before and it was fine; it was only the multithreaded version that was slow. I guess it has something to do with some component of the Ipopt optimizer itself.
Thanks again for the detailed distributed version. It is much leaner than mine, so I found it very useful.
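
For reference, the relevant entry in my VS Code settings.json looks roughly like this (the exact key may differ between versions of the Julia extension):

{
    "julia.NumThreads": 4
}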