How to vectorize a JuMP solver

Hi everyone,

Is it possible to vectorize a JuMP solver? I want to solve a maximization problem for different draws of a random variable (x in the example below, drawn from a log-normal distribution). So for a vector of draws, I'd like the solver to take each element of x as an argument and return a vector of solutions.

This is easy to do in a for loop in the toy example below, but I’d like it to run faster.

using QuantEcon
using JuMP
using MadNLP

γ     = 2;
α     = 7.08;
σₓ    = 0.027;
D     = 0.385;
nodes = 10;      # number of quadrature nodes

d = D*0.8;
f = D*0.2;

x, w = qnwlogn(nodes, 0, σₓ);   # log-normal quadrature nodes and weights

inflation = ones(nodes);

@time begin
    for i in 1:nodes
        model = Model(MadNLP.Optimizer, add_bridges = false)
        set_silent(model)
        @variable(model, Π >= 0)
        @variable(model, C >= 0)
        @NLobjective(model, Max, (C^(1-γ))/(1-γ) - (α/2)*(Π-1)^2)
        @NLconstraint(model, c1, x[i] >= C + d*(1/Π) + f)
        optimize!(model)
        inflation[i] = value(Π)
    end
end

Is there a way to vectorize this (or do you have any other suggestions for speeding it up)? I know you can declare vector-valued variables in JuMP, but nothing I've tried works when I write the objective or the constraint over such a vector.
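For concreteness, this is roughly the single-model formulation I have in mind (an untested sketch using the same parameters and packages as above; since each draw gets its own variables and constraint, the stacked problem should separate into the per-draw problems):

model = Model(MadNLP.Optimizer, add_bridges = false)
set_silent(model)
@variable(model, Π[1:nodes] >= 0)
@variable(model, C[1:nodes] >= 0)
@NLobjective(model, Max, sum((C[i]^(1-γ))/(1-γ) - (α/2)*(Π[i]-1)^2 for i in 1:nodes))
@NLconstraint(model, c1[i in 1:nodes], x[i] >= C[i] + d*(1/Π[i]) + f)
optimize!(model)
inflation = value.(Π)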

Thanks for your time!

How long does creating the model take compared to solving it? Maybe you could just modify the c1 constraint and re-solve the model?

There are a few options. Hopefully this helps. See Parallelism · JuMP

(base) oscar@Oscars-MBP /tmp % julia --project=qecon -t 8 --banner=no
julia> Threads.nthreads()
8

julia> using QuantEcon

julia> using JuMP

julia> using MadNLP

julia> function solve_model_loop(x::Vector{Float64}, γ, α, d, f)
           inflation = ones(length(x))
           for i in 1:length(x)
               model = Model(MadNLP.Optimizer, add_bridges = false)
               set_silent(model)
               @variable(model, Π >= 0)                   
               @variable(model, C >=0)                 
               @NLobjective(model, Max, (C^(1-γ))/(1-γ) - (α/2)*(Π-1)^2)  
               @NLconstraint(model, x[i] >= C + d*(1/Π) + f) 
               optimize!(model)       
               inflation[i] = value(Π)
           end
           return inflation
       end
solve_model_loop (generic function with 1 method)

julia> function solve_model_inner(xi::Float64, γ, α, d, f)
           model = Model(MadNLP.Optimizer; add_bridges = false)
           set_silent(model)
           @variable(model, Π >= 0)                   
           @variable(model, C >= 0)                 
           @NLobjective(model, Max, C^(1 - γ) / (1 - γ) - (α / 2) * (Π - 1)^2)  
           @NLconstraint(model, xi >= C + d / Π + f) 
           optimize!(model)       
           return value(Π)
       end
solve_model_inner (generic function with 1 method)

julia> function solve_model_function(x::Vector{Float64}, γ, α, d, f)
           return [solve_model_inner(xi, γ, α, d, f) for xi in x]
       end
solve_model_function (generic function with 1 method)

julia> function solve_model_threaded(x::Vector{Float64}, γ, α, d, f)
           solutions = Tuple{Int,Float64}[]
           my_lock = Threads.ReentrantLock()
           Threads.@threads for i in 1:length(x)
               Π = solve_model_inner(x[i], γ, α, d, f)
               Threads.lock(my_lock) do
                   push!(solutions, (i, Π))
               end
           end
           sort!(solutions)
           return last.(solutions)
       end
solve_model_threaded (generic function with 1 method)

julia> function solve_model_modify(x::Vector{Float64}, γ, α, d, f)
           model = Model(MadNLP.Optimizer, add_bridges = false)
           set_silent(model)
           @variable(model, Π >= 0)                   
           @variable(model, C >= 0)
           @variable(model, xi)
           @NLobjective(model, Max, (C^(1-γ))/(1-γ) - (α/2)*(Π-1)^2)
           @NLconstraint(model, xi >= C + d*(1/Π) + f)
           inflation = ones(length(x))
           for i in 1:length(x)
               fix(xi, x[i])
               optimize!(model)       
               inflation[i] = value(Π)
           end
           return inflation
       end
solve_model_modify (generic function with 1 method)

julia> γ = 2
2

julia> α = 7.08
7.08

julia> σₓ = 0.027
0.027

julia> D = 0.385
0.385

julia> nodes = 10
10

julia> d = 0.8 * D
0.30800000000000005

julia> f = 0.2 * D
0.07700000000000001

julia> x, w = qnwlogn(nodes, 0, σₓ)
([0.45000741322027116, 0.5551296129341249, 0.6648349988806642, 0.7859307546713068, 0.9234089580627003, 1.0829437935040036, 1.2723767253747713, 1.5041326068627998, 1.8013811129882316, 2.222185614330128], [4.310652630718301e-6, 0.0007580709343122181, 0.019111580500770275, 0.13548370298026702, 0.3446423349320187, 0.3446423349320187, 0.13548370298026702, 0.019111580500770275, 0.0007580709343122181, 4.310652630718301e-6])

julia> @time inflation = solve_model_loop(x, γ, α, d, f);
 36.693092 seconds (83.47 M allocations: 5.046 GiB, 3.47% gc time, 6.62% compilation time)

julia> @time inflation = solve_model_function(x, γ, α, d, f);
  0.068729 seconds (120.91 k allocations: 9.140 MiB, 93.75% compilation time)

julia> @time inflation = solve_model_threaded(x, γ, α, d, f);
  0.153149 seconds (336.55 k allocations: 22.977 MiB, 77.81% compilation time)

julia> @time inflation = solve_model_modify(x, γ, α, d, f);
  0.471889 seconds (413.50 k allocations: 26.285 MiB, 9.16% gc time, 99.07% compilation time)

julia> @time inflation = solve_model_loop(x, γ, α, d, f);
  0.004798 seconds (17.94 k allocations: 3.386 MiB)

julia> @time inflation = solve_model_function(x, γ, α, d, f);
  0.004223 seconds (17.84 k allocations: 3.084 MiB)

julia> @time inflation = solve_model_threaded(x, γ, α, d, f);
  0.001316 seconds (17.86 k allocations: 3.353 MiB)

julia> @time inflation = solve_model_modify(x, γ, α, d, f);
  0.004126 seconds (14.32 k allocations: 3.496 MiB)

The first solve with MadNLP is slow because of the way it's designed, so you could give Ipopt a try. Otherwise, the threaded version works quite well.
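Switching solvers just means swapping the optimizer constructor. For example, a copy of solve_model_inner that uses Ipopt (assuming the Ipopt.jl package is installed; I haven't timed this here) would be:

using Ipopt

function solve_model_inner_ipopt(xi::Float64, γ, α, d, f)
    model = Model(Ipopt.Optimizer; add_bridges = false)
    set_silent(model)
    @variable(model, Π >= 0)
    @variable(model, C >= 0)
    @NLobjective(model, Max, C^(1 - γ) / (1 - γ) - (α / 2) * (Π - 1)^2)
    @NLconstraint(model, xi >= C + d / Π + f)
    optimize!(model)
    return value(Π)
end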

You’ll need to experiment with your full problem though, because the best solution can be problem dependent.
