I am trying to solve a low-dimensional, (box-)constrained optimisation problem over a large grid of parameters. Both the objective and the constraints depend on the parameters. I can assume that the optimisation problem is relatively well behaved.
Something like \min_{x \in \mathcal{B}(p)} f(x; p) where \mathcal{B}(p) = [a_1, b_1] \times [a_2, b_2(p)] and b_2 is a continuous function of the parameters p \in \mathbb{R}^2.
Furthermore, I know that f(x; p) is smooth in the parameters, so that the minimiser x^{\star}(p) = \arg\min_{x \in \mathcal{B}(p)} f(x; p) changes smoothly in the parameters.
Below is an MWE with an implementation that closely resembles my current one.
using StaticArrays
using Base.Threads
using Optimization, OptimizationOptimJL
using BenchmarkTools
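# Mutable 2-element static vector, so each grid cell's minimiser can be updated in place.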
mutable struct Vec{T} <: FieldVector{2, T}
    x::T
    y::T
end
StaticArrays.similar_type(::Type{<:Vec}, ::Type{T}, s::Size{(2,)}) where T = Vec{T}
Base.similar(::Type{<:Vec}, ::Type{T}) where T = Vec(zero(T), zero(T))
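# Parameterised Rosenbrock objective: p[1] shifts the optimum, p[2] weights the valley term.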
function rosenbrock(v, p)
    (p[1] - v.x)^2 + p[2] * (v.y - v.x^2)^2
end
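# Solve the box-constrained problem at every grid point in parallel; maximiser
# holds the initial guesses on entry and is overwritten with the minimisers.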
function optimovergrid!(maximiser::Matrix{V}, pgrid) where {T, V <: Vec{T}}
    fn = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
    lb = Vec{T}(-1, -1)
    @inbounds @threads for idx in CartesianIndices(pgrid)
        pᵢ = pgrid[idx]
        ub = Vec{T}(Inf, 0.5 + pᵢ[1] * pᵢ[2])
        v₀ = maximiser[idx]
        prob = OptimizationProblem(fn, v₀, pᵢ; lb = lb, ub = ub)
        sol = solve(prob, GradientDescent())
        maximiser[idx] .= sol.u
    end
    return maximiser
end
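# A 101×101 grid of parameters on [0, 1]²; all initial guesses start at the origin.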
n = 101
pgrid = [ (p₁, p₂) for p₁ in range(0, 1; length = n), p₂ in range(0, 1; length = n) ]
maximiser = [ similar(Vec{Float64}, Float64) for idx in CartesianIndices(pgrid) ]
optimovergrid!(maximiser, pgrid)
@benchmark optimovergrid!($maximiser, $pgrid)
Unfortunately, this allocates a lot and is a bit slow. I am not exploiting the “smoothness” of f in the parameters, and I am not reusing any of the information in prob. Does anybody see a better way of doing this, or some way I could optimise the code?
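For concreteness, this is the kind of warm-started sweep I have in mind (a rough, untested sketch: the serial sweep over columns and the clamp that keeps the initial guess feasible are my own guesses, not something I have benchmarked):
function optimovergrid_warm!(maximiser::Matrix{V}, pgrid) where {T, V <: Vec{T}}
    fn = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
    lb = Vec{T}(-1, -1)
    # Threads over rows; within a row, sweep the columns serially so that each
    # solve can start from the minimiser of the neighbouring parameter point.
    @threads for i in axes(pgrid, 1)
        for j in axes(pgrid, 2)
            pᵢ = pgrid[i, j]
            ub = Vec{T}(Inf, 0.5 + pᵢ[1] * pᵢ[2])
            # Warm start from the previous column's solution, clamped into the
            # current box so the initial guess stays feasible.
            v₀ = j == 1 ? maximiser[i, j] : clamp.(maximiser[i, j - 1], lb, ub)
            prob = OptimizationProblem(fn, v₀, pᵢ; lb = lb, ub = ub)
            sol = solve(prob, GradientDescent())
            maximiser[i, j] .= sol.u
        end
    end
    return maximiser
end
I have also seen that SciMLBase has remake, so perhaps something like prob = remake(prob; u0 = v₀, p = pᵢ, ub = ub) could avoid rebuilding the problem on every iteration, but I don't know whether that actually saves much here.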