I have a user-defined nonlinear constraint and its analytical gradient. A major part of the work in both functions is computing the same matrix. In the problem below it is computed twice per evaluation point; in other problems, up to 2p times for a p x p matrix. Could this matrix be re-used instead?
Integer optimisation is required, so the model is currently solved with Pavito, using the workaround from the pull request "Disable Hessians when not provided by JuMP" (jump-dev/Pavito.jl #12); see also the earlier thread "Integer optimisation with user's multivariate functions: Possible in Julia?".
using JuMP, Ipopt, Cbc, LinearAlgebra, Pavito # with the workaround
# mats: p x p x n array of symmetric matrices
function minimise(mats, target::Float64)
    n = size(mats, 3)
    function logDet(r...)
        M = sum([r[i] * mats[:, :, i] for i = 1:n]) # same M
        v = max(det(M), 1e-8) # safeguard against singularity
        return -log(v) - target
    end
    function logDet_grad(g, r...)
        M = sum([r[i] * mats[:, :, i] for i = 1:n]) # same M
        if det(M) < 1e-8 # perturb a singular matrix
            M[diagind(M)] .+= 1e-8
        end
        D = inv(M)
        for j = 1:n
            g[j] = -tr(D * mats[:, :, j])
        end
    end
    solver = PavitoSolver(
        mip_solver_drives=false,
        rel_gap=1e-5,
        mip_solver=CbcSolver(),
        cont_solver=IpoptSolver(),
    )
    m = Model(solver = solver)
    @variable(m, x[1:n] >= 0, Int, start = 1)
    JuMP.register(m, :logDet, n, logDet, logDet_grad)
    JuMP.addNLconstraint(m, :($(Expr(:call, :logDet, x...)) <= 0.0))
    @objective(m, Min, sum(x))
    solve(m)
    return getvalue(x)
end
# Example data with 10 matrices:
arr = [ [0.00104058 -0.000268537; -0.000268537 6.93e-5]
[0.00390625 -0.000976563; -0.000976563 0.000244141]
[0.00826446 -0.00200351; -0.00200351 0.000485698]
[0.0357925 -0.00773893; -0.00773893 0.00167328]
[0.0625 -0.0125; -0.0125 0.0025]
[0.16 -0.0256; -0.0256 0.004096]
[0.25 -0.0333333; -0.0333333 0.00444444]
[0.390625 -0.0390625; -0.0390625 0.00390625]
[0.591716 -0.0364133; -0.0364133 0.00224082]
[0.756144 -0.0263006; -0.0263006 0.000914805] ]
target = log(17.57215962979037) # tight, active at solution
# target = log(17.6) # lax
sol = minimise(reshape(arr', 2, 2, 10), target)
println(sol)
# Solution: obj = 11; x = [0, 0, 0, 0, 0, 6, 0, 0, 0, 5] with either target
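(For reference, the reshape above recovers each symmetric 2 x 2 block from two consecutive rows of arr; a quick check, purely illustrative:)
mats = reshape(arr', 2, 2, 10)
@assert all(mats[:, :, i] == arr[2i-1:2i, :]' for i = 1:10)  # blocks recovered intact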
This question is similar to the thread "JuMP User-Defined Function, Value and Gradient At Once". However, in that post the function's scalar value is re-used, rather than an intermediate matrix.
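For concreteness, here is a minimal sketch of the kind of re-use I have in mind: a closure with a cache keyed on the last evaluation point, so whichever of logDet / logDet_grad is called second at the same r skips the summation. The helper names (make_logdet, build_M) are mine, just for illustration, and I have not checked how this interacts with Pavito's callbacks:
using LinearAlgebra

function make_logdet(mats, target)
    p, n = size(mats, 1), size(mats, 3)
    last_r = fill(NaN, n)       # point at which the cache was last built
    M_cache = zeros(p, p)
    function build_M(r)
        if any(last_r[j] != r[j] for j = 1:n)   # cache miss: rebuild M
            fill!(M_cache, 0.0)
            for i = 1:n
                M_cache .+= r[i] .* mats[:, :, i]
            end
            for j = 1:n
                last_r[j] = r[j]
            end
        end
        return M_cache
    end
    function logDet(r...)
        M = build_M(r)
        return -log(max(det(M), 1e-8)) - target
    end
    function logDet_grad(g, r...)
        M = copy(build_M(r))    # copy before perturbing, so the cache stays intact
        if det(M) < 1e-8
            M[diagind(M)] .+= 1e-8
        end
        D = inv(M)
        for j = 1:n
            g[j] = -tr(D * mats[:, :, j])
        end
    end
    return logDet, logDet_grad
end
The two returned functions would then replace the inner logDet / logDet_grad above, before JuMP.register. Is something along these lines reasonable, or is there a better-supported way?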
Extras: I would also welcome general advice on speeding up the above code, like the suggestion by odow. Note that the actual p x p x n array usually contains p x p matrices with small p (between 2 and 12), whereas their total number n can reach 1,000 - 10,000. Also, the matrices are symmetric (and positive semidefinite), so only p(p + 1)/2 elements need to be added/multiplied rather than p^2; a sketch of what I have in mind is below.
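For concreteness, here is the kind of symmetry exploitation I mean (the helpers build_M_sym and grad_sym! are mine and not benchmarked): accumulate only the upper triangle of M, wrap it in Symmetric so det/inv can exploit the structure, and use tr(D * A) == dot(D, A) for symmetric matrices so the gradient needs no matrix product.
using LinearAlgebra

function build_M_sym(r, mats)
    p, n = size(mats, 1), size(mats, 3)
    U = zeros(p, p)
    for i = 1:n, b = 1:p, a = 1:b        # touch only p(p + 1)/2 entries per matrix
        U[a, b] += r[i] * mats[a, b, i]
    end
    return Symmetric(U)                  # lower triangle of U is ignored
end

function grad_sym!(g, D, mats)
    for j = 1:size(mats, 3)
        g[j] = -dot(D, mats[:, :, j])    # == -tr(D * mats[:, :, j]) for symmetric matrices
    end
    return g
end
Inside logDet_grad this would give D = inv(build_M_sym(r, mats)), followed by grad_sym!(g, D, mats). Is that the right direction, or is there something smarter for sums of many small symmetric matrices?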
Many thanks!