Any recommendations on how to formulate a log determinant objective in JuMP? Thanks.

The current recommendation is to use the log-det cone:

http://www.juliaopt.org/MathOptInterface.jl/stable/apireference/#MathOptInterface.LogDetConeTriangle

It is automatically reformulated into exponential cone and PSD constraints with

http://www.juliaopt.org/MathOptInterface.jl/stable/apireference/#MathOptInterface.Bridges.LogDetBridge
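As a minimal sketch of what using the triangle cone directly looks like (a 2-by-2 symmetric matrix is assumed here; you would attach a conic solver such as Mosek or SCS before calling `optimize!`):

```julia
using JuMP

model = Model()  # attach a conic solver, e.g. Model(SCS.Optimizer)
@variable(model, X[1:2, 1:2], Symmetric)
@variable(model, t)
# LogDetConeTriangle(d) expects [t; u; upper triangle of X, column-major];
# fixing u = 1 models the epigraph constraint t <= log(det(X))
@constraint(model, [t; 1; X[1, 1]; X[1, 2]; X[2, 2]] in MOI.LogDetConeTriangle(2))
@objective(model, Max, t)
```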

To maximize the volume of an ellipsoid with rotated second-order cone (RSOC) instead of exponential cone, there is also

http://www.juliaopt.org/MathOptInterface.jl/stable/apireference/#MathOptInterface.RootDetConeTriangle

which is automatically reformulated into RSOC and PSD constraints with

http://www.juliaopt.org/MathOptInterface.jl/stable/apireference/#MathOptInterface.Bridges.RootDetBridge
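A sketch of the root-det variant, which maximizes det(X)^(1/d) (a proxy for ellipsoid volume) and only needs RSOC and PSD support from the solver (again a 2-by-2 matrix is assumed, and no solver is attached here):

```julia
using JuMP

model = Model()  # attach a solver with RSOC + PSD support before optimizing
@variable(model, X[1:2, 1:2], Symmetric)
@variable(model, t)
# RootDetConeTriangle(d) expects [t; upper triangle of X, column-major]
# and models t <= det(X)^(1/d)
@constraint(model, [t; X[1, 1]; X[1, 2]; X[2, 2]] in MOI.RootDetConeTriangle(2))
@objective(model, Max, t)
```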

Some solvers, such as https://github.com/blegat/SDPT3.jl, support having log-det in the objective directly, but that package is not finished yet.

I have created a package to make it easy to write optimization problems with ellipsoids: https://github.com/blegat/SetProg.jl

See https://www.youtube.com/watch?v=hV3G-eNLNjk

Thanks! I will look into it.

How does one use this? My objective looks like this:

```
@NLobjective(
    model, Max,
    log(det(B)) + # produces "Unexpected array..."
    sum(
        (
            -log(σ^2) / 2 - ϵ[i]^2 / 2
            + log(2Φ(a * ϵ[i]))
        )
        for i ∈ eachindex(ϵ)
    )
)
```

How do I incorporate the log-det cone into it? Are there any modern best practices for optimizing functions that *include* determinants (not *just* the determinant itself)?

Offtopic: Don’t compute log determinants this way; you can easily overflow the limits of floating-point arithmetic. The LinearAlgebra standard library has the functions `logdet` and `logabsdet` that you can employ.
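For instance, a quick illustration of the overflow (the 400-by-400 scaled identity is just a made-up example):

```julia
using LinearAlgebra

A = Matrix(10.0I, 400, 400)  # det(A) = 10^400, far beyond floatmax(Float64)
det(A)     # overflows to Inf
logdet(A)  # ≈ 400 * log(10), computed stably from a factorization
```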

No idea how `logdet` interacts with JuMP, but log determinants also have a nice formula for their derivative, which is useful in optimization.
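The formula in question is ∇_X log det(X) = X⁻ᵀ, the transpose of the inverse. A quick finite-difference sanity check (the particular matrix is arbitrary):

```julia
using LinearAlgebra

X = [2.0 0.5; 0.5 1.0]
G = inv(X)'  # analytic gradient of logdet at X
# finite-difference approximation of the (1, 1) entry of the gradient
h = 1e-6
E = zeros(2, 2); E[1, 1] = h
fd = (logdet(X + E) - logdet(X)) / h
```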

How does one use this?

Here’s an example. It isn’t supported by Ipopt or NLopt; you’ll need to reformulate your entire problem as a conic optimization problem and use a solver like Mosek or SCS.

```
# max log(det(X))
#
# is equivalent to
#
# max  t
# s.t. t <= log(det(X))
using JuMP

model = Model()  # attach a conic solver, e.g. Model(SCS.Optimizer)
@variable(model, X[1:3, 1:3])
@variable(model, t)
# LogDetConeSquare(d) expects [t; u; vec(X)]; fixing u = 1
# models the epigraph constraint t <= log(det(X))
@constraint(model, [t; 1; vec(X)] in MOI.LogDetConeSquare(3))
@objective(model, Max, t)
```

(Or you could just implement your own gradient using the formula I mentioned, and then call NLopt or whatever optimization software you want. Using a modeling language is a double-edged sword: it can be convenient, but it is also limiting.)

Note also that ChainRules includes rules for `logdet`, so you can use automatic differentiation (Zygote etcetera) to find your derivatives, which then makes it a lot easier to call arbitrary optimization packages.
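For example (assuming Zygote.jl is installed; the test matrix is arbitrary):

```julia
using LinearAlgebra
using Zygote  # assumption: Zygote.jl is available in the environment

X = [3.0 1.0; 1.0 2.0]
G, = Zygote.gradient(logdet, X)  # uses the ChainRules rule for logdet
# G should match the analytic gradient inv(X)'
```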