I followed the link that explains how to register a function in an NL model.
As far as I can see, the registered functions’ inputs are optimization variables. I would like to also allow inputting a separate object, whose values would define what the function does. Is there a way to not define the functions separately, but rather register one function that takes the so-called ‘context’ parameters?
For example, I would like to define a function
evaluate_f!(f, x) where
x is the optimization variable (which is fine by JuMP), and
f is an object that I defined. For example, if f is of type
Quadratic, then it has data attributes
Q, q that define \frac{1}{2} x^\top Q x + q^\top x. So the function that I register will look like:
function evaluate_f!(f::Quadratic, x)
    n = length(f.q)
    return 0.5 * sum(f.Q[i, j] * x[i] * x[j] for i in 1:n, j in 1:n) + sum(f.q[i] * x[i] for i in 1:n)
end
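For concreteness, here is a minimal sketch of such a container type. The struct layout is my assumption; only the field names Q and q come from the description above.

```julia
# Hypothetical concrete definition of the Quadratic type assumed above.
# The field names Q and q mirror the text; adapt to your own layout.
struct Quadratic
    Q::Matrix{Float64}  # symmetric matrix defining the quadratic term
    q::Vector{Float64}  # vector defining the linear term
end

# Example: f(x) = 0.5 * x' * Q * x + q' * x with Q = 2I, q = ones(2)
f = Quadratic([2.0 0.0; 0.0 2.0], [1.0, 1.0])
x = [1.0, 1.0]
val = 0.5 * x' * f.Q * x + f.q' * x  # = 0.5 * 4 + 2 = 4.0
```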
for which I am using the scalar
i, j approach rather than vector notation for JuMP’s nonlinear solver interactions. Similarly, there will be different versions for other function types, e.g., Linear.
The reason why I want to do this is that I would like to optimize functions that look like \log(\exp(f(x)) + \exp(g(x)) + \cdots), where instead of defining f, g, \ldots separately I can automatically keep all my functions in a list
fn_collection that keeps all my many functions and
@NLexpression(model, exp_to_minimize, log(sum(exp(evaluate_f!(fn_collection[k], x)) for k in 1:length(fn_collection))))
Perhaps I am missing the issue, but doesn’t
for fn in fn_collection
@NLexpression(model, exp_to_minimize, x -> log(sum(exp(evaluate_f!(fn, x)))))
produce the desired result?
I am assuming you want to generate one expression at a time; not a vector of expressions.
Side note: there are efficient / accurate ways of computing
log(sum(exp())) in some packages (which I cannot remember off the top of my head).
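For reference, the standard numerically stable formulation is the "max trick" (packages such as LogExpFunctions.jl ship a tuned `logsumexp`); a plain-Julia sketch, with the function name `logsumexp_stable` being purely illustrative:

```julia
# Numerically stable log(sum(exp.(v))): shift by the maximum so that the
# largest exponent becomes exp(0) = 1, avoiding overflow for large entries.
function logsumexp_stable(v::AbstractVector{<:Real})
    m = maximum(v)
    return m + log(sum(exp(x - m) for x in v))
end

logsumexp_stable([1000.0, 1000.0])  # ≈ 1000 + log(2); the naive version overflows
```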
Why evaluate_f!, suggesting a mutating function?
The short answer is that you cannot register a function which accepts something other than scalar decision variables. (This is something else that will be fixed in the rewrite of the JuMP nonlinear interface.)
Thank you for your kind reply, Oscar! Do you think I can still hard-code this (something like what @hendri54 kindly suggested, which I will reply to soon, but which ends up not working as expected)?
Update: I have done the following slightly brute-forced method:
for fn_nr in 1:length(f_collection)
    try
        register(model, Symbol(:expr, fn_nr), 1, x -> scalar_expr!(f_collection[fn_nr], x); autodiff = true)
    catch e
        if occursin("already", e.msg)
            println("Already registered, skipping.")
        end
    end
end
Although this lets me register more than one function, I still get
Unexpected array errors, suggesting that
x also cannot be passed as a vector.
Update 2: Nope, I tried everything. The scalar splatted definition
x... as suggested in the JuMP documentation conflicts with the
x -> scalar_expr!(f_collection[fn_nr], x) kind of definition. I will update if I solve this issue.
Thank you very much for your kind reply!
The reason I have
! is just to remind myself that I wrote the function; it is just my own convention, and it indeed contradicts the common understanding of a mutating function.
Except for that, your suggestion makes a lot of sense to me. But my functions appear inside the logsumexp, hence I need to access the
evaluate_f! functions within a single logsumexp expression. I don’t know how to circumvent this problem.
Untested, because I don’t have a reproducible example, but this should point you in the right direction:
evaluate_f!(f::Linear, x) = f.q' * x
evaluate_f!(f::Quadratic, x) = 0.5 * x' * f.Q * x + f.q' * x
model = Model()
@variable(model, x[1:n])  # n = number of decision variables
f_x = Any[]
for (k, f_k) in enumerate(fn_collection)
    f = (x...) -> evaluate_f!(f_k, collect(x))
    register(model, Symbol("f_$k"), length(x), f; autodiff = true)
    expr = Expr(:call, Symbol("f_$k"), x...)
    push!(f_x, add_nonlinear_expression(model, expr))
end
@NLexpression(model, exp_to_minimize, log(sum(exp(f_x_k) for f_x_k in f_x)))
See Nonlinear Modeling · JuMP
It indeed works! Thank you for this amazing support. I would never have thought of defining the splatted mapping
x... -> as well as applying the
Expr trick before
@NLexpression. This is excellent!
This also allows me to optimize the
max of these functions (instead of logsumexp) as —
@NLexpression(model, exp_to_minimize, max(f_x...))
Yeah it’s a shame that the syntax is very convoluted and not at all obvious. JuMP can do a lot of things, but they can be hard to explain.
I’m a broken record at this point, but the nonlinear rewrite will fix all of this, and your original code will “just work.”
Haha, thank you :)! BTW, what I said “it indeed works” was for Ipopt. When I try BARON, I get:
UnrecognizedExpressionException: unrecognized function call expression: f_1(x, x, x)
Stacktrace:
  to_str(c::Expr) @ BARON ~/.julia/packages/BARON/xgWzt/src/util.jl:156
  (::BARON.var"#8#10")(d::Expr) @ BARON ./none:0
is this expected?
BARON doesn’t support user-defined functions. It writes the problem to a file, so it can’t write out arbitrary Julia code.
Thank you, great to know!