I followed the linked documentation on how to register a function in an NL model.
As far as I can see, a registered function's inputs must be optimization variables. I would also like to pass in a separate object whose values define what the function does. Is there a way to avoid defining the functions separately, and instead register one function that also takes these so-called 'context' parameters?
For example, I would like to define a function evaluate_f!(f, x) where x is the optimization variable (which is fine by JuMP) and f is an object that I defined. For example, if f is of type Quadratic, then it has fields Q and q that define \frac{1}{2} x^\top Q x + q^\top x. So the function that I register will look like:
function evaluate_f!(f::Quadratic, x)
    n = length(f.q)
    # 0.5 * x' * Q * x + q' * x, written with scalar indexing
    return 0.5 * sum(f.Q[i, j] * x[i] * x[j] for i in 1:n, j in 1:n) +
           sum(f.q[i] * x[i] for i in 1:n)
end
Here I use scalar i, j indexing rather than vector notation so that the expression plays nicely with JuMP's nonlinear interface. Similarly, there will be different versions for, e.g., f::Linear.
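To make the setup concrete, here is a minimal sketch of the container types I have in mind (the Linear type and its field are illustrative):

```julia
# Sketch of the data containers (field names as in my Quadratic example above;
# the Linear type is illustrative).
struct Quadratic
    Q::Matrix{Float64}
    q::Vector{Float64}
end

struct Linear
    q::Vector{Float64}
end

# Linear counterpart of evaluate_f!, selected by multiple dispatch on f.
function evaluate_f!(f::Linear, x)
    n = length(f.q)
    return sum(f.q[i] * x[i] for i in 1:n)
end
```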
The reason I want to do this is that I would like to optimize functions of the form \log(\exp(f(x)) + \exp(g(x)) + \dots). Instead of defining f, g, \dots separately, I would keep them all in a list fn_collection and write
@NLexpression(model, exp_to_minimize, log(sum(exp(evaluate_f!(fn_collection[k], x)) for k in 1:length(fn_collection))))
The short answer is that you cannot register a function which accepts something other than scalar decision variables. (This is something else that will be fixed in the rewrite of the JuMP nonlinear interface.)
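To make the restriction concrete, a minimal sketch of what registration does accept, assuming Ipopt as the solver (my_f is illustrative):

```julia
# Sketch: the legacy nonlinear interface accepts only functions whose
# arguments are scalars.
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
@variable(model, x[1:2])

my_f(x1, x2) = x1^2 + x2^2                  # OK: scalar arguments only
register(model, :my_f, 2, my_f; autodiff = true)
@NLobjective(model, Min, my_f(x[1], x[2]))

# Not allowed: a method like my_f(data::Quadratic, x::Vector) cannot be
# registered, because the extra `data` argument is not a scalar decision variable.
```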
Thank you for your kind reply, Oscar! Do you think I can still hard-code this (something like what @hendri54 kindly suggested, which I will reply to soon, but which ends up not working as expected)?
Update: I have tried the following slightly brute-force approach:
for fn_nr in 1:length(fn_collection)
    try
        # Register each function under its own name :expr1, :expr2, ...
        register(model, Symbol(:expr, fn_nr), 1,
                 x -> scalar_expr!(fn_collection[fn_nr], x); autodiff = true)
    catch e
        if occursin("already", e.msg)
            println("Already registered, skipping.")
        else
            rethrow(e)
        end
    end
end
Although this lets me register more than one function, I still get Unexpected array errors, suggesting that x cannot be passed as a vector either.
Update 2: Nope, I have tried everything. The splatted scalar definition x... suggested in the JuMP documentation conflicts with a definition of the form x -> scalar_expr!(fn_collection[fn_nr], x). I will update this post if I solve the issue.
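For concreteness, the direction I am trying at this point looks roughly like the following (a sketch only; n is the length of x, and scalar_expr! and fn_collection are as above):

```julia
# Sketch: register each function with n scalar arguments and splat them back
# into a vector inside a closure, instead of registering a one-argument
# vector-valued mapping.
n = length(x)
for k in 1:length(fn_collection)
    f_k = (y...) -> scalar_expr!(fn_collection[k], collect(y))
    register(model, Symbol(:expr, k), n, f_k; autodiff = true)
end
```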
The reason I append ! is just to remind myself that I wrote the function; it is purely my own convention, and it does indeed conflict with the usual convention that ! marks a mutating function.
Other than that, your suggestion makes a lot of sense to me. But my functions appear inside a logsumexp, so I need to access the evaluate_f! functions within a single logsumexp expression. I don't know how to get around this problem.
It indeed works! Thank you for this amazing support. I would never have thought of defining the mapping from splatted scalars (x... ->) and applying the Expr trick before the @NLexpression. This is excellent!
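For anyone finding this later, here is my understanding of the pattern as a minimal sketch (not my verbatim code; the registered names :expr1, :expr2, ... and fn_collection are as above):

```julia
# Sketch: wrap each registered function, applied to the splatted variables, in
# its own nonlinear expression by building a raw Expr; then write the
# logsumexp over the resulting collection f_x.
f_x = [add_nonlinear_expression(model, Expr(:call, Symbol(:expr, k), x...))
       for k in 1:length(fn_collection)]
@NLexpression(model, exp_to_minimize,
    log(sum(exp(f_x[k]) for k in 1:length(fn_collection))))
```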
This also allows me to optimize the max of these functions (instead of the logsumexp) via @NLexpression(model, exp_to_minimize, max(f_x...)).