I need to solve a constrained non-linear optimization problem where the cost and constraints are computed via a vector-valued function. E.g., instead of …
If you are looking for a solver that supports evaluating the constraints and the objective in the same callback, Knitro.jl is one way to go (but it has a commercial license).
Another way to go is to define your solver's callbacks (eval_f and eval_cons!) in a closure, so that the constraints and the objective can be evaluated jointly. For instance:
function build_callback(x0)
    # Evaluate once at the starting point and cache the results.
    J, c1, c2 = myfunc(x0)
    current_x = hash(x0)
    function eval_f(x)
        if hash(x) != current_x
            # New point: refresh the cache.
            current_x = hash(x)
            J, c1, c2 = myfunc(x)
        end
        return J
    end
    function eval_cons!(cons, x)
        if hash(x) != current_x
            # New point: refresh the cache.
            current_x = hash(x)
            J, c1, c2 = myfunc(x)
        end
        cons[:] = [c1, c2]
        return cons
    end
    return (eval_f, eval_cons!)
end
I acknowledge the example is not that self-explanatory. The point is that, by using this closure, you can build your two callbacks, suitable for any optimization solver:
eval_f, eval_cons! = build_callback(x0)
Then, each time eval_f is called at a new point x, it checks whether myfunc has already been evaluated there. If so, it returns the previously stored values J, c1, c2; otherwise, it calls myfunc and updates the cache.
So imagine the solver calls eval_f, then eval_cons!, at a new point x. Then:
- As x is a new point, the first call to eval_f triggers myfunc and stores the result in the cache inside the closure.
- When eval_cons! is then called at x, it recognizes that myfunc has already been evaluated at x and hence returns (c1, c2) directly, without recomputing anything from scratch.
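To see the caching at work end to end, here is a minimal runnable sketch. The toy myfunc (a quadratic objective with two simple constraints) and the call counter are purely illustrative stand-ins, not part of the original question:

```julia
# Counter to verify how often the expensive function actually runs.
const ncalls = Ref(0)

# Hypothetical vector-valued function: objective plus two constraints.
function myfunc(x)
    ncalls[] += 1
    J  = sum(abs2, x)          # objective
    c1 = x[1] + x[2] - 1.0     # first constraint
    c2 = x[1] * x[2]           # second constraint
    return J, c1, c2
end

# Same closure-based cache as above.
function build_callback(x0)
    J, c1, c2 = myfunc(x0)
    current_x = hash(x0)
    function eval_f(x)
        if hash(x) != current_x
            current_x = hash(x)
            J, c1, c2 = myfunc(x)
        end
        return J
    end
    function eval_cons!(cons, x)
        if hash(x) != current_x
            current_x = hash(x)
            J, c1, c2 = myfunc(x)
        end
        cons[:] = [c1, c2]
        return cons
    end
    return (eval_f, eval_cons!)
end

x0 = [0.5, 0.5]
eval_f, eval_cons! = build_callback(x0)  # first evaluation, at x0

x = [1.0, 2.0]
J = eval_f(x)                  # new point: myfunc is called again
cons = eval_cons!(zeros(2), x) # same point: served from the cache
println(ncalls[])              # prints 2, not 3
```

The closure works because Julia's nested functions share the enclosing function's local variables, so reassigning J, c1, c2, and current_x inside eval_f is visible to eval_cons! and vice versa.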