JuMP: Register an arbitrary number of functions and use them as constraints

I am doing some optimisation; the problems all look the same but the number of variables differs. I am trying to create a function which will do the whole optimisation problem in one line (optimise_this(some_problem)). The problem I am having is that I do not know how to register an arbitrary number of functions as constraints.

What I have is basically this:

using JuMP, Ipopt

function optimise_problem(constraints_functions::Vector{Function}, gradients::Vector{Function})
    n = length(constraints_functions)
    m = Model(solver = IpoptSolver())
    @variable(m, x[1:n] >= 0.)
    for i = 1:n
        # This is the problem: every iteration registers under the same
        # literal name :fun_i, and the macro also looks up fun_i literally.
        JuMP.register(m, :fun_i, n, constraints_functions[i], gradients[i])
        @NLconstraint(m, fun_i(x...) == 0)
    end
    @objective(m, Max, 1)
    solve(m)
    return getvalue(x)
end

(well, this is more solving than optimisation, but anyway)

The first problem is that I am a bit unsure how to correctly pass the entire x vector to my function in @NLconstraint(m, fun_i(x...)==0). Here I try splatting with x..., which does not seem to work.

The major problem is how to actually declare the function so that I can give it an arbitrary (unique) name and then add it as a constraint. Using :fun_i as in my example obviously does not work (but it is the easiest way to illustrate what I want to do). If I unrolled the for loop and did it manually I could just write

JuMP.register(m, :fun_1, n, constraints_functions[1], gradients[1])
@NLconstraint(m, fun_1(x...)==0)
JuMP.register(m, :fun_2, n, constraints_functions[2], gradients[2])
@NLconstraint(m, fun_2(x...)==0)
...

The first line I could fix properly using JuMP.register(m, Symbol(:fun_, 1), n, constraints_functions[1], gradients[1]). However, the declaration in the constraint does not allow for this. I have been thinking of metaprogramming somehow, but using eval() has a lot of implications and always feels dangerous, especially when it might not be needed. I was thinking of using a macro, but that shouldn't work either, right? (Since a macro cannot see the runtime content of the loop variable, it wouldn't be able to build the fun_i(x...) call in the constraint properly either, but maybe I am wrong.)
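To make the crux concrete (same names as in my snippet above):

# Registering under a runtime-generated name works fine:
name = Symbol(:fun_, 1)    # :fun_1
JuMP.register(m, name, n, constraints_functions[1], gradients[1])
# ...but @NLconstraint needs the literal identifier fun_1 at parse
# time; there is no way to make the macro use the symbol stored in name:
@NLconstraint(m, fun_1(x...) == 0)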

The best I can come up with is just writing a humongous function like

function set_constraints(m, x, i, n, constraints_functions, gradients)
    JuMP.register(m, :fun_1, n, constraints_functions[1], gradients[1])
    @NLconstraint(m, fun_1(x...)==0)
    (i==1) && return
    JuMP.register(m, :fun_2, n, constraints_functions[2], gradients[2])
    @NLconstraint(m, fun_2(x...)==0)
    (i==2) && return
    ...
    JuMP.register(m, :fun_1000, n, constraints_functions[1000], gradients[1000])
    @NLconstraint(m, fun_1000(x...)==0)
    (i==1000) && return
end

and then I just make sure it has more entries than I could ever need… However, it feels beyond ludicrous if this is the only way to do it.

It doesn't feel like what I want to do is fundamentally problematic, but I am not sure how to work around the interface provided by JuMP to actually do it. However, I am quite new to JuMP, so I figured I'd ask here and see if someone more familiar with it could help.


Having the same issue here… @Gaussia did you figure out a solution, or did you just give up? Because I wouldn't blame you for just giving up; some aspects of JuMP (especially the nonlinear ones) frustrate me.

I wanted to bump this again and see if anybody knows of a solution.


I have kinda forgotten whether I found a solution or gave up. In the end I figured out a better approach for achieving what I wanted, without using JuMP.


I was just dealing with a similar issue, along the lines of "… metaprogramming somehow. But (NOT) using eval() …"
The crux of the solution came from @ChrisRackauckas and @Chris_Foster, who suggested RuntimeGeneratedFunctions.jl.

A simple MWE for RuntimeGeneratedFunctions.jl here >>

More below:

using RuntimeGeneratedFunctions, ApproxFun
RuntimeGeneratedFunctions.init(@__MODULE__)

# WORKS; WAS -- f_symbolic_ans(x) = (x^3-x^2+1)
str_f_symbolic_BigD = "f_symbolic_ans(x) = (x^3-x^2+1)"
func_f_symbolic_BigD = @RuntimeGeneratedFunction(Meta.parse(str_f_symbolic_BigD))
# r_left_x and r_right_x (the domain endpoints) are defined elsewhere
f371 = ApproxFun.Fun(x -> func_f_symbolic_BigD(x), Domain(r_left_x..r_right_x))
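Applying the same idea back to the original JuMP question, something like the sketch below should avoid eval() entirely. This is only a sketch under my assumptions: the names constraint_strs and fun_i are mine, I am assuming register accepts any callable (a RuntimeGeneratedFunction is one), and a JuMP version providing register / add_NL_constraint.

using JuMP, Ipopt, RuntimeGeneratedFunctions
RuntimeGeneratedFunctions.init(@__MODULE__)

model = Model(Ipopt.Optimizer)
@variable(model, x[1:2] >= 0)

# Constraint functions arriving as strings (or Exprs) only at runtime:
constraint_strs = ["(a, b) -> (a - b)^2", "(a, b) -> a + b"]
for (i, s) in enumerate(constraint_strs)
    f = @RuntimeGeneratedFunction(Meta.parse(s))   # no eval(), no world-age issues
    name = Symbol("fun_", i)                       # unique name per constraint
    register(model, name, 2, f; autodiff = true)
    add_NL_constraint(model, :($(name)($(x[1]), $(x[2])) == 0))
end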

I also figured out my answer to "… quickly handling / removing Inf from answers …" via a Base.Generator:

# Base.Generator: lazily keep only the x values where both
# extrapolations are finite (funjac_u1 and range_x are defined elsewhere)
gen_iter_f2 = (r_x for r_x in Iterators.filter(
    x -> isfinite(ApproxFun.extrapolate(f371, x)) &&
         isfinite(ApproxFun.extrapolate(funjac_u1, x)),
    range_x))

However @Gaussia, since you "… figured out a better approach for achieving what I wanted, without using JuMP …", I'd love to know what it was. Could you outline a sketch in pseudo-code or such?

You can do this in JuMP, but it requires some advanced usage of the raw expression input.

using JuMP, Ipopt
model = Model(Ipopt.Optimizer)
@variable(model, x[1:2] >= 0)
constraints = [
    (x, y) -> (x - y)^2,
    (x, y) -> x + y,
]
for (i, con) in enumerate(constraints)
    f = Symbol("f$(i)")  # unique name generated at runtime
    register(model, f, 2, con; autodiff = true)
    # Build the constraint as a raw Expr; $(f) interpolates the
    # generated name, which @NLconstraint cannot do.
    add_NL_constraint(model, :($(f)($(x[1]), $(x[2])) == $(i - 1)))
end
optimize!(model)
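The key point is that add_NL_constraint takes an ordinary Julia Expr built at runtime, so the generated name can be interpolated with $(f); @NLconstraint, by contrast, needs the function name as a literal identifier at macro-expansion time, which is exactly why the loop in the original post fails.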

Thanks for the reply, although I am still having difficulties. Can you explain what is wrong here, though? It gives KeyError: key :fg not found for some reason. As far as I can tell, it's the same as the above? Thanks!

using JuMP
model = Model()
@variables(model, begin
    -5 <= x[1:5] <= 5
    -4 <= y[1:3] <= 1
    -30 <= z
end)
# Testing expression parsing
ex = :((x, y, z) -> sum(x[i] for i=1:4) - y[1] * y[2] + z)
symb = :fg
JuMP.register(model, symb, 3, eval(ex); autodiff = true)
vars = [x,y,z]
JuMP.add_NL_constraint(model, :($(symb)($(vars)...) >= 0))

You have multiple issues:

  1. You cannot pass vectors to user-defined functions: https://jump.dev/JuMP.jl/stable/nlp/#User-defined-functions-with-vector-inputs-1
  2. Splatting vars is not doing what you think. vars = [x, y, z] is a 3-element vector whose first two entries are themselves vectors, so the splat passes vectors as arguments. You need [x; y; z], which vcats everything into one flat vector of scalar variables.
  3. JuMP's raw expression syntax is subtle. You cannot splat inside the :( ... ) literal; you need to construct the full call expression first, e.g. with Expr(:call, ...).

Putting everything together, you want something like:

using JuMP
model = Model()
@variables(model, begin
    -5 <= x[1:5] <= 5
    -4 <= y[1:3] <= 1
    -30 <= z
end)
# One scalar argument per variable; y[1] is argument 6, y[2] is 7, z is 9:
ex = :((x...) -> sum(x[i] for i = 1:4) - x[6] * x[7] + x[9])
f_ex = eval(ex)
symb = :fg
vars = [x; y; z]  # flat 9-element vector of scalar variables
JuMP.register(model, symb, length(vars), f_ex; autodiff = true)
# Build the full call expression first, then interpolate it whole:
expr = Expr(:call, symb, vars...)
JuMP.add_NL_constraint(model, :($(expr) >= 0))
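Here Expr(:call, symb, vars...) splices each scalar JuMP variable into the call one by one, so expr is already a complete fg(...) call; the final :($(expr) >= 0) then only wraps it in the comparison that add_NL_constraint expects.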