Conditional inside a function to be optimized

I’m trying to maximize this function

using JuMP, Ipopt, OffsetArrays, Distributions

function C_opt(s)

    # zero-based 2x2 probability tables
    pax = OffsetArray(zeros(Float64, (2, 2)), 0:1, 0:1)
    pby = OffsetArray(zeros(Float64, (2, 2)), 0:1, 0:1)
    
    Cmax = 0
    for l=0:15
        pax[0,0], pax[0,1], pby[0,0], pby[0,1] =  digits(l, base=2, pad=4)
        pax[1,0], pax[1,1], pby[1,0], pby[1,1] = 1- pax[0,0], 1- pax[0,1], 1- pby[0,0], 1- pby[0,1]
        
        C = 0
        j = 1 
        
        for i=0:15
            a,b,x,y = digits(i, base=2, pad=4)
            C += s[j]*pax[a,x]*pby[b,y]
            j += 1
        end
        C = C + s[j]*pax[0,0] + s[j+1]*pax[1,0]
        
        if Cmax > C
            Cmax = C
        end
        

    end
    
    return Cmax

end

model = Model(optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0));
@variable(model, -1 <= s[i = 1:18] <= 1, start = rand(Uniform(-1,1)) )
@objective(model, Max,  C_opt(s) )

optimize!(model)
@show objective_value(model)

but I’m getting this error

MethodError: no method matching isless(::GenericAffExpr{Float64,VariableRef}, ::Int64)
Closest candidates are:
isless(!Matched::Missing, ::Any) at missing.jl:87
isless(!Matched::AbstractFloat, ::Real) at operators.jl:167
isless(!Matched::ForwardDiff.Dual{Tx,V,N} where N where V, ::Integer) where Tx at

Commenting out these lines

if Cmax > C
    Cmax = C
end

I get something, but it is not the right answer; that "if" statement has to be there.

Thanks in advance!

First of all, welcome to our community!

Second, thanks for already taking your time to post the code inside triple backquotes.

Third, you are comparing a number (more specifically, 0 :: Int64) with an expression that has model variables in it. These variables do not have a defined value before you run the model. If your function is meant to run together with the model (I have never used Ipopt), then you probably need to call the function value on the expression or, more likely, on the variables, and then use those values to compute what you want. For example: if s is a vector of variables, then value.(s) will give you a vector of their values (as Float64), and you can use those for your computation instead of the variables themselves.
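To illustrate on a toy model (hypothetical, not the thread's problem), here is a minimal sketch of what value.(s) gives you after a solve, assuming JuMP and Ipopt are installed:

```julia
using JuMP, Ipopt

# Hypothetical toy model, just to show value.(s) after optimize! has run.
model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, -1 <= s[1:2] <= 1)
@objective(model, Max, s[1] + s[2])
optimize!(model)

vals = value.(s)   # a Vector{Float64}; the optimum here is s = [1.0, 1.0]
```

Before optimize! runs, value.(s) would error, because the variables have no values yet, which is exactly the situation inside C_opt as posted.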

5 Likes

Thanks for replying. I’m not sure I understand your answer. s is certainly a vector, but that is precisely the one I want to find: the vector that maximizes the function C_opt(s). I tested the code with a similar function and it worked, and it also works in Python with minimize from scipy.optimize.

I don’t understand what’s the problem with the “if” inside the function.

1 Like

If you do @show typeof(s) after the @variable line you will see it is a Vector of JuMP variables, i.e., the unknowns whose values (the ones that maximize the objective function) you are trying to find.

One thing I only noticed now is that you call C_opt in @objective(model, Max, C_opt(s)). I am not sure this is what you want. C_opt(s) will be evaluated before the solver runs and, therefore, before s has any values. So it seems wrong to me.

You also said you removed the if statement and did not get the values you wanted, but in that case you provided no real objective function (in truth, you provided an objective function that is just the constant zero and, consequently, if the model finds any feasible solution it will just stop).

Maybe you meant @objective(model, Max, C_opt), so that C_opt is run at every step of the optimization process? (Again, I have never used a black-box optimizer, so I do not know if this is the right syntax.) In that case, you probably want to return a computed value for the current solution, and we circle back to you needing to use value to extract the current values from the unknowns.

1 Like

Basically, you are comparing an expression with unknowns (i.e., letters that may be replaced by any values inside a range) against zero. This is not defined. What did you expect? That it would return true only if it is true for all values the variables may assume? What is your expected result for a <= 0 when a is defined by -1 <= a <= 1?
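This point can be reproduced with JuMP alone, no solver needed (a minimal sketch; the variable name a is illustrative): comparing an affine expression in a JuMP variable against a number eventually reaches isless, which has no method for that pair of types.

```julia
using JuMP

model = Model()
@variable(model, -1 <= a <= 1)
C = 2a + 1               # a GenericAffExpr in the variable a, not a number

caught = try
    0 > C                # lowers to isless on (expression, number) and throws
    nothing
catch err
    err
end
# caught now holds the comparison error (a MethodError in the JuMP
# version from this thread), matching the isless error in the question
```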

I understand your point here, but isn’t the optimizer generating solutions and passing that “provisional” vector s to the function?

Maybe the optimizer passes the vector of values already; I do not know. But you are not passing the function to the optimizer so it can do that: you are passing the optimizer the return value of C_opt called with the initial (valueless) vector of variables s. If you want to pass the function C_opt, then just pass it, not the result of calling it.

1 Like

I’m expecting to obtain a numerical value. The only possible values that pax[…] and pby[…] can take are 0 and 1. Maybe this optimizer is not the right choice for this problem. As I said, I wrote the same code in Python and it works. C_opt is a function that I need in order to optimize a more complex problem, but since C_opt was giving me trouble I wanted to fix it first.

By the way, the right line above is C > Cmax.

I think Ipopt already receives the parameters as values (so you do not need to call value), and you can use the call notation you are using (i.e., @objective(model, Max, C_opt(s))), because it is a macro and it uses some “dark magic”, so it is not really calling C_opt(s) and passing the answer there. However, maybe what you want is @NLobjective instead of @objective; at least that is what occurs to me looking at this example.

1 Like

I think the problem is in the definition of the bounds. Try starting them like this:

julia> l = [ -1 for i in 1:18 ] ; u = [ 1 for i in 1:18 ];

julia> @variable(model, l[i] <= s[i = 1:18] <= u[i], start = ... 

I thought that too, but danuzco says the code runs if he comments out an if statement unrelated to the bounds, so it does not appear to be so.

1 Like

The error disappears here with that change. It may simply be a coincidence that s does not get projected onto the bounds in that specific execution when the if is removed. One would need to see the exact iteration output of Ipopt to know what is going on.

1 Like

Thanks for your answer. I’m trying to use this approach, but something goes wrong even with the same code I was using before. I’m even having trouble with this simple example I grabbed from https://nbviewer.jupyter.org/github/jump-dev/JuMPTutorials.jl/blob/master/notebook/introduction/getting_started_with_JuMP.ipynb

using JuMP
using GLPK

model = Model(GLPK.Optimizer)
@variable(model, x >= 0)
@variable(model, y >= 0)
@constraint(model, 6x + 8y >= 100)
@constraint(model, 7x + 12y >= 120)
@objective(model, Min, 12x + 20y)

optimize!(model)

@show value(x);
@show value(y);
@show objective_value(model);

MethodError: no method matching Model(::Type{GLPK.Optimizer})
Closest candidates are:
Model(::Any, !Matched::Symbol, !Matched::Any, !Matched::Any, !Matched::Any, !Matched::Any, !Matched::Any, !Matched::Int64, !Matched::Array{String,1}, !Matched::Array{String,1}, !Matched::Array{Float64,1}, !Matched::Array{Float64,1}, !Matched::Array{Symbol,1}, !Matched::Array{T,1}

Can you see what’s wrong? This should work.

That worked here. Please make sure you have restarted your Julia session (it may be that the previously defined model is interfering there, I don’t know):

julia> using JuMP

julia> using GLPK
[ Info: Precompiling GLPK [60bf3e95-4087-53dc-ae20-288a0d20c6a6]

julia> model = Model(GLPK.Optimizer)
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: GLPK

julia> @variable(model, x >= 0)
x

julia> @variable(model, y >= 0)
y

julia> @constraint(model, 6x + 8y >= 100)
6 x + 8 y ≥ 100.0

julia> @constraint(model, 7x + 12y >= 120)
7 x + 12 y ≥ 120.0

julia> @objective(model, Min, 12x + 20y)
12 x + 20 y

julia> optimize!(model)

julia> @show value(x);
value(x) = 14.999999999999993

julia> @show value(y);
value(y) = 1.2500000000000047

julia> @show objective_value(model);
objective_value(model) = 205.0

julia> 

1 Like

There are a few comments leading you astray in this thread, so let me summarize.

  • Your problem is nonlinear. You need to use the nonlinear interface for JuMP. In particular, you need to use a user-defined function: https://jump.dev/JuMP.jl/stable/nlp/#User-defined-Functions-1.
  • The nonlinear interface requires functions with scalar inputs, so s can’t be a vector input. Read this suggested work-around.
  • @Henrique_Becker is correct that @objective(model, Max, C_opt(s)) evaluates this function with the arguments as JuMP variables before passing it to the solver. This is a little confusing, but is very helpful in some cases.
  • I don’t understand the need for OffsetArray, but it looks like your nonlinearity is just the single max(0, C)? Due to the nondifferentiability and the large flat region, Ipopt tends to struggle with this. You might only get a local optimum.
  • There is nothing wrong with how you have defined @variable.
  • Your issue with Model(GLPK.Optimizer) is because you are using an old version of JuMP, although I think you’ve fixed this in another issue.
4 Likes
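A sketch of the user-defined-function route from the first two bullets. Hedged assumptions: this uses the register / @NLobjective API from the JuMP version current at the time of this thread (newer releases use a different operator API), and it substitutes a simple stand-in function f so the snippet is self-contained; in the real problem you would register the thread’s C_opt (with the comparison corrected to C > Cmax) instead.

```julia
using JuMP, Ipopt

# Stand-in objective. The nonlinear interface requires scalar arguments,
# so a vector-input function is wrapped as f(x...) and registered with
# its dimension (18 here, matching the thread's s).
f(x...) = sum(x)

model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, -1 <= s[1:18] <= 1, start = 0.5)
register(model, :f, 18, f; autodiff = true)   # autodiff via ForwardDiff
@NLobjective(model, Max, f(s...))             # splat the vector of variables
optimize!(model)

objective_value(model)   # ≈ 18 for this stand-in: every s[i] at its upper bound
```

Note that odow’s caveat still applies to the real C_opt: the max/if inside it is nondifferentiable, so even once this runs, Ipopt may only return a local optimum.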

@odow @Henrique_Becker @lmiq Thank you guys for helping me with this issue, I’ll try to follow your suggestions and see if I can make it work.

1 Like

@danuzco Have you found solution to this problem? If yes, would you please share it for general and greater community learning?

I obviously missed the action, but let me suggest a basic reformulation that handles this:

if Cmax > C
   Cmax = C
end

The modeling “trick” consists of introducing an optimization variable C bounded by Cmax and adding the equality constraint C_opt(s) - C = 0, which synchronizes both values at the optimum.
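In JuMP, that reformulation might look like the following sketch. Hedged assumptions: it uses the register / @NLconstraint API from the JuMP version of this thread, and a simple stand-in function f so the snippet is self-contained; in the real problem you would register C_opt (with the corrected C > Cmax comparison) in its place.

```julia
using JuMP, Ipopt

# Stand-in for C_opt so the sketch is self-contained and registrable.
f(x...) = sum(x)

model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, -1 <= s[1:18] <= 1, start = 0.5)
@variable(model, C)                          # auxiliary objective variable
register(model, :f, 18, f; autodiff = true)
@NLconstraint(model, f(s...) - C == 0)       # ties C to the function value
@objective(model, Max, C)                    # maximize the auxiliary variable
optimize!(model)
```

The objective itself is now linear; the nonlinearity lives entirely in the equality constraint, which is the synchronization described above.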

2 Likes

Interesting remarks. I will look at reparameterization of my problem. Thanks.