JuMP and user-defined objective function: JuMPArray dimension problem

Good afternoon,

I had a look at previous topics on this subject but could not find a solution to my error.

I have an optimization problem with a nonlinear user-defined objective function called bk_obj. This function takes as arguments the two unknowns of my problem and a parameter called D (which will vary in the full problem). The objective is defined first, and then the problem, which depends on D, is solved in another function. The issue is probably linked to the fact that my objective function mixes unknowns and a parameter among its arguments, but I am not able to write it correctly…

I get an error about the array dimension that I am not able to solve. I have two unknowns, so I thought that dimension 2 was correct when registering my user-defined function.

Many thanks to anyone who can help, and sorry if my mistake is obvious.


using JuMP
using Ipopt
using QuadGK
using Interpolations
using LineSearches

  
u(c) = (c).^3.0

function bk_obj(Sb::AbstractFloat,
                cbar::AbstractFloat,
                D::AbstractFloat)

    c1_ND(θ) = D * cbar * θ + Sb * 1.2
    val1(θ) = u(c1_ND(θ))

    return quadgk(val1, 0.1, 0.9)[1]

end

function model_creator(D::AbstractFloat)

    m = Model(solver = IpoptSolver())

    @variable(m, x[1:2])

    JuMP.register(m, :bk_obj, 2, bk_obj, autodiff=true)

    @NLobjective(m, Max, bk_obj(x[1], x[2], D))
    @NLconstraint(m, x[1] + x[2] >= 0.0)

    JuMP.solve(m)

    return getvalue(x)

end

model_creator(1.2)

And the error is:

BoundsError: attempt to access 2-element Array{Float64,1} at index [1:3]
Stacktrace:
 [1] throw_boundserror(::Array{Float64,1}, ::Tuple{UnitRange{Int32}}) at .\abstractarray.jl:484
 [2] checkbounds at .\abstractarray.jl:449 [inlined]
 [3] view at .\subarray.jl:134 [inlined]
 [4] #forward_eval#7(::ReverseDiffSparse.UserOperatorRegistry, ::Function, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{ReverseDiffSparse.NodeData,1}, ::SparseArrays.SparseMatrixCSC{Bool,Int32}, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}) at C:\Users\arquie\.julia\packages\ReverseDiffSparse\ZBur9\src\forward.jl:127
 [5] #forward_eval at .\none:0 [inlined]
 [6] forward_eval_all(::JuMP.NLPEvaluator, ::Array{Float64,1}) at C:\Users\arquie\.julia\packages\JuMP\Xvn0n\src\nlp.jl:445
 [7] eval_grad_f(::JuMP.NLPEvaluator, ::Array{Float64,1}, ::Array{Float64,1}) at C:\Users\arquie\.julia\packages\JuMP\Xvn0n\src\nlp.jl:496
 [8] initialize(::JuMP.NLPEvaluator, ::Array{Symbol,1}) at C:\Users\arquie\.julia\packages\JuMP\Xvn0n\src\nlp.jl:403
 [9] loadproblem!(::Ipopt.IpoptMathProgModel, ::Int32, ::Int32, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}, ::Symbol, ::JuMP.NLPEvaluator) at C:\Users\arquie\.julia\packages\Ipopt\45sRy\src\MPBWrapper.jl:55
 [10] _buildInternalModel_nlp(::Model, ::JuMP.ProblemTraits) at C:\Users\arquie\.julia\packages\JuMP\Xvn0n\src\nlp.jl:1244
 [11] #build#123(::Bool, ::Bool, ::JuMP.ProblemTraits, ::Function, ::Model) at C:\Users\arquie\.julia\packages\JuMP\Xvn0n\src\solvers.jl:304
 [12] #build at .\none:0 [inlined]
 [13] #solve#120(::Bool, ::Bool, ::Bool, ::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::Function, ::Model) at C:\Users\arquie\.julia\packages\JuMP\Xvn0n\src\solvers.jl:168
 [14] solve at C:\Users\arquie\.julia\packages\JuMP\Xvn0n\src\solvers.jl:150 [inlined]
 [15] macro expansion at C:\Users\arquie\.julia\packages\JuMP\Xvn0n\src\macros.jl:314 [inlined]
 [16] model_creator(::Float64) at .\In[3]:26
 [17] top-level scope at In[3]:38

You declared that bk_obj had two arguments, but you passed it three. You can capture D in a closure as follows (nb. I haven’t actually tested this…):

function model_creator(D::AbstractFloat) 
    m = Model(solver = IpoptSolver())
    @variable(m, x[1:2])
    # The new part ...
    function bk_obj_inner(x1, x2)
        # Now `bk_obj_inner` only takes 2 arguments that are variables.
        # We can use `D` in here because it is local to the outer function.
        return bk_obj(x1, x2, D)
    end
    JuMP.register(m, :bk_obj_inner, 2, bk_obj_inner, autodiff=true)
    @NLobjective(m, Max, bk_obj_inner(x[1], x[2])  )
    @NLconstraint(m, x[1] + x[2] >= 0.0 )
    JuMP.solve(m)
    return getvalue(x)
end

Thanks a lot! I still get a method error, maybe because I give bk_obj both a parameter with a given value (D) and variables (x1 and x2). I tried changing the argument types in bk_obj to Variable, but that did not work either.

I had tried bk(x) = bk_obj(x[1], x[2], D) before and also got a method error. I am not sure how to deal with this problem: I want to take a function of three variables and turn it into a function of two variables once one of the variables (the parameter D) has been given a value.

Thanks again!

MethodError: no method matching bk_obj(::ForwardDiff.Dual{ForwardDiff.Tag{getfield(JuMP, Symbol("##156#158")){getfield(Main, Symbol("#bk_obj_inner#5")){Float64}},Float64},Float64,2}, ::ForwardDiff.Dual{ForwardDiff.Tag{getfield(JuMP, Symbol("##156#158")){getfield(Main, Symbol("#bk_obj_inner#5")){Float64}},Float64},Float64,2}, ::Float64)
Closest candidates are:
  bk_obj(!Matched::Variable, !Matched::Variable, ::AbstractFloat) at In[2]:14
  bk_obj(!Matched::AbstractFloat, !Matched::AbstractFloat, ::AbstractFloat) at In[3]:14

Remove the type restrictions on the first two arguments to bk_obj:

bk_obj(Sb, cbar, D::AbstractFloat)

When JuMP computes derivatives, it passes in objects that aren’t AbstractFloats.
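
For example (untested, but the body is unchanged from your first post), the definition would become:

function bk_obj(Sb, cbar, D::AbstractFloat)
    # Sb and cbar are left untyped so the ForwardDiff.Dual numbers that JuMP
    # uses for automatic differentiation can pass through them; only the fixed
    # parameter D keeps a concrete type annotation.
    c1_ND(θ) = D * cbar * θ + Sb * 1.2
    val1(θ) = u(c1_ND(θ))
    return quadgk(val1, 0.1, 0.9)[1]
end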

For your other question, bk(x) won’t work because JuMP doesn’t support vector inputs to user-defined functions. You can read more about user-defined functions in JuMP here: Nonlinear Modeling — JuMP -- Julia for Mathematical Optimization 0.18 documentation
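
In other words, whatever you register must take each input as a separate scalar argument, which is exactly what the bk_obj_inner closure above already does (same caveat, untested):

bk_obj_inner(x1, x2) = bk_obj(x1, x2, D)  # two scalar arguments: this is what JuMP calls
# bk(x) = bk_obj(x[1], x[2], D)           # a single vector argument: not supported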


Thanks a lot, it is indeed working. (I thought a type needed to be assigned.)

In general, you don’t need type annotations unless you are defining multiple methods of the same function:

f(x) = x
f(x::Int) = 3x
f(x::Float64) = -x

f("hello") # "hello"
f(1) # 3
f(1.0) # -1.0

Thanks a lot for the clarification!

I cannot seem to get it working yet on a larger version of my model… I now have two problems:

  1. I now get a StackOverflowError:, with nothing more stated by Julia than that. I looked for existing posts on this topic and found this one, StackOverflowError - #2 by rdeits, but it does not seem to apply in my case unless I am missing something (I was able to solve the whole model with Optim before, for some values of the parameter). Why could this error arise?

  2. I was getting some NaN values, as my function bk_obj is not defined for some values of my two unknowns, which the algorithm can stumble upon while searching for a solution. I tried to circumvent this in a rather crude manner by including an if condition in the function itself that returns 0 when a NaN is found (as I am looking for a max, I thought it could be OK…). I have never solved an optimisation problem numerically, even in Matlab or any other language, so I am really unsure whether this is the way to go. I would be very grateful for any guidance on this more general question too, and I apologize for my ignorance…

Many thanks.

using QuadGK
using JuMP
using Clp
using Ipopt

module Para 
    Re=1.3
    Rl=1.1
    E=2.0
    points=2
    lowerB=0.01
    aversion=3
    Gamma=3.0
end

import ..Para

u(c) = (c).^Para.Gamma

function bk_obj(Sb, cbar, D::AbstractFloat)

    Lf = Para.Rl / (Para.Re - Para.Rl) * D * cbar
    Sf = Para.E - D - Lf
    P_star = Lf / Sb

    θhigh = (Para.Re * (1 - Sb) + Para.Re * (1 - Sb) * (Para.Re / Para.Rl) - cbar * P_star * D) / (cbar * D * (Para.Re - P_star))

    if isinf(θhigh) || isnan(θhigh)
        return 0.0
    else
        c1_ND = D * cbar
        c1_D = (1 - Sb) + Lf
        c2ND(θ) = (((1 - Sb) - θ * cbar * D) * Para.Rl + Sb * Para.Re + Lf * Para.Rl + Sf * Para.Re) / (1 - θ)
        c2D(θ) = (1 - Sb) + Lf + ((Sb + Sf) * Para.Re) / (1 - θ)
        val1(θ) = u(θ * c1_ND + (1 - θ) * c2ND(θ))
        val2(θ) = u(θ * c1_D + (1 - θ) * c2D(θ))

        obj = quadgk(val1, 0.1, θhigh)[1] + quadgk(val2, θhigh, 0.999)[1]

        if isinf(obj) || isnan(obj)
            return 0.0
        else
            return obj
        end # end of if condition ensuring that the integral is defined
    end # end of if condition ensuring that the threshold is defined
end

function model_creator(D::AbstractFloat)

    m = Model(solver = IpoptSolver())
    @variable(m, x[1:2])

    function bk_obj_inner(x1, x2)
        return bk_obj(x1, x2, D)
    end

    C2ND_min(x1, x2) = ((1 - x1) - x1 * D) * Para.Rl + x1 * Para.Re +
                       (Para.Rl / (Para.Re - Para.Rl) * D * x2) * Para.Rl +
                       (Para.E - D - Para.Rl / (Para.Re - Para.Rl) * D * x2) * Para.Re

    JuMP.register(m, :bk_obj_inner, 2, bk_obj_inner, autodiff=true)
    JuMP.register(m, :C2ND_min, 2, C2ND_min, autodiff=true)
    @NLobjective(m, Max, bk_obj_inner(x[1], x[2]))
    @NLconstraint(m, 0.0 <= x[1] <= Para.E)
    @NLconstraint(m, 0.001 <= x[2] <= 1.0)
    @NLconstraint(m, 0 <= C2ND_min(x[1], x[2]))
    JuMP.solve(m)
    return getvalue(x)

end

model_creator(1.2)

And the error:


******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************

This is Ipopt version 3.12.8, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).

Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        4
Number of nonzeros in Lagrangian Hessian.............:        0

Total number of variables............................:        2
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        3
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        2
        inequality constraints with only upper bounds:        1

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0 -0.0000000e+000 1.00e-003 6.00e-001   0.0 0.00e+000    -  0.00e+000 0.00e+000   0

StackOverflowError:

Edit: I changed JuMP.register(m, :C2ND_min, 2, C2ND_min, autodiff=true) to JuMP.register(m, :C2ND_min, 2, C2ND_min, autodiff = :forward) and the StackOverflowError: disappeared. Would anyone know why this happened? I am happy the error is gone, but it would be useful for future code to understand why.

However, the if conditional evaluation is now creating some problems, giving an error saying (from what I understand after reading the documentation) that the condition is not of type Bool but of type Symbol, which I fail to understand, as I tested on a simpler function that if isinf(θhigh) || isnan(θhigh) returns either true or false?

TypeError: non-boolean (Symbol) used in boolean context

Stacktrace:
 [1] (::getfield(JuMP, Symbol("#kw##register")))(::NamedTuple{(:autodiff,),Tuple{Symbol}}, ::typeof(JuMP.register), ::Model, ::Symbol, ::Int32, ::Function) at .\none:0
 [2] macro expansion at C:\Users\arquie\.julia\packages\JuMP\Xvn0n\src\macros.jl:339 [inlined]
 [3] model_creator(::Float64) at .\In[15]:68
 [4] top-level scope at In[15]:95