I am trying to solve a constrained optimization problem using JuMP, but there is apparently a conflict between how I define the variables and one of the packages I call in the objective function. I just started with JuMP so it’s probably a very minor thing that I do not understand. The following is a MWE to illustrate the issue (the example itself makes no sense, but it is the same error I get in my code):
using QuantEcon # loads MarkovChain and stationary_distributions
using JuMP
using GLPK

# Objective: the first entry of the stationary distribution of the chain defined by p_m.
function f_matrix(p_m)
    MC = MarkovChain(p_m)
    index_1 = stationary_distributions(MC)[1]
    return index_1[1]
end

model = Model(GLPK.Optimizer)
@variable(model, 0 <= p_matrix[1:3, 1:3] <= 1)
@objective(model, Min, f_matrix(p_matrix))
Running @objective(model, Min, f_matrix(p_matrix)) gives the following error:
ERROR: MethodError: no method matching isless(::VariableRef, ::VariableRef)
JuMP is a package for structured optimization, with corresponding strengths and weaknesses. It isn't well suited to black-box nonlinear optimization problems like the one you have here.
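The error itself happens because p_matrix is an array of symbolic VariableRefs rather than numbers: somewhere inside MarkovChain / stationary_distributions two entries of the matrix get compared with <, which Julia lowers to isless, and that method is not defined for VariableRef. A minimal sketch of the same failure (my reconstruction, not the exact call site inside QuantEcon):

using JuMP

model = Model()
@variable(model, x[1:2])

# Numeric comparison of two symbolic decision variables is not defined, so this line
# throws: MethodError: no method matching isless(::VariableRef, ::VariableRef)
x[1] < x[2]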
Note that JuMP does support user-defined nonlinear functions (see Nonlinear Modeling · JuMP in the documentation). That would look something like the following (untested, so there may be typos; it also needs stationary_distributions to be differentiable with ForwardDiff, and I'm not sure that it is):
using QuantEcon
using JuMP
import Ipopt

# The registered function receives 9 scalar arguments, reshaped back into a 3x3 matrix.
function f_matrix(x...)
    p_m = reshape(collect(x), (3, 3))
    MC = MarkovChain(p_m)
    index_1 = stationary_distributions(MC)[1]
    return index_1[1]
end

model = Model(Ipopt.Optimizer)
@variable(model, 0 <= p_matrix[1:3, 1:3] <= 1)
register(model, :f_matrix, 9, f_matrix; autodiff = true)
@NLobjective(model, Min, f_matrix(p_matrix...))
(Note that you need Ipopt, a nonlinear optimizer, instead of GLPK, which is a linear optimizer.)
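If you do go down that route, solving the model and reading the result back would look roughly like this (untested; the row-sum constraints and start values are my additions, on the assumption that MarkovChain requires a row-stochastic matrix and that the starting point must therefore be a valid transition matrix):

# Each row of a transition matrix must sum to 1 (assumption: MarkovChain checks this).
@constraint(model, [i = 1:3], sum(p_matrix[i, :]) == 1)

# Start from a valid transition matrix so the first evaluation of f_matrix does not error.
set_start_value.(p_matrix, 1 / 3)

optimize!(model)
value.(p_matrix)        # the optimized transition matrix
objective_value(model)  # the minimized first entry of the stationary distribution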