I am using simple interpolation within an optimization problem with Interpolations.jl.
I have a 3×3 matrix of grid values, Val_grid.
I implemented this simple code:

i_values = [1.0, 1.2, 1.3]
j_values = [1.0, 1.2, 1.3]

interp = interpolate((i_values, j_values), Val_grid, Gridded(Linear()))

Forward_model = Model(Gurobi.Optimizer)
@variable(Forward_model, x, lower_bound = 1, upper_bound = 1.3)
@variable(Forward_model, y, lower_bound = 1, upper_bound = 1.3)
@objective(Forward_model, Min,  1.5*x + 1.3*y + interp(x, y))

However, I then got this error:

MethodError: no method matching (::Interpolations.GriddedInterpolation{Float64, 2, Matrix{Float64}, Gridded{Linear{Throw{OnGrid}}}, Tuple{Vector{Float64}, Vector{Float64}}})(::VariableRef, ::VariableRef)
Use square brackets for indexing an Array.

But I cannot use my variables as an index. What should I do?


JuMP.jl uses a special DSL (domain-specific language), built with Julia macros, to define optimization variables — x and y in your case. These variables are called VariableRef in your error message.

The error message is saying that no method of interp was found that accepts two arguments of type VariableRef.
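To illustrate the distinction: the interpolation object can be evaluated at plain numbers, but not at JuMP's symbolic variables. A minimal sketch, using made-up grid values in place of your Val_grid:

```julia
using Interpolations

i_values = [1.0, 1.2, 1.3]
j_values = [1.0, 1.2, 1.3]
Val_grid = rand(3, 3)

interp = interpolate((i_values, j_values), Val_grid, Gridded(Linear()))

interp(1.1, 1.25)  # fine: numeric arguments
# interp(x, y)     # MethodError when x and y are JuMP VariableRefs
```

Gridded linear interpolation reproduces the grid values exactly at the nodes, e.g. `interp(1.0, 1.0) == Val_grid[1, 1]`.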

Perhaps you want a package for black-box optimization instead? Check Optim.jl or the various other packages that do not assume that your objective function is a well-defined “linear” expression of the optimization variables.

Separate from how you would do this with JuMP (at some point, the JuMP DSL becomes more of a hindrance than a help IMO, compared to writing ordinary Julia functions and passing them to an optimizer package with ordinary Julia syntax), I think you are shooting yourself in the foot here by using a gridded linear interpolation for your objective function.

The basic issue is that piecewise linear interpolation is not smooth/differentiable, and the best (fastest converging) optimization algorithms rely on functions being differentiable (usually twice differentiable, with bounded second derivatives). So I would really think about using an interpolation scheme that is smoother (e.g. Chebyshev interpolation if you can choose the sample points, splines, radial basis functions, etc.) if you can possibly arrange it. This depends on the origin of your interpolated data (e.g. whether you can choose the sample points, whether the underlying data is smooth, etc.).
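For concreteness, here is one way that could look if you can choose the sample points: a scaled cubic B-spline (which is C² smooth) combined with a box-constrained quasi-Newton solve via Optim.jl. This is a sketch with a made-up data function standing in for your real grid values; the sample counts and starting point are arbitrary:

```julia
using Interpolations, Optim

# Sample points we are free to choose; cubic B-splines need a regular range.
xs = range(1.0, 1.3, length = 7)
ys = range(1.0, 1.3, length = 7)
V = [sin(3x) * cos(2y) for x in xs, y in ys]  # stand-in for the real data

# Cubic B-spline interpolation is twice differentiable, so
# gradient-based optimizers can exploit its smoothness.
itp = scale(interpolate(V, BSpline(Cubic(Line(OnGrid())))), xs, ys)

# The objective from the original post, with the smooth interpolant.
f(p) = 1.5p[1] + 1.3p[2] + itp(p[1], p[2])

res = optimize(f, [1.0, 1.0], [1.3, 1.3], [1.15, 1.15], Fminbox(LBFGS()))
```

Fminbox keeps the iterates inside the box [1, 1.3] × [1, 1.3], matching the variable bounds in the JuMP model.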

I see.
I checked other packages, but they were not helpful. In the end, I linked Julia with Python, did this part in Python, and sent the result back to Julia.

I appreciate your reply.
Actually, I have to use this gridded interpolation because I am combining two algorithms in a stochastic optimization.
I used SciPy to try this interpolation in Python, and it works.
If my understanding is correct, the Gurobi solver cannot accept this nonlinear objective function because of the JuMP DSL. Am I correct?

Perhaps you should post the Python code here. I’m positive that people will quickly help with the translation.

Have you considered using ScatteredInterpolation.jl?

You can also use PiecewiseLinearOpt.jl, which solves optimization problems containing piecewise linear functions:

using JuMP
import Gurobi
import Interpolations
import PiecewiseLinearOpt
I = [1.0, 1.2, 1.3]
J = [1.0, 1.2, 1.3]
V = rand(3, 3)
interp = Interpolations.interpolate(
    (I, J),
    V,
    Interpolations.Gridded(Interpolations.Linear()),
)
model = Model(Gurobi.Optimizer)
@variable(model, 1 <= x <= 1.3)
@variable(model, 1 <= y <= 1.3)
z = PiecewiseLinearOpt.piecewiselinear(model, x, y, I, J, (u, v) -> interp(u, v))
@objective(model, Min,  1.5x + 1.3y + z)