# How to compute exponents using JuMP?

I’m trying to compute the best parameters of a surface using JuMP. A very basic example is:

```julia
# variables
x = linspace(1,4,100);
y = linspace(1,4,100);
# surface
U = [10 - x[i].^2 + y[j].^-2 for i=1:100, j=1:100]
# test surface
p = [10 -1.0 2.0 1.0 -2.0] + rand(1,5)
Umin = [p[1] + p[2]*x[i].^p[3] + p[4]*y[j].^p[5] for i=1:100, j=1:100];
# fitness parameter
minimizeit = sum(sum(abs.(U.-Umin)))
```

My goal is to write code that computes all values of p, including the exponents. I’ve tried JuMP:

```julia
m = Model(solver=CbcSolver())
@defVar(m, Inf <= p1 <= Inf)
@defVar(m, Inf <= p2 <= Inf)
@defVar(m, 0 <= p3 <= 3)
@defVar(m, Inf <= p4 <= Inf)
@defVar(m, -3 <= p5 <= 0)

@setObjective(m, Min, begin
    Umin = [p1 + p2*x[i].^p3 + p4*y[j].^p5 for i=1:100, j=1:100];
    return minimizeit = sum(sum(abs.(U.-Umin)))
end)
```

I get the message `MethodError: no method matching ^(::Float64, ::JuMP.Variable)`.
How can I handle this issue?

Thank you

Could you quote your code in the future? It makes it easier for people to run your code, and increases your chances of getting a quick response.

You seem to be using a very old version of JuMP (`@defVar` hasn’t been a thing for a long time). Which version of JuMP are you on? Which version of Julia are you on? FYI, it’s not recommended to install Julia via `apt-get` (just a guess that you did), as Julia is changing so quickly; just download 0.6.2 from Download Julia.

JuMP doesn’t define `^(::Float64, ::JuMP.Variable)`. This is because the ‘default’ interface (using `@expression`, `@objective`, and `@constraint` in the latest version of JuMP) only supports affine and quadratic expressions, and using a `JuMP.Variable` as the exponent cannot result in such an expression. You may want to use `@NLexpression`/`@NLobjective`/`@NLconstraint` instead.
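To make the distinction concrete, here is a minimal sketch (assuming JuMP 0.18, as used later in this thread): a quadratic expression goes through the default interface, while a variable exponent needs the nonlinear macros.

```julia
using JuMP

m = Model()
@variable(m, x >= 1)
@variable(m, 0 <= p <= 3)

# affine and quadratic expressions are fine in the default interface
@expression(m, quad, (x - 2)^2)

# but a variable in the exponent is neither affine nor quadratic, so
# @expression(m, bad, 2.0^p) would throw the MethodError above;
# the nonlinear interface handles it
@NLexpression(m, ok, 2.0^p)
```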

1. I totally forgot about the markdown code fences when writing, sorry.

2. I’ve never used JuMP; I just copied and pasted some code that Google showed me and that worked, which is why I used `@defVar`. My Julia version is 0.6.0 and JuMP is 0.18.1.

3. The same error exists with the NL expressions:
```julia
# variables
x = linspace(1,4,100);
y = linspace(1,4,100);
# surface
U = [10 - x[i].^2 + y[j].^-2 for i=1:100, j=1:100]

m = Model(solver=IpoptSolver())
@variable(m, Inf <= p1 <= Inf)
@variable(m, Inf <= p2 <= Inf)
@variable(m, 0 <= p3 <= 3)
@variable(m, Inf <= p4 <= Inf)
@variable(m, -3 <= p5 <= 0)
@NLobjective(m, Min, begin
    Umin = [p1 + p2*x[i].^p3 + p4*y[j].^p5 for i=1:100, j=1:100];
    sum(sum(abs.(U.-Umin)));
end)
```

I’ve tried LsqFit.jl, and it only works for finding positive exponents. If someone knows how to make the proper modification to compute negative exponents, this topic can be closed.

```julia
using LsqFit

function model(xx, p)
    x = xx[1,:];
    y = xx[2,:];
    (p[1] + p[2]*x.^p[3] + p[4]*y.^p[5])[:]
end

x = linspace(1,4,100);
y = linspace(1,4,100);
X = [x y]';
Params = [10, 1.0, 2.0, 1.0, 2.0];
errors = 0.01*randn(100);
Y = model(X, Params) .+ errors;

fit = curve_fit(model, X, Y, rand(5));
println(fit.param)
```

It’s not completely clear to me what’s going wrong in the `NLobjective` macro, but it seems to be confused (you may want to open an issue about the confusing error message). The following works:

```julia
using JuMP
using Ipopt

# variables
n = 100
x = linspace(1,4,n);
y = linspace(1,4,n);
# surface
U = [10 - x[i]^2 + y[j]^-2 for i=1:n, j=1:n] # no need to use .^ here

m = Model(solver=IpoptSolver())
@variable(m, p1) # used to be Inf <= p1 <= Inf
@variable(m, p2) # used to be Inf <= p2 <= Inf
@variable(m, 0 <= p3 <= 3)
@variable(m, p4) # used to be Inf <= p4 <= Inf
@variable(m, -3 <= p5 <= 0)

# create an array of scalar expressions for Umin
@NLexpression(m, Umin[i=1:n, j=1:n], p1 + p2*x[i]^p3 + p4*y[j]^p5) # again, no need to use .^ here

# use Umin in @NLobjective
@NLobjective(m, Min, sum(abs(U[i, j] - Umin[i, j]) for i = 1:n, j = 1:n))
```

Note that the `sum` used in `@NLobjective` works the same as in the linear/quadratic interface, i.e. its syntax is different from regular `Base.sum`.
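For example (a sketch, again assuming JuMP 0.18): both macro families accept the generator form `sum(expr for i = 1:n)`, whereas a vectorized `Base.sum` call such as `sum(abs.(U .- Umin))` is not parsed inside `@NLobjective`.

```julia
using JuMP

n = 3
c = [1.0, 2.0, 3.0]

m1 = Model()
@variable(m1, x[1:n] >= 0)
# generator syntax in the linear/quadratic interface
@objective(m1, Min, sum(c[i] * x[i] for i = 1:n))

m2 = Model()
@variable(m2, y[1:n] >= 0)
# the same generator syntax in the nonlinear interface; a vectorized
# call such as sum(c .* y) is not supported here
@NLobjective(m2, Min, sum(c[i] * y[i]^2 for i = 1:n))
```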

I tried it for `n = 20`, and it seems to run into the iteration limit though.

Edit: adding more restrictive bounds on `p1`, `p2`, and `p4` (e.g. between -10 and 10) and changing `abs(U[i, j] - Umin[i, j])` to `(U[i, j] - Umin[i, j])^2` at least makes Ipopt converge.
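Concretely, the edit above amounts to something like the following sketch (the bounds of -10 and 10 are just the example values mentioned, not anything principled; squared residuals are smooth, unlike `abs`, which tends to help Ipopt):

```julia
using JuMP
using Ipopt

n = 20
x = linspace(1, 4, n)
y = linspace(1, 4, n)
U = [10 - x[i]^2 + y[j]^-2 for i = 1:n, j = 1:n]

m = Model(solver=IpoptSolver())
@variable(m, -10 <= p1 <= 10) # bounded instead of free
@variable(m, -10 <= p2 <= 10)
@variable(m, 0 <= p3 <= 3)
@variable(m, -10 <= p4 <= 10)
@variable(m, -3 <= p5 <= 0)

@NLexpression(m, Umin[i=1:n, j=1:n], p1 + p2*x[i]^p3 + p4*y[j]^p5)

# squared residual instead of abs
@NLobjective(m, Min, sum((U[i, j] - Umin[i, j])^2 for i = 1:n, j = 1:n))

solve(m)
println([getvalue(p1), getvalue(p2), getvalue(p3), getvalue(p4), getvalue(p5)])
```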

Thank you, it works

Just so I understand the syntax properly. In this line

`@NLexpression(m, Umin[i=1:n, j=1:n], p1 + p2*x[i]^p3 + p4*y[j]^p5)`

You are creating a matrix `Umin` of type `JuMP.Variable`, and then defining its elements with a vectorized expression. We don’t need to write `i` and `j` inside for loops because `@NLexpression` can handle this once we define them as iterators.

Am I right?


Yes, except that the element type of `Umin` is `JuMP.NonlinearExpression`.
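A quick way to check this, in the same JuMP 0.18 syntax (a minimal sketch with a hypothetical one-dimensional expression):

```julia
using JuMP

n = 3
x = linspace(1, 4, n)
m = Model()
@variable(m, p1)
@variable(m, 0 <= p3 <= 3)

@NLexpression(m, Umin[i=1:n], p1 + x[i]^p3)

# the elements of Umin are nonlinear expressions, not decision variables
println(isa(Umin[1], JuMP.NonlinearExpression))
println(isa(Umin[1], JuMP.Variable))
```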