Optimization with integrals in both the objective and the constraint

I’m trying to optimize a function that has an integral in the objective. While I can perform unconstrained optimization with Optim, I’m unsure how to set the problem up in JuMP. I want to do this so that I can later add a constraint, which is itself an integral.

``````
using JuMP, Ipopt, Optim, ForwardDiff, QuadGK, NLsolve, Roots, LinearAlgebra, Base.Threads, Distributions
``````
Set parameters
``````
begin
r = 0.2
α = 0.85
s = 0.5
β = 0.9
A = 1.0
w = 10.0
γ = 0.0
ll = 0.0 		# lower limit for optimization
ul = 1500.0 	# upper limit for optimization
Dθ = Uniform(0.0, 5.0)
end
``````
Define second period functions
``````
begin
θ̃(b, α, w, A, r, s) = min(max(R̄(b,r)/f(b,α), 0.0), 5.0)
f(b,α)          = b^α
R̄(b,r)          = (1+r)*b
Y₂(b,θ,α)       = θ*f(b,α)
#C₂(R₂,b,θ,α,w) = Y₂(b,θ,α) - R₂ + w
end
``````

Objective

``````
begin
Objhᵦ(b, θ, α, w, A, r, s, β) = β * (A*log(w) - s*(R̄(b,r)-Y₂(b,θ,α)))
Objhᵪ(b, θ, α, w, A, r, s, β) = β * (A*log(Y₂(b,θ,α) - R̄(b,r) + w))
end
``````
Integration over Dθ
``````
begin
function FPopthᵦ(D, b, α, w, A, r, s, β)
θ̲ = minimum(D)
θ̄ = θ̃(b, α, w, A, r, s)
return quadgk(y -> Objhᵦ(b, y, α, w, A, r, s, β)*pdf(D, y), θ̲, θ̄)[1]
end

function FPopthᵪ(D, b, α, w, A, r, s, β)
θ̲ = θ̃(b, α, w, A, r, s)
θ̄ = maximum(D)
return quadgk(y -> Objhᵪ(b, y, α, w, A, r, s, β)*pdf(D, y), θ̲, θ̄)[1]
end

### objective function
FPθh(Dθ, b, α, w, A, r, s, β) = FPopthᵦ(Dθ, b, α, w, A, r, s, β) + FPopthᵪ(Dθ, b, α, w, A, r, s, β)
end
``````
Outcome
``````
optimize(b -> -FPθh(Dθ, b, α, w, A, r, s, β), ll, ul).minimizer
``````

How do I use JuMP for the problem stated above?

``````
function Opt()
model = Model(Ipopt.Optimizer)
@variable(model, 1 <= b <= 2, start = 1)
register(model, :FP, 7, FPθh; autodiff = true)
@NLobjective(model, Max, FPθ(Dθ, b, α, w, A, r, s, β))
optimize!(model)
return (b = value(b), value = objective_value(model))
end
``````
This doesn’t work:
``````
Opt()
``````

Finally, how to add the constraint?

``````
begin
function πbh(D, b, α, w, A, r, s)
θ̲ = minimum(D)
θ̄ = θ̃(b, α, w, A, r, s)
return quadgk(y -> Y₂(b,y,α)*pdf(D, y), θ̲, θ̄)[1]
end

function πch(D, b, α, w, A, r, s)
θ̲ = θ̃(b, α, w, A, r, s)
θ̄ = maximum(D)
return quadgk(y -> R̄(b,r)*pdf(D, y), θ̲, θ̄)[1]
end

πθh(Dθ, b, α, w, A, r, s, γ) = πbh(Dθ, b, α, w, A, r, s)+πch(Dθ, b, α, w, A, r, s) - (1+γ)*b
end
``````
Constraint: πθh(Dθ, b, α, w, A, r, s, γ) ≥ -1

Can you share what the error says?

You are using the old version of the nonlinear API; in recent versions of JuMP you don’t need the `@NL` prefix in the macros anymore. Check out the JuMP documentation on nonlinear modeling.


If I fix up your syntax slightly, and make a single reproducible example, I get:

``````
julia> using JuMP, Ipopt, QuadGK, Distributions

julia> r = 0.2
0.2

julia> α = 0.85
0.85

julia> s = 0.5
0.5

julia> β = 0.9
0.9

julia> A = 1.0
1.0

julia> w = 10.0
10.0

julia> γ = 0.0
0.0

julia> ll = 0.0
0.0

julia> ul = 1500.0
1500.0

julia> Dθ = Uniform(0.0, 5.0)
Uniform{Float64}(a=0.0, b=5.0)

julia> θ̃(b, α, w, A, r, s) = min(max(R̄(b,r)/f(b,α),0.0),5.0)
θ̃ (generic function with 1 method)

julia> f(b, α) = b^α
f (generic function with 1 method)

julia> R̄(b, r) = (1+r)*b
R̄ (generic function with 1 method)

julia> Y₂(b,θ,α) = θ*f(b,α)
Y₂ (generic function with 1 method)

julia> Objhᵦ(b, θ, α, w, A, r, s, β) = β * (A*log(w) - s*(R̄(b,r)-Y₂(b,θ,α)))
Objhᵦ (generic function with 1 method)

julia> Objhᵪ(b, θ, α, w, A, r, s, β) = β * (A*log(Y₂(b,θ,α) - R̄(b,r) + w))
Objhᵪ (generic function with 1 method)

julia> function FPopthᵦ(D, b, α, w, A, r, s, β)
θ̲ = minimum(D)
θ̄ = θ̃(b, α, w, A, r, s)
return quadgk(y -> Objhᵦ(b, y, α, w, A, r, s, β)*pdf(D, y), θ̲, θ̄)[1]
end
FPopthᵦ (generic function with 1 method)

julia> function FPopthᵪ(D, b, α, w, A, r, s, β)
θ̲ = θ̃(b, α, w, A, r, s)
θ̄ = maximum(D)
return quadgk(y -> Objhᵪ(b, y, α, w, A, r, s, β)*pdf(D, y), θ̲, θ̄)[1]
end
FPopthᵪ (generic function with 1 method)

julia> FPθh(Dθ, b, α, w, A, r, s, β) = FPopthᵦ(Dθ, b, α, w, A, r, s, β) + FPopthᵪ(Dθ, b, α, w, A, r, s, β)
FPθh (generic function with 1 method)

julia> function Opt()
model = Model(Ipopt.Optimizer)
@variable(model, 1 <= b <= 2, start = 1)
FP(b) = FPθh(Dθ, b, α, w, A, r, s, β)
@operator(model, op_FP, 1, FP)
@objective(model, Max, op_FP(b))
optimize!(model)
@assert is_solved_and_feasible(model)
return (b = value(b), value = objective_value(model))
end
Opt (generic function with 1 method)

julia> Opt()
ERROR: Unable to register the function :op_FP.
... many more lines ...
``````

The issue is that JuMP does not support automatically differentiating your `FPθh` function because of the `quadgk`.

I’m not sure why Optim works; it might be doing something different:

``````
using Optim
optimize(b -> -FPθh(Dθ, b, α, w, A, r, s, β), ll, ul).minimizer
``````

I guess Optim is doing finite differencing, in which case, you could do:

``````
function Opt()
model = Model(Ipopt.Optimizer)
@variable(model, 1 <= b <= 2, start = 1)
FP(b) = FPθh(Dθ, b, α, w, A, r, s, β)
finite_diff(f, h = 1e-8) = x -> (f(x + h) - f(x)) / h
∇FP = finite_diff(FP)
∇²FP = finite_diff(∇FP)
@operator(model, op_FP, 1, FP, ∇FP, ∇²FP)
@objective(model, Max, op_FP(b))
optimize!(model)
@assert is_solved_and_feasible(model)
return (b = value(b), value = objective_value(model))
end

Opt()
``````
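As a possible refinement (my suggestion, not from the thread above): the one-sided formula with a very small `h` can be noisy, particularly when it is nested to approximate the second derivative. A central-difference variant is usually more accurate for the same step size; a minimal sketch:

```julia
# Central differences: O(h^2) accurate, versus O(h) for the forward formula.
# Nesting it twice gives a usable second-derivative approximation, provided
# h is not taken too small (rounding error grows like eps/h^2).
central_diff(f, h = 1e-6) = x -> (f(x + h) - f(x - h)) / (2h)

# Sanity check on a function with known derivatives:
g   = sin
∇g  = central_diff(g)    # ≈ cos
∇²g = central_diff(∇g)   # ≈ -sin
```

You could then pass `central_diff(FP)` and `central_diff(central_diff(FP))` to `@operator` exactly as in the code above.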

You can also use ForwardDiff.jl, which QuadGK.jl natively supports.
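To illustrate (a toy sketch of my own, not code from the thread): because QuadGK accepts ForwardDiff's dual numbers in both the integrand and the integration limits, differentiating through `quadgk` works directly. Here is a minimal example with a known closed form, `F(b) = ∫₀ᵇ 2x dx = b²`, so `F′(b) = 2b`:

```julia
using QuadGK, ForwardDiff

# F(b) = ∫₀ᵇ 2x dx = b²; quadgk propagates Dual numbers, so
# ForwardDiff can differentiate with respect to the upper limit.
F(b)  = quadgk(x -> 2x, 0.0, b)[1]
∇F(b) = ForwardDiff.derivative(F, b)
```

For the problem above, you could define `∇FP(b) = ForwardDiff.derivative(FP, b)` (and likewise for the second derivative) and pass those to `@operator`, avoiding the finite-difference step-size issue.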

The code using finite difference approximation is working! Thank you so much!

• I have encountered another issue: when I set a different boundary for the choice variable, for example, `1 <= b <= 4`, it shows an AssertionError. The same error occurs when the upper limit is set to 5, 7, 8, or 9, but it works when set to 6, 10, etc. Do you have any idea why this could be happening?

• Finally, is it possible to add a constraint that involves an integral? A constraint that’s not an integral works, but with

`@constraint(model, πθh(Dθ, b, α, w, A, r, s, γ) >= -1)`

it shows an AssertionError.

What is the error?

is it possible to add a constraint that involves an integral

JuMP does not have anything built-in for integrals. You’ll need to implement the necessary constraint.

It throws this:

``````
AssertionError: is_solved_and_feasible(model)

Opt()@Other: 12
top-level scope@Local: 1[inlined]
``````

The log says that the problem is solved to an acceptable level.

``````
Number of Iterations....: 43

(scaled)                 (unscaled)
Objective...............:  -2.1237353163035482e+00    2.1237353163035482e+00
Dual infeasibility......:   4.4408920985006262e-07    4.4408920985006262e-07
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   2.6505208684905717e-08    2.6505208684905717e-08
Overall NLP error.......:   4.4408920985006262e-07    4.4408920985006262e-07

Number of objective function evaluations             = 174
Number of objective gradient evaluations             = 44
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 43
Total seconds in IPOPT                               = 0.136

EXIT: Solved To Acceptable Level.
``````

But it doesn’t show the maximizer. When I plot it as `b(s,r)`, it looks like this:

Could you please give a bit of direction on how to do this?

See the documentation at: Solutions · JuMP.

`solution_summary(model)` will provide more information.

You may need:

``````
@assert is_solved_and_feasible(model; allow_almost = true)
``````

This says that the problem is not solved and feasible, but that it was nearly. There are likely some minor issues with tolerances, etc. As one example, the overall NLP error is 4e-7, but the default tolerance is 1e-8. So things are “nearly” solved.
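Alternatively (my suggestion, under the assumption that the 4e-7 NLP error is acceptable for your application), you could relax Ipopt's convergence tolerance so that such a point terminates with a clean "Optimal" status rather than "Solved To Acceptable Level":

```julia
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
# "tol" is Ipopt's overall NLP-error tolerance (default 1e-8); with 1e-6,
# a point with NLP error ≈ 4e-7 counts as fully converged.
set_attribute(model, "tol", 1e-6)
```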

is it possible to add a constraint that involves an integral
Could you please give a bit of direction on how to do this?

Presumably, you want to register another operator for your `πθh` function, and then add `@constraint(model, op_πθh(b) >= -1)`.


This seems to be working for now:

``````
function Opt(α, w, A, r, s, β)
model = Model(Ipopt.Optimizer)
@variable(model, 1 <= b <= 10, start = 1)
FP(b) = FPθh(Dθ, b, α, w, A, r, s, β)
finite_diff(f, h = 1e-6) = x -> (f(x + h) - f(x)) / h
∇FP = finite_diff(FP)
∇²FP = finite_diff(∇FP)
@operator(model, op_FP, 1, FP, ∇FP, ∇²FP)

πh(b) = πθh(Dθ, b, α, w, A, r, s, γ)
∇πh = finite_diff(πh)
∇²πh = finite_diff(∇πh)
@operator(model, op_πh, 1, πh, ∇πh, ∇²πh)
@constraint(model, op_πh(b) >= 0.0)

@objective(model, Max, op_FP(b))
optimize!(model)
@assert is_solved_and_feasible(model; allow_almost = true)
return (b = value(b), value = objective_value(model), sum = solution_summary(model))
end
``````
