Error in NL Optimization involving integral of Incomplete Beta Function

Hello everyone,
I’m trying to build an optimization problem using Julia, but I keep getting an error.
My objective function is:

$$\max_{\lambda}\ \int_0^1 \frac{B_x(n - \lambda n,\ \lambda n + 1)}{B_x(n - \lambda n - 1,\ \lambda n + 1)}\,dx$$

where $B_x$ is the incomplete beta function and λ is the variable I am optimizing over.

Here is my code and the error I got:

```julia
using JuMP, Ipopt, SpecialFunctions, QuadGK

n = 8
g(x) = quadgk(x -> (beta_inc(n - n*λ, n*λ + 1, x)[1] * beta(n - n*λ, n*λ + 1)) / (beta_inc(n - n*λ - 1, n*λ + 1, x)[1] * beta(n - n*λ - 1, n*λ + 1)), 0, 1, rtol=1e-8)[1]
f(λ) = 0.15 * λ * ((1-λ)^4 - λ^4) * g

m = Model(Ipopt.Optimizer)
register(m, :f, 1, f, autodiff = true)
@variable(m, 0 <= λ)

@NLobjective(m, Max, f(λ))
@NLconstraint(m, λ <= 1)
JuMP.optimize!(m)

println("** Optimal objective function value = ", JuMP.objective_value(m))
println("** Optimal solution = ", JuMP.value.(λ))
```
ERROR: Unable to register the function :f.

Common reasons for this include:

 * The function takes `f(x::Vector)` as input, instead of the splatted
   `f(x...)`.
 * The function assumes `Float64` will be passed as input, it must work for any
   generic `Real` type.
 * The function allocates temporary storage using `zeros(3)` or similar. This
   defaults to `Float64`, so use `zeros(T, 3)` instead.

Stacktrace:
 [1] error(s::String)
   @ Base .\error.jl:33
 [2] _validate_register_assumptions(f::typeof(f), name::Symbol, dimension::Int64)
   @ MathOptInterface.Nonlinear C:\Users\thvoe\.julia\packages\MathOptInterface\pgWRA\src\Nonlinear\operators.jl:327
 [3] MathOptInterface.Nonlinear._UnivariateOperator(op::Symbol, f::Function)
   @ MathOptInterface.Nonlinear C:\Users\thvoe\.julia\packages\MathOptInterface\pgWRA\src\Nonlinear\operators.jl:340
 [4] register_operator(registry::MathOptInterface.Nonlinear.OperatorRegistry, op::Symbol, nargs::Int64, f::Function)
   @ MathOptInterface.Nonlinear C:\Users\thvoe\.julia\packages\MathOptInterface\pgWRA\src\Nonlinear\operators.jl:399
 [5] register_operator
   @ C:\Users\thvoe\.julia\packages\MathOptInterface\pgWRA\src\Nonlinear\model.jl:217 [inlined]
 [6] register(model::Model, op::Symbol, dimension::Int64, f::Function; autodiff::Bool)
   @ JuMP C:\Users\thvoe\.julia\packages\JuMP\ToPd2\src\nlp.jl:720
 [7] top-level scope
   @ g:\My Drive\Study Abroad\Career\myPP\Optimization Model\Models\beta bid function.jl:8

caused by: MethodError: no method matching *(::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 1}, ::typeof(g))
Closest candidates are:
  *(::Any, ::Any, ::Any, ::Any...) at operators.jl:560
  *(::SpecialFunctions.SimplePoly, ::Any) at C:\Users\thvoe\.julia\packages\SpecialFunctions\QH8rV\src\expint.jl:8
  *(::ChainRulesCore.AbstractThunk, ::Any) at C:\Users\thvoe\.julia\packages\ChainRulesCore\0t04l\src\tangent_arithmetic.jl:125        

Stacktrace:
  [1] afoldl(op::typeof(*), a::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 1}, bs::Function)
    @ Base .\operators.jl:533
  [2] *(a::Float64, b::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 1}, c::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 1}, xs::Function)
    @ Base .\operators.jl:560
  [3] f(λ::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 1})
    @ Main g:\My Drive\Study Abroad\Career\myPP\Optimization Model\Models\beta bid function.jl:5
  [4] derivative
    @ C:\Users\thvoe\.julia\packages\ForwardDiff\PcZ48\src\derivative.jl:14 [inlined]
  [5] _validate_register_assumptions(f::typeof(f), name::Symbol, dimension::Int64)
    @ MathOptInterface.Nonlinear C:\Users\thvoe\.julia\packages\MathOptInterface\pgWRA\src\Nonlinear\operators.jl:321
  [6] MathOptInterface.Nonlinear._UnivariateOperator(op::Symbol, f::Function)
    @ MathOptInterface.Nonlinear C:\Users\thvoe\.julia\packages\MathOptInterface\pgWRA\src\Nonlinear\operators.jl:340
  [7] register_operator(registry::MathOptInterface.Nonlinear.OperatorRegistry, op::Symbol, nargs::Int64, f::Function)
    @ MathOptInterface.Nonlinear C:\Users\thvoe\.julia\packages\MathOptInterface\pgWRA\src\Nonlinear\operators.jl:399
  [8] register_operator
    @ C:\Users\thvoe\.julia\packages\MathOptInterface\pgWRA\src\Nonlinear\model.jl:217 [inlined]
  [9] register(model::Model, op::Symbol, dimension::Int64, f::Function; autodiff::Bool)
    @ JuMP C:\Users\thvoe\.julia\packages\JuMP\ToPd2\src\nlp.jl:720
 [10] top-level scope
    @ g:\My Drive\Study Abroad\Career\myPP\Optimization Model\Models\beta bid function.jl:8
```

I followed some examples online for how to register a user-defined function, but it doesn't seem to work here. I would be very grateful for any help or suggestions on how to fix this.
Thanks

Hi @thvo1304, welcome to the forum.

There are a few things going on here:

The error is slightly complicated, but it is because you have `* g` in the definition of `f`: you are multiplying by the function `g` itself rather than calling it with an argument. You probably meant `f(λ) = 0.15 * λ * ((1 - λ)^4 - λ^4) * g(λ)`.

If you fix this, the next problem is that λ is not defined in your g function. You probably meant:

```julia
g(λ) = quadgk(x -> (beta_inc(n - n*λ, n*λ + 1, x)[1] * beta(n - n*λ, n*λ + 1)) / (beta_inc(n - n*λ - 1, n*λ + 1, x)[1] * beta(n - n*λ - 1, n*λ + 1)), 0, 1, rtol=1e-8)[1]
```

But unfortunately, even if you fix this, it will still error, because JuMP cannot automatically differentiate beta_inc.
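As the error message suggests, you can reproduce this outside of JuMP by calling ForwardDiff on `f` directly. A quick check (assuming the corrected `f` and `g` above):

```julia
import ForwardDiff

# This throws the same MethodError: ForwardDiff feeds Dual numbers into
# beta_inc, and SpecialFunctions has no method accepting Dual arguments.
ForwardDiff.derivative(f, 0.5)
```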

If you can compute the derivative, you could pass an analytic derivative to JuMP:
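For example, something along these lines might work. This is only a rough, untested sketch: `df` and `d2f` are stand-ins for the true first and second derivatives of `f`, approximated here by crude central finite differences; replace them with analytic expressions if you can derive them.

```julia
# Placeholder derivatives (finite differences), assuming the corrected f and g above.
h = 1e-5
df(λ)  = (f(λ + h) - f(λ - h)) / (2h)
d2f(λ) = (f(λ + h) - 2f(λ) + f(λ - h)) / h^2

m = Model(Ipopt.Optimizer)
# Supplying both derivatives means JuMP never has to differentiate beta_inc.
register(m, :f, 1, f, df, d2f)
@variable(m, 0 <= λ <= 1, start = 0.5)
@NLobjective(m, Max, f(λ))
optimize!(m)
```

Also note that the incomplete beta function needs positive parameters, so the integrand requires `n - n*λ - 1 > 0` (that is, λ < (n - 1)/n = 7/8 for n = 8); you may want a tighter upper bound on λ.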

p.s. It doesn't help in this case, but we recently released new support for nonlinear programming in JuMP. Here's what I get with the new syntax:

julia> using JuMP, Ipopt, SpecialFunctions, QuadGK

julia> n = 8
8

julia> g(λ) = quadgk(x -> (beta_inc(n - n*λ, n*λ + 1, x)[1] * beta(n - n*λ, n*λ + 1)) / (beta_inc(n - n*λ - 1, n*λ + 1, x)[1] * beta(n - n*λ - 1, n*λ + 1)), 0, 1, rtol=1e-8)[1]
g (generic function with 1 method)

julia> f(λ) = 0.15 * λ * ((1-λ)^4 - λ^4) * g(λ)
f (generic function with 1 method)

julia> model = Model(Ipopt.Optimizer)
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: Ipopt

julia> @operator(model, op_f, 1, f)
NonlinearOperator(f, :op_f)

julia> @variable(model, 0 <= λ <= 1, start = 0.5)
λ

julia> @objective(model, Max, op_f(λ))
op_f(λ)

julia> optimize!(model)
ERROR: Unable to register the function :op_f.

Common reasons for this include:

 * The function takes `f(x::Vector)` as input, instead of the splatted
   `f(x...)`.
 * The function assumes `Float64` will be passed as input, it must work for any
   generic `Real` type.
 * The function allocates temporary storage using `zeros(3)` or similar. This
   defaults to `Float64`, so use `zeros(T, 3)` instead.

As examples, instead of:
```julia
my_function(x::Vector) = sum(x.^2)
```
use:
```julia
my_function(x::T...) where {T<:Real} = sum(x[i]^2 for i in 1:length(x))
```

Instead of:
```julia
function my_function(x::Float64...)
    y = zeros(length(x))
    for i in 1:length(x)
        y[i] = x[i]^2
    end
    return sum(y)
end
```
use:
```julia
function my_function(x::T...) where {T<:Real}
    y = zeros(T, length(x))
    for i in 1:length(x)
        y[i] = x[i]^2
    end
    return sum(y)
end
```

Review the stacktrace below for more information, but it can often be hard to
understand why and where your function is failing. You can also debug this
outside of JuMP as follows:
```julia
import ForwardDiff

# If the input dimension is 1
x = 1.0
my_function(a) = a^2
ForwardDiff.derivative(my_function, x)

# If the input dimension is more than 1
x = [1.0, 2.0]
my_function(a, b) = a^2 + b^2
ForwardDiff.gradient(x -> my_function(x...), x)
```

Stacktrace:
... lots of lines omitted ...

caused by: MethodError: no method matching _beta_inc(::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 1}, ::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 1}, ::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 1})
Stacktrace:
  [1] beta_inc(a::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 1}, b::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 1}, x::Float64)
    @ SpecialFunctions ~/.julia/packages/SpecialFunctions/QH8rV/src/beta_inc.jl:732

Hi @odow, thank you for your response!

I have reviewed my code and come up with a new solution that reduces the complexity of my problem.
