Erf function is not supported

I have a MINLP whose objective function involves the CDF and PDF of a normal distribution. I used erf from SpecialFunctions, but it is not supported. I also tried approximations of erf, and even those do not work. I used Knitro (via AmplNLWriter) for my experiments, and it does not support erf either. Any suggestions on a solver that can handle this, or on an alternative formulation of the model, would be helpful.
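For reference, erf relates to the normal PDF and CDF through the standard identities below (the helper names are mine, just for illustration):

using SpecialFunctions  # provides erf

# Standard-normal PDF
normpdf(x) = exp(-x^2 / 2) / sqrt(2π)
# Standard-normal CDF, Φ(x) = 0.5 * (1 + erf(x / √2))
normcdf(x) = 0.5 * (1 + erf(x / sqrt(2)))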
The code with errors is as follows:

using JuMP, AmplNLWriter

begin
	model = Model(() -> AmplNLWriter.Optimizer("/home/simran/ampl_linux-intel64/mglob"))
	# model = Model(Ipopt.Optimizer)

	# register(model, :er, 1, er, autodiff=true)
	
	@variable(model, n[1:n_types, 1:n_slots].>=0,integer=true)
	@variable(model, α[1:n_slots].≥ 0)

	@expression(model, mu, μ'*n[:,1])
	@expression(model, A1, 1/√(2π)*(α[1]*exp(-(120-mu).^2/(2*α[1]^2))) )
	@NLexpression(model, A2, (mu-120)*(1-(0.5(1+er((120-mu)/α[1])))))
	# @NLexpression(model, A2, 0.5*(mu-120)*err( (mu-120)/2/α[1] ) )
	# @NLexpression(model, A2, (mu - 120) * (1 - 0.5 * (1 + erf((120 - mu) / α[1]))))

	@NLobjective(model, Min, A1 + A2)
	
	@constraint(model,     sum(n, dims=2)   .≤ n_pats)
	@constraint(model, μ'*n[:,1] + c*α[1]    ≥ 120)
	@constraint(model, σ'.^2*n[:,1] .- α[1]^2 .≤ 0)

	optimize!(model)
	
	@show value.(n)
	@show value.(A1)
	@show value.(A2)

end

The er function is:

function er(x)
    sign_x = x >= 0 ? 1 : -1
    abs_x = abs(x)

    # constants
    a1 = 0.254829592
    a2 = -0.284496736
    a3 = 1.421413741
    a4 = -1.453152027
    a5 = 1.061405429
    p = 0.3275911

    # A&S formula 7.1.26
    t = 1.0 / (1.0 + p * abs_x)
    y = 1.0 - (((((a5 * t + a4) * t + a3) * t + a2) * t + a1) * t) * exp(-abs_x * abs_x)
    return sign_x * y  # erf(-x) = -erf(x)
end
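
The approximation itself is fine; it agrees with SpecialFunctions.erf to within the roughly 1.5e-7 maximum error quoted for A&S 7.1.26, so accuracy is not the problem:

using SpecialFunctions

# maximum deviation of the A&S 7.1.26 approximation from erf on [-4, 4]
maximum(abs(er(x) - erf(x)) for x in -4:0.01:4)  # ≈ 1.5e-7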

The error I get is:

MathOptInterface.UnsupportedNonlinearOperator: The nonlinear operator :er is not supported by the model.

Using erf instead results in the same error.

You can use erf from SpecialFunctions.jl. Because your model has data that wasn’t in the code block, I made up some random data and tried to run it below, using Alpine.jl to solve the MINLP. The model builds and begins to solve, but then fails with the error Alpine does not currently support exp operator acting on a variable, so perhaps you can go down the list of MINLP solvers in the JuMP docs (Installation Guide · JuMP) and try them until you find one that can handle this problem. I also changed your variable n to binary, as Alpine.jl doesn’t seem to support integer variables in MINLPs. I also tried EAGO.jl on this problem, but it causes a segfault during precompilation for me; you might have better luck with it.

using JuMP, Ipopt, Alpine, HiGHS, Juniper
using SpecialFunctions # for erf()

n_types = 2
n_slots = 3
n_pats = 10
c = rand()
# α = rand()
σ = rand()
μ = rand(n_types)

nlp_solver = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)
mip_solver = optimizer_with_attributes(HiGHS.Optimizer, "output_flag" => false)
minlp_solver = optimizer_with_attributes(
    Juniper.Optimizer,
    MOI.Silent() => true,
    "mip_solver" => mip_solver,
    "nl_solver" => nlp_solver,
)
model = Model(
    optimizer_with_attributes(
        Alpine.Optimizer,
        "nlp_solver" => nlp_solver,
        "mip_solver" => mip_solver,
        "minlp_solver" => minlp_solver
    ),
)

@variable(model, n[1:n_types, 1:n_slots] .>= 0, binary = true)  # binary instead of integer: Alpine.jl doesn't seem to support integer variables
@variable(model, α[1:n_slots].≥ 0)

@expression(model, mu, μ'*n[:,1])
@expression(model, A1, 1/√(2π)*(α[1]*exp(-(120-mu).^2/(2*α[1]^2))) )
@expression(model, A2, (mu-120)*(1-(0.5(1+erf((120-mu)/α[1])))))

@NLobjective(model, Min, A1 + A2)

@constraint(model,     sum(n, dims=2)   .≤ n_pats)
@constraint(model, μ'*n[:,1] + c*α[1]    ≥ 120)
@constraint(model, σ'.^2*n[:,1] .- α[1]^2 .≤ 0)

optimize!(model)

Hi @Simran_Lakhani, welcome to the forum. You can use KNITRO.jl. (AmplNLWriter doesn’t support erf.)

Here’s what I get:

julia> using JuMP

julia> ENV["ARTELYS_LICENSE"] = "/Users/Oscar/Documents/Code/knitro-13.1.0-MacOS-64";

julia> import KNITRO

julia> import SpecialFunctions

julia> n_types, n_slots, n_pats = 2, 3, 10
(2, 3, 10)

julia> c = rand()
0.3638056494778954

julia> σ = rand()
0.014356991648503792

julia> μ = rand(n_types)
2-element Vector{Float64}:
 0.18394756118215327
 0.026117955636005163

julia> model = Model(KNITRO.Optimizer)
##### This license is only intended for use by JuMP dev. #####
##### License is valid until Dec 31, 2024 #####
A JuMP Model
├ solver: Knitro
├ objective_sense: FEASIBILITY_SENSE
├ num_variables: 0
├ num_constraints: 0
└ Names registered in the model: none

julia> set_silent(model)

julia> @variables(model, begin
           n[1:n_types, 1:n_slots] >= 0, Int
           α[1:n_slots] >= 0
       end)
(VariableRef[n[1,1] n[1,2] n[1,3]; n[2,1] n[2,2] n[2,3]], VariableRef[α[1], α[2], α[3]])

julia> @expressions(model, begin
           mu, μ' * n[:, 1]
           A1, 1 / √(2π) * (α[1] * exp(-(120 - mu)^2 / (2 * α[1]^2))) 
           A2, (mu - 120) * (1 - (0.5 * (1 + SpecialFunctions.erf((120 - mu) / α[1]))))
       end)
(0.18394756118215327 n[1,1] + 0.026117955636005163 n[2,1], 0.3989422804014327 * α[1] * exp((-0.03383670526486202 n[1,1]² - 0.00960866848461365 n[2,1]*n[1,1] - 0.0006821476066043338 n[2,1]² + 44.14741468371679 n[1,1] + 6.268309352641239 n[2,1] - 14400) / (2 α[1]²)), (0.18394756118215327 n[1,1] + 0.026117955636005163 n[2,1] - 120) * (1.0 - (0.5 * (1.0 + erf((-0.18394756118215327 n[1,1] - 0.026117955636005163 n[2,1] + 120) / α[1])))))

julia> @constraints(model, begin
           sum(n; dims = 2) .<= n_pats
           mu + c * α[1] >= 120
           σ'.^2 * n[:, 1] .<= α[1]^2
       end)
(ConstraintRef{Model, MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64}, MathOptInterface.LessThan{Float64}}, ScalarShape}[n[1,1] + n[1,2] + n[1,3] ≤ 10; n[2,1] + n[2,2] + n[2,3] ≤ 10;;], 0.18394756118215327 n[1,1] + 0.026117955636005163 n[2,1] + 0.3638056494778954 α[1] ≥ 120, ConstraintRef{Model, MathOptInterface.ConstraintIndex{MathOptInterface.ScalarQuadraticFunction{Float64}, MathOptInterface.LessThan{Float64}}, ScalarShape}[-α[1]² + 0.00020612320919520763 n[1,1] ≤ 0, -α[1]² + 0.00020612320919520763 n[2,1] ≤ 0])

julia> @objective(model, Min, A1 + A2)
(0.3989422804014327 * α[1] * exp((-0.03383670526486202 n[1,1]² - 0.00960866848461365 n[2,1]*n[1,1] - 0.0006821476066043338 n[2,1]² + 44.14741468371679 n[1,1] + 6.268309352641239 n[2,1] - 14400) / (2 α[1]²))) + ((0.18394756118215327 n[1,1] + 0.026117955636005163 n[2,1] - 120) * (1.0 - (0.5 * (1.0 + erf((-0.18394756118215327 n[1,1] - 0.026117955636005163 n[2,1] + 120) / α[1])))))

julia> optimize!(model)
##### This license is only intended for use by JuMP dev. #####
##### License is valid until Dec 31, 2024 #####

julia> solution_summary(model)
* Solver : Knitro

* Status
  Result count       : 1
  Termination status : LOCALLY_SOLVED
  Message from the solver:
  "0"

* Candidate solution (result #1)
  Primal status      : FEASIBLE_POINT
  Dual status        : FEASIBLE_POINT
  Objective value    : 8.52306e+01
  Objective bound    : 8.52306e+01
  Relative gap       : 0.00000e+00

* Work counters
  Solve time (sec)   : 2.59800e-03

You can install KNITRO.jl with:

ENV["KNITRODIR"] = "/Users/Oscar/Documents/Code/knitro-13.1.0-MacOS-64"
import Pkg
Pkg.add("KNITRO")

where you change the path as appropriate.


@odow out of curiosity, are there any open-source solvers out there that would solve this problem? Also, when I was trying to get Alpine.jl working, using @objective gave ERROR: The solver does not support an objective function of type MathOptInterface.ScalarNonlinearFunction., but the old-style @NLobjective worked. Does that mean that Alpine has just not fully updated to the new nonlinear interface?

Juniper.jl should work.

For Alpine, see Support `MOI.ScalarNonlinearFunction` · Issue #240 · lanl-ansi/Alpine.jl · GitHub. But it wouldn’t support erf regardless.
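
As a sketch (reusing the solver configuration from the earlier reply, and not tested on the full model), a minimal Juniper setup would look something like:

using JuMP, Juniper, Ipopt
import SpecialFunctions  # erf is a natively supported operator in JuMP's nonlinear interface

# Juniper does branch and bound, calling Ipopt on the continuous relaxations
nl_solver = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)
model = Model(optimizer_with_attributes(Juniper.Optimizer, "nl_solver" => nl_solver))
# ...then add the variables, constraints, and erf-based objective as above,
# and call optimize!(model)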


Thank you for your reply. I am sorry for not including the required parameters. I used Knitro through AMPL, as I don’t have the license you mentioned, and it works there. However, the solver only solves the problem locally, even though it is convex. Do you know of any global solvers for this?

It says locally solved, but assuming the problem is convex (after you relax the integrality, which KNITRO cannot prove), the local solution is probably also the global minimum.

If you know the problem is convex, you can also use Pavito.jl:

julia> using JuMP

julia> import Pavito, HiGHS, Ipopt

julia> import SpecialFunctions

julia> n_types, n_slots, n_pats = 2, 3, 10
(2, 3, 10)

julia> c = rand()
0.5083740422368682

julia> σ = rand()
0.9374910436477443

julia> μ = rand(n_types)
2-element Vector{Float64}:
 0.8141958565912996
 0.04646161512584135

julia> model = Model(
           optimizer_with_attributes(
               Pavito.Optimizer,
               "mip_solver" => optimizer_with_attributes(HiGHS.Optimizer, MOI.Silent() => true),
               "cont_solver" => optimizer_with_attributes(Ipopt.Optimizer, MOI.Silent() => true),
           )
       )
A JuMP Model
├ solver: Pavito
├ objective_sense: FEASIBILITY_SENSE
├ num_variables: 0
├ num_constraints: 0
└ Names registered in the model: none

julia> set_silent(model)

julia> @variables(model, begin
           n[1:n_types, 1:n_slots] >= 0, Int
           α[1:n_slots] >= 0
       end)
(VariableRef[n[1,1] n[1,2] n[1,3]; n[2,1] n[2,2] n[2,3]], VariableRef[α[1], α[2], α[3]])

julia> @expressions(model, begin
           mu, μ' * n[:, 1]
       end)
(0.8141958565912996 n[1,1] + 0.04646161512584135 n[2,1],)

julia> @constraints(model, begin
           sum(n; dims = 2) .<= n_pats
           mu + c * α[1] >= 120
           σ'.^2 * n[:, 1] .<= α[1]^2
       end)
(ConstraintRef{Model, MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64}, MathOptInterface.LessThan{Float64}}, ScalarShape}[n[1,1] + n[1,2] + n[1,3] ≤ 10; n[2,1] + n[2,2] + n[2,3] ≤ 10;;], 0.8141958565912996 n[1,1] + 0.04646161512584135 n[2,1] + 0.5083740422368682 α[1] ≥ 120, ConstraintRef{Model, MathOptInterface.ConstraintIndex{MathOptInterface.ScalarQuadraticFunction{Float64}, MathOptInterface.LessThan{Float64}}, ScalarShape}[-α[1]² + 0.8788894569197367 n[1,1] ≤ 0, -α[1]² + 0.8788894569197367 n[2,1] ≤ 0])

julia> @NLexpressions(model, begin
           A1, 1 / sqrt(2π) * (α[1] * exp(-(120 - mu)^2 / (2 * α[1]^2))) 
           A2, (mu - 120) * (1 - (0.5 * (1 + erf((120 - mu) / α[1]))))
       end)
(subexpression[1]: (1.0 / sqrt(2.0 * 3.141592653589793)) * (α[1] * exp(-((120.0 - (0.8141958565912996 * n[1,1] + 0.04646161512584135 * n[2,1])) ^ 2.0) / (2.0 * α[1] ^ 2.0))), subexpression[2]: ((0.8141958565912996 * n[1,1] + 0.04646161512584135 * n[2,1]) - 120.0) * (1.0 - 0.5 * (1.0 + erf((120.0 - (0.8141958565912996 * n[1,1] + 0.04646161512584135 * n[2,1])) / α[1]))))

julia> @NLobjective(model, Min, A1 + A2)

julia> optimize!(model)

julia> solution_summary(model)
* Solver : Pavito

* Status
  Result count       : 1
  Termination status : LOCALLY_SOLVED
  Message from the solver:
  "LOCALLY_SOLVED"

* Candidate solution (result #1)
  Primal status      : FEASIBLE_POINT
  Dual status        : NO_SOLUTION
  Objective value    : 5.05201e+01
  Objective bound    : 5.05201e+01

* Work counters
  Solve time (sec)   : 2.29149e-02