MethodError and DomainError when using NLsolve and Optim

Hey all,

I am trying to solve a simple function with two variables (a profit function). I tried two approaches: Optim, which needs the function, gradient, and Hessian, and NLsolve, which needs only the gradient and Hessian. However, both mostly give me “MethodError: no method matching”, and sometimes “DomainError: Exponentiation yielding a complex result requires a complex argument.” My first idea was that a type conversion inside the iteration makes the solvers crash. Do you have any ideas?

function f!(F::AbstractArray, J::AbstractArray, x::AbstractArray;
             pa=1.0,
             pn=1.0,
             r=0.05,
             q=1.0,
             s=0.1341,
             Aa=4.04,
             γ=0.54,
             α1= 2.22*10.0^(-14),
             α2=0.75*10.0^(-2),
             β1= 0.32^(-2),
             β2= 0.17)
   
    F[1] = pa*((1 + β1*x[2]^β2)/(1 + β1*x[2]^β2 + α1*μ + α2*μ^2))*(Aa*s)^(1-γ)*γ*x[1]^(γ-1) - (1+r)*q + q
    F[2] = pa*((β2*β1*x[2]^(β2-1)*(1 + β1*x[2]^β2 + α1*μ + α2*μ^2) - (β2*β1^(2)*x[2]^(2*β2 -1) + β2*β1*x[2]^(β2-1)))/(1 + β1*x[2]^β2 + α1*μ + α2*μ^2)^2)*(Aa*s)^(1-γ)*x[1]^γ - (1+r)*pn
    
    gprime = (β2-1)*β2*β1*x[2]^(β2-2)*(1 + β1*x[2]^β2 + α1*μ + α2*μ^2) + (β1*β2)^(2)*x[2]^(2*β2 -2) - (2*β2 -1)*β2*β1^(2)*x[2]^(2*β2-2) - (β2-1)*β2*β1*x[2]^(β2-2)
    hprime = 2*(1 + β1*x[2]^β2 + α1*μ + α2*μ^2)*β2*β1*x[2]^(β2-1)
    g = β2*β1*x[2]^(β2-1)*(1 + β1*x[2]^β2 + α1*μ + α2*μ^2) - (β2*β1^(2)*x[2]^(2*β2 -1) + β2*β1*x[2]^(β2-1))
    h = (1 + β1*x[2]^β2 + α1*μ + α2*μ^2)^2
        
    J[1, 1] = pa*((1 + β1*x[2]^β2)/(1 + β1*x[2]^β2 + α1*μ + α2*μ^2))*(Aa*s)^(1-γ)*γ*(γ-1)*x[1]^(γ-2)
    J[1, 2] = pa*(g/h)*(Aa*s)^(1-γ)*γ*x[1]^(γ-1)
    J[2, 1] = pa*(g/h)*(Aa*s)^(1-γ)*γ*x[1]^(γ-1)
    J[2, 2] = (gprime*h - g*hprime)/(h^2)*pa*(Aa*s)^(1-γ)*x[1]^γ
end

x0 = [0.0, 0.0]
solution = nlsolve(f!, x0)

Thanks a lot!!!

Could you post an MWE with your error? The above gives me “μ not defined”, so you might have defined a global somewhere earlier in your code?

Your errors seem to indicate two problems:

(1) DomainError: this likely comes from raising negative numbers to non-integer exponents (e.g. in x[2]^β2); you could get around this by constraining your search to positive x’s (if you’re looking at a profit function, I’d assume these are factor inputs and therefore positive in any case?)
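For instance, this minimal snippet (with an arbitrary negative base) reproduces the error:

# A negative base with a non-integer exponent has a complex result,
# which ^ refuses to produce from real arguments:
(-0.5)^0.17          # throws the DomainError quoted above
Complex(-0.5)^0.17   # works once the argument is complex
0.5^0.17             # works for any positive base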

(2) MethodError: this is harder to diagnose without seeing all your actual function calls, but given that your only function seems to be f!, this likely comes from writing the function definition in a way that is incompatible with how the solver calls it. Optim.optimize, for example, expects a function which takes one input array (in your case x) and returns the function value to minimise; depending on the method of optimize you’re calling, you might also need to include the gradient (as a separate function!). As written, your function returns the [2,2] element of the Jacobian matrix (the value of its last expression), which is probably not what you’re actually trying to minimise?
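To make the expected signatures concrete, here is a minimal sketch with toy functions (the names residual!, obj, and grad! are illustrative, not from your code):

using NLsolve, Optim

# NLsolve expects an in-place residual function taking (F, x) only;
# extra parameters have to be closed over, not passed as arguments:
function residual!(F, x)
    F[1] = x[1]^2 - 1.0
    F[2] = x[2] - 2.0
end
nlsolve(residual!, [0.5, 0.5])

# Optim expects a scalar objective f(x) and, optionally, a separate
# in-place gradient g!(G, x):
obj(x) = (x[1] - 1.0)^2 + (x[2] - 2.0)^2
function grad!(G, x)
    G[1] = 2.0 * (x[1] - 1.0)
    G[2] = 2.0 * (x[2] - 2.0)
end
optimize(obj, grad!, [0.0, 0.0])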


Hi,

first of all, one mistake I figured out is that the gradient and the Hessian are not defined for zero values, since I divide by the inputs and have negative exponents in both (in Julia, 0.0^(γ-1) evaluates to Inf). This would indeed be solved by restricting the input values to be > 0.
Here is a different version using Optim, where I multiplied the function, gradient, and Hessian by -1 since I want to maximize:

const pa=1.0
const pn=1.0
const r=0.05
const q=1.0
const s=0.1341
const Aa=4.04
const γ=0.54
const α1= 2.22*10.0e-3
const α2=0.75*10.0e-2
const β1= 0.32e-2
const β2= 0.17
const μ = 0.99

function f(x)
    -1*(pa*((1 + β1*x[2]^β2)/(1 + β1*x[2]^β2 + α1*μ + α2*μ^2))*(Aa*s)^(1-γ)*x[1]^γ - (1+r)*(q*x[1] + pn*x[2]) + q*x[1])
end

function g!(G, x)
    
    G[1] = -1*(pa*((1 + β1*x[2]^β2)/(1 + β1*x[2]^β2 + α1*μ + α2*μ^2))*(Aa*s)^(1-γ)*γ*x[1]^(γ-1) - (1+r)*q + q)
    G[2] = -1*(pa*((β2*β1*x[2]^(β2-1)*(1 + β1*x[2]^β2 + α1*μ + α2*μ^2) - (β2*β1^(2)*x[2]^(2*β2 -1) + β2*β1*x[2]^(β2-1)))/(1 + β1*x[2]^β2 + α1*μ + α2*μ^2)^2)*(Aa*s)^(1-γ)*x[1]^γ - (1+r)*pn)
    
end

function h!(H,x)
    
    gprime = (β2-1)*β2*β1*x[2]^(β2-2)*(1 + β1*x[2]^β2 + α1*μ + α2*μ^2) + (β1*β2)^(2)*x[2]^(2*β2 -2) - (2*β2 -1)*β2*β1^(2)*x[2]^(2*β2-2) - (β2-1)*β2*β1*x[2]^(β2-2)
    hprime = 2*(1 + β1*x[2]^β2 + α1*μ + α2*μ^2)*β2*β1*x[2]^(β2-1)
    g = β2*β1*x[2]^(β2-1)*(1 + β1*x[2]^β2 + α1*μ + α2*μ^2) - (β2*β1^(2)*x[2]^(2*β2 -1) + β2*β1*x[2]^(β2-1))
    h = (1 + β1*x[2]^β2 + α1*μ + α2*μ^2)^2
        
    H[1, 1] = -1*(pa*((1 + β1*x[2]^β2)/(1 + β1*x[2]^β2 + α1*μ + α2*μ^2))*(Aa*s)^(1-γ)*γ*(γ-1)*x[1]^(γ-2))
    H[1, 2] = -1*(pa*(g/h)*(Aa*s)^(1-γ)*γ*x[1]^(γ-1))
    H[2, 1] = -1*(pa*(g/h)*(Aa*s)^(1-γ)*γ*x[1]^(γ-1))
    H[2, 2] = -1*((gprime*h - g*hprime)/(h^2)*pa*(Aa*s)^(1-γ)*x[1]^γ)
    H
end

using Optim

initial_x = [0.5, 0.5]
optimize(f, g!, h!, initial_x)

Thanks a lot

Thanks, that makes more sense. (As an aside: with f, g!, and h! all supplied, optimize defaults to Newton’s method, whose unconstrained steps can easily leave the positive orthant and trigger the DomainError.) What you probably want to do then is use box-constrained optimization to set lower bounds, as described in the Optim.jl manual. The function call would then be:

lower = [1e-5, 1e-5]
upper = [Inf, Inf]
initial_x = [0.5, 0.5]
inner_optimizer = GradientDescent()
results = optimize(f, g!, lower, upper, initial_x, Fminbox(inner_optimizer))

The problem with this is that, for some reason, when I tried it just now, Fminbox(inner_optimizer) throws an error which seems unrelated to your code (the error also shows up when simply running the example from the Optim.jl docs - I’ll file an issue).

One other option that I’ve used in my own thesis is the NLopt.jl package, which wraps the widely used NLopt library. For this you would do something like:

using NLopt

const pa=1.0; const pn=1.0; const r=0.05; const q=1.0; const s=0.1341; const Aa=4.04
const γ=0.54; const α₁= 2.22*10.0e-3; const α₂=0.75*10.0e-2; const β₁= 0.32e-2;
const β₂= 0.17; const μ = 0.99

function obj(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1] = -1*(pa*((1 + β₁*x[2]^β₂)/(1 + β₁*x[2]^β₂ + α₁*μ + α₂*μ^2))*(Aa*s)^(1-γ)*γ*x[1]^(γ-1) - (1+r)*q + q)
        grad[2] = -1*(pa*((β₂*β₁*x[2]^(β₂-1)*(1 + β₁*x[2]^β₂ + α₁*μ + α₂*μ^2) - (β₂*β₁^(2)*x[2]^(2*β₂ -1) + β₂*β₁*x[2]^(β₂-1)))/(1 + β₁*x[2]^β₂ + α₁*μ + α₂*μ^2)^2)*(Aa*s)^(1-γ)*x[1]^γ - (1+r)*pn)
    end

    -(pa*((1 + β₁*x[2]^β₂)/(1 + β₁*x[2]^β₂ + α₁*μ + α₂*μ^2))*(Aa*s)^(1-γ)*x[1]^γ - (1+r)*(q*x[1] + pn*x[2]) + q*x[1])
end

opt = Opt(:LD_MMA, 2)
lower_bounds!(opt, [1e-5, 1e-5])
min_objective!(opt, obj)
(minf, minx, ret) = NLopt.optimize(opt, [0.5, 0.5])

(Note that I haven’t thought at all about your problem, so the choice of bounds and/or optimisation algorithm here is likely suboptimal, but it should hopefully get you going!)
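One extra habit worth adopting: check the ret status NLopt returns before trusting the result, e.g.

# ret is a Symbol such as :SUCCESS, :FTOL_REACHED or :XTOL_REACHED when
# the solver converged; :FAILURE or :FORCED_STOP mean the minimiser
# should not be trusted.
println("minimum $minf at $minx (status: $ret)")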

Hope that helps!

P.S. send my regards to Sevi - he was one of the examiners in my viva (small world)!

Thanks a lot,

I will give it a try.

Yeah, what a small world 🙂 I will send him your regards!

haaaa it worked out!

Thank you so much!
