# Optim maximization

Hi everyone,

I want to find the maximum of a function with the Optim.jl package, but I keep getting a `DomainError` (I think Optim is trying to evaluate the log of a negative number).

```julia
using Optim, Interpolations

α = 0.4;
β = 0.96;

k = range(1e-3, 90.0, length = 1001) # grid on k
initial_v = 5 * log.(k.^α); # initial guess of value function evaluated at k
v_func = CubicSplineInterpolation(k, initial_v, extrapolation_bc = Line())

k_eval = k

# Function to maximize (RHS of Bellman equation)
RHS(kprime) = -log(k_eval^α - kprime) - β*v_func(kprime)
Optim.optimize(RHS, [k_eval]) # getting a DomainError

# Adding constraints to this univariate function
Optim.optimize(RHS, eps(1.0), 10000.0) # DomainError persists
```

For some reason, when I try to solve the exact same problem (for the economists out there, it’s the optimal growth model) but change the variable the maximization is done with respect to, it’s able to find a solution.

```julia
# Trying an alternative specification
RHS(c) = -log(c) - β*v_func(k_eval^α - c)
Optim.optimize(RHS, [k_eval]) # solution successfully found
```

My two questions are:

1. Is there any reason why the first approach ran into the DomainError while the second didn’t?
2. Is there a way to avoid the DomainErrors?

Thank you!

Do your first constraints guarantee that `k_eval^α - kprime` is always strictly positive? They don’t: at the smallest grid point, solving `0.001^0.4 - x = 0` gives `x ≈ 0.0630957`, so any `kprime` above that value makes the argument of the log negative, and your upper bound of 10000.0 is far beyond it.
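Given that, one fix is to pass the feasible interval `(0, k_eval^α)` itself as the bracket for the univariate optimizer, so the search never evaluates a point outside the domain of the log. A minimal sketch, assuming a scalar `k_eval` and dropping the value-function term for brevity:

```julia
using Optim

α = 0.4
k_eval = 1e-3                          # smallest grid point, as a scalar

RHS(kprime) = -log(k_eval^α - kprime)  # value-function term omitted for brevity

# Bracket the search inside (0, k_eval^α) so log never sees a nonpositive argument
upper = k_eval^α - eps(k_eval^α)
res = Optim.optimize(RHS, eps(), upper)
```

With the two-bound form, `optimize` uses Brent’s method by default and only ever evaluates points inside the bracket, so the `DomainError` cannot occur.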

Nevertheless, optimizing two different functions, even when both are equivalent reformulations of the same underlying problem, will produce different stepping trajectories: one may step into the danger zone where you get numerical errors while the other never does. You can even see the same phenomenon when optimizing the same function from different starting points.
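If characterizing the feasible set up front is awkward, another option is to guard the objective itself and return `Inf` at infeasible points; a derivative-free method such as Nelder–Mead simply ranks those points as arbitrarily bad and steps away from them. A minimal sketch along the same lines (again with the value-function term dropped for brevity):

```julia
using Optim

α = 0.4
k_eval = 1e-3

# Guarded objective: any step outside the domain is scored as "infinitely bad"
function RHS_safe(kprime)
    c = k_eval^α - kprime[1]   # Nelder–Mead passes a vector
    c > 0 || return Inf        # infeasible point: never the best vertex
    return -log(c)
end

res = Optim.optimize(RHS_safe, [0.01], NelderMead())
```

This works because Nelder–Mead only compares function values to order the simplex vertices; an `Inf` vertex is always the worst and gets replaced, whereas a gradient-based method could not make use of it.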