Hello!
I am using Optim.optimize to run a maximum likelihood estimation (MLE) with 6 parameters. The objective function is defined as follows:
function r_obj(r_para)
    print(r_para)
    # evaluate r_prob_0 once per call instead of three times per expression
    probs = r_prob_0(r_para, df_0.x3)
    df_y_1_p_0.LL_1 = cap_p.((probs.p_1 .* probs.den) ./ probs.denominator)
    df_y_0_p_0.LL_0 = cap_p.((probs.p_0 .* probs.den) ./ probs.denominator)
    ll_1 = log.(df_y_1_p_0.LL_1)
    ll_0 = log.(df_y_0_p_0.LL_0)
    # negative log-likelihood to be minimized
    sum_ll = sum(-ll_1) + sum(-ll_0)
    print(sum_ll)
    return sum_ll
end
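As far as I understand, Optim falls back to finite-difference gradients when only the objective is supplied, so each step needs several calls to r_obj. To get a feel for how expensive one call is, a single evaluation can be timed (using the starting point initial defined below):

@time r_obj(initial)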
My constraints and initial values are:
ub = [sqrt(σ_a), 1, 2π, Inf, Inf, Inf]
lb = [-sqrt(σ_a), -Inf, 0, -Inf, -Inf, -Inf]
initial = [0.5, 0.6, 0.3, -2, 1, 1]
inner_optimizer = GradientDescent()
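I believe the starting point should lie strictly inside the box for Fminbox's barrier approach; a quick check (assuming σ_a is defined so that sqrt(σ_a) > 0.5, which the first coordinate requires) is:

all(lb .< initial .< ub)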
Then I try to optimize it with:
Optim.optimize(r_obj, lb, ub, initial, Fminbox(inner_optimizer))
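From the Optim.jl docs I understand that tolerances and iteration limits can be passed through Optim.Options, roughly like the sketch below (the specific values are placeholders I made up, not ones I know to be appropriate for this problem):

options = Optim.Options(
    g_tol = 1e-6,           # gradient tolerance for the inner optimizer
    f_tol = 1e-8,           # relative change in the objective
    iterations = 1_000,     # inner iteration limit
    outer_iterations = 50,  # Fminbox outer-loop limit
    show_trace = true,      # print progress so I can watch the steps
)
Optim.optimize(r_obj, lb, ub, initial, Fminbox(inner_optimizer), options)

but I am not sure how to pick tolerance values that let the search converge in a reasonable time.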
I compared the printed objective values with the value I computed near the true parameters, and the search does appear to be approaching the minimum around the true values. However, the search is not efficient: the parameters move only a tiny amount between iterations, and the run had not converged after more than 12 hours. Is there a problem with my convergence setup, or is there a better way to choose the tolerance levels?
Thank you so much!