Nonconvex.jl reproduces the initial vector as the optimized vector

Thank you so much for your help. It was a great learning experience for me. I just set xtol_abs = 0.0 and now the code works. This option was not available in R, so I hadn't changed it in my Julia code. But once it's changed, the outputs from R and Julia match 😊.

alg = NLoptAlg(:LD_SLSQP)
options = NLoptOptions(xtol_rel = 0,    # stop on a small relative optimization step
                       maxeval = 2000,  # stop after this many function evaluations
                       ftol_rel = 0.0,  # stop on a small relative change in function value
                       ftol_abs = 0.0,  # stop on a small absolute change in function value
                       xtol_abs = 0.0,  # stop on a small absolute optimization step
                       stopval = -Inf)  # stop when the objective reaches this value

The reason for having two constraints in Julia is that I got an error saying the constraint vector is too long, since it has 60 constraints. So I split up the main component and added the constraints separately, as sketched below. In R, I can pass all 60 constraints as a single function.
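For reference, here is a minimal sketch of the two ways of adding the constraints, with a made-up two-element constraint g standing in for my actual 60 constraints:

using Nonconvex

# Made-up vector-valued constraint standing in for the 60 real constraints
g(x) = [sum(x) - 1.0, x[1] - x[2]]

model = Model(x -> sum(abs2, x))
addvar!(model, fill(-Inf, 2), fill(Inf, 2))

# Option 1: all constraints as a single vector-valued function
add_ineq_constraint!(model, g)

# Option 2: split the constraints and add them separately (what I ended up doing)
# add_ineq_constraint!(model, x -> g(x)[1:1])
# add_ineq_constraint!(model, x -> g(x)[2:2])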

Again, thanks for your help! 😊😊😊


Glad it works now.

This is not expected. If you can share a reproducer, please open an issue.


It actually seems to be working now, which is perfect!

obj_fun_enclosed_1 = x -> OBJ_ciuupi(x, lambda_1, t_vec_1, d_1, alpha_1, no_nodes_GL_1)
constraint_enclosed_1 = x -> constr_func_ciuupi(x, t_vec_1, d_1, alpha_1, rho_1, gamma_vec_1, no_nodes_GL_1, num_ints_1)
model_1 = Model(obj_fun_enclosed_1)
addvar!(model_1, fill(-Inf, length(start_vec_1)), fill(Inf, length(start_vec_1)))  # unbounded variables
add_ineq_constraint!(model_1, constraint_enclosed_1)
ad_model_1 = abstractdiffy(model_1, AbstractDifferentiation.ForwardDiffBackend())  # ForwardDiff for gradients

Thank you very much

nc_1.minimum # julia 
-0.049709252150146765

map(Float64, r_1_dict[:value])[1] # R
-0.04970926612261829

show(nc_1.minimizer) # julia
[0.07945892833005057, 0.22883958535232168, 0.22858912253199762, 0.09930904832975498, 0.013361970912169317, -0.23375804894767552, -0.13101905583393364, 0.12557306829294462, 0.218289208197006, 0.099441730681084, 0.013448650906492318]
map(Float64, r_1_dict[:par])  # R
11-element Vector{Float64}:
  0.0794589215641772
  0.22875072806708474
  0.22838503286513417
  0.09931841630497468
  0.013250034000027127
 -0.233695262901127
 -0.13100446920395992
  0.1255274689821908
  0.21825871041185918
  0.09946959051999063
  0.01333003874899806


Could you also let me know where I can find all the tolerances and arguments that I can pass into NLoptOptions (defined at https://github.com/JuliaNonconvex/NonconvexNLopt.jl/blob/master/src/NonconvexNLopt.jl#L114)?

See this section in the Nonconvex docs, which points to the NLopt documentation. In general, every algorithm has slightly different options, so you can check the NLopt docs to find out which options you can set for your algorithm of choice.
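If it helps, as far as I can tell from the source linked above, the keyword arguments you pass to NLoptOptions are eventually set as properties on the low-level NLopt.Opt object, so a quick way to check whether a particular option is supported is to probe an Opt directly (a small sketch, assuming you have NLopt.jl installed):

using NLopt

opt = Opt(:LD_SLSQP, 11)   # algorithm and number of variables
opt.xtol_abs = 0.0         # fine: xtol_abs is a writable property of Opt
opt.maxeval = 1000         # fine
# opt.constrtol_abs = 0.0  # would error: not a property of the low-level Opt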


Hi,
Could you please help me understand the arguments that go into ‘suboptions’ and ‘NLoptAlg{Symbol, Nothing}’? Referring to the earlier example I was working on, there are some lambda values for which Julia still produces a zero vector where R produces a nonzero vector.

Thank you!

suboptions is mostly not relevant unless you are using a nested algorithm in NLopt like MLSL or the augmented Lagrangian algorithm. In this case, suboptions will be the options of the “local optimizer” in the nested algorithm.
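For illustration only, here is a sketch of what a nested setup might look like. I am assuming that NLoptAlg accepts the local optimizer as a second Symbol (hence the NLoptAlg{Symbol, Nothing} type when none is given) and that suboptions is a keyword of NLoptOptions holding the local optimizer's options, so please double-check this against the NonconvexNLopt source:

# Assumed API, for illustration: augmented Lagrangian with SLSQP as the nested
# "local optimizer", and suboptions carrying the local optimizer's options.
alg = NLoptAlg(:AUGLAG, :LD_SLSQP)                 # an NLoptAlg{Symbol, Symbol}
options = NLoptOptions(maxeval = 2000,             # options for the outer AUGLAG run
                       suboptions = (xtol_rel = 1e-8, maxeval = 500))  # assumed form
# For a plain algorithm such as NLoptAlg(:LD_SLSQP), i.e. an NLoptAlg{Symbol, Nothing},
# suboptions has no effect.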

Please share an example if you can where Julia and R give different results.


Hi,
Thank you for your response 😊
The difference in values occurred when I changed the value of lambda. For most lambda values the outputs are the same, but for some, like 0.8, 0.9, and 1.2, Julia produces a zero vector, unlike R.

start_vec = zeros(11);
gamma_vec = range(start=-7, stop=7, length=40);
rho = 0.8;
t_vec = range(0.0,5.0);
alpha = 0.05;
d = 6.0;
no_nodes_GL = 70;
lambda = 0.8;
num_ints = 20;

obj_fun_enclosed = x -> OBJ_ciuupi(x, lambda, t_vec, d, alpha, no_nodes_GL);
constraint_enclosed = x -> constr_func_ciuupi_nonconvex(x, t_vec, d, alpha, rho, gamma_vec, no_nodes_GL, num_ints);

model = Model(obj_fun_enclosed);
addvar!(model, fill(-Inf, length(start_vec)), fill(Inf, length(start_vec))) ;
add_ineq_constraint!(model, constraint_enclosed);

ad_model = abstractdiffy(model, AbstractDifferentiation.ForwardDiffBackend());

alg = NLoptAlg(:LD_SLSQP);
options = NLoptOptions(xtol_rel = 0, maxeval = 1000, ftol_rel = 0.0, ftol_abs = 0.0, xtol_abs = 0.0, stopval = -Inf);

sl = Nonconvex.optimize(ad_model, alg, start_vec, options = options);
show(sl.minimizer);
# Output: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
show(sl.status);
# Output: :ROUNDOFF_LIMITED

# To get the derivatives in R, the options have to be changed to Dict("xtol_rel" => 1e-6, "check_derivatives" => true)
r_opt = obj_ciuupi_optim(start_vec, lambda, t_vec, d, alpha, rho, gamma_vec, no_nodes_GL, num_ints);
opt_vals_dict = Dict{Symbol, Any}(names(r_opt) .=> values(r_opt));
show(map(Float64, opt_vals_dict[:par]));
# Output: [0.004672807142645505, 0.03097881117209773, 0.05474224840151757, 0.040294321056390345, 0.011066120260875884, -0.09080941990352809, -0.05818486407224111, 0.014911118555921398, 0.05491697364768549, 0.040140427197995626, 0.011163794813018022]

I’ve checked the gradients computed in Julia and R, and they appear to be the same.

Without having your full runnable code, I won’t be able to help. Perhaps try finite difference in Julia to be 100% comparable to R?

Switching to finite difference seems to help for the specific lambda values you mentioned:

ad_backend = AbstractDifferentiation.FiniteDifferencesBackend()
ad_model = abstractdiffy(model, ad_backend)

However, it doesn’t work for all the lambda values, e.g. 0.78. I think your problem is just too sensitive to the initial point, which is generally bad anyway. I recommend you use a slight perturbation from 0s as the starting values instead. In general, zeros can behave funny sometimes and can cause term cancellation that leads to numerical phenomena like this. For instance, changing the type of central difference used, or using ForwardDiff for the ad_backend (which is supposed to be more accurate than finite difference), causes some of the above lambda values to not converge when starting from all 0s. Starting from a slight perturbation of 0 (1e-4 * rand(11)) is the better fix in my opinion, while still using ForwardDiff, which should be more accurate and faster than finite difference.
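Concretely, something along these lines, reusing the model and options from your snippet above:

# Perturb the starting point slightly away from all zeros and keep the
# (more accurate) ForwardDiff backend.
ad_model = abstractdiffy(model, AbstractDifferentiation.ForwardDiffBackend())
start_vec_perturbed = 1e-4 * rand(11)
sl = Nonconvex.optimize(ad_model, alg, start_vec_perturbed, options = options)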

Note that R uses finite difference so I think if you try enough lambda values, some of them might also fail to converge when starting from all 0s.


Hello,

I’ve been trying to set the constrtol_abs option in NLopt through the NLoptOptions function.
When I do
NLoptOptions(constrtol_abs = 0.0, ftol_rel = 0.0, ftol_abs = 0.0, xtol_abs = 0.0, xtol_rel = 1e-06, maxeval = 1000, stopval = -Inf)
it returns the error
type Opt has no writable property constrtol_abs
Could you please help with replacing the default constrtol_abs?
Thanks!!

Where is this documented?

This option is used in the NLopt package (Readme · NLopt.jl). By default, it is set to 1e-7. I learned this from NLopt.DEFAULT_OPTIONS and from NLopt.jl/src/MOI_wrapper.jl (at commit 6ade25740362895bbfff1aee07d35911a6e2df17, JuliaOpt/NLopt.jl on GitHub). I’ve been trying to set these options the same as in R, where it is set to 0.
Thanks!!

From the NLopt README:

The algorithm parameter is required, and all others are optional. The meaning and acceptable values of all parameters, except constrtol_abs, match the descriptions below from the specialized NLopt API. The constrtol_abs parameter is an absolute feasibility tolerance applied to all constraints.

Looking at how this option is handled in the MOI glue code, it is simply added to the RHS of the constraints when using MOI (and JuMP by extension). If you use Nonconvex, it is therefore 0 by default.

NLopt has many wrappers in Julia. MathOptInterface (used by JuMP) seems to have this additional option defined which is not in the original NLopt library. Nonconvex.jl doesn’t define this option at all.
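If you ever do want a nonzero feasibility tolerance in Nonconvex, a simple sketch of the same trick the MOI wrapper uses is to fold the tolerance into the constraint yourself before adding it to the model (using constraint_enclosed from your earlier snippet):

# Emulate constrtol_abs = tol by relaxing the constraints g(x) <= 0 to g(x) - tol <= 0.
constrtol_abs = 1e-7
relaxed_constraint = x -> constraint_enclosed(x) .- constrtol_abs
add_ineq_constraint!(model, relaxed_constraint)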
