Boundaries for @NLparameters

I am currently discovering the world of JuMP.jl and I have to say, I am amazed, thanks for this package!

TL;DR: how do I add constraints or bounds to values that are defined not by @variable but by @NLparameter?

Full story:

For some optimisation problems, I have to rerun the optimisation with different initial values, which is of course a common task. To save time (I have to be fast, since the code is analysing real-time events), I would like to keep the model as it is and just change the initial values.

Just to have an explicit example (this is not an MWE, it’s just to demonstrate my workflow):

using JuMP, Ipopt

model = Model(with_optimizer(Ipopt.Optimizer))

register(model, :qfunc, 5, qfunc, autodiff=true)

@variable(model, -1000 <= d_closest <= 1000, start=0.0)
@variable(model, -10000 <= t_closest <= 10000, start=-400)
@variable(model, -1000 <= z_closest <= 1000, start=476)
@variable(model, -1 <= dir_z <= 1, start=0.2)
@variable(model, -1000 <= t₀ <= 1000, start=0)


This works perfectly fine for the starting values (hardcoded in the example; in practice they are obtained beforehand by some topology checks).

I thought I could use set_value() to reset the initial values and call optimize!() again, but there is no method for that type:

> set_value(d_closest, 10)
MethodError: no method matching set_value(::VariableRef, ::Int64)
Closest candidates are:
  set_value(!Matched::NonlinearParameter, ::Number) at /home/tgal/.julia/packages/JuMP/jnmGG/src/nlp.jl:144

Fair enough, the error message pointed me towards creating a NonlinearParameter, which can be constructed using @NLparameter.

So far so good:

@NLparameter(model, d_closest == 0.0)
set_value(d_closest, 20)  # works
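To illustrate how such a parameter is meant to be used (a minimal sketch using the same old-style JuMP API as above; the variable x, parameter p, and the quadratic objective are made up for illustration): a parameter appears inside nonlinear expressions as a constant, and set_value changes that constant between solves without rebuilding the model.

```julia
using JuMP, Ipopt

model = Model(with_optimizer(Ipopt.Optimizer))
@variable(model, x)
@NLparameter(model, p == 0.0)          # p acts as a constant in NL expressions
@NLobjective(model, Min, (x - p)^2)

optimize!(model)                        # optimum at x ≈ 0
set_value(p, 20)                        # update the constant in place
optimize!(model)                        # same model object, optimum now at x ≈ 20
```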

However, I was not able to figure out how to keep the boundary conditions.

I naively tried the following, but it's obviously not the way to do it:

@NLparameter(model, d_closest == 0.0)
@NLconstraint(model, cons1, -1000 <= d_closest <= 1000)

since later, when running optimize!(model), I get

IPOPT: Failed to construct problem.

I have not used @NLparameter before, so I don't know about your particular issue.

However, I recently stumbled upon the Parametron package, which might also be helpful for your project:

Parametron makes it easy to set up and efficiently (ideally, with zero allocation) solve instances of a parameterized family of optimization problems.

Thanks @leethargo, I will have a look, but I'd rather stick with the current working environment; I've switched frameworks a couple of times already :wink:

That error-message hint was not a good one: parameters are unrelated to what you're trying to do. You can use set_start_value to change the starting value of a variable. set_start_value is easy to miss because it doesn't appear in the docs.
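For reference, a minimal sketch of that suggestion, reusing the variable names and model from the question above: just update the start values on the existing @variable references and re-solve, keeping the bounds intact.

```julia
# The variables were created with @variable, so their bounds (-1000 <= d_closest <= 1000, ...)
# are already part of the model; only the warm-start points change.
set_start_value(d_closest, 10.0)
set_start_value(t_closest, -350.0)
set_start_value(z_closest, 480.0)

optimize!(model)   # re-solve the unchanged model from the new starting point
```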


Oops, OK thanks :wink: