# How to access values of JuMP variables during optimization

Is there a way to access values of JuMP variables during the optimization?

I need to use JuMP for a constrained optimization. At each iteration of the optimization, I need to access the values of the parameters (i.e., variables in JuMP terminology) and perform some operations on them. This is easy in Optim.jl, but I am running into issues with JuMP. I have written up a toy example of an unconstrained optimization problem below to illustrate the issue with JuMP:

I first optimize the function `ssr_good`, which does not attempt to access the parameter values during the optimization. This runs with both Optim.jl and JuMP and returns the same results.

Next, I try to use JuMP to optimize the function `ssr_bad`. `ssr_bad` is identical to `ssr_good` except that it adds a statement to display the value of the JuMP variable during the optimization. JuMP does not like this. It first gives the warning `┌ Warning: The model has been modified since the last call to optimize! (or optimize! has not been called yet). If you are iteratively querying solution information and modifying a model, query all the results first, then modify the model.` followed by the error `ERROR: OptimizeNotCalled()`.

``````
# objective functions
using Optim, JuMP, Ipopt

function ssr_good(y, x, p)
    return sum((y .- (p[1] .+ p[2:10]' .* x)).^2)
end

function ssr_bad(y, x, p)
    display(value.(p))  # tries to query solution values during the solve
    return sum((y .- (p[1] .+ p[2:10]' .* x)).^2)
end

# fake data
x1 = cumsum(ones(10,9), dims=1)
y1 = 2 .+ cumsum(ones(9))' .* x1 .+ rand(10)

## Optimize ssr_good using Optim.jl and JuMP. Both produce same results
# Optimize ssr_good using Optim.jl
Optim.minimizer(optimize(p->ssr_good(y1,x1,p), ones(10)))

# Optimize ssr_good using JuMP
model = Model(Ipopt.Optimizer)
@variable(model, p_vec[1:10] >=0)
ssr_good_JuMP(p) = ssr_good(y1,x1,p)
@objective(model, Min, ssr_good_JuMP(p_vec))
JuMP.optimize!(model)
value.(p_vec)

## Optimize ssr_bad using JuMP throws error
model = Model(Ipopt.Optimizer)
@variable(model, p_vec[1:10] >=0)
ssr_bad_JuMP(p) = ssr_bad(y1,x1,p)
@objective(model, Min, ssr_bad_JuMP(p_vec))
JuMP.optimize!(model)
value.(p_vec)

``````

In case this is relevant, I am actually interested in updating the values in a struct with each iteration of the optimization. Ideally I could grab `value.(p)` and update my struct using it.

No. JuMP works differently from Optim.

JuMP builds an algebraic expression from your function: it calls `ssr_good` exactly once, at model-build time, with symbolic variable references. It does not call your function with numeric values during the solve, which is why `value.(p)` inside the function fails with `OptimizeNotCalled()`; no solution exists at that point.
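To see this concretely, here is a minimal sketch (reusing the data-generating code from the thread, no solver needed): calling the objective expression with JuMP variables produces a symbolic quadratic expression, not a number.

```julia
using JuMP

model = Model()
@variable(model, p[1:10] >= 0)

# same fake data shape as in the thread
x1 = cumsum(ones(10, 9), dims = 1)
y1 = 2 .+ cumsum(ones(9))' .* x1

# this line runs once, at build time, and returns a symbolic expression
expr = sum((y1 .- (p[1] .+ p[2:10]' .* x1)).^2)
typeof(expr)  # a QuadExpr, not a Float64 -- nothing has been "evaluated"
```

Since `p` never holds numbers during the solve, there is nothing for `display(value.(p))` to show from inside the objective.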

> At each iteration of the optimization, I need to access the values of the parameters (i.e., variable in JuMP terminology) and perform some operations on it.

Can you provide a reproducible example of what you really want to achieve?

This is an example of what I want to achieve. I have a struct which holds different parameters. For any given guess of the parameter vector by the optimizer, I update the parameters in the struct. I then use a function (e.g., `ssr`) to compute the value of an objective function given the updated struct parameters. This works with Optim but not with JuMP.

I know I can get rid of the struct altogether and use the optimizer parameters directly as in `ssr_good` in the OP. I prefer updating the struct first and then calling the elements of the struct because I have hundreds of parameters and keeping them in a struct helps me organize them. Getting rid of the struct is likely to lead to errors on my part since there are too many parameters to keep track of.

I cannot use Optim in my actual application because I have to impose some linear inequality constraints on the parameters (I have some probabilities that I need to estimate and their sum cannot exceed 1).

``````
# objective functions
using Optim, JuMP, Ipopt

struct param_struct_def
    intercept::Array{Float64}
    slopes::Array{Float64}
end
params = param_struct_def(zeros(1), zeros(9))

function ssr(y, x, p)
    params.intercept .= p[1]
    params.slopes    .= p[2:length(p)]
    return sum((y .- (params.intercept .+ params.slopes' .* x)).^2)
end

# fake data
x1 = cumsum(ones(10,9), dims=1)
y1 = 2 .+ cumsum(ones(9))' .* x1 .+ rand(10)

## Optimize ssr using Optim.jl and JuMP.
# Optimize ssr using Optim.jl
Optim.minimizer(optimize(p->ssr(y1,x1,p), ones(10)))
display(params)

# Optimize ssr using JuMP
model = Model(Ipopt.Optimizer)
@variable(model, p_vec[1:10] >=0)
ssr_JuMP(p) = ssr(y1,x1,p)
@objective(model, Min, ssr_JuMP(p_vec))
JuMP.optimize!(model)
value.(p_vec)
``````

I would also be interested to know if there are any other optimizers which allow me to: (i) impose linear inequality constraints, and (ii) access parameter values during the optimization so I can update my struct.

JuMP is an algebraic modeling language. It is not designed, and shouldn't really be used, to optimize a complicated Julia function that takes in a vector of decision variables.

I would rewrite your problem to be:

``````
using JuMP, Ipopt
N = 10
x1 = cumsum(ones(N, N - 1); dims = 1)
y1 = 2 .+ cumsum(ones(N - 1))' .* x1 .+ rand(N)
model = Model(Ipopt.Optimizer)
@variable(model, intercept >= 0)
@variable(model, slopes[1:N-1] >= 0)
@variable(model, residuals[1:N, 1:N-1])
@constraint(model, residuals .== y1 .- (intercept .+ slopes' .* x1))
@objective(model, Min, sum(residuals.^2))
optimize!(model)
@assert is_solved_and_feasible(model)
value(intercept)
value.(slopes)
``````

You can make things very readable and introduce many constraints, decision variables, etc. You don't need a struct to decompose a single parameter vector.

If you want to stick with the struct, JuMP is not the right tool. Try NLPModels.jl or Optimization.jl.
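As a rough sketch of the Optimization.jl route (hedged: the function names and keyword arguments here follow my understanding of its constraint interface via `OptimizationFunction`'s `cons` and `lcons`/`ucons`, solved through OptimizationMOI and Ipopt; check the current docs before relying on details), the key point is that the objective is a plain Julia function called with a numeric vector at every iteration, so the struct-updating pattern from `ssr` works unchanged:

```julia
using Optimization, OptimizationMOI, Ipopt

# plain objective: receives numbers each iteration, so mutating a struct
# inside it (as in `ssr` above) is fine here
obj(u, data) = sum(abs2, u .- 1.0)

# one inequality constraint, written as lcons <= cons(u) <= ucons:
# here sum(u) <= 1, i.e. the "probabilities sum to at most 1" case
cons!(res, u, data) = (res[1] = sum(u); nothing)

optf = OptimizationFunction(obj, Optimization.AutoForwardDiff(); cons = cons!)
prob = OptimizationProblem(optf, fill(0.05, 10); lcons = [-Inf], ucons = [1.0])
sol = solve(prob, Ipopt.Optimizer())
```

The unconstrained minimizer would be `u .= 1`, so the constraint binds and the solution spreads `sum(u) = 1` across the ten entries.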

> I have some probabilities that I need to estimate and their sum cannot exceed 1

P.S. You might note that the first example on https://jump.dev is linear regression where the parameters sum to 1.
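For completeness, that probability constraint is a single line in JuMP (a sketch reusing the `p_vec` variable from the earlier snippets):

```julia
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
@variable(model, p_vec[1:10] >= 0)
# probabilities are nonnegative (from the variable bound) and
# their sum cannot exceed 1
@constraint(model, sum(p_vec) <= 1)
```

Because this is a linear constraint, Ipopt can exploit it directly rather than treating it as a general nonlinear function.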


Is there an advantage to writing the objective function as an `@constraint`? In my actual application I must define the objective using an outside function: computing it involves solving for equilibria within a system, which in turn requires numerical integration over simulated data.

If you're doing linear regression, then Ipopt has special support for linear constraints, and the objective becomes a diagonal sum of squares. This formulation solves faster and is more robust.

If that isn't your actual objective, then it depends.

The more realistic you can make the reproducible example, the better. It's hard to offer advice if that isn't the problem you are interested in solving.