JuMP memory allocation

Hi! :grinning:
I am starting to use JuMP.jl to implement an optimization process on an image (266 x 353 pixels). I set up the problem like this to find the L₁ and L₂ values at each pixel:

model = Model(Ipopt.Optimizer)
set_silent(model)

@variable(model, 0 <= L₁[i = 1:x, j = 1:y], start = start_values.mat1[i, j])
@variable(model, 0 <= L₂[i = 1:x, j = 1:y], start = start_values.mat2[i, j])

function f(spectra, L₁, L₂)
    return sum(
        (detector_response(spectra.top[i, j, :], L₁[i, j], L₂[i, j]) - top_image[i, j])^2 +
        (detector_response(spectra.bottom[i, j, :], L₁[i, j], L₂[i, j]) - bottom_image[i, j])^2
        for i in 1:x, j in 1:y
    )
end

@objective(model, Min, f(spectra, L₁, L₂))
optimize!(model)

However, this process consumes an enormous amount of RAM (more than 50 GB) at the @objective(model, Min, f(spectra, L₁, L₂)) stage. What am I doing wrong, and what could I change to reduce the RAM usage?

Thanks,
fdekerm

Slicing allocates by default in Julia. Try views: view(spectra.top, i, j, :)
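For context, a minimal stand-in example (not the poster's data) showing the difference between a slice and a view:

```julia
# A stand-in 3-D array, analogous to spectra.top:
A = rand(10, 10, 5)

slice_sum(A) = sum(A[1, 1, :])        # A[1, 1, :] copies 5 elements into a new vector
view_sum(A)  = sum(view(A, 1, 1, :))  # view(A, 1, 1, :) wraps A without copying

# Both compute the same value; only the slice allocates a temporary array.
@assert slice_sum(A) ≈ view_sum(A)

# The @views macro rewrites every slice in an expression into a view:
@views s = sum(A[1, 1, :])
@assert s ≈ view_sum(A)
```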

Hi @fdekerme, I’ve moved this to the “Optimization (Mathematical)” section.

Can you provide a reproducible example?

The answer depends on what detector_response is doing. But a simple fix might be:

using JuMP, Ipopt
model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, 0 <= L₁[i = 1:x, j = 1:y], start = start_values.mat1[i, j])
@variable(model, 0 <= L₂[i = 1:x, j = 1:y], start = start_values.mat2[i, j])
@objective(
    model,
    Min,
    sum(
        (detector_response(spectra.top[i, j, :], L₁[i, j], L₂[i, j]) - top_image[i, j])^2 +
        (detector_response(spectra.bottom[i, j, :], L₁[i, j], L₂[i, j]) - bottom_image[i, j])^2
        for i in 1:x, j in 1:y
    )
)
optimize!(model)
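If detector_response is a black-box function whose algebraic expansion blows up the expression graph, another option in recent JuMP versions (≥ 1.15) is a user-defined operator, which JuMP treats as a single node instead of tracing through it. A hypothetical sketch, with a made-up scalar function g standing in for detector_response:

```julia
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, L₁ >= 0)
@variable(model, L₂ >= 0)

# Stand-in for detector_response(spectrum, L₁, L₂); any smooth Julia function works.
g(l1, l2) = exp(-l1) + exp(-l2)

# Register g as an operator taking 2 scalar arguments; JuMP differentiates it
# as a black box rather than expanding its expression.
@operator(model, op_g, 2, g)

@objective(model, Min, (op_g(L₁, L₂) - 1.0)^2)
optimize!(model)
```

In the original problem one operator call per pixel would replace each detector_response term, keeping the objective's expression graph proportional to the pixel count rather than to the expanded algebra inside detector_response.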

See the Performance tips section of the JuMP documentation.