I have an optimization problem with a long time horizon (96 steps). Reading some papers [1][2], it seems a common way to reduce the solve time of a Mixed Integer Linear Programming problem is a technique called Rolling Horizon, which is synonymous with Receding Horizon Control and Model Predictive Control.
I’m new to mathematical optimization and JuliaOpt, but I was wondering if there were any examples using this approach to solve an optimization problem?
This is typically just a roll-your-own type of model. You would do something like:
using JuMP

function solve_stage_t(incoming_state)
    model = Model()
    # ... build the stage model using `incoming_state` ...
    optimize!(model)
    # ... extract the end-of-stage state from the solution ...
    return outgoing_state
end

state = [initial_state]
for t in 1:T
    push!(state, solve_stage_t(state[end]))
end
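To make that pattern concrete, here is a minimal toy sketch using JuMP with the HiGHS solver: a storage device bought/sold against made-up prices, where each stage solves a short lookahead window, implements only the first decision, and passes the resulting storage level to the next stage. The prices, bounds, and window length are all invented for illustration.

```julia
using JuMP, HiGHS

price = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]  # made-up prices
H = 4  # lookahead window length

function solve_stage(t, incoming_storage)
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    window = t:min(t + H - 1, length(price))
    n = length(window)
    # storage level at the end of each step; index 0 is the incoming state
    @variable(model, 0 <= storage[0:n] <= 10)
    @variable(model, -2 <= charge[1:n] <= 2)
    @constraint(model, storage[0] == incoming_storage)
    @constraint(model, [i in 1:n], storage[i] == storage[i-1] + charge[i])
    # cost of energy bought (or revenue from energy sold) over the window
    @objective(model, Min, sum(price[window[i]] * charge[i] for i in 1:n))
    optimize!(model)
    # keep only the first decision: return the storage level after step 1
    return value(storage[1])
end

state = [5.0]  # initial storage level
for t in 1:length(price)
    push!(state, solve_stage(t, state[end]))
end
```

The key point is that each iteration builds a completely fresh `Model`, and the only thing carried between stages is the state vector.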
Hi! What was the final implementation for this?
I'm asking because I'm having a similar issue that depends on the solver I use.
Here's a draft of my MWE:
include("makeModels.jl") # <-- I create the spvModel

function rollingHorizon(EMSdata,
                        W,     # weights
                        Tw,    # time window [days]
                        steps, # number of steps to move the window
                        Δt)    # time length of each step [hr]
    # Initialize
    data = EMSdata
    tend = 24 * Tw                       # [hr]
    Dt = (0:Δt:tend) .* 3600             # time array in seconds
    s = modelSettings(nEV = 1:2, dTime = collect(Dt))
    results = Vector{Dict}(undef, steps) # allocate memory
    for ts in 1:steps
        # build + solve model
        model = solvePolicies(KNITRO.Optimizer, s, data)
        results[ts] = getResults(model)
        # update data
        data = update_data(results[ts], s, data)
        # move the time window forward one step
        Dt = Dt .+ Δt * 3600.0
        s.dTime = collect(Dt)
    end
    return results
end
Doing this with Ipopt worked, but with KNITRO it didn't, even if I do model = InfiniteModel() in the middle.
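One thing worth checking (an assumption on my part, since `solvePolicies` isn't shown) is that every iteration builds a brand-new model with its own optimizer instance, so no solver state leaks between windows. A hedged sketch of that structure, with a status check that can reveal solver-specific failures; `build_window_model!` is a hypothetical stand-in for your model-building code:

```julia
using JuMP

function solve_window(optimizer, s, data)
    # A fresh model per window: nothing cached from the previous solve.
    model = Model(optimizer)
    set_silent(model)
    build_window_model!(model, s, data)  # hypothetical: your variables/constraints
    optimize!(model)
    # Local solvers report LOCALLY_SOLVED rather than OPTIMAL, so
    # check the status instead of assuming the solve succeeded.
    st = termination_status(model)
    if st != OPTIMAL && st != LOCALLY_SOLVED
        @warn "Window did not solve" st
    end
    return model
end
```

Logging `termination_status` per window would at least show whether KNITRO is failing on the first window (a model/licensing issue) or only after `update_data` shifts the horizon (a data issue).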