Hello everyone,
I have clear modeling requirements, but I am not sure how (or whether it is possible) to implement my problem with Julia optimization packages such as JuMP. The general optimization problem:
Objective function: obj(x, y, z)
Decision variables: (x, y, z)
Constraints: x + y + z = 1 (as an example)
Main requirements for the optimization:
I need to do a sequential optimization, i.e., repeatedly solve the above optimization problem.
The variables x, y, z correspond to actual systems, for example three batteries: x for battery 1, y for battery 2, and z for battery 3.
Thus, in the sequence of optimization problems, some batteries may fail at some stage due to performance aging. (We do not know this in advance; during the simulation, an index value for each battery is updated to indicate whether it is functioning.)
To adapt to this situation, I need the set of decision variables to be able to change: for example, if battery 1 fails, I need to "get rid of" decision variable x and continue the optimization with decision variables y and z.
My problem:
Is it possible to implement such an optimization with a varying set of decision variables in Julia?
Thank you very much for checking!
Sure, you can do all of this in JuMP. However, your question is a little too broad to get a good answer. (It's easier to provide advice if you have a first attempt at the code that we can suggest improvements to, rather than a generic question.)
Have you formulated this on paper first?
Are you trying to find an optimal policy? Or are you okay using a myopic one? For example, do you intend to take into account when the battery will fail in earlier actions?
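As a rough sketch of one common pattern (not your actual model; the objective, costs, and the HiGHS solver here are placeholders I've assumed), you can rebuild the model at each stage, creating variables only for the batteries that are still functioning:

```julia
# Minimal sketch only: the objective, costs, and solver (HiGHS) are
# placeholders. The point is that the model can be rebuilt at each stage
# with variables only for the batteries that are still functioning.
using JuMP, HiGHS

function solve_stage(active::Vector{Int}, cost::Dict{Int,Float64})
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    # One decision variable per functioning battery.
    @variable(model, 0 <= p[i in active] <= 1)
    # Your constraint x + y + z = 1, restricted to the active batteries.
    @constraint(model, sum(p[i] for i in active) == 1)
    # Placeholder linear objective; replace with your obj(x, y, z).
    @objective(model, Min, sum(cost[i] * p[i] for i in active))
    optimize!(model)
    return Dict(i => value(p[i]) for i in active)
end

cost = Dict(1 => 1.0, 2 => 2.0, 3 => 3.0)
solve_stage([1, 2, 3], cost)  # all three batteries functioning
solve_stage([2, 3], cost)     # battery 1 has failed, so "x" is dropped
```

Because a JuMP model is just a Julia object, there is nothing special about constructing a fresh one inside a loop with a different index set each time.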
Thanks, Odow.
Actually, this is a problem I had already implemented in Python, but I didn't find a suitable solution there. Recently I have been learning Julia, so I am trying to implement my problem in Julia.
I will read the materials you provided and then post again with a specific problem.
Regarding the two questions you posed: I will reformulate my problem and ask again with a specific example.
You are right: I am doing something like optimal control, and my goal is to find an optimal policy.
I think I will need to consider various optimization horizons, both short-term and long-term.
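Roughly, what I have in mind is a rolling-horizon loop like the sketch below (everything in it is a placeholder I've assumed: the stage problem, the failure rule, the horizon length, and the solver). At each stage it re-solves a short-horizon problem with only the functioning batteries and applies the first-period decision:

```julia
# Rough sketch of the rolling-horizon loop I have in mind. The stage
# problem, the failure rule, and the horizon length are all placeholders.
using JuMP, HiGHS

# Placeholder stage problem: allocate a total of 1.0 over the batteries
# that are still functioning, over a short look-ahead horizon `H`.
function solve_horizon(active::Vector{Int}, H::Int)
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, 0 <= p[i in active, t in 1:H] <= 1)
    @constraint(model, [t in 1:H], sum(p[i, t] for i in active) == 1)
    @objective(model, Min, sum(i * p[i, t] for i in active, t in 1:H))
    optimize!(model)
    # Only the first-period decision is applied (receding horizon).
    return Dict(i => value(p[i, 1]) for i in active)
end

functioning = [true, true, true]
for stage in 1:10
    active = [i for i in 1:3 if functioning[i]]
    decision = solve_horizon(active, 4)
    println("stage $stage: ", decision)
    # Placeholder failure rule: battery 1 fails after stage 5.
    if stage == 5
        functioning[1] = false
    end
end
```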