Hi there,
We are translating a large-scale LP model from GAMS to Julia/JuMP.
By large-scale I mean the model has a number of sets (regions, years, sub-annual time resolution such as days and hours, technologies, etc.) and 2-5 dimensional variables; the constraint matrix (before aggregation by CPLEX) can be around 5e+7 by 5e+7 with 3e+7 non-zeros. The model is solvable in GAMS and Pyomo, and it would be great to have a JuMP version as well.
Here is an example of a small model which works well: UTOPIA_BASE_JuMP.7z
However, we have encountered limitations with this approach: it does not work for large-scale models.
First, it does not seem to be the best choice from a performance perspective.
Second, there appears to be a limitation on the length of dictionaries (see my other question: #37247).
The sets in our model are character vectors ("Symbols" and "Dictionaries" in Julia), and we would prefer to keep it this way, using the names of regions etc. rather than numbered sets, though performance is the priority. Would sparse arrays be a better alternative to Dicts/Tuples? (See the sketch below for the kind of indexing we are comparing.)
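For context, here is a minimal sketch of the two container styles in question, written with JuMP's built-in containers; the set names and the `valid` combinations are made up for illustration, and our real model would populate them from data:

```julia
using JuMP

# Illustrative named sets (placeholders, not our real data)
regions = ["R1", "R2"]
years   = [2020, 2030]
techs   = ["wind", "solar"]

model = Model()

# Dense container: one variable per element of the Cartesian product,
# indexed directly by the set names (a DenseAxisArray)
@variable(model, capacity[regions, years, techs] >= 0)

# Sparse container: only create variables for combinations that actually
# exist, via a filter condition (a SparseAxisArray)
valid = Set([("R1", 2020, "wind"), ("R2", 2030, "solar")])
@variable(model, activity[r in regions, y in years, t in techs; (r, y, t) in valid] >= 0)
```

The question is essentially whether a sparse container like the second form scales better than a plain Dict of variables keyed by tuples of names, or whether something else entirely is recommended for models of this size.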
Any thoughts on how to define large-scale models and improve build and solve times would be appreciated.