Multiple Constraints using JuMP for unbounded knapsack

I am new to programming, so I hope that the description of my problem is accurate:

Based on the knapsack problem, I used JuMP and GLPK to come up with a solution for the unbounded knapsack problem:

using JuMP, GLPK

function unbounded_knapsack(paramlist::Vector{Float64})
    weight = [paramlist[i]^2 for i in 1:length(paramlist)]
    model = Model(GLPK.Optimizer)
    @variable(model, x[1:length(paramlist)] >= 0, Int)
    @objective(model, Max, weight' * x)
    @constraint(model, paramlist' * x == 20)
    optimize!(model)
    return value.(x)
end


The function returns the vector [0.0, 0.0, 0.0, 0.0, 4.0].
For my purpose the code works fine.

Now I want to have additional constraints:


I am getting this error:

ERROR: LoadError: Result index of attribute MathOptInterface.VariablePrimal(1) out of bounds. There are currently 0 solution(s) in the model.

The constraints represent additional knapsacks with weight restrictions of 20, 30, 40, …
The code should return the number of items from the parameter list needed to fill the 3 knapsacks effectively.

Is there a way to implement these additional constraints?
Do I even use the right Package for the varied unbounded knapsack problem?


You need to check the termination_status(model) to see if the solver found a solution. As the error says, there are 0 solution(s) in the model, which indicates it is infeasible.
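A minimal sketch of that check, using a deliberately infeasible toy model (the variables and bounds here are illustrative, not taken from the original post):

```julia
using JuMP, GLPK

model = Model(GLPK.Optimizer)
@variable(model, 0 <= x <= 5, Int)
@constraint(model, x >= 10)   # infeasible on purpose
@objective(model, Max, x)
optimize!(model)

# Query value(x) only after confirming the solver found a solution.
if termination_status(model) == MOI.OPTIMAL
    println("x = ", value(x))
else
    println("No solution found; status: ", termination_status(model))
end
```

For the toy model above the status is `INFEASIBLE`, so the branch that calls `value(x)` is never reached and the error from the original post is avoided.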



I don’t know what this means. I’m guessing you mean paramlist' * x == 20. This will apply the constraint to the same x each time. You need a different variable for each knapsack.


I wrote my master's dissertation on the UKP. Here is a short and newer version of it (and here is the long and old one). It seems to me that:

  1. You did not solve the UKP, but a special case of it: the subset-sum problem. In other words, you did not maximize a profit subject to a knapsack constraint; instead, profit and weight are the same, and you want a solution that exactly fills the knapsack (otherwise you would have used <= 20 instead of == 20).
  2. What you seem to want is a multiple-knapsack variant of the UKP. I am not aware of any works considering this variant (I admit I never searched explicitly for it). I think this variant is not much studied because it is not interesting: as the number of items is unbounded, what you put in one knapsack does not affect the others, so you can solve each knapsack separately (losing only some shared effort).
  3. The UKP is very easy to solve (despite being weakly NP-hard). I have implemented many solutions for it in C++ (mainly dynamic programming, but some branch-and-bound too) in this repo.
  4. The easiest (but not the most optimized) way to solve your problem is to take a vector of knapsacks and create the variables x[1:length(paramlist), 1:length(knapsacks)] (i.e., a cross product, so you know how many of each item go into each knapsack), and then adapt the rest of the code accordingly, for example changing the weight constraint to something like:
for (knap_id, capacity) in enumerate(knapsacks)
    @constraint(model, paramlist' * x[:, knap_id] <= capacity)
end

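Putting that suggestion together, a complete multi-knapsack model might look roughly like this (a sketch only; `paramlist` and `knapsacks` are placeholder example data, and the objective is a placeholder to adapt):

```julia
using JuMP, GLPK

paramlist = collect(0.25:0.25:10.0)   # example item weights (assumption)
knapsacks = [20.0, 30.0, 40.0]        # example capacities (assumption)

model = Model(GLPK.Optimizer)
# x[i, k] = how many copies of item i go into knapsack k
@variable(model, x[1:length(paramlist), 1:length(knapsacks)] >= 0, Int)
@objective(model, Max, sum(x))        # placeholder objective; adapt to your needs
for (knap_id, capacity) in enumerate(knapsacks)
    @constraint(model, paramlist' * x[:, knap_id] <= capacity)
end
optimize!(model)
if termination_status(model) == MOI.OPTIMAL
    println(value.(x))
end
```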
Thanks, @Henrique_Becker and @odow, for your replies.

It seems like I am working on a subset-sum instead of an unbounded knapsack problem.

I thought about running the algorithm over each problem (i.e., 3 of them). Afterward, I could intersect the 3 vectors (returned from the algorithm) to get a solution vector that works in each case.

For example:
1. problem --> weight = 3
2. problem --> weight = 6
3. problem --> weight = 9

The algorithm can pick items from the parameterlist = collect(0:0.25:10)

The algorithm returns [0,…,1,…,0] for problem 1, because it just needs one item with weight 3; the same holds for the other 2 problems. The combined solution vector would then say that I need one item with weight 3, one item with weight 6, and one item with weight 9.

However, my goal is to find the solution vector which solves the 3 subset-sum problems in this manner: I want to minimize the variety of items in the solution vector, not by picking the smallest item(s) (0.25), but by using the largest possible items in the parameter list (10, 9.75, 9.5, 9.25, …):

3x "3" wouldn't be good
1x "9" + 2x "3" would be better
1x "6" + 1x "3" would be the best, because it has a small variety and the highest item values 

I hope that the description of my problem is understandable.


OK, so it is a subset sum, which by itself does not have an objective function (i.e., it is not an optimization problem but a decision problem), but in your case you want to minimize “variety” (are you sure you mean minimize and not maximize?). I can only say two things:

  1. One of the decent ways to solve this is a formulation like the one I mentioned before, in which all different “knapsacks/sums” are considered simultaneously. It is the path I suggest.
  2. You need to formalize what variety is. You seem to suggest the model should maximize the weight of the largest item inside each knapsack? Is this okay even if it then needs to put in a small piece to fill the remaining gap? For me, maximizing variety would be something like minimizing the number of times the most-used item is used, but it may be that “variety” was just the best generic word to express your very specific requirement.

Thanks for your reply, @Henrique_Becker.

I used your recommended solution (a variable with a column for each knapsack problem):

After running the algorithm, I get the number of each item I need to pick to solve all knapsack/subset-sum problems.

I want to maximize the number of times a certain item is used in all knapsack problems.

As an example, I minimized the overall number of picked items. My objective function is @objective(model, Min, sum(x)):

The first column equals the number of each item I need to use to solve the first knapsack/subset-sum problem: the algorithm tells me that I need to use the last item 8 times.

The second column equals the number of each item I need to use to solve the second knapsack/subset-sum problem: the algorithm tells me that I need to use the 10th item once and the last item twice, and so on…

The first row shows me that the first item in my item list is not used a single time in any of the problems.
The third row shows me that the third item in the item list is used once (in the fifth-last column) throughout all of the problems.

My goal is to maximize the number of rows whose values are all equal to 0 → sum(row_values) == 0.
This goal “minimizes” the variety of items I need to use to solve the problems.

If I picked only the smallest item, the first row would be the only row with nonzero values (the trivial solution). This would be ideal but not very efficient. I am looking for a compromise between picking large items and maximizing the number of rows with 0-values.

Here you can see the trivial solution (@objective(model, Max, sum(x))):

For the first knapsack problem, I take the smallest item 273 times; for the second knapsack problem, I take the smallest item 72 times; and so on…

I hope that this explanation was understandable.

It seems to me that you have two problems:

  1. You probably need a way to model your “minimize variety” objective (i.e., “maximize the number of rows whose values are all equal to 0”) other than the proxy you used (minimizing the number of used pieces); unless an exact solution is not really needed and this heuristic objective is enough for you. If you want to model the variety minimization exactly, you should probably create an extra binary variable for each item type and use indicator constraints to set that variable to one if any knapsack uses the corresponding item type. The sum of this variable set is then the number of different piece types used across all knapsack solutions.
  2. You need to decide how to merge the two distinct objective functions you have (“variety minimization” and “minimization of the number of pieces used”). I think you have two main possibilities. The first is to create a merged objective function containing both, each multiplied by a hand-chosen weight expressing how important that objective is relative to the other; then, guided by your intuition about what a good solution looks like and by multiple tests, you find the weights that work for you, finally formalizing the problem. Alternatively, you could look at the multi-objective facilities of recent solvers, but I am not sure JuMP supports them yet; you might need to use the low-level interface and lock into a specific solver to use this feature.
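Both suggestions together might be sketched like this. Everything here is an assumption-laden illustration: `paramlist`, the subset-sum targets, the big-M bound, and the weight `λ` are all placeholders to tune, not values from the thread. The indicator link is expressed with a big-M constraint rather than solver-native indicator constraints:

```julia
using JuMP, GLPK

paramlist = collect(0.25:0.25:10.0)        # example item weights (assumption)
targets   = [3.0, 6.0, 9.0]                # example subset-sum targets (assumption)
n, K = length(paramlist), length(targets)
# Upper bound on how many copies of any single item could ever be used:
M = ceil(Int, maximum(targets) / minimum(paramlist))

model = Model(GLPK.Optimizer)
@variable(model, x[1:n, 1:K] >= 0, Int)    # copies of item i in knapsack k
@variable(model, used[1:n], Bin)           # 1 if item type i appears in any knapsack
for k in 1:K
    @constraint(model, paramlist' * x[:, k] == targets[k])
end
for i in 1:n
    # big-M link: any use of item i anywhere forces used[i] = 1
    @constraint(model, sum(x[i, :]) <= M * used[i])
end
# Merged objective: λ trades variety (sum(used)) against piece count (sum(x)).
λ = 10.0                                   # hand-chosen weight (assumption)
@objective(model, Min, λ * sum(used) + sum(x))
optimize!(model)
```

As discussed above, λ has to be found by experimentation: larger values push the solver toward fewer distinct item types, smaller values toward fewer total pieces.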