
Knapsack problem with JijModeling-Transpiler

JijModeling-Transpiler is a tool to transpile mathematical models from JijModeling to other modeling platforms. Currently, it supports generating QUBO (Quadratic Unconstrained Binary Optimization) models for QUBO solvers and Python-MIP models for MIP solvers.

To solve an optimization problem using JijModeling-Transpiler, we follow these steps:

  • Define the problem with the mathematical model using JijModeling and set instance data
  • Compile and transpile a model using the JijModeling-Transpiler
  • Solve a problem by the solver
  • Decode the solution from the solver result

Installation

JijModeling-Transpiler and other necessary libraries can be installed using pip

# !pip install jijmodeling
# !pip install jijmodeling-transpiler
# !pip install openjij

Then, you can import them into your code with the usual import statements and their conventional short names:

import jijmodeling as jm
import jijmodeling_transpiler as jmt
import openjij as oj

Knapsack problem

The knapsack problem is a well-known combinatorial optimization problem. In the general case, a set of items is given, and each item has a different value and weight. The goal is to find the optimal set of items whose total weight is less than the given limit and whose total value is as large as possible.

For this tutorial, we use the following example (https://www.documentation.jijzept.com/docs/tutorial/knapsack/). The price and weight of each treasure are given below, and the capacity of the knapsack is 2 kg (2000 g), which is the constraint for this example. Our goal is to find the combination of treasures whose total weight is less than 2 kg and whose total price is as high as possible.

             Treasure A   Treasure B   Treasure C   Treasure D   Treasure E   Treasure F
Price ($)    5000         7000         2000         1000         4000         3000
Weight (g)   800          1000         600          400          500          300

First, we consider a generalization of the above problem. Let $N$ be the total number of treasures. We define the price vector $\bm{v}$ and the weight vector $\bm{w}$ as follows:

\begin{equation}
\begin{split}
&\bm{v} = ( v_0, v_1, v_2, \ldots, v_{N-1} )\\
&\bm{w} = ( w_0, w_1, w_2, \ldots, w_{N-1} )
\end{split}
\end{equation}

We also define binary variables $x_i \in \{ 0, 1 \}$, which represent the selection of treasures.

  • 1 : if we put a treasure in the knapsack
  • 0 : otherwise

Finally, we let $W$ be the capacity of the knapsack.

Now, let us express this problem as a mathematical model. The objective is to maximize the total price of the treasures in the knapsack:

\begin{equation}
\max \sum_{i=0}^{N-1} v_i x_i
\end{equation}

We also need to take into account the constraint, which in this case is the capacity $W$ of the knapsack:

\begin{equation}
\sum_{i=0}^{N-1} w_i x_i \leq W
\end{equation}
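For instance, selecting Treasures A and B from the table above gives a total price of $5000 + 7000 = 12000$ dollars with a total weight of $800 + 1000 = 1800 \leq 2000$ g, so that selection is feasible, though not necessarily optimal.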

Define the mathematical model for the problem

We can implement the mathematical model defined above using JijModeling as follows (please refer to https://www.documentation.jijzept.com/docs/tutorial/knapsack/ for details):

# define variables
v = jm.Placeholder('v', ndim=1)
N = v.shape[0]
N.set_latex('N')
w = jm.Placeholder('w',ndim=1)
W = jm.Placeholder('W')
x = jm.BinaryVar('x', shape=(N,))
i = jm.Element('i', belong_to = (0, N))
# set problem
problem = jm.Problem('Knapsack',sense = jm.ProblemSense.MAXIMIZE)
# set objective function
obj = jm.sum(i, v[i]*x[i])
problem += obj
# set total weight constraint
const = jm.sum(i, w[i]*x[i])
problem += jm.Constraint('weight', const<=W)

Let's print the problem we implemented with JijModeling:

problem

\begin{array}{cccc}
\text{Problem:} & \text{Knapsack} & & \\
& & \max \quad \displaystyle \sum_{i = 0}^{N - 1} v_{i} \cdot x_{i} & \\
\text{s.t.} & & & \\
& \text{weight} & \displaystyle \sum_{i = 0}^{N - 1} w_{i} \cdot x_{i} \leq W & \\
\text{where} & & & \\
& x & 1\text{-dim binary variable} \\
\end{array}

Now let's prepare the data for this example as defined in the table above.

#create a list for price (v) and weight (w)
price = [5000, 7000, 2000, 1000, 4000, 3000]
weight = [800, 1000, 600, 400, 500, 300]
#set the capacity (constraint)
capacity = 2000

data = {'v': price, 'w':weight, 'W':capacity}

Compile and transpile a model

From here, we compile and transpile the problem into a solver-ready model using JijModeling-Transpiler.

First, we compile the model using compile_model(). We pass the JijModeling problem we defined above and the instance data to jmt.core.compile_model.

compiled_model = jmt.core.compile_model(problem, data, {})

Now, we transpile this compiled model into a Quadratic Unconstrained Binary Optimization (QUBO) model using pubo.transpile_to_pubo().

In general, a constrained optimization problem is converted into an unconstrained one by the penalty method. JijModeling-Transpiler supports two methods for this conversion: the penalty method and the Augmented Lagrangian method (ALM). The details of the model conversion are described here.
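Schematically, with the squared-penalty method the inequality constraint is first turned into an equality using a non-negative slack variable $s$, which is then added to the objective with a multiplier $\lambda$. This is a generic illustration of the idea, not necessarily the exact form JijModeling-Transpiler generates:

\begin{equation}
\min_{x, s} \; -\sum_{i=0}^{N-1} v_i x_i + \lambda \left( \sum_{i=0}^{N-1} w_i x_i + s - W \right)^2, \qquad s \geq 0
\end{equation}

The ALM additionally keeps a linear term in the constraint violation, which is why it can avoid slack variables but has more parameters to tune, as summarized below.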

We summarize the merits and demerits of each method.

Penalty method
  • Merit: if the multipliers are large enough, it yields a feasible solution.
  • Demerit: slack variables are needed to convert the inequality constraint.

Augmented Lagrangian method (ALM)
  • Merit: no slack variables are needed, so the number of variables is reduced.
  • Demerit: more parameters require tuning than in the penalty method because of the linear term.

We can choose the method with the relax_method argument of pubo.transpile_to_pubo(), using pubo.RelaxationMethod. When you use pubo.RelaxationMethod.SquaredPenalty with an inequality constraint, JijModeling-Transpiler automatically inserts the slack variables.
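For example, a squared-penalty transpilation would be obtained as follows (a sketch mirroring the ALM call used below; the slack variables for the weight constraint are inserted for you):

# transpile with the squared-penalty relaxation instead of the ALM
pubo_builder_sp = jmt.core.pubo.transpile_to_pubo(compiled_model=compiled_model, relax_method = jmt.core.pubo.RelaxationMethod.SquaredPenalty)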

In this tutorial, we choose the ALM.

# Quadratic Unconstrained Binary Optimization (QUBO) model
pubo_builder = jmt.core.pubo.transpile_to_pubo(compiled_model=compiled_model, relax_method = jmt.core.pubo.RelaxationMethod.AugmentedLagrangian)
# set the multiplier of the "weight" constraint to 2.0
qubo, const = pubo_builder.get_qubo_dict(multipliers = {'weight': 2.0})

Here, we set the multipliers of the penalty terms uniformly with the multipliers argument. By default (if you do not give a value), each multiplier is set to 1. If you would like to control the multipliers in more detail, you can use the detailed_parameters argument instead of the multipliers argument.

Here, qubo is returned as a dictionary whose keys are index pairs of the QUBO matrix and whose values are the corresponding QUBO coefficients; const is the constant offset of the QUBO energy.
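Since qubo is an ordinary Python dictionary, you can inspect it directly, for example to see how many terms and variables the transpiled model contains (a small sketch using only standard Python):

# inspect the transpiled QUBO
print('number of QUBO terms: ', len(qubo))
print('number of binary variables: ', len({idx for key in qubo for idx in key}))
print('constant offset: ', const)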

Alternatively, you can use the MIPBuilder class to transpile to Python-MIP if that is more suitable for your problem.

Solving the problem

Here, we solve the problem with OpenJij and decode the result into a JijModeling SampleSet.

#set sampler 
sampler = oj.SASampler()

#solve the problem
result = sampler.sample_qubo(qubo)

This is the result of the computation before decoding.

result
Response(rec.array([([0, 1, 0, 0, 1, 1], -4.16, 1)],
dtype=[('sample', 'i1', (6,)), ('energy', '<f8'), ('num_occurrences', '<i8')]), Variables([0, 1, 2, 3, 4, 5]), {'system': [], 'sampling_time': 378.40288132429123, 'execution_time': 307.50222504138947, 'list_exec_times': array([307.50222504]), 'schedule': {'beta_max': 123.98535116121745, 'beta_min': 0.46209812037329684, 'num_sweeps': 1000}}, 'BINARY')

In the result above, the rec.array corresponds to the solution of this example problem.
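If you want to look at the raw result a bit more before decoding, you can access the best sample directly; a minimal sketch, assuming OpenJij's Response follows the dimod SampleSet interface (recent OpenJij versions expose first and its fields):

# inspect the best raw sample before decoding (interface assumed from dimod)
best = result.first
print('raw binary assignment: ', best.sample)
print('energy: ', best.energy)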

Now, we can decode this result from OpenJij into a JijModeling SampleSet using pubo.decode_from_openjij. It takes three arguments: the result from OpenJij, the pubo_builder we defined above, and the compiled_model for this problem.

#decode a result to JijModeling sample set
sampleset = jmt.core.pubo.decode_from_openjij(result,pubo_builder,compiled_model)

This is the result after decoding the OpenJij result into a JijModeling SampleSet.

sampleset
SampleSet(record=Record(solution={'x': [(([1, 4, 5],), [1.0, 1.0, 1.0], (6,))]}, num_occurrences=[1]), evaluation=Evaluation(energy=[], objective=[14000.0], constraint_violations={"weight": [0.0]}, constraint_forall={}, constraint_values=[], penalty={}), measuring_time=MeasuringTime(solve=SolvingTime(preprocess=None, solve=None, postprocess=None), system=SystemTime(post_problem_and_instance_data=None, request_queue=None, fetch_problem_and_instance_data=None, fetch_result=None, deserialize_solution=None), total=None), metadata={})

From the sampleset, we can get the feasible solution with the lowest objective by using sampleset.lowest().

#extract a solution list from sampleset.
sol = sampleset.lowest()
sol
SampleSet(record=Record(solution={'x': [(([1, 4, 5],), [1.0, 1.0, 1.0], (6,))]}, num_occurrences=[1]), evaluation=Evaluation(energy=[], objective=[14000.0], constraint_violations={"weight": [0.0]}, constraint_forall={}, constraint_values=[], penalty={}), measuring_time=MeasuringTime(solve=SolvingTime(preprocess=None, solve=None, postprocess=None), system=SystemTime(post_problem_and_instance_data=None, request_queue=None, fetch_problem_and_instance_data=None, fetch_result=None, deserialize_solution=None), total=None), metadata={})

The result is stored in sparse form in sampleset.record.solution; if you prefer a dense matrix, you can use the to_dense method.
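For illustration, here is how the sparse triple (nonzero indices, values, shape) stored in sol.record.solution can be expanded into a dense vector by hand with NumPy (a small sketch; the to_dense method mentioned above is the built-in alternative):

import numpy as np

# rebuild a dense 0/1 vector for x from the sparse (indices, values, shape) triple
(indices,), values, (size,) = sol.record.solution['x'][0]
x_dense = np.zeros(size)
x_dense[indices] = values
print(x_dense)  # expected: [0. 1. 0. 0. 1. 1.]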

Visualising the solution

Therefore, we can conclude that the optimal combination of treasures is Treasure B, E, and F.

chosen_items = sol.record.solution['x'][0][0][0]
chosen_items
[1, 4, 5]
result_price = [data['v'][i] for i in chosen_items]
result_weight = [data['w'][i] for i in chosen_items]
print('Price of chosen items: ', result_price)
print('Weight of chosen items: ', result_weight)
print('Total price: ', sum(result_price))
print('Total weight: ', sum(result_weight))
print('Constraint (capacity): ', data['W'])
Price of chosen items:  [7000, 4000, 3000]
Weight of chosen items: [1000, 500, 300]
Total price: 14000
Total weight: 1800
Constraint (capacity):  2000

The total weight of the chosen set of treasures is indeed less than the capacity constraint $W$.
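The same check can be done programmatically:

# verify that the capacity constraint is satisfied
assert sum(result_weight) <= data['W']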