Attention, Learn to Solve Routing Problems
Wouter Kool, Herke van Hoof, Max Welling
University of Amsterdam
Published in ICLR 2019
Motivation
From a high-level perspective, recent years have seen a paradigm shift from human engineering to machine learning: instead of hand-designing algorithms for specific tasks, we increasingly learn them from data.
For combinatorial optimization, exact methods and heuristics are the two main approaches to making decisions.
Heuristics are typically expressed in the form of rules, which can be interpreted as policies to make decisions. We believe that these policies can be parameterized using DNNs, and be trained to obtain new and stronger algorithms for many different combinatorial optimization problems.
The objective of this paper is to design better models and better training methods to replace handcrafted heuristics by learning to route.
We propose to use a powerful model based on attention and train this model using REINFORCE with a simple but effective greedy rollout baseline.
The value of the proposed method lies not in outperforming existing human-designed heuristics on specific tasks, but in flexibly scaling across different routing problems.
This is important progress towards the situation where we can learn strong heuristics to solve a wide range of different practical problems for which no good heuristics exist.
Attention Model
The attention model consists of an encoder and a decoder. The encoder learns a representation of each node and of the graph; the decoder constructs the route.
Encoder
- Input: coordinates of each node.
- Output: a representation of each node; the representation of the graph is computed as the mean of all node embeddings.
- Model: graph attention network.
The original paper includes a figure showing the structure of the encoder; a code sketch follows below.
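To make this concrete, here is a minimal PyTorch sketch of the encoder, assuming 2-D coordinate inputs. The class names and layer sizes are illustrative assumptions, not the authors' exact code; the paper's encoder additionally applies batch normalization after each sublayer.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    """One encoder layer: multi-head self-attention + feed-forward,
    each wrapped in a skip connection (the paper also adds batch norm)."""
    def __init__(self, dim=128, heads=8):
        super().__init__()
        self.mha = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, 512), nn.ReLU(), nn.Linear(512, dim))

    def forward(self, h):
        h = h + self.mha(h, h, h)[0]        # self-attention over all nodes
        return h + self.ff(h)

class Encoder(nn.Module):
    def __init__(self, dim=128, n_layers=3):
        super().__init__()
        self.embed = nn.Linear(2, dim)      # project 2-D node coordinates
        self.layers = nn.ModuleList([EncoderLayer(dim) for _ in range(n_layers)])

    def forward(self, coords):              # coords: (batch, n_nodes, 2)
        h = self.embed(coords)
        for layer in self.layers:
            h = layer(h)
        return h, h.mean(dim=1)             # node embeddings, graph embedding
```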
Decoder
- Input: node embeddings; context: graph embedding + start node embedding + previous node embedding.
- Output: at each step, the node with the highest compatibility is selected and appended to the path, yielding a sequence of nodes.
- Model: graph attention network. Visited nodes are masked out to ensure the feasibility of the solution.
The original paper includes a figure showing the structure of the decoder.
Remark: the graph structure seems to be used only when computing compatibilities between nodes, by setting the compatibility of non-adjacent (or infeasible) nodes to negative infinity.
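A minimal sketch of a single decoding step under the definitions above. The function name `decode_step` and the projection matrices `W_q` (shape 3d × d) and `W_k` (shape d × d) are assumptions for illustration; the paper's full decoder also applies a multi-head "glimpse" before this final single-head selection.

```python
import torch
import torch.nn.functional as F

def decode_step(node_emb, graph_emb, first_emb, prev_emb, visited, W_q, W_k, clip=10.0):
    """One decoding step (sketch): build the context, score every node against
    it, mask visited nodes with -inf, and return selection probabilities.
    node_emb: (batch, n, d); visited: (batch, n) boolean, True = already visited."""
    # Context = graph embedding + start-node embedding + previous-node embedding
    context = torch.cat([graph_emb, first_emb, prev_emb], dim=-1)   # (batch, 3d)
    q = context @ W_q                                               # (batch, d)
    k = node_emb @ W_k                                              # (batch, n, d)
    # Scaled dot-product compatibility, clipped with tanh as in the paper
    compat = clip * torch.tanh((k @ q.unsqueeze(-1)).squeeze(-1) / q.size(-1) ** 0.5)
    compat = compat.masked_fill(visited, float('-inf'))             # infeasible nodes
    return F.softmax(compat, dim=-1)                                # P(next node)
```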
REINFORCE With Greedy Rollout Baseline
Motivation
The goal of a baseline is to estimate the difficulty of an instance s. The difficulty of an instance can (on average) be estimated by the performance of an algorithm applied to it.
Method
The baseline is defined as the cost of a solution from a deterministic greedy rollout of the policy defined by the best model so far.
The full training algorithm is given in the original paper; a minimal sketch follows below.
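A sketch of REINFORCE with the greedy rollout baseline. The helpers `sample_batch` (draws random instances) and `model.rollout` (returns per-instance costs and summed log-probabilities) are hypothetical; the baseline-replacement step follows the paper's one-sided paired t-test (α = 0.05).

```python
import copy
import torch
from scipy import stats

def train(model, sample_batch, n_epochs=100, steps_per_epoch=2500, batch_size=512):
    """REINFORCE with a greedy rollout baseline (sketch).
    `model.rollout` is assumed to return (cost, log_prob) per instance."""
    baseline = copy.deepcopy(model)                      # best model so far
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for epoch in range(n_epochs):
        for _ in range(steps_per_epoch):
            instances = sample_batch(batch_size)
            cost, log_prob = model.rollout(instances, decode='sample')
            with torch.no_grad():                        # baseline gets no gradient
                base_cost, _ = baseline.rollout(instances, decode='greedy')
            # Policy gradient: advantage = cost - greedy-rollout baseline cost
            loss = ((cost - base_cost).detach() * log_prob).mean()
            opt.zero_grad(); loss.backward(); opt.step()
        # Replace the baseline when the current model is significantly better
        with torch.no_grad():
            eval_inst = sample_batch(batch_size)
            c_model, _ = model.rollout(eval_inst, decode='greedy')
            c_base, _ = baseline.rollout(eval_inst, decode='greedy')
        if c_model.mean() < c_base.mean():
            p = stats.ttest_rel(c_model.cpu().numpy(), c_base.cpu().numpy()).pvalue
            if p / 2 < 0.05:                             # one-sided paired t-test
                baseline = copy.deepcopy(model)
    return model
```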
Experiments
Experimental Setup
Train:
- Instance size: n = 20, 50, 100
- 100 epochs × 2,500 steps × batch size 512
Test:
- graph size is consistent with training
- 10000 test instances
- greedy decoding + sampling decoding (reporting the best of 1,280 sampled solutions; see the sketch after this list)
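For illustration, best-of-N sampling can be wrapped around the same hypothetical `rollout` helper used in the training sketch:

```python
import torch

def best_of_n(model, instances, n_samples=1280):
    """Sampling decoding (sketch): draw n_samples tours per instance and keep
    the cheapest. `instances`: (batch, n_nodes, 2)."""
    with torch.no_grad():
        batch = instances.repeat_interleave(n_samples, dim=0)
        cost, _ = model.rollout(batch, decode='sample')
        return cost.view(-1, n_samples).min(dim=1).values  # best cost per instance
```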
Baseline:
- State-of-the-art operations research tools
- Previous work on learning to route
Problems
- Travelling salesman problem
- Vehicle routing problem
- Orienteering problem: maximize the prize collected during a closed-loop tour within a length budget
- Prize collecting TSP (PCTSP): each node has not only an associated prize, but also an associated penalty. The goal is to collect at least a minimum total prize, while minimizing the total tour length plus the sum of penalties of unvisited nodes (a cost sketch follows this list).
- Stochastic PCTSP: the expected node prize is known upfront, but the real collected prize only becomes known upon visitation.
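As a concrete reading of the PCTSP objective above, a sketch of the cost computation; it assumes node 0 is the depot and `visited` lists the visited node indices in order, and the minimum-total-prize feasibility constraint is checked elsewhere.

```python
import math

def pctsp_cost(coords, penalties, visited):
    """PCTSP objective (sketch): tour length over the visited nodes, returning
    to the depot (node 0), plus penalties of all unvisited nodes."""
    tour = [0] + list(visited) + [0]                 # closed tour from the depot
    length = sum(math.dist(coords[a], coords[b]) for a, b in zip(tour, tour[1:]))
    unvisited = set(range(len(coords))) - set(tour)
    return length + sum(penalties[i] for i in unvisited)
```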
Observation on Experimental Results
- The gap between learning to route and human-designed heuristics increases as the graph size grows
- Sampling decoding performs better than greedy decoding, but requires much more computation time
- The experiments do not show whether the proposed method generalizes to larger graphs
Conclusion
From a high-level perspective, we believe that our method is a powerful starting point for learning heuristics for other combinatorial optimization problems defined on graphs, if their solutions can be described as sequential decisions.
Specifically, compared to previous work, using attention instead of recurrence (LSTMs) introduces invariance to the input order of the nodes, increasing learning efficiency.
For future work, scaling to larger problem instances is an important direction. Another challenge is that many problems of practical importance have feasibility constraints that cannot be satisfied by a simple masking procedure.