A reader asked me for some information on how the dynamic programming algorithm for the TSP works. I was surprised to find that a Google search found no good resources. Wikipedia merely acknowledges its existence: “One of the earliest applications of dynamic programming is the Held–Karp algorithm that solves the problem in time O(n^2 * 2^n).”
Consider the TSP problem with N+1 points labeled 0, 1, …, N. We start at point 0. The distance from point i to point j is dist[i][j].
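For concreteness, here is a minimal JavaScript setup matching this notation (the values are made up purely for illustration; the sketches further down build on it):

// N destinations plus the start point 0, so N+1 points in total.
var N = 3;
// dist[i][j] = distance from point i to point j (an arbitrary example matrix).
var dist = [
  [0,  2,  9, 10],
  [1,  0,  6,  4],
  [15, 7,  0,  8],
  [6,  3, 12,  0]
];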
In dynamic programming, we seek to solve a problem by first solving smaller instances of the same problem. We start by looking at the trivially small problem: what is the best way to visit just one of the destinations?
Assume we have N destinations. There are N size-1 problems, one for each of the N destinations. The best way to visit each one is simply to go straight from node 0 to that destination. We store the best solution to each of these problems in a table:
best[subset][end] = dist[0][end]
Here, subset represents which destinations have been visited. More specifically, subset is a bitmask: an integer where bit i is 1 if destination i has been visited and 0 otherwise. Also note that the stored solution is the solution to the A-Z TSP problem, so it does not include the cost to return to the origin. The variable end is the destination that this A-Z problem ends at.
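As a rough sketch of this base case in JavaScript (illustrative only, not the actual BpTspSolver.js code; it uses the dist and N defined above, and keeps bit 0 set in every subset since the start point is always visited, which matches the full bitmask 2^(N+1)-1 used further down):

// best[subset][end] = shortest path from 0 that visits exactly the points in
// the bitmask `subset` and ends at destination `end`.
var numSubsets = 1 << (N + 1);
var best = [];
for (var s = 0; s < numSubsets; s++) {
  best.push(new Array(N + 1).fill(Infinity));
}
// Size-1 problems: subset = { 0, end }.
for (var end = 1; end <= N; end++) {
  best[(1 << end) | 1][end] = dist[0][end];
}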
Now, let's expand to all the A-Z TSP problems that are one size larger:
best[subset][end] = min over i of (best[subset \ { end }][i] + dist[i][end])
Here, i is an intermediary node: its bit is set in subset and it is not equal to end. Before we move on to the next problem size, we have to fill in all the possible subsets of the current size; finding all the subsets of size s is what the function nextSetOf(s) in BpTspSolver.js does. How do we justify this expansion? The total distance of the larger problem does not depend on the order in which the intermediary destinations of the one-size-smaller problem ending at i were visited, so it is safe to store and look at only the optimal such sub-solution.
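A sketch of this expansion step (again illustrative JavaScript, not the real code: the real code enumerates subsets by size with nextSetOf, while this version simply iterates subsets in increasing numerical order, which also works because subset \ { end } is always a smaller number than subset):

for (var subset = 3; subset < numSubsets; subset++) {
  if ((subset & 1) === 0) continue;               // the start point 0 is always visited
  for (var end = 1; end <= N; end++) {
    if ((subset & (1 << end)) === 0) continue;    // end must be in subset
    var prev = subset & ~(1 << end);              // subset \ { end }
    if (prev === 1) continue;                     // size-1 base case, already filled in
    for (var i = 1; i <= N; i++) {
      if (i === end || (prev & (1 << i)) === 0) continue;
      var candidate = best[prev][i] + dist[i][end];
      if (candidate < best[subset][end]) {
        best[subset][end] = candidate;
      }
    }
  }
}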
Finally, when we reach the desired size where all the destinations are visited, we only need to look up the best solution to the A-Z TSP problem in the table:
best[2^(N+1)-1][N]
(2^(N+1)-1 is simply the integer with the lowest N+1 bits all set to 1.) If we want to return to the origin, we instead take the minimum over all i of
best[2^(N+1)-1][i] + dist[i][0]
This then becomes the best solution to the roundtrip problem. It is important to note that while this solution is a big improvement, it is still exponential: there are 2^(N+1) different subsets involved (really 2^N, because 0 is always visited), and each subset solution needs to be stored, which also costs a lot of memory. However, O(2^N) is still much better than the brute force O(N!) solution: 10! is 3 628 800, while 2^10 is only 1024. With the brute force solution, OptiMap could solve optimally up to 9 destinations. The dynamic programming solution now allows OptiMap to solve optimally up to 15 locations.
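Continuing the illustrative sketch above, the two final lookups look like this:

var full = (1 << (N + 1)) - 1;            // all N+1 bits set: every point visited

// A-Z problem: the route is required to end at destination N.
var bestPathLength = best[full][N];

// Roundtrip problem: try every possible last destination i and add the edge back to 0.
var bestTourLength = Infinity;
for (var i = 1; i <= N; i++) {
  if (best[full][i] + dist[i][0] < bestTourLength) {
    bestTourLength = best[full][i] + dist[i][0];
  }
}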
Hello,
Thank you for allowing my interruption of your day.
We have been using the TSP solution and it works great. (we really appreciate it)
We notice on your example site that a route with many addresses can be routed. When we use the solution from our website directly against the Google API, we can only route 10 points.
Is there a setting change that would allow our implementation to work like yours for more than 10 homes? (without spending 25k for the Google Premier membership)
Kind regards,
William
I have implemented the brute force (N-1)! approach using recursion.
I am stuck on the representation of the subsets.
Could anyone tell me how to represent the subsets in 2D arrays?
I want to use the algorithm that uses the subsets.
@raj, you represent a subset with a bitmask, which is just an integer. If location i is in the subset, bit number i is set in the integer. For instance, the subset where locations 3 and 4 are visited is represented by the number 24, because bit number 3 (2^3) and bit number 4 (2^4) are set.
A Google search will get you to some great resources on bitmasks. Once you have mastered the concept, feel free to look at tspDynamic in http://gebweb.net/optimap/BpTspSolver.js.
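For illustration, the basic bitmask operations look roughly like this in JavaScript (a sketch of the concept, not code from BpTspSolver.js):

var subset = 0;
subset |= (1 << 3);                            // mark location 3 as visited
subset |= (1 << 4);                            // mark location 4 as visited
console.log(subset);                           // 24, i.e. 2^3 + 2^4

var hasLocation3 = (subset & (1 << 3)) !== 0;  // true: location 3 is in the subset
var without4 = subset & ~(1 << 4);             // 8: the subset containing only location 3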
Thank you, Geir.
Suppose I take 4 vertices 1, 2, 3, 4. Then if I visit
1->2->3:
(2^2)+(2^3) = 12
d[1][12] will be the distance for 1->2->3
For 1->3->4:
(2^3)+(2^4) = 24
d[1][24] will be the distance for 1->3->4
For 2->3->4:
(2^3)+(2^4) = 24
d[2][24] will be the distance for 2->3->4
Could you tell me whether this representation is correct or not?
The variable-opt method is related to, and a generalization of, the k-opt method. Whereas the k-opt methods remove a fixed number (k) of edges from the original tour, the variable-opt methods do not fix the size of the edge set to remove. Instead they grow the set as the search process continues. The best known method in this family is the Lin–Kernighan method (mentioned above as a misnomer for 2-opt). Shen Lin and Brian Kernighan first published their method in 1972, and it was the most reliable heuristic for solving travelling salesman problems for nearly two decades.

More advanced variable-opt methods were developed at Bell Labs in the late 1980s by David Johnson and his research team. These methods (sometimes called Lin–Kernighan–Johnson) build on the Lin–Kernighan method, adding ideas from tabu search and evolutionary computing. The basic Lin–Kernighan technique gives results that are guaranteed to be at least 3-opt. The Lin–Kernighan–Johnson methods compute a Lin–Kernighan tour, and then perturb the tour by what has been described as a mutation that removes at least four edges and reconnects the tour in a different way, then v-opting the new tour. The mutation is often enough to move the tour out of the local minimum identified by Lin–Kernighan.

V-opt methods are widely considered the most powerful heuristics for the problem, and are able to address special cases, such as the Hamilton Cycle Problem and other non-metric TSPs, on which other heuristics fail. For many years Lin–Kernighan–Johnson had identified optimal solutions for all TSPs where an optimal solution was known, and had identified the best known solutions for all other TSPs on which the method had been tried.
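The mutation described above (removing at least four edges and reconnecting the tour differently) is often implemented as a double-bridge move. A minimal JavaScript sketch of that idea (purely illustrative; it is not taken from any of the implementations mentioned):

// Double-bridge move: cut the tour into four segments A|B|C|D and reconnect
// them as A|C|B|D. This removes four edges and cannot be undone by any single
// 2-opt move, which helps escape the local minimum found by Lin-Kernighan.
function doubleBridge(tour) {
  var n = tour.length;                   // assumes n >= 4
  var p1 = 1 + Math.floor(Math.random() * (n - 3));
  var p2 = p1 + 1 + Math.floor(Math.random() * (n - p1 - 2));
  var p3 = p2 + 1 + Math.floor(Math.random() * (n - p2 - 1));
  return tour.slice(0, p1)               // A
    .concat(tour.slice(p2, p3))          // C
    .concat(tour.slice(p1, p2))          // B
    .concat(tour.slice(p3));             // D
}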
How do we get the actual optimal vertices (the tour itself), and not just its length?