
Solve the Travelling Salesman Problem

(Using a Dynamic Programming and Bit Masking Approach)


Mai Thanh Hoang – 2311048
June 9, 2024

Abstract
The Traveling Salesman Problem (TSP) is widely believed to be intractable: no practically efficient
algorithm for solving it is known. The intrinsic difficulty of the TSP stems from the combinatorial
explosion of potential solutions in the solution space. When a TSP instance is large, the number of
possible solutions is so large that an exhaustive search for the optimal solutions becomes infeasible.
The seemingly “limitless” growth of computational power will not resolve this genuine intractability.
Do we need to explore all the possibilities in the solution space to find the optimal solutions? When
we design an algorithm to solve an optimization problem, we usually ask the critical question: “How
can we find all exact optimal solutions, and how do we know that they are optimal in the solution
space?” This report introduces a Dynamic Programming and Bit Masking approach to solving the TSP.

I. Introduction
The TSP is one of the most intensively investigated optimization problems and is often treated as the
prototypical combinatorial optimization problem; it has provided much motivation for the design of new
search algorithms, the development of complexity theory, and the analysis of solution spaces and search
spaces [1, 2]. The TSP is defined on a complete graph G = {V, E}, where V is a set of n nodes and E is an
edge matrix containing the set of edges that connect the n nodes. The solution space S contains the finite
set of all feasible tours that a salesman may traverse. A tour s in S is a closed route that visits every
node exactly once and returns to the starting node at the end. Like many real-world optimization problems,
the TSP is inherently multimodal; that is, it may contain multiple optimal tours in its solution space.
The TSP has several applications even in its purest formulation, such as planning, logistics, and the
manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA
sequencing.

II. Methodology
Dynamic programming is a problem-solving strategy that involves dividing a complex problem into
smaller overlapping subproblems. The key idea is to solve each subproblem once and store its solution in
a data structure, typically a table or an array. Whenever the same subproblem arises again, instead of
recalculating its solution from scratch, the stored value is retrieved and reused. This approach proves
advantageous when a problem requires an exponential number of recursive function calls, where each call
performs a relatively small amount of computation. By storing and reusing the solutions to subproblems,
dynamic programming avoids redundant calculations, thereby improving the overall efficiency and
reducing the time complexity of the algorithm.
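As a minimal, self-contained illustration of this store-and-reuse idea (deliberately simpler than the TSP itself), the sketch below memoizes a naive recursive Fibonacci computation; the function and table names are illustrative only and do not come from the original report.

#include <iostream>
using namespace std;

// Without the memo table, fib(n) would make an exponential number of recursive
// calls; with it, each subproblem is solved once and afterwards merely looked up.
long long memo[91];          // stored answers (illustrative size limit)
bool solved[91];             // marks which subproblems have already been computed

long long fib(int n) {
    if (n < 2) return n;                       // base cases
    if (solved[n]) return memo[n];             // reuse a stored subproblem solution
    solved[n] = true;
    return memo[n] = fib(n - 1) + fib(n - 2);  // solve once, store, reuse
}

int main() {
    cout << fib(50) << endl;  // 12586269025, computed with roughly 100 calls
}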

Steps for solving a Dynamic Programming problem:

1. Identify whether it is a dynamic programming problem.
2. Decide on a state expression with the fewest parameters.
3. Formulate the states and the transition relationship between them.
4. Apply tabulation or memoization.

The key aspects of Dynamic Programming include:


1. Overlapping Subproblems: In the TSP, we often encounter overlapping subproblems. For
example, when finding the shortest path from a city A to another city B, the subpaths between
intermediate cities may be reused for multiple source-destination pairs. Dynamic Programming
allows us to solve these subproblems once and store their solutions for later reuse.
2. Optimal Substructure: The TSP exhibits an optimal substructure property, meaning that
the optimal solution to the problem can be constructed from optimal solutions to its subproblems.
In other words, if we know the shortest paths between all pairs of cities, we can find the overall
shortest tour by combining these subpaths.
3. Memoization: Memoization is a specific implementation technique used in Dynamic
Programming. For the TSP, we can create a memoization table (or a data structure) to store the
shortest distances between pairs of cities. This way, whenever we need to compute the distance
between two cities, we first check if the result is already computed and stored in the memo. If so,
we retrieve it; otherwise, we compute it and store it in the memo for future use.
4. Recursive Formulation: Dynamic Programming problems are often formulated
recursively. In the TSP, we can define a recursive function that computes the shortest path

2
between two cities, considering all possible intermediate cities. This recursive function can be
solved using memoization to avoid redundant computations.
5. Bottom-up Computation: While the TSP can be solved recursively with memoization, it is
often more efficient to use a bottom-up approach. This involves constructing the solution in a
tabular form, filling in the table systematically by considering smaller subproblems first and
building upon their solutions to solve larger subproblems.
6. Time and Space Complexity: Dynamic Programming solutions for the TSP typically
have time and space complexities that are exponential in the number of cities. However, this is
still far more efficient than the brute-force approach, whose running time grows factorially. The space
requirements can be reduced by using efficient data structures or by employing techniques such as
the bit masking described next.
7. Bit Masking: Bit masking is a technique that can be used to efficiently represent and
manipulate subsets of cities in the TSP. Instead of using explicit subsets or combinations, we can
represent each subset as a bitmask, where each bit corresponds to the presence or absence of a
city in the subset. Bitwise operations such as & (AND), | (OR), << (left shift), and >> (right shift)
can be used to generate, check, and manipulate subsets efficiently, as shown in the snippet after this list.
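The short snippet below illustrates the bitwise subset operations referred to in item 7 above; the variable names are illustrative and not taken from the report's implementation.

#include <bitset>
#include <iostream>
using namespace std;

int main() {
    int n = 5;                          // five cities, numbered 0..4
    int mask = 0;                       // the empty subset of cities

    mask |= (1 << 2);                   // add city 2 to the subset
    mask |= (1 << 4);                   // add city 4 to the subset

    bool hasCity3 = mask & (1 << 3);    // is city 3 in the subset? (no)
    int withoutCity4 = mask ^ (1 << 4); // remove city 4, leaving {2}
    int allCities = (1 << n) - 1;       // the subset containing every city

    cout << bitset<5>(mask) << " " << hasCity3 << " "
         << bitset<5>(withoutCity4) << " " << bitset<5>(allCities) << endl;
    // prints: 10100 0 00100 11111
}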

By leveraging overlapping subproblems, optimal substructure, memoization, and techniques such as bit
masking, Dynamic Programming provides an efficient approach to solving the TSP, especially for instances
with a moderate number of cities.

III. Result
1. Problem Input:
- The graph: a 2D array G[20][20], used as a cost matrix cost, where each element cost[i][j] stores
the cost or expense incurred when traveling from city i to city j.
- Number of vertices
- Start vertex (A-Z)

2. Pseudocode:
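The pseudocode figure from the original document is not reproduced here. The C++ sketch below is one possible reconstruction based on the explanation that follows, assuming a cost matrix cost[i][j], a DP table dp[mask][last], and a fixed start vertex; the names prev, v, and curScore follow the wording of that explanation, and the 4-city cost matrix is purely an illustrative example.

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    // Illustrative 4-city instance; cost[i][j] is the cost of traveling from city i to city j.
    vector<vector<int>> cost = {
        { 0, 10, 15, 20},
        {10,  0, 35, 25},
        {15, 35,  0, 30},
        {20, 25, 30,  0}
    };
    int n = (int)cost.size();
    int start = 0;
    const int INF = 1000000000;

    // dp[mask][last] = cheapest cost of visiting exactly the cities in 'mask',
    // starting from 'start' and currently standing at city 'last'.
    vector<vector<int>> dp(1 << n, vector<int>(n, INF));
    dp[1 << start][start] = 0;

    // The two outer loops iterate through the entire search space (mask, last).
    for (int mask = 1; mask < (1 << n); mask++) {
        for (int last = 0; last < n; last++) {
            // Filter out impossible sub-cases: 'last' must be part of 'mask'.
            if (!(mask & (1 << last))) continue;
            int prev = mask ^ (1 << last);   // cities visited before reaching 'last'
            if (prev == 0) continue;         // the base case is already initialized
            for (int v = 0; v < n; v++) {
                // 'v' is a candidate city visited immediately before 'last';
                // it must belong to 'prev' and be reachable.
                if (!(prev & (1 << v)) || dp[prev][v] == INF) continue;
                int curScore = dp[prev][v] + cost[v][last];
                dp[mask][last] = min(dp[mask][last], curScore);
            }
        }
    }

    // Close the tour by returning from the final city to the start vertex.
    int best = INF;
    for (int last = 0; last < n; last++)
        if (last != start && dp[(1 << n) - 1][last] < INF)
            best = min(best, dp[(1 << n) - 1][last] + cost[last][start]);

    cout << "Minimum tour cost: " << best << endl;   // 80 for this example
}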

- Explanation of the pseudocode:

1. The first two loops iterate through the entire search space.
2. The first if statement filters out impossible sub-cases where the last vertex is not part of the
subset specified by the mask.
3. prev represents the subset of vertices visited before arriving at the vertex last.
4. v represents each potential last location in prev that we could have visited immediately before reaching the vertex last.
5. The innermost if statement performs the same task as the previous if, eliminating impossible
cases.
6. The relevant score (curScore) for this path is the sum of the best score for visiting everything
in prev, ending at vertex v, plus the cost or edge weight of traveling from vertex v to the
vertex last, which is where we aim to end up. The edge weight is denoted by a cost function
that the programmer can define. (In some problems, an edge is not given, and a cost needs to
be calculated from one location to the next.)
7. The last line in the innermost loop updates our answer to be the best among all possibilities
where we visit all vertices in prev, followed by traveling to vertex last.

- Time Complexity: O(n² · 2ⁿ). Consider the number of cities to be n. There are n · 2ⁿ states, and at
every state we loop over n cities to transition to the next state; because of memoization, this looping
transition is performed only once for each state.

- Auxiliary space: O(n · 2ⁿ), due to the memoization of the DP states.
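
- Scale illustration: with n = 20 cities, n² · 2ⁿ ≈ 400 · 1,048,576 ≈ 4.2 × 10⁸ elementary transitions,
whereas a brute-force enumeration would have to examine roughly (n − 1)! ≈ 1.2 × 10¹⁷ distinct tours.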


IV. Conclusion
This algorithmic approach presents a structured programming paradigm for solving the Traveling
Salesman Problem using dynamic programming combined with bit manipulation techniques. By
breaking down the problem into tractable subproblems and utilizing optimized data structures and
efficient algorithms, the implemented solution achieves significant computational performance
enhancements. This optimization makes it suitable for small- to medium-scale instances of the
problem. The detailed procedural explanations and corresponding code implementations provide
a clear understanding of the step-by-step logic, facilitating the coding process and potential
performance optimizations of the solution.

V. References

[1]. Programming Team Lecture: DP Algorithm for Traveling Salesman Problem. (n.d.). Retrieved June 9, 2024, from https://www.cs.ucf.edu/~dmarino/progcontests/modules/dptsp/DP-TSP-Notes.pdf

[2]. GeeksforGeeks. (2013, November 3). Travelling Salesman Problem using Dynamic Programming. GeeksforGeeks. https://www.geeksforgeeks.org/travelling-salesman-problem-using-dynamic-programming/

[3]. Coding Blocks. (2017, November 27). Travelling Salesman Problem using Dynamic Programming - Easiest Approach with Code. YouTube. https://www.youtube.com/watch?v=JE0JE8ce1V0

[4]. Abdul Bari. (2018, February 22). 4.7 Traveling Salesperson Problem - Dynamic Programming. YouTube. https://www.youtube.com/watch?v=XaXsJJh-Q5Y&t=387s
