Algorithm Review
Overview
Fundamentals of Analysis of Algorithm Efficiency
Algorithmic Techniques
  Divide-and-Conquer, Decrease-and-Conquer
  Dynamic Programming
  Greedy Technique
Data Structures
  Heaps
  Graphs: adjacency matrices & adjacency linked lists
  Trees




Fundamentals of Analysis of
      Algorithm Efficiency
Basic operations
Worst-, Best-, and Average-case time
efficiencies
Orders of growth
Efficiency of non-recursive algorithms
Efficiency of recursive algorithms


Worst-Case, Best-Case, and
       Average-Case Efficiency
Worst-case efficiency
  Efficiency (# of times the basic operation will be executed) for the
  worst-case input of size n: the input for which the algorithm runs
  the longest among all possible inputs of size n.
Best-case efficiency
  Efficiency (# of times the basic operation will be executed) for the
  best-case input of size n: the input for which the algorithm runs
  the fastest among all possible inputs of size n.
Average-case efficiency
  Efficiency (# of times the basic operation will be executed) for a
  typical/random input
  NOT the average of the worst and best cases
  How to find the average-case efficiency?
Orders of Growth
Three notations used to compare orders of
growth of algorithms
  O(g(n)): class of functions f(n) that grow no
  faster than g(n)
  Θ(g(n)): class of functions f(n) that grow at the
  same rate as g(n)
  Ω(g(n)): class of functions f(n) that grow at least
  as fast as g(n)

Theorem
 If t₁(n) ∈ O(g₁(n)) and t₂(n) ∈ O(g₂(n)), then
 t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}).
 The analogous assertions are true for the Ω-
 notation and Θ-notation.
 The algorithm’s overall efficiency will be
 determined by the part with the larger order of
 growth.
   Example: 5n² + 3n + 4 ∈ O(n²), since n² is the
   fastest-growing term.


Using Limits for Comparing Orders of
                   Growth

limn→∞ T(n)/g(n) = 0:      order of growth of T(n) < order of growth of g(n)
limn→∞ T(n)/g(n) = c > 0:  order of growth of T(n) = order of growth of g(n)
limn→∞ T(n)/g(n) = ∞:      order of growth of T(n) > order of growth of g(n)

   Examples:
   • 10n        vs.   2n²
   • n(n+1)/2   vs.   n²
   • logb n     vs.   logc n
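
Working the three examples above with the limit rule (a brief
illustration using standard limit algebra):
   • limn→∞ 10n / (2n²) = limn→∞ 5/n = 0, so 10n grows slower than 2n².
   • limn→∞ [n(n+1)/2] / n² = limn→∞ (1/2 + 1/(2n)) = 1/2 > 0, so
     n(n+1)/2 ∈ Θ(n²).
   • limn→∞ (logb n) / (logc n) = limn→∞ (ln n / ln b) / (ln n / ln c)
     = ln c / ln b > 0, so all logarithms have the same order of growth.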
Summary of How to Establish Orders of
      Growth of an Algorithm

Method 1: Using limits.
Method 2: Using the theorem.
Method 3: Using the definitions of O-,
Ω-, and Θ-notation.




Basic Efficiency Classes
fast (high time efficiency)
             1          constant
             log n      logarithmic
             n          linear
             n log n    n log n
             n²         quadratic
             n³         cubic
             2ⁿ         exponential
             n!         factorial
slow (low time efficiency)
Time Efficiency Analysis of Nonrecursive
Algorithms

Steps in mathematical analysis of nonrecursive algorithms:
  Decide on parameter n indicating input size

  Identify algorithm’s basic operation

  Determine worst, average, and best case for input of size n

  Set up summation for C(n) reflecting the number of times the
  algorithm’s basic operation is executed.

  Simplify summation using standard formulas (see Appendix A)
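
  A brief worked instance (a standard textbook example): for finding
  the maximum element of an array, the basic operation is the
  comparison A[i] > max, executed once per loop iteration, so
  C(n) = Σ(i=1..n−1) 1 = n − 1 ∈ Θ(n) for every input of size n.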



Time Efficiency Analysis of Recursive
                 Algorithms
Decide on parameter n indicating input size

Identify algorithm’s basic operation

Determine worst, average, and best case for input of size n

Set up a recurrence relation and initial condition(s) for C(n), the
number of times the basic operation will be executed for an
input of size n (alternatively, count the recursive calls).

Solve the recurrence or estimate the order of magnitude of the
solution (see Appendix B)
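
A brief worked instance (a standard textbook example): for computing
n! recursively, the basic operation is multiplication, giving the
recurrence M(n) = M(n−1) + 1 with M(0) = 0, whose solution is
M(n) = n ∈ Θ(n).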

Master’s Theorem

T(n) = aT(n/b) + f(n)    where f(n) ∈ Θ(n^k)
1. a < b^k         T(n) ∈ Θ(n^k)
2. a = b^k         T(n) ∈ Θ(n^k lg n)
3. a > b^k         T(n) ∈ Θ(n^(log_b a))
Note: the same results hold with O instead of Θ.
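
A brief worked instance: mergesort satisfies T(n) = 2T(n/2) + Θ(n),
so a = 2, b = 2, k = 1. Since a = b^k, case 2 applies and
T(n) ∈ Θ(n lg n).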




Divide-and-Conquer
Three Steps of the Divide-and-Conquer
Approach
The most well-known algorithm design strategy:
1. Divide the problem into two or more smaller
   subproblems.

2.   Conquer the subproblems by solving them
     recursively (or directly, if they are small enough).

3.   Combine the solutions to the subproblems
     into a solution to the original problem.

Divide-and-Conquer Technique
                  a problem of size n



 subproblem 1                            subproblem 2
   of size n/2                             of size n/2



  a solution to                           a solution to
 subproblem 1                            subproblem 2




                      a solution to
                  the original problem
Divide and Conquer Examples
Sorting algorithms
   Mergesort (see the sketch after this list)
       In-place?
       Worst-case efficiency?
   Quicksort
       In-place?
       Worst-case, best-case, and average-case efficiency?
Binary Tree algorithms
    Definitions
       What is a binary tree?
       A node’s/tree’s height?
       A node’s level?
   Pre-order, post-order, and in-order traversal
   Find the height
   Find the total number of leaves.
   …
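
A minimal mergesort sketch in Python (an illustration, not the
course's own code): it is not in-place (merging uses O(n) extra
storage), and its worst-case efficiency is Θ(n log n).

    def mergesort(a):
        """Sort a list by mergesort: Θ(n log n) in all cases."""
        if len(a) <= 1:                     # base case: already sorted
            return a
        mid = len(a) // 2
        left = mergesort(a[:mid])           # divide: sort each half
        right = mergesort(a[mid:])
        merged, i, j = [], 0, 0             # combine: merge sorted halves
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])             # one side may have leftovers
        merged.extend(right[j:])
        return merged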


Decrease-and-Conquer
Decrease and Conquer
 Exploring the relationship between a solution
 to a given instance of a problem and a
 solution to a smaller instance of the same
 problem.
 Use a top-down (recursive) or bottom-up
 (iterative) approach to solve the problem.
 Example: computing aⁿ
   A top-down (recursive) solution
   A bottom-up (iterative) solution


Examples of Decrease and Conquer
Decrease by one: the size of the problem is reduced by the same
constant on each iteration/recursion of the algorithm.
   Insertion sort (see the sketch after this list)
       In-place?
       Worst-case, best-case, and average-case efficiency?
   Graph search algorithms:
      DFS
      BFS

Decrease by a constant factor: the size of the problem is reduced by
the same constant factor on each iteration/recursion of the algorithm.
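
A minimal insertion sort sketch in Python (an illustration): it is
in-place, with worst- and average-case Θ(n²) and best case Θ(n) on
already-sorted input.

    def insertion_sort(a):
        """In-place insertion sort (decrease-by-one, bottom up)."""
        for i in range(1, len(a)):
            key = a[i]                      # next element to insert
            j = i - 1
            while j >= 0 and a[j] > key:    # shift larger elements right
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key                  # drop key into sorted position
        return a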




A Typical Decrease by One Technique

                     a problem of size n



  subproblem
    of size n-1



 a solution to the
  subproblem




                         a solution to
                     the original problem
A Typical Decrease by a Constant Factor
            (half) Technique

                    a problem of size n



 subproblem
   of size n/2



a solution to the
  subproblem




                        a solution to
                    the original problem
What’s the Difference?
Consider the problem of exponentiation:
  Compute aⁿ
  Divide and conquer:
   aⁿ = a^(n/2) * a^(n/2)
  Decrease by one:
   aⁿ = aⁿ⁻¹ * a (top down)    aⁿ = a*a*a*a*...*a (bottom up)
  Decrease by a constant factor:
   aⁿ = (a^(n/2))²
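
A sketch of the decrease-by-a-constant-factor version in Python (an
illustration): it uses Θ(log n) multiplications, versus Θ(n) for the
decrease-by-one version.

    def power(a, n):
        """Compute a**n by halving the exponent on each call."""
        if n == 0:
            return 1                    # a^0 = 1
        half = power(a, n // 2)         # one subproblem of half the size
        if n % 2 == 0:
            return half * half          # a^n = (a^(n/2))^2 for even n
        return half * half * a          # odd n needs one extra factor of a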


Depth-First Search
The idea
   traverse “deeper” whenever possible.
   When reaching a dead end, the algorithm backs up one edge to the
   parent and tries to continue visiting unvisited vertices from there.
   Break ties by the alphabetical order of the vertices.
   It’s convenient to use a stack to track the operation of depth-first
   search.
DFS forest/tree and the two orderings of DFS
DFS can be implemented with graphs represented as:
   Adjacency matrices: Θ(V²)
   Adjacency linked lists: Θ(V+E)
Applications:
   Topological sorting
   checking connectivity, finding connected components
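
A minimal recursive DFS sketch in Python over an adjacency-list
graph (an illustration; the dict-of-lists representation and the
sorted() call are assumptions matching the alphabetical tie-breaking
convention above):

    def dfs(graph, start, visited=None, order=None):
        """Depth-first search; returns vertices in visit order.
        Θ(V+E) with adjacency lists."""
        if visited is None:
            visited, order = set(), []
        visited.add(start)
        order.append(start)
        for v in sorted(graph[start]):  # break ties alphabetically
            if v not in visited:        # go deeper whenever possible
                dfs(graph, v, visited, order)
        return order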

Breadth-First Search
The idea
   Traverse “wider” whenever possible.
   Discover all vertices at distance k from s (on level k) before
   discovering any vertices at distance k+1 (on level k+1)
   Similar to level-by-level tree traversals
   It’s convenient to use a queue to track the operation of
   breadth-first search.
BFS forest/tree and the one ordering of BFS
BFS has same efficiency as DFS and can be
implemented with graphs represented as:
   Adjacency matrices: Θ(V²)
   Adjacency linked lists: Θ(V+E)
Applications:
   checking connectivity, finding connected components
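
A matching BFS sketch in Python (an illustration), with a queue in
place of DFS's stack:

    from collections import deque

    def bfs(graph, start):
        """Breadth-first search; visits all vertices at distance k
        before any at distance k+1. Θ(V+E) with adjacency lists."""
        visited, order = {start}, []
        queue = deque([start])
        while queue:
            u = queue.popleft()         # FIFO queue drives the search
            order.append(u)
            for v in sorted(graph[u]):  # break ties alphabetically
                if v not in visited:
                    visited.add(v)
                    queue.append(v)
        return order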
Heapsort
Heaps
 Definition
 Representation
 Properties
 Heap algorithms
   Heap construction
     Top-down
     Bottom-up
   Root deletion
   Heapsort
     In-place?
     Time efficiency?
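
A compact heapsort sketch in Python (an illustration): in-place,
Θ(n log n) in all cases, with bottom-up heap construction followed
by repeated root deletion.

    def heapsort(a):
        n = len(a)

        def sift_down(i, size):
            """Restore the max-heap property below index i."""
            while 2 * i + 1 < size:             # while i has a child
                j = 2 * i + 1                   # left child
                if j + 1 < size and a[j + 1] > a[j]:
                    j += 1                      # pick the larger child
                if a[i] >= a[j]:
                    break                       # heap property holds
                a[i], a[j] = a[j], a[i]
                i = j

        for i in range(n // 2 - 1, -1, -1):     # bottom-up construction, Θ(n)
            sift_down(i, n)
        for end in range(n - 1, 0, -1):         # repeated root deletion
            a[0], a[end] = a[end], a[0]         # move the max into place
            sift_down(0, end)
        return a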
Examples of Dynamic Programming
             Algorithms

Main idea:
    solve several smaller (overlapping) subproblems
    record solutions in a table so that each subproblem is only
  solved once
    the final state of the table will be (or contain) the solution

vs. Divide and Conquer

Computing binomial coefficients (see the sketch after this list)

Warshall’s algorithm for transitive closure

Floyd’s algorithms for all-pairs shortest paths
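
A minimal DP sketch in Python for binomial coefficients (an
illustration), filling a table so each overlapping subproblem is
solved exactly once:

    def binomial(n, k):
        """C(n, k) via Pascal's rule:
        C(i, j) = C(i-1, j-1) + C(i-1, j), with C(i, 0) = C(i, i) = 1."""
        table = [[0] * (k + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            for j in range(min(i, k) + 1):
                if j == 0 or j == i:
                    table[i][j] = 1             # base cases
                else:                           # reuse recorded solutions
                    table[i][j] = table[i - 1][j - 1] + table[i - 1][j]
        return table[n][k]                      # final entry is the answer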
Greedy Algorithms
Greedy algorithms
Construct a solution through a sequence of steps, each expanding
  a partially constructed solution obtained so far, until a complete
  solution to the problem is reached. The choice made at each step
  must be:
   Feasible
      Satisfy the problem’s constraints
   Locally optimal
      Be the best local choice among all feasible choices
   Irrevocable
      Once made, the choice can’t be changed on subsequent
      steps.
Greedy algorithms do not always yield optimal solutions.
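
A small illustration of that last point, using coin change with the
(assumed, illustrative) denominations {1, 3, 4}:

    def greedy_change(amount, coins=(4, 3, 1)):
        """Always take the largest coin that fits: feasible and
        locally optimal, but not always globally optimal."""
        used = []
        for c in coins:                 # coins in decreasing order
            while amount >= c:
                used.append(c)
                amount -= c
        return used

    # greedy_change(6) -> [4, 1, 1]: three coins,
    # although 3 + 3 pays the same amount with only two.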

Examples of the Greedy Strategy
Minimum Spanning Tree (MST)
  Definition of spanning tree and MST
  Prim’s algorithm
  Kruskal’s algorithm
Single-source shortest paths
  Dijkstra’s algorithm
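
A compact Dijkstra sketch in Python (an illustration; the
dict-of-lists weighted-graph representation is an assumption):

    import heapq

    def dijkstra(graph, source):
        """Single-source shortest paths for non-negative weights.
        graph maps u -> list of (v, weight); Θ((V+E) log V) with a
        binary heap. Greedy: each pop finalizes the closest
        remaining vertex."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                # stale heap entry, skip it
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd        # record the improved estimate
                    heapq.heappush(heap, (nd, v))
        return dist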




P, NP, and NP-Complete Problems
 Tractable and intractable problems
 The class P
 The class NP
 The relationship between P and NP
 NP-complete problems



Backtracking and Branch-and-Bound
 They guarantee solving the problem exactly
 but don’t guarantee finding a solution in
 polynomial time.

 Similarity and difference between
 backtracking and branch-and-bound




