Iteration, induction, and recursion
Republic of Yemen
THAMAR UNIVERSITY
Faculty of Computer Science & Information System
Lecturer and Researcher at Thamar University
By Eng: Mohammed Hussein
mohammedhbi@thuniv.net
Outline
1. Analysis of running time
2. Iteration, induction, and recursion
Analysis of running time
 An important criterion for the “goodness” of an algorithm is how
long it takes to run on inputs of various sizes (its “running time”).
 When the algorithm involves recursion, we use a formula called a
recurrence equation, which is an inductive definition that
predicts how long the algorithm takes to run on inputs of different
sizes.
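For example (an illustrative sketch, not taken from the slides), a recursive function that does a constant amount of work and then calls itself on an input one smaller, such as the factorial function shown later in these notes, has a running time T(n) described by the recurrence

T(1) = a                  (basis: constant work on the smallest input)
T(n) = T(n-1) + b, n > 1  (induction: one recursive call plus constant extra work)

Unrolling the recurrence gives T(n) = a + (n-1)b, so the running time grows in proportion to n.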
Iteration, induction, and recursion
 Iteration, induction, and recursion are fundamental concepts that
appear in many forms in data models, data structures, and
algorithms.
 Iterative techniques. The simplest way to perform a sequence of operations repeatedly is to use an iterative construct such as the for-statement of C and C++.
 Recursive techniques. Programs that call themselves, either directly or indirectly, can be simpler to write, analyze, and understand than their iterative counterparts in C and C++.
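As a minimal illustration (not part of the original slides), here is a for-statement in C that repeats one operation n times to sum the integers 1 through n:

#include <stdio.h>

/* Iteratively sum the integers 1..n with a for-statement. */
int sum_to(int n)
{
    int total = 0;
    for (int i = 1; i <= n; i++)
        total += i;                 /* the repeated operation */
    return total;
}

int main(void)
{
    printf("%d\n", sum_to(10));     /* prints 55 */
    return 0;
}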
Proofs of program correctness.
 In computer science, we often wish to prove, formally or
informally, that a statement F(n) about a program is true.
 The statement F(n) might, for example, describe what is true on the nth iteration of some loop or what is true for the nth recursive call to some function.
 Iteration: Each beginning programmer learns to use iteration, employing some kind of looping construct such as the for- or while-statement of C.
 Selection Sort, described later in these notes, is an example of an iterative sorting algorithm.
Inductive definitions
 An inductive definition consists of a basis and an inductive step.
 Many important concepts of computer science, especially those
involving data models, are best defined by an induction in which we
give a basis rule defining the simplest example or examples of the
concept, and an inductive rule or rules, where we build larger
instances of the concept from smaller ones.
Notation: The Summation
 The Greek capital letter sigma (Σ) is often used to denote a summation, as in Σ_{i=1}^{n} i.
 This particular expression represents the sum of the integers from 1 to n;
 that is, it stands for the sum 1 + 2 + 3 + · · · + n.
 More generally, we can sum any function f(i) of the summation
index i.
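As a small worked instance (illustrative, not in the original slides), taking f(i) = i² and n = 4 gives

Σ_{i=1}^{4} i² = 1 + 4 + 9 + 16 = 30.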
Induction
 Induction rules
 Basis: show F(0). (The basis could instead be F(1); for the statement below, F(1) means 1 = 1 × 2/2.)
 Hypothesis: assume F(k) holds for arbitrary k ≤ n
 Step: show that F(n + 1) follows
 For example, we suggested that the statement 1 + 2 + · · · + n = n(n+1)/2 can be proved true for all n ≥ 1 by an induction on n.
Induction Example:
Gaussian Closed Form
 Prove 1 + 2 + 3 + … + n = n(n+1) / 2
Basis:
 If n = 0, then 0 = 0(0+1) / 2
Inductive hypothesis:
 Assume 1 + 2 + 3 + … + n = n(n+1) / 2
Step (show true for n+1):
1 + 2 + … + n + (n+1) = (1 + 2 + … + n) + (n+1)
= n(n+1)/2 + (n+1) = [n(n+1) + 2(n+1)]/2
= (n+1)(n+2)/2 = (n+1)((n+1) + 1)/2
A Template for All Inductions
1. Specify the statement F(n) to be proved, for n ≥ i0. Specify what i0 is; often it is 0 or
1, but i0 could be any integer. Explain intuitively what n represents.
2. State the basis case(s).
 These will be all the integers from i0 up to some integer j0. Often j0 = i0, but j0 could
be larger.
3. Prove each of the basis cases F(i0), F(i0 + 1), . . . , F(j0).
4. Set up the inductive step by stating that you are assuming
 F(i0), F(i0 + 1), . . . , F(n)
 (the “inductive hypothesis”) and that you want to prove F(n + 1).
 State that you are assuming n ≥ j0; that is, n is at least as great as the highest basis case.
 Express F(n + 1) by substituting n + 1 for n in the statement F(n).
5. Prove F(n + 1) under the assumptions mentioned in (4).
 If the induction is a weak, rather than complete, induction, then only F(n) will be used
in the proof, but you are free to use any or all of the statements of the inductive
hypothesis.
6. Conclude that F(n) is true for all n ≥ i0 (but not necessarily for smaller n).
Basic Recursion
 Base case: value for which function can be evaluated without
recursion
 Two fundamental rules
 Must always have a base case
 Each recursive call must be to a case that eventually leads toward a
base case
Example Recursion(1/2)
Problem: write an algorithm that will strip digits from an
integer and print them out one by one
void print_out(int n)
{
    if (n < 10)
        print_digit(n);       /* outputs a single digit to the terminal */
    else {
        print_out(n / 10);    /* print the quotient */
        print_digit(n % 10);  /* print the remainder */
    }
}
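A hand trace of one call may help (the input 76234 is an illustrative value, not from the slides):

print_out(76234) -> print_out(7623), then print_digit(4)
print_out(7623)  -> print_out(762),  then print_digit(3)
print_out(762)   -> print_out(76),   then print_digit(2)
print_out(76)    -> print_out(7),    then print_digit(6)
print_out(7)     -> print_digit(7)   (base case)

so the digits appear on the terminal in the order 7 6 2 3 4, i.e., the original number.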
Example Recursion(2/2)
Prove by induction that the recursive printing program works:
 basis: If n has one digit, then the program is correct.
 hypothesis: print_out works for all numbers of k or fewer digits.
 case k+1: A (k+1)-digit number can be written as its first k digits followed by its least significant digit.
The number expressed by the first k digits is exactly floor(n/10), which by the hypothesis prints correctly; the last digit is n%10; so the (k+1)-digit number is printed correctly.
By induction, all numbers are correctly printed.
Recursive
 Recursive programs are often more succinct or easier to understand
than their iterative counterparts. More importantly, some problems
are more easily solved by recursive programs than by iterative
programs.
 A recursive function that implements a recursive definition will
have a basis part and an inductive part.
 Frequently, the basis part checks for a simple kind of input that can
be solved by the basis of the definition, with no recursive call
needed.
 The inductive part of the function requires one or more recursive
calls to itself and implements the inductive part of the definition.
Recursion
 Don't need to know how recursion is being managed
 Recursion is expensive in terms of space requirements; we avoid recursion if a simple loop will do.
 Last two rules
 Assume all recursive calls work
 Do not duplicate work by solving the same problem in separate recursive calls
 Evaluate fib(4) -- use a recursion tree (see the sketch below)
fib(n) = fib(n-1) + fib(n-2)
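Below is a minimal C sketch (illustrative; the slides give only the recurrence, and the base cases fib(0) = 0, fib(1) = 1 are an assumption) showing why the naive recursive fib duplicates work:

#include <stdio.h>

/* Naive recursive Fibonacci: fib(n) = fib(n-1) + fib(n-2). */
int fib(int n)
{
    if (n <= 1)
        return n;                    /* assumed base cases: fib(0)=0, fib(1)=1 */
    return fib(n - 1) + fib(n - 2);  /* two recursive calls */
}

int main(void)
{
    /* Recursion tree for fib(4):
     *
     *                fib(4)
     *               /      \
     *           fib(3)    fib(2)
     *           /    \     /   \
     *       fib(2) fib(1) fib(1) fib(0)
     *       /    \
     *   fib(1) fib(0)
     *
     * fib(2) is computed twice and fib(1) three times: exactly the
     * duplicated work the rule above warns against.
     */
    printf("%d\n", fib(4));   /* prints 3 */
    return 0;
}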
Arithmetic expressions
 Basis: each variable (such as x or y) and each constant (such as 10) is an arithmetic expression; these are the basis rules (1) and (2) used in the derivation on the right.
 Induction: if E1 and E2 are arithmetic expressions, then the following are also arithmetic expressions:
 1. (E1 + E2)
 2. (E1 − E2)
 3. (E1 × E2)
 4. (E1 / E2)
 5. If E is an arithmetic expression,
then so is (−E).
 The operators +, −, ×, and / are
said to be binary operators,
because they take two arguments.
i. x Basis rule (1)
ii. 10 Basis rule (2)
iii. (x + 10) Recursive rule (1)
on (i) and (ii)
iv. (−(x + 10)) Recursive rule
(5) on (iii)
v. y Basis rule (1)
vi. (y × (−(x + 10))) Recursive rule (3) on (v) and (iv)
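The inductive definition maps naturally onto a recursive data structure and a recursive function. The sketch below (illustrative only; the slides do not give this code, and the concrete values x = 2, y = 3 are assumptions) evaluates the expression built in steps (i) through (vi):

#include <stdio.h>

/* A recursive type mirroring the definition: a number is an expression
 * (basis); (E1 op E2) and (-E) are expressions (induction).            */
enum Kind { NUM, ADD, SUB, MUL, DIV, NEG };

struct Expr {
    enum Kind kind;
    double value;               /* used when kind == NUM           */
    struct Expr *left, *right;  /* right is unused (NULL) for NEG  */
};

/* The evaluator has a basis part (NUM) and an inductive part with one
 * recursive call per subexpression, just like the definition itself. */
double eval(const struct Expr *e)
{
    switch (e->kind) {
    case NUM: return e->value;                        /* basis */
    case ADD: return eval(e->left) + eval(e->right);
    case SUB: return eval(e->left) - eval(e->right);
    case MUL: return eval(e->left) * eval(e->right);
    case DIV: return eval(e->left) / eval(e->right);
    case NEG: return -eval(e->left);
    }
    return 0.0;
}

int main(void)
{
    /* Build y × (−(x + 10)) with the assumed values x = 2, y = 3. */
    struct Expr x   = { NUM, 2.0, 0, 0 };
    struct Expr ten = { NUM, 10.0, 0, 0 };
    struct Expr sum = { ADD, 0.0, &x, &ten };   /* (x + 10)        */
    struct Expr neg = { NEG, 0.0, &sum, 0 };    /* (−(x + 10))     */
    struct Expr y   = { NUM, 3.0, 0, 0 };
    struct Expr e   = { MUL, 0.0, &y, &neg };   /* y × (−(x + 10)) */
    printf("%g\n", eval(&e));                   /* prints -36      */
    return 0;
}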
Recursive example
 Recursive function that computes n! given a positive integer n.
 This function is a direct transcription of the recursive definition of n!.
 That is, line (1) distinguishes the basis case from the inductive case.
 We assume that n ≥ 1, so the test of line (1) is really asking whether n =
1. If so, we apply the basis rule, 1! = 1, at line (2).
 If n > 1, then we apply the inductive rule, n! = n × (n − 1)!, at line (3).
int fact(int n)
{
(1)     if (n <= 1)
(2)         return 1;              /* basis */
        else
(3)         return n * fact(n-1);  /* induction */
}
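A usage sketch (the trace in the comment is worked by hand; the driver assumes the fact function above is in scope):

#include <stdio.h>

int fact(int n);   /* the function defined above */

int main(void)
{
    /* fact(4) = 4 * fact(3) = 4 * 3 * fact(2) = 4 * 3 * 2 * fact(1)
     *         = 4 * 3 * 2 * 1 = 24                                  */
    printf("%d\n", fact(4));   /* prints 24 */
    return 0;
}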
Euclid’s Algorithm - gcd
 Euclid’s algorithm is based on the fact
that if u is greater than v then the
greatest common divisor of u and v is
the same as the greatest common
divisor of v and u%v.
 This description explains how to
compute the greatest common divisor
of two numbers by computing the
greatest common divisor of two
smaller numbers.
 We can implement this method
directly in C++ simply by having the
gcd function call itself with smaller
arguments:
 cout<< gcd(461952,116298);
int gcd(int u, int v)
{
    if (v == 0)
        return u;
    else
        return gcd(v, u % v);
}
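To see how the arguments shrink, here is a self-contained sketch with a trace of the call shown above (the trace is computed by hand, not taken from the slides):

#include <stdio.h>

int gcd(int u, int v)
{
    if (v == 0)
        return u;
    else
        return gcd(v, u % v);
}

int main(void)
{
    /* gcd(461952, 116298) -> gcd(116298, 113058) -> gcd(113058, 3240)
     * -> gcd(3240, 2898) -> gcd(2898, 342) -> gcd(342, 162)
     * -> gcd(162, 18) -> gcd(18, 0), which returns 18.               */
    printf("%d\n", gcd(461952, 116298));   /* prints 18 */
    return 0;
}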
Common mistakes in recursion
 One shouldn’t make a recursive call for a larger problem, since that
might lead to a loop in which the program attempts to solve larger
and larger problems.
 Not all programming environments support a general-purpose
recursion facility because of intrinsic difficulties involved.
 When recursion is provided and used, it can be a source of unacceptable inefficiency.
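A small illustrative example of the mistake (not from the slides): the recursive call below is made on a larger problem, so for any positive argument the base case is never reached and the call stack eventually overflows.

/* Wrong: the argument grows, so count(5) calls count(6), count(7), ... */
int count(int n)
{
    if (n == 0)
        return 0;              /* base case, never reached when n > 0 */
    return 1 + count(n + 1);   /* mistake: should shrink toward the base case */
}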
Sorting algorithm
 In computer science, a sorting algorithm is an algorithm that puts the elements of a list in a certain order. The most-used orders are numerical order and lexicographical order. Efficient sorting is important for optimizing the use of other algorithms (such as search and merge algorithms) that require sorted lists to work correctly. More formally, the output must satisfy two conditions:
1. The output is in nondecreasing order (each element is no smaller
than the previous element according to the desired total order);
2. The output is a permutation (reordering) of the input.
Popular sorting algorithms
 Bubble sort
 Selection sort
 Insertion sort
 Shell sort
 Comb sort
 Merge sort
 Heap sort
 Quick sort
 Counting sort
 Bucket sort
 Radix sort
 Distribution sort
 Tim sort
Sorting algorithms
http://en.wikipedia.org/wiki/Sorting_algorithm
Sorting algorithms classified by (1/2):
 Computational complexity (worst, average and best behavior) of element
comparisons in terms of the size of the list (n).
 For typical sorting algorithms good behavior is O(n log n) and bad behavior is
O(n²).
 Ideal behavior for a sort is O(n), but this is not possible in the average case.
 Comparison-based sorting algorithms, which evaluate the elements of the list via an abstract key-comparison operation, need at least Ω(n log n) comparisons for most inputs.
 Memory usage (and use of other computer resources). Some sorting algorithms are "in place" and need only O(1) additional memory; sometimes O(log n) additional memory is also considered "in place".
 Recursion. Some algorithms are either recursive or non-recursive, while others
may be both (e.g., merge sort).
Sorting algorithms classified by (2/2):
 Stability: stable sorting algorithms maintain the relative order of
records with equal keys (i.e., values).
 Whether or not they are a comparison sort. A comparison sort examines the data only by comparing two elements with a comparison operator.
 General method: insertion, exchange, selection, merging, etc.
 Exchange sorts include bubble sort and quick sort.
 Selection sorts include shaker sort and heap sort.
 Adaptability: Whether or not the presortedness of the input affects the running time. Algorithms that take this into account are known to be adaptive.
Comparison sort algorithms
Algorithm name      Method
Selection sort      Selection
Insertion sort      Insertion
Merge sort          Merging
Tim sort            Insertion & Merging
Quick sort          Partitioning
Heap sort           Selection
Binary tree sort    Insertion
Bubble sort         Exchanging
Strand sort         Selection
Sorting
 Sorting is a fundamental operation in computer science (many
programs use it as an intermediate step), and as a result a large
number of good sorting algorithms have been developed.
 Sorting: To sort a list of n elements we need to permute the elements of the list so that they appear in nondecreasing order.
 3, 1, 4, 1, 5, 9, 2, 6, 5
 1, 1, 2, 3, 4, 5, 5, 6, 9
 Thus, the sorted array has two 1’s, two 5’s, and one each of
the numbers that appear once in the original array.
Sorting problem
 Here is how we formally define the sorting problem:
Input:
A sequence of n numbers a1, a2, . . . , an.
Output:
A permutation (reordering) a'1, a'2, . . . , a'n of the input sequence such that a'1 ≤ a'2 ≤ · · · ≤ a'n.
Selection Sort Algorithm
 Suppose we have an array A of n integers that we wish to sort into
nondecreasing order.
 We may do so by iterating a step in which a smallest element not yet part
of the sorted portion of the array is found and exchanged with the
element in the first position of the unsorted part of the array.
 In the first iteration, we find (“select”) a smallest element among the values found in the full array A[0..n-1] and exchange it with A[0].
 In the second iteration, we find a smallest element in A[1..n-1] and exchange it with A[1].
 We continue these iterations. At the start of the (i+1)st iteration, A[0..i-1] contains the i smallest elements in A sorted in nondecreasing order, and the remaining elements of the array are in no particular order.
Selection Sort Algorithm
 The idea of the algorithm is quite simple.
 The array is conceptually divided into two parts: a sorted part and an unsorted part.
 At the beginning, the sorted part is empty, while the unsorted part contains the whole array.
 At every step, the algorithm finds the minimal element in the unsorted part and adds it to the end of the sorted part.
 When the unsorted part becomes empty, the algorithm stops.
Selection Sort Algorithm
 Lines (2) through (5) select a smallest element in the unsorted part of the array, A[i..n-1]. We begin by setting small to i in line (2).
 We set small to the index of the smallest element in A[i..n-1] via the for-loop of lines (3) through (5). And small is set to j if A[j] has a smaller value than any of the array elements in the range A[i..j-1].
 In lines (6) to (8), we exchange the element in that position with the element in A[i].
 Notice that in order to swap two elements, we need a temporary place to store one of them. Thus, we move the value in A[small] to temp at line (6), move the value in A[i] to A[small] at line (7), and finally move the value originally in A[small] from temp to A[i] at line (8).
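The code listing these remarks refer to is not reproduced in the extracted slides. The sketch below is consistent with the line numbers (2) through (8) described above (a standard SelectionSort in C; treat it as a reconstruction rather than the original listing):

void SelectionSort(int A[], int n)
{
    int i, j, small, temp;
(1)     for (i = 0; i < n - 1; i++) {
            /* select a smallest element in the unsorted part A[i..n-1] */
(2)         small = i;
(3)         for (j = i + 1; j < n; j++)
(4)             if (A[j] < A[small])
(5)                 small = j;
            /* exchange A[small] with A[i], using temp as scratch space */
(6)         temp = A[small];
(7)         A[small] = A[i];
(8)         A[i] = temp;
        }
}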
Proving Properties of Programs
 The loop invariants of a program are often the most useful short
explanation one can give of how the program works.
 So, the programmer should have a loop invariant in mind while writing a piece of code.
 That is, there must be a reason why a program works, and this reason
often has to do with an inductive hypothesis that holds each time the
program goes around a loop or each time it performs a recursive
call.
 We shall see a technique for explaining what an iterative program
does as it goes around a loop.
 The key to proving a property of a loop in a program is selecting a
loop invariant, or inductive assertion.
The inner loop of Selection Sort
 1. First, we need to initialize small to i, as we do in line (2).
 2. At the beginning of the for-loop of line (3), we need to initialize j to i + 1.
 3. Then, we need to test whether j < n.
 4. If so, we execute the body of the loop, which consists of lines (4) and (5).
 5. At the end of the body, we need to increment j and go back to the test.
 The inductive assertion is a statement S that is true each time we reach a particular point in the loop. The statement S is then proved by induction on a parameter that in some way measures the number of times we have gone around the loop.
The inner loop of Selection Sort
 We see a point just before the test that is labeled by a loop-invariant statement we have called S(k).
 The first time we reach the test, j has the value i + 1 and small has the value i.
 The second time we reach the test, j has the value i + 2, because j has been incremented once.
 Because the body (lines 4 and 5) sets small to i + 1 if A[i + 1] is less than A[i], we see that small is the index of whichever of A[i] and A[i + 1] is smaller.
 Similarly, the third time we reach the test, the value of j is i + 3 and small is the index of the smallest of A[i..i+2].
 S(k): If we reach the test for j < n in the for-statement of line (3) with k as the value of loop index j, then the value of small is the index of the smallest of A[i..k-1].
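As a small worked check (illustrative values, not from the slides): take i = 0 and A = {4, 1, 3}, so n = 3. We first reach the test with j = 1 and small = 0, the index of the smallest of A[0..0]; the body then sets small = 1 because A[1] < A[0]. We next reach the test with j = 2 and small = 1, the index of the smallest of A[0..1], exactly as S(2) asserts. The body leaves small unchanged since A[2] is not smaller than A[1], and the loop ends with small = 1, the index of the smallest of A[0..2].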
What is Algorithm Analysis?
 How to estimate the time required for an algorithm
 Techniques that drastically reduce the running time of an algorithm
 A mathematical framework that more rigorously describes the running time of an algorithm
Input Size
 Time and space complexity
 This is generally a function of the input size
 E.g., sorting, multiplication
 How we characterize input size depends on the problem:
 Sorting: number of input items
 Multiplication: total number of bits
 Graph algorithms: number of nodes & edges
 Etc
Running Time
 Number of primitive steps that are executed
 Except for the time of executing a function call, most statements roughly require the same amount of time
 y = m * x + b
 c = 5 / 9 * (t - 32 )
 z = f(x) + g(y)
 We can be more exact if need be
Analysis
 Worst case
 Provides an upper bound on running time
 An absolute guarantee
 Average case
 Provides the expected running time
 Very useful, but treat with care: what is “average”?
 Random (equally likely) inputs
 Real-life inputs
Running time for small inputs
Function of Growth rate