Introduction to Algorithms
Petar Petrov
10.12.2015
 Algorithm Analysis
 Sorting
 Searching
 Data Structures
 An algorithm is a set of instructions to be
followed to solve a problem.
 Correctness
 Finiteness
 Definiteness
 Input
 Output
 Effectiveness
There are two aspects of algorithmic
performance:
 Time
 Space
 First, we count the number of basic operations in a
particular solution to assess its efficiency.
 Then, we will express the efficiency of algorithms
using growth functions.
 We measure an algorithm’s time requirement
as a function of the problem size.
 The most important thing to learn is how
quickly the algorithm’s time requirement
grows as a function of the problem size.
 An algorithm’s proportional time requirement
is known as growth rate.
 We can compare the efficiency of two
algorithms by comparing their growth rates.
 Each operation in an algorithm (or a program) has a
cost.
 Each operation takes a certain amount of time.
count = count + 1;  takes a certain amount of time, but
that time is constant
A sequence of operations:
count = count + 1; Cost: c1
sum = sum + count; Cost: c2
 Total Cost = c1 + c2
Example: Simple If-Statement
Cost Times
if (n < 0) c1 1
absval = -n c2 1
else
absval = n; c3 1
Total Cost <= c1 + max(c2,c3)
Example: Simple Loop
Cost Times
i = 1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
i = i + 1; c4 n
sum = sum + i; c5 n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5
 The time required for this algorithm is proportional
to n
Example: Nested Loop
Cost Times
i=1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
j=1; c4 n
while (j <= n) { c5 n*(n+1)
sum = sum + i; c6 n*n
j = j + 1; c7 n*n
}
i = i +1; c8 n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 +
n*(n+1)*c5+n*n*c6+n*n*c7+n*c8
 The time required for this algorithm is proportional to n²
 Consecutive Statements
 If/Else
 Loops
 Nested Loops
 Informal definitions:
◦ Given a complexity function f(n),
◦ O(f(n)) is the set of complexity functions that are
upper bounds on f(n)
◦ Ω(f(n)) is the set of complexity functions that are
lower bounds on f(n)
◦ Θ(f(n)) is the set of complexity functions that,
given the correct constants, correctly describe f(n)
 Example: If f(n) = 17n³ + 4n − 12, then
◦ O(f(n)) contains n³, n⁴, n⁵, 2ⁿ, etc.
◦ Ω(f(n)) contains 1, n, n², n³, log n, n log n, etc.
◦ Θ(f(n)) contains n³
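The example can also be checked numerically: the constant c = 18 witnesses f(n) = O(n³), since 17n³ + 4n − 12 ≤ 18n³ holds for every n ≥ 2. A small sketch (the class and method names are illustrative):

```java
public class GrowthCheck {
    // f(n) = 17n^3 + 4n - 12, from the example above
    static long f(long n) {
        return 17 * n * n * n + 4 * n - 12;
    }

    // Does f(n) <= c * n^3 hold? For c = 18 this reduces to
    // n^3 >= 4n - 12, which is true for every n >= 2.
    static boolean boundedByCTimesNCubed(long n, long c) {
        return f(n) <= c * n * n * n;
    }
}
```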
Example: Simple If-Statement
Cost Times
if (n < 0) c1 1
absval = -n c2 1
else
absval = n; c3 1
Total Cost <= c1 + max(c2,c3)
O(1)
Example: Simple Loop
Cost Times
i = 1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
i = i + 1; c4 n
sum = sum + i; c5 n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5
 The time required for this algorithm is proportional
to n
O(n)
Example: Nested Loop
Cost Times
i=1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
j=1; c4 n
while (j <= n) { c5 n*(n+1)
sum = sum + i; c6 n*n
j = j + 1; c7 n*n
}
i = i +1; c8 n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 +
n*(n+1)*c5+n*n*c6+n*n*c7+n*c8
 The time required for this algorithm is proportional to n²
O(n²)
Function   Growth Rate Name
c          Constant
log N      Logarithmic
log² N     Log-squared
N          Linear
N log N    Linearithmic
N²         Quadratic
N³         Cubic
2ᴺ         Exponential
 Input:
◦ A sequence of n numbers a₁, a₂, …, aₙ
 Output:
◦ A permutation (reordering) a₁′, a₂′, …, aₙ′ of the
input sequence such that a₁′ ≤ a₂′ ≤ ⋯ ≤ aₙ′
 In-Place Sort
◦ The amount of extra space required to sort the data
is constant with the input size.
(Diagram: a file sorted on its first key is then sorted
on a second key; afterwards the records with key value 3
are no longer in order on the first key.)
 Stable sort
◦ preserves relative order of records with equal keys
 Idea: like sorting a hand of playing cards
◦ Start with an empty left hand and the cards facing
down on the table.
◦ Remove one card at a time from the table, and
insert it into the correct position in the left hand
◦ The cards held in the left hand are sorted
To insert 12, we need to
make room for it by moving
first 36 and then 24.
insertionsort (a) {
  for (i = 1; i < a.length; ++i) {
    key = a[i]
    pos = i
    while (pos > 0 && a[pos-1] > key) {
      a[pos] = a[pos-1]
      pos--
    }
    a[pos] = key
  }
}
 O(n²), stable, in-place
 O(1) space
 Great with small number of elements
 Algorithm:
◦ Find the minimum value
◦ Swap with 1st position value
◦ Repeat with 2nd position down
 O(n²), not stable (as usually implemented), in-place
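The three steps above translate directly into code; a minimal sketch (names are illustrative). Note that the long-distance swap is what breaks stability:

```java
public class SelectionSort {
    // Find the minimum of the unsorted suffix and swap it into
    // position i; repeat for each position from the left.
    static int[] sort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++) {
                if (a[j] < a[min]) min = j;
            }
            int tmp = a[i]; a[i] = a[min]; a[min] = tmp;
        }
        return a;
    }
}
```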
 Algorithm
◦ Traverse the collection
◦ “Bubble” the largest value to the end using pairwise
comparisons and swapping
 O(n²), stable, in-place
 Totally useless?
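Not totally: with an early-exit flag, bubble sort finishes in O(n) on already-sorted input, which is about its only redeeming quality. A sketch (names are illustrative):

```java
public class BubbleSort {
    // "Bubble" the largest remaining value to the end with pairwise
    // swaps; stop early if a full pass makes no swap at all.
    static int[] sort(int[] a) {
        for (int end = a.length - 1; end > 0; end--) {
            boolean swapped = false;
            for (int i = 0; i < end; i++) {
                if (a[i] > a[i + 1]) {
                    int tmp = a[i]; a[i] = a[i + 1]; a[i + 1] = tmp;
                    swapped = true;
                }
            }
            if (!swapped) break;  // presorted input: one O(n) pass
        }
        return a;
    }
}
```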
1. Divide: split the array in two halves
2. Conquer: sort recursively both subarrays
3. Combine: merge the two sorted subarrays into a
sorted array
mergesort (a, left, right) {
  if (left < right) {
    mid = (left + right)/2
    mergesort (a, left, mid)
    mergesort (a, mid+1, right)
    merge (a, left, mid+1, right)
  }
}
 The key to Merge Sort is merging two sorted lists
into one: given two lists X = (x₁, x₂, …, xₘ) and
Y = (y₁, y₂, …, yₙ), the resulting list is
Z = (z₁, z₂, …, zₘ₊ₙ)
 Example:
L1 = { 3 8 9 } L2 = { 1 5 7 }
merge(L1, L2) = { 1 3 5 7 8 9 }
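The merge step itself can be sketched as follows (a minimal version for int arrays; names are illustrative). At each step the smaller front element moves to the result:

```java
public class Merge {
    // Merge two sorted arrays into one sorted array.
    static int[] merge(int[] x, int[] y) {
        int[] z = new int[x.length + y.length];
        int i = 0, j = 0, k = 0;
        while (i < x.length && j < y.length) {
            // <= prefers the left list on ties, keeping the merge stable
            z[k++] = (x[i] <= y[j]) ? x[i++] : y[j++];
        }
        while (i < x.length) z[k++] = x[i++];  // copy any leftovers
        while (j < y.length) z[k++] = y[j++];
        return z;
    }
}
```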
Merging X = { 3 10 23 54 } and Y = { 1 5 25 75 },
step by step (the smaller front element moves to Result):
Result: 1
Result: 1 3
Result: 1 3 5
Result: 1 3 5 10
Result: 1 3 5 10 23
Result: 1 3 5 10 23 25
Result: 1 3 5 10 23 25 54
Result: 1 3 5 10 23 25 54 75
Merge Sort on { 99 6 86 15 58 35 86 4 0 }:
Divide into { 99 6 86 15 } and { 58 35 86 4 0 },
recursively down to single elements, then merge back up:
{ 6 99 } { 15 86 }   { 35 58 } { 0 4 86 }
{ 6 15 86 99 }       { 0 4 35 58 86 }
{ 0 4 6 15 35 58 86 86 99 }
Merge Sort runs in O(N log N) in all cases, because of
its divide-and-conquer approach.
T(N) = 2T(N/2) + N = O(N log N)
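Filling in the pseudocode above, a complete runnable version might look like this (a sketch; here merge receives the mid index instead of mid+1, which is just an implementation choice):

```java
public class MergeSortDemo {
    static void mergeSort(int[] a, int left, int right) {
        if (left < right) {
            int mid = (left + right) / 2;
            mergeSort(a, left, mid);        // sort the left half
            mergeSort(a, mid + 1, right);   // sort the right half
            merge(a, left, mid, right);     // combine them
        }
    }

    // Merge the sorted ranges a[left..mid] and a[mid+1..right].
    static void merge(int[] a, int left, int mid, int right) {
        int[] tmp = new int[right - left + 1];
        int i = left, j = mid + 1, k = 0;
        while (i <= mid && j <= right) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i <= mid) tmp[k++] = a[i++];
        while (j <= right) tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, left, tmp.length);
    }

    static int[] sorted(int[] a) {
        mergeSort(a, 0, a.length - 1);
        return a;
    }
}
```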
1. Select: pick an element x
2. Divide: rearrange elements so that x goes to its
final position
• L — elements less than x
• G — elements greater than or equal to x
3. Conquer: sort recursively L and G
quicksort (a, left, right) {
  if (left < right) {
    pivot = partition (a, left, right)
    quicksort (a, left, pivot-1)
    quicksort (a, pivot+1, right)
  }
}
 How to pick a pivot?
 How to partition?
 Use the first element as pivot
◦ if the input is random, ok
◦ if the input is presorted? - shuffle in advance
 Choose the pivot randomly
◦ generally safe
◦ random numbers generation can be expensive
 Use the median of the array
◦ Partitioning always cuts the array in half
◦ An optimal quicksort (O(n log n))
◦ but it is hard to find the exact median (chicken-and-egg?)
◦ so use an approximation to the exact median:
 Median of three
◦ Compare just three elements: the leftmost, the
rightmost and the center
◦ Use the middle of the three as pivot
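The median of three values can be picked without sorting them; a compact sketch (the class name is illustrative):

```java
public class MedianOfThree {
    // Median of the three candidate pivots: typically the leftmost,
    // center and rightmost elements of the range being partitioned.
    static int median(int x, int y, int z) {
        return Math.max(Math.min(x, y), Math.min(Math.max(x, y), z));
    }
}
```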
 Given a pivot, partition the elements of the
array such that the resulting array consists of:
◦ One subarray that contains elements < pivot
◦ One subarray that contains elements >= pivot
 The subarrays are stored in the original array
Example: partitioning 40 20 10 80 60 50 7 30 100 with
pivot_index = 0 (pivot value 40); too_big_index scans
right from index 1, too_small_index scans left from
index 8:
1. while a[too_big_index] <= a[pivot_index]
++too_big_index
2. while a[too_small_index] > a[pivot_index]
--too_small_index
3. if too_big_index < too_small_index
swap a[too_big_index] and a[too_small_index]
4. while too_small_index > too_big_index, go to 1.
5. swap a[too_small_index] and a[pivot_index]
Trace (the array after each swap):
40 20 10 80 60 50 7 30 100  (the scans stop at 80 and 30)
40 20 10 30 60 50 7 80 100  (80 and 30 swapped; the scans
stop at 60 and 7)
40 20 10 30 7 50 60 80 100  (60 and 7 swapped; the indexes
cross, so step 5 runs)
7 20 10 30 40 50 60 80 100  (the pivot lands at index 4,
its final position)
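The two-index scheme above can be implemented like this (a sketch; variable names follow the slides):

```java
public class Partition {
    // Two-index partition: the pivot is a[left].
    static int partition(int[] a, int left, int right) {
        int pivot = a[left];
        int tooBig = left + 1;  // scans right for an element > pivot
        int tooSmall = right;   // scans left for an element <= pivot
        while (tooBig <= tooSmall) {
            while (tooBig <= right && a[tooBig] <= pivot) tooBig++;  // step 1
            while (a[tooSmall] > pivot) tooSmall--;                  // step 2
            if (tooBig < tooSmall) {                                 // step 3
                int t = a[tooBig]; a[tooBig] = a[tooSmall]; a[tooSmall] = t;
            }
        }                                                            // step 4
        a[left] = a[tooSmall];                                       // step 5
        a[tooSmall] = pivot;
        return tooSmall;  // the pivot's final position
    }

    // The example from the slides.
    static int[] demo() {
        int[] a = {40, 20, 10, 80, 60, 50, 7, 30, 100};
        partition(a, 0, 8);
        return a;
    }
}
```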
 Running time
◦ pivot selection: constant time, i.e. O(1)
◦ partitioning: linear time, i.e. O(N)
◦ running time of the two recursive calls
 T(N) = T(i) + T(N-i-1) + cN, where c is a
constant
◦ i: number of elements in L
 What will be the worst case?
◦ The pivot is the smallest element, all the time
◦ Partition is always unbalanced
 What will be the best case?
◦ Partition is perfectly balanced.
◦ Pivot is always in the middle (median of the array)
 Java API provides a class Arrays with several
overloaded sort methods for different array
types
 Class Collections provides similar sorting
methods
Arrays methods:
public static void sort (int[] a)
public static void sort (Object[] a)
// requires Comparable
public static <T> void sort (T[] a,
Comparator<? super T> comp)
// uses given Comparator
Collections methods:
public static <T extends Comparable<? super T>>
void sort (List<T> list)
public static <T> void sort (List<T> l,
Comparator<? super T> comp)
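Using these methods might look like this (the demo class and data are illustrative):

```java
import java.util.Arrays;
import java.util.Comparator;

public class SortApiDemo {
    // Primitive arrays are sorted in natural (ascending) order.
    static int[] natural(int[] a) {
        Arrays.sort(a);
        return a;
    }

    // Object arrays can be sorted with an explicit Comparator.
    static String[] byLength(String[] words) {
        Arrays.sort(words, Comparator.comparingInt(String::length));
        return words;
    }
}
```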
 Given the collection and an element to
find…
 Determine whether the “target”
element was found in the collection
◦ Print a message
◦ Return a value
(an index or pointer, etc.)
 Don’t modify the collection in the
search!
 A search traverses the collection until
◦ the desired element is found
◦ or the collection is exhausted
linearsearch (a, key) {
  for (i = 0; i < a.length; i++) {
    if (a[i] == key) return i
  }
  return -1
}
40 20 10 30 7
Search for 20:
40 != 20, 20 == 20
return 1
Search for 5:
40 != 5, 20 != 5, 10 != 5, 30 != 5, 7 != 5
return -1
 O(n)
 Examines every item
 Locates a target value in a sorted array/list
by successively eliminating half of the array
on each step
binarysearch (a, low, high, key) {
  while (low <= high) {
    mid = (low + high) >>> 1
    midVal = a[mid]
    if (midVal < key) low = mid + 1
    else if (midVal > key) high = mid - 1
    else return mid
  }
  return -(low + 1)  // not found: -(insertion point) - 1
}
Binary search in 1 3 4 6 7 8 10 13 14
Search for 4:
4 < 7   keep the left half
4 > 3   keep its right part
4 == 4  found: return 2
Search for 9:
9 > 7   keep the right half
9 < 10  keep its left part
9 > 8   right < left: not found
return -7
 Requires a sorted array/list
 O(log n)
 Divide and conquer
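The same contract (index when found, -(insertion point) - 1 when not) is exposed by java.util.Arrays; a short demo (the wrapper method is illustrative):

```java
import java.util.Arrays;

public class BinarySearchDemo {
    // Requires a sorted array; returns the index of the key,
    // or -(insertionPoint) - 1 when the key is absent.
    static int find(int[] sortedArray, int key) {
        return Arrays.binarySearch(sortedArray, key);
    }
}
```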
Interface hierarchy:
Collection — extended by List and Set;
Set — extended by SortedSet;
Map — extended by SortedMap
Classes:
LinkedList and ArrayList implement List;
HashSet implements Set; TreeSet implements SortedSet;
HashMap implements Map; TreeMap implements SortedMap
 Set
◦ The familiar set abstraction.
◦ No duplicates; May or may not be ordered.
 List
◦ Ordered collection, also known as a sequence.
◦ Duplicates permitted; Allows positional access
 Map
◦ A mapping from keys to values.
◦ Each key can map to at most one value (function).
Set            List        Map
HashSet        ArrayList   HashMap
LinkedHashSet  LinkedList  LinkedHashMap
TreeSet        Vector      Hashtable
                           TreeMap
 Ordered
◦ Elements are stored and accessed in a specific
order
 Sorted
◦ Elements are stored and accessed in a sorted
order
 Indexed
◦ Elements can be accessed using an index
 Unique
◦ Collection does not allow duplicates
 A linked list is a series of connected nodes
 Each node contains at least
◦ A piece of data (any type)
◦ Pointer to the next node in the list
 Head: pointer to the first node
 The last node points to NULL
(Diagram: a singly linked list Head → A → B → C → ∅;
each node stores a piece of data and a pointer to the
next node.)
(Diagrams: inserting a new node D into the list, and
deleting a node by redirecting the pointers around it.)
(Diagram: a doubly linked list — each node also has a
previous pointer, and the list keeps a Tail pointer.)
Operation                  Complexity
insert at beginning        O(1)
insert at end              O(1)
insert at index            O(n)
delete at beginning        O(1)
delete at end              O(1)
delete at index            O(n)
find element               O(n)
access element by index    O(n)
 Resizable-array implementation of the List
interface
 capacity vs. size
(Diagram: appending D to [A B C] fits in place while
capacity > size; once capacity = size, appending E first
copies the elements into a larger backing array.)
Operation                  Complexity
insert at beginning        O(n)
insert at end              O(1) amortized
insert at index            O(n)
delete at beginning        O(n)
delete at end              O(1)
delete at index            O(n)
find element               O(n)
access element by index    O(1)
 Some collections are constrained so clients
can only use optimized operations
◦ stack: retrieves elements in reverse order as added
◦ queue: retrieves elements in same order as added
(Diagram: a stack with 1 at the bottom, 2 in the middle
and 3 on top — push adds to the top; pop and peek act on
the top. A queue 1 2 3 — add appends at the back; remove
and peek act on the front.)
 stack: A collection based on the principle of
adding elements and retrieving them in the
opposite order.
 basic stack operations:
◦ push: Add an element to the top.
◦ pop: Remove the top element.
◦ peek: Examine the top element.
 Programming languages and compilers:
◦ method call stack
 Matching up related pairs of things:
◦ check correctness of brackets (){}[]
 Sophisticated algorithms:
◦ undo stack
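The bracket-matching application can be sketched with java.util.Deque used as a stack (the class name is illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BracketChecker {
    // Push every opening bracket; each closing bracket must match
    // the most recently opened one (the top of the stack).
    static boolean balanced(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '(': case '[': case '{': stack.push(c); break;
                case ')': if (stack.isEmpty() || stack.pop() != '(') return false; break;
                case ']': if (stack.isEmpty() || stack.pop() != '[') return false; break;
                case '}': if (stack.isEmpty() || stack.pop() != '{') return false; break;
            }
        }
        return stack.isEmpty();  // no bracket left unclosed
    }
}
```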
 queue: Retrieves elements in the order they
were added.
 basic queue operations:
◦ add (enqueue): Add an element to the back.
◦ remove (dequeue): Remove the front element.
◦ peek: Examine the front element.
 Operating systems:
◦ queue of print jobs to send to the printer
 Programming:
◦ modeling a line of customers or clients
 Real world examples:
◦ people on an escalator or waiting in a line
◦ cars at a gas station
 A data structure optimized for a very
specific kind of search / access
 In a map we access by asking "give me the
value associated with this key."
 capacity, load factor
(Diagram: a hash function maps keys of any type — a
character, a name, an e-mail address, a date — to
numbers, e.g. 'A' → 65.)
 Implements Map
 Fast put, get operations
 hashCode(), equals()
(Diagram: a table with buckets 0–5; the entry
("BG", "359") goes into bucket 5, because
"BG".hashCode() = 2117 and 2117 % 6 = 5.)
 What to do when inserting an element and
already something present?
 Could search forward or backwards for an
open space
 Linear probing
◦ move forward 1 spot. Open?, 2 spots, 3 spots
 Quadratic probing
◦ 1 spot, 4 spots, 9 spots, 16 spots (i² steps)
 Resize when load factor reaches some limit
 Each element of the hash table can be another data
structure (separate chaining)
◦ LinkedList
◦ Balanced Binary Tree
 Resize at given load factor or when any chain
reaches some limit
 Implements Map
 Sorted
 Easy access to the smallest and the biggest elements
 logarithmic put, get
 Comparable or Comparator
 0, 1, or 2 children per node
 Binary Search Tree
◦ node.left < node.value
◦ node.right >= node.value
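A short TreeMap demo (the keys and values are illustrative):

```java
import java.util.TreeMap;

public class TreeMapDemo {
    static TreeMap<String, Integer> codes() {
        TreeMap<String, Integer> m = new TreeMap<>();  // sorted by Comparable keys
        m.put("US", 1);
        m.put("DE", 49);
        m.put("BG", 359);
        return m;
    }

    static String smallestKey() { return codes().firstKey(); }  // O(log n)
    static String biggestKey()  { return codes().lastKey(); }   // O(log n)
}
```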
 A priority queue stores a collection of entries
 Main methods of the Priority Queue ADT
◦ insert(k, x)
inserts an entry with key k and value x
◦ removeMin()
removes and returns the entry with smallest key
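java.util.PriorityQueue implements this ADT for keys alone (a min-heap under the hood); a minimal sketch:

```java
import java.util.PriorityQueue;

public class PQDemo {
    // Insert all keys, then remove them smallest-first.
    static int[] drain(int... keys) {
        PriorityQueue<Integer> pq = new PriorityQueue<>();  // min-heap
        for (int k : keys) pq.add(k);                       // insert
        int[] out = new int[keys.length];
        for (int i = 0; i < out.length; i++) out[i] = pq.poll();  // removeMin
        return out;
    }
}
```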
 A heap can be seen as a complete binary tree:
          16
       14    10
      8  7  9  3
     2 4 1
 In practice, heaps are usually implemented as
arrays:
16 14 10 8 7 9 3 2 4 1
 To represent a complete binary tree as an
array:
◦ The root node is A[1]
◦ Node i is A[i]
◦ The parent of node i is A[i/2] (note: integer divide)
◦ The left child of node i is A[2i]
◦ The right child of node i is A[2i + 1]
16 14 10 8 7 9 3 2 4 1
(the example heap above, stored in A[1] … A[10])
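The index arithmetic and the heap property can be sketched directly (1-based, with index 0 unused, as in the slide):

```java
public class HeapIndex {
    static int parent(int i) { return i / 2; }      // integer divide
    static int left(int i)   { return 2 * i; }
    static int right(int i)  { return 2 * i + 1; }

    // Check the max-heap property: every node is <= its parent.
    static boolean isMaxHeap(int[] a) {
        for (int i = 2; i < a.length; i++) {
            if (a[parent(i)] < a[i]) return false;
        }
        return true;
    }
}
```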
(Animated slides: Max-Heapify in action. With the value 4
placed at node 2, the heap property is violated; 4 is
swapped with its larger child 14, then with its larger
child 8, restoring the valid max-heap
16 14 10 8 7 9 3 2 4 1.)
 java.util.Collections
 java.util.Arrays exports similar basic operations for an
array.
binarySearch(list, key)   Finds key in a sorted list using binary search.
sort(list)                Sorts a list into ascending order.
min(list)                 Returns the smallest value in a list.
max(list)                 Returns the largest value in a list.
reverse(list)             Reverses the order of elements in a list.
shuffle(list)             Randomly rearranges the elements in a list.
swap(list, p1, p2)        Exchanges the elements at index positions p1 and p2.
replaceAll(list, x1, x2)  Replaces all elements matching x1 with x2.
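A few of these in action (the list contents are illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CollectionsDemo {
    static List<Integer> demo() {
        List<Integer> list = new ArrayList<>(List.of(40, 20, 10, 30, 7));
        Collections.sort(list);  // ascending order
        return list;
    }

    static int findSorted(List<Integer> sorted, int key) {
        return Collections.binarySearch(sorted, key);  // requires sorted input
    }
}
```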
Introduction to Algorithms

Introduction to Algorithms

  • 1.
  • 2.
     Algorithm Analysis Sorting  Searching  Data Structures
  • 3.
     An algorithmis a set of instructions to be followed to solve a problem.
  • 4.
     Correctness  Finiteness Definiteness  Input  Output  Effectiveness
  • 5.
    There are twoaspects of algorithmic performance:  Time  Space
  • 6.
     First, westart to count the number of basic operations in a particular solution to assess its efficiency.  Then, we will express the efficiency of algorithms using growth functions.
  • 7.
     We measurean algorithm’s time requirement as a function of the problem size.  The most important thing to learn is how quickly the algorithm’s time requirement grows as a function of the problem size.  An algorithm’s proportional time requirement is known as growth rate.  We can compare the efficiency of two algorithms by comparing their growth rates.
  • 8.
     Each operationin an algorithm (or a program) has a cost.  Each operation takes a certain of time. count = count + 1;  take a certain amount of time, but it is constant A sequence of operations: count = count + 1; Cost: c1 sum = sum + count; Cost: c2  Total Cost = c1 + c2
  • 9.
    Example: Simple If-Statement CostTimes if (n < 0) c1 1 absval = -n c2 1 else absval = n; c3 1 Total Cost <= c1 + max(c2,c3)
  • 10.
    Example: Simple Loop CostTimes i = 1; c1 1 sum = 0; c2 1 while (i <= n) { c3 n+1 i = i + 1; c4 n sum = sum + i; c5 n } Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5  The time required for this algorithm is proportional to n
  • 11.
    Example: Nested Loop CostTimes i=1; c1 1 sum = 0; c2 1 while (i <= n) { c3 n+1 j=1; c4 n while (j <= n) { c5 n*(n+1) sum = sum + i; c6 n*n j = j + 1; c7 n*n } i = i +1; c8 n } Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5+n*n*c6+n*n*c7+n*c8  The time required for this algorithm is proportional to n2
  • 12.
     Consecutive Statements If/Else  Loops  Nested Loops
  • 13.
     Informal definitions: ◦Given a complexity function f(n), ◦ O(f(n)) is the set of complexity functions that are upper bounds on f(n) ◦ (f(n)) is the set of complexity functions that are lower bounds on f(n) ◦ (f(n)) is the set of complexity functions that, given the correct constants, correctly describes f(n)  Example: If f(n) = 17n3 + 4n – 12, then ◦ O(f(n)) contains n3, n4, n5, 2n, etc. ◦ (f(n)) contains 1, n, n2, n3, log n, n log n, etc. ◦ (f(n)) contains n3
  • 14.
    Example: Simple If-Statement CostTimes if (n < 0) c1 1 absval = -n c2 1 else absval = n; c3 1 Total Cost <= c1 + max(c2,c3) O(1)
  • 15.
    Example: Simple Loop CostTimes i = 1; c1 1 sum = 0; c2 1 while (i <= n) { c3 n+1 i = i + 1; c4 n sum = sum + i; c5 n } Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5  The time required for this algorithm is proportional to n O(n)
  • 16.
    Example: Nested Loop CostTimes i=1; c1 1 sum = 0; c2 1 while (i <= n) { c3 n+1 j=1; c4 n while (j <= n) { c5 n*(n+1) sum = sum + i; c6 n*n j = j + 1; c7 n*n } i = i +1; c8 n } Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5+n*n*c6+n*n*c7+n*c8  The time required for this algorithm is proportional to n2 O(n2)
  • 17.
    Function Growth RateName c Constant log N Logarithmic log2N Log-squared N Linear N log N Linearithmic N2 Quadratic N3 Cubic 2N Exponential
  • 20.
     Input: ◦ Asequence of n numbers a1, a2, . . . , an  Output: ◦ A permutation (reordering) a1’, a2’, . . . , an’ of the input sequence such that a1’ ≤ a2’ ≤ · · · ≤ an’
  • 21.
     In-Place Sort ◦The amount of extra space required to sort the data is constant with the input size.
  • 22.
    Sorted on firstkey: Sort file on second key: Records with key value 3 are not in order on first key!!  Stable sort ◦ preserves relative order of records with equal keys
  • 23.
     Idea: likesorting a hand of playing cards ◦ Start with an empty left hand and the cards facing down on the table. ◦ Remove one card at a time from the table, and insert it into the correct position in the left hand ◦ The cards held in the left hand are sorted
  • 24.
    To insert 12,we need to make room for it by moving first 36 and then 24.
  • 28.
    insertionsort (a) { for(i = 1; i < a.length; ++i) { key = a[i] pos = i while (pos > 0 && a[pos-1] > key) { a[pos]=a[pos-1] pos-- } a[pos] = key } }
  • 29.
     O(n2), stable,in-place  O(1) space  Great with small number of elements
  • 30.
     Algorithm: ◦ Findthe minimum value ◦ Swap with 1st position value ◦ Repeat with 2nd position down  O(n2), stable, in-place
  • 31.
     Algorithm ◦ Traversethe collection ◦ “Bubble” the largest value to the end using pairwise comparisons and swapping  O(n2), stable, in-place  Totally useless?
  • 32.
    1. Divide: splitthe array in two halves 2. Conquer: Sort recursively both subarrays 3. Combine: merge the two sorted subarrays into a sorted array
  • 33.
    mergesort (a, left,right) { if (left < right) { mid = (left + right)/2 mergesort (a, left, mid) mergesort (a, mid+1, right) merge(a, left, mid+1, right) } }
  • 34.
     The keyto Merge Sort is merging two sorted lists into one, such that if you have two lists X (x1x2…xm) and Y(y1y2…yn) the resulting list is Z(z1z2…zm+n)  Example: L1 = { 3 8 9 } L2 = { 1 5 7 } merge(L1, L2) = { 1 3 5 7 8 9 }
  • 35.
    3 10 2354 1 5 25 75X: Y: Result:
  • 36.
    3 10 2354 5 25 75 1 X: Y: Result:
  • 37.
    10 23 545 25 75 1 3 X: Y: Result:
  • 38.
    10 23 5425 75 1 3 5 X: Y: Result:
  • 39.
    23 54 2575 1 3 5 10 X: Y: Result:
  • 40.
    54 25 75 13 5 10 23 X: Y: Result:
  • 41.
    54 75 1 35 10 23 25 X: Y: Result:
  • 42.
    75 1 3 510 23 25 54 X: Y: Result:
  • 43.
    1 3 510 23 25 54 75 X: Y: Result:
  • 44.
    99 6 8615 58 35 86 4 0
  • 45.
    99 6 8615 58 35 86 4 0 99 6 86 15 58 35 86 4 0
  • 46.
    99 6 8615 58 35 86 4 0 99 6 86 15 58 35 86 4 0 86 1599 6 58 35 86 4 0
  • 47.
    99 6 8615 58 35 86 4 0 99 6 86 15 58 35 86 4 0 86 1599 6 58 35 86 4 0 99 6 86 15 58 35 86 4 0
  • 48.
    99 6 8615 58 35 86 4 0 99 6 86 15 58 35 86 4 0 86 1599 6 58 35 86 4 0 99 6 86 15 58 35 86 4 0 4 0
  • 49.
    99 6 8615 58 35 86 0 4 4 0
  • 50.
    15 866 9935 58 0 4 86 99 6 86 15 58 35 86 0 4
  • 51.
    6 15 8699 0 4 35 58 86 15 866 99 58 35 0 4 86
  • 52.
    0 4 615 35 58 86 86 99 6 15 86 99 0 4 35 58 86
  • 53.
    0 4 615 35 58 86 86 99
  • 54.
    Merge Sort runsO (N log N) for all cases, because of its Divide and Conquer approach. T(N) = 2T(N/2) + N = O(N logN)
  • 55.
    1. Select: pickan element x 2. Divide: rearrange elements so that x goes to its final position • L elements less than x • G elements greater than or equal to x 3. Conquer: sort recursively L and G x x x L G L G
  • 56.
    quicksort (a, left,right) { if (left < right) { pivot = partition (a, left, right) quicksort (a, left, pivot-1) quicksort (a, pivot+1, right) } }
  • 57.
     How topick a pivot?  How to partition?
  • 58.
     Use thefirst element as pivot ◦ if the input is random, ok ◦ if the input is presorted? - shuffle in advance  Choose the pivot randomly ◦ generally safe ◦ random numbers generation can be expensive
  • 59.
     Use themedian of the array ◦ Partitioning always cuts the array into half ◦ An optimal quicksort (O(n log n)) ◦ hard to find the exact median (chicken-egg?) ◦ Approximation to the exact median..  Median of three ◦ Compare just three elements: the leftmost, the rightmost and the center ◦ Use the middle of the three as pivot
  • 60.
     Given apivot, partition the elements of the array such that the resulting array consists of: ◦ One subarray that contains elements < pivot ◦ One subarray that contains elements >= pivot  The subarrays are stored in the original array
  • 61.
 Partitioning a = 40 20 10 80 60 50 7 30 100 with pivot_index = 0 (pivot = 40):
1. while a[too_big_index] <= a[pivot_index]: ++too_big_index
2. while a[too_small_index] > a[pivot_index]: --too_small_index
3. if too_big_index < too_small_index: swap a[too_big_index] and a[too_small_index]
4. while too_small_index > too_big_index: go to 1
5. swap a[too_small_index] and a[pivot_index]
 too_big_index starts just after the pivot and scans right; too_small_index starts at the last element and scans left.
 First pass: too_big_index stops at 80 (index 3), too_small_index stops at 30 (index 7); swap them:
40 20 10 30 60 50 7 80 100
 Second pass: too_big_index stops at 60 (index 4), too_small_index stops at 7 (index 6); swap them:
40 20 10 30 7 50 60 80 100
 Third pass: the indices cross (too_big_index = 5, too_small_index = 4), so the scanning stops.
 Step 5 swaps the pivot into its final position (index 4):
7 20 10 30 40 50 60 80 100 (pivot_index = 4)
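The two-index partition walked through above can be turned into runnable Java. This is a sketch that follows the slides' scheme (pivot = leftmost element), with one addition the pseudocode omits: a bounds check on too_big_index so the right-scanning index cannot run off the end of the array.

```java
import java.util.Arrays;

public class QuickSortDemo {
    // Partition scheme from the slides: tooBig scans right for an element
    // > pivot, tooSmall scans left for an element <= pivot; swap until the
    // indices cross, then drop the pivot into its final position.
    static int partition(int[] a, int left, int right) {
        int pivot = a[left];
        int tooBig = left + 1;
        int tooSmall = right;
        while (true) {
            while (tooBig <= right && a[tooBig] <= pivot) tooBig++;  // step 1 (+ bounds check)
            while (a[tooSmall] > pivot) tooSmall--;                  // step 2
            if (tooBig >= tooSmall) break;                           // step 4
            int tmp = a[tooBig]; a[tooBig] = a[tooSmall]; a[tooSmall] = tmp;  // step 3
        }
        a[left] = a[tooSmall];   // step 5: pivot goes to its final position
        a[tooSmall] = pivot;
        return tooSmall;
    }

    static void quicksort(int[] a, int left, int right) {
        if (left < right) {
            int p = partition(a, left, right);
            quicksort(a, left, p - 1);
            quicksort(a, p + 1, right);
        }
    }

    public static void main(String[] args) {
        int[] a = {40, 20, 10, 80, 60, 50, 7, 30, 100};  // the slides' example array
        quicksort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a));
        // [7, 10, 20, 30, 40, 50, 60, 80, 100]
    }
}
```

Running this on the example array reproduces the intermediate states shown above: the first partition call returns pivot position 4 with the array 7 20 10 30 40 50 60 80 100.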
 Running time
◦ pivot selection: constant time, i.e. O(1)
◦ partitioning: linear time, i.e. O(N)
◦ plus the running time of the two recursive calls
 T(N) = T(i) + T(N-i-1) + cN, where c is a constant
◦ i: the number of elements in L
 What will be the worst case?
◦ The pivot is always the smallest element
◦ Partitioning is always unbalanced
◦ This gives T(N) = T(N-1) + cN, i.e. O(n²)
 What will be the best case?
◦ Partitioning is perfectly balanced
◦ The pivot is always in the middle (the median of the array)
◦ This gives T(N) = 2T(N/2) + cN, i.e. O(n log n)
 The Java API provides a class Arrays with several overloaded sort methods for different array types
 The class Collections provides similar sorting methods for lists
Arrays methods:
public static void sort (int[] a)
public static void sort (Object[] a) // elements must implement Comparable
public static <T> void sort (T[] a, Comparator<? super T> comp) // uses the given Comparator
Collections methods:
public static <T extends Comparable<? super T>> void sort (List<T> list)
public static <T> void sort (List<T> list, Comparator<? super T> comp)
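The library methods above can be exercised like this, a minimal sketch showing natural ordering via Comparable and a custom ordering via a Comparator:

```java
import java.util.*;

public class SortDemo {
    public static void main(String[] args) {
        // Natural (lexicographic) order: String implements Comparable
        String[] words = {"pear", "fig", "banana"};
        Arrays.sort(words);
        System.out.println(Arrays.toString(words));  // [banana, fig, pear]

        // Custom order via a Comparator: shortest string first
        Arrays.sort(words, Comparator.comparingInt(String::length));
        System.out.println(Arrays.toString(words));  // [fig, pear, banana]

        // Collections.sort for lists
        List<Integer> nums = new ArrayList<>(Arrays.asList(3, 1, 2));
        Collections.sort(nums);
        System.out.println(nums);  // [1, 2, 3]
    }
}
```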
 Given a collection and an element to find…
 Determine whether the “target” element is in the collection
◦ print a message
◦ or return a value (an index, a pointer, etc.)
 Don’t modify the collection during the search!
 A search traverses the collection until
◦ the desired element is found
◦ or the collection is exhausted
linearSearch (a, key) {
  for (i = 0; i < a.length; i++) {
    if (a[i] == key)
      return i
  }
  return -1
}
 Search the array 40 20 10 30 7 for 20:
◦ 40 != 20, then 20 == 20 → return 1
 Search the same array for 5:
◦ 40 != 5, 20 != 5, 10 != 5, 30 != 5, 7 != 5 → the array is exhausted, return -1
 Locates a target value in a sorted array/list by successively eliminating half of the remaining elements on each step
binarySearch (a, low, high, key) {
  while (low <= high) {
    mid = (low + high) >>> 1
    midVal = a[mid]
    if (midVal < key)
      low = mid + 1
    else if (midVal > key)
      high = mid - 1
    else
      return mid
  }
  return -(low + 1)
}
 Search the sorted array 1 3 4 6 7 8 10 13 14 for 4:
◦ mid = 4, a[4] = 7: 4 < 7 → search the left half (right = 3)
◦ mid = 1, a[1] = 3: 4 > 3 → search the right half (left = 2)
◦ mid = 2, a[2] = 4: found → return 2
 Search the same array for 9:
◦ mid = 4, a[4] = 7: 9 > 7 → search the right half (left = 5)
◦ mid = 6, a[6] = 10: 9 < 10 → search the left half (right = 5)
◦ mid = 5, a[5] = 8: 9 > 8 → left = 6, now right < left → not found, return -(6 + 1) = -7
 Requires a sorted array/list
 O(log n)
 Divide and conquer
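The Java library already ships this algorithm as Arrays.binarySearch, which returns the index of the key if found and -(insertionPoint) - 1 otherwise, matching the return convention above. A short sketch on the example array:

```java
import java.util.Arrays;

public class BinarySearchDemo {
    public static void main(String[] args) {
        // The array must already be sorted for binarySearch to work
        int[] a = {1, 3, 4, 6, 7, 8, 10, 13, 14};

        System.out.println(Arrays.binarySearch(a, 4));  // 2 (found at index 2)
        System.out.println(Arrays.binarySearch(a, 9));  // -7: not found; 9 would be
                                                        // inserted at index 6, so -(6) - 1
    }
}
```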
 Set
◦ The familiar set abstraction
◦ No duplicates; may or may not be ordered
 List
◦ Ordered collection, also known as a sequence
◦ Duplicates permitted; allows positional access
 Map
◦ A mapping from keys to values
◦ Each key can map to at most one value (a function)
Implementations:
 Set: HashSet, LinkedHashSet, TreeSet
 List: ArrayList, LinkedList, Vector
 Map: HashMap, LinkedHashMap, Hashtable, TreeMap
 Ordered
◦ Elements are stored and accessed in a specific order
 Sorted
◦ Elements are stored and accessed in a sorted order
 Indexed
◦ Elements can be accessed using an index
 Unique
◦ The collection does not allow duplicates
 A linked list is a series of connected nodes
 Each node contains at least
◦ a piece of data (any type)
◦ a pointer to the next node in the list
 Head: a pointer to the first node
 The last node points to NULL
    A  Head B C D x A Head B CD
  • 132.
  • 133.
  • 134.
    A  Head B C A datanext node  previous Tail
  • 135.
Operation: Complexity
insert at beginning: O(1)
insert at end: O(1)
insert at index: O(n)
delete at beginning: O(1)
delete at end: O(1)
delete at index: O(n)
find element: O(n)
access element by index: O(n)
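A minimal singly linked list can be sketched in a few lines of Java (LinkedListDemo, Node, insertAtBeginning and contains are illustrative names, not library API); it shows why inserting at the beginning is O(1) while finding an element is O(n):

```java
public class LinkedListDemo {
    // A node holds a piece of data and a pointer to the next node
    static class Node {
        int data;
        Node next;
        Node(int data, Node next) { this.data = data; this.next = next; }
    }

    Node head;  // the last node's next is null

    // O(1): the new node simply becomes the new head
    void insertAtBeginning(int value) {
        head = new Node(value, head);
    }

    // O(n): walk the chain until the value is found or the list ends
    boolean contains(int value) {
        for (Node n = head; n != null; n = n.next)
            if (n.data == value) return true;
        return false;
    }

    public static void main(String[] args) {
        LinkedListDemo list = new LinkedListDemo();
        list.insertAtBeginning(3);
        list.insertAtBeginning(2);
        list.insertAtBeginning(1);
        System.out.println(list.head.data);    // 1
        System.out.println(list.contains(3));  // true
        System.out.println(list.contains(9));  // false
    }
}
```

java.util.LinkedList provides a full (doubly linked) implementation of the same idea.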
 A resizable-array implementation of the List interface
 capacity vs. size: capacity is the length of the backing array, size is the number of elements actually stored
 While capacity > size, adding an element simply stores it in the next free slot
 When capacity = size, a larger backing array is allocated and the elements are copied over
Operation: Complexity
insert at beginning: O(n)
insert at end: O(1) amortized
insert at index: O(n)
delete at beginning: O(n)
delete at end: O(1)
delete at index: O(n)
find element: O(n)
access element by index: O(1)
 Some collections are constrained so that clients can only use certain optimized operations
◦ stack: retrieves elements in the reverse of the order they were added
◦ queue: retrieves elements in the same order they were added
 stack: a collection based on the principle of adding elements and retrieving them in the opposite order
 basic stack operations:
◦ push: add an element to the top
◦ pop: remove the top element
◦ peek: examine the top element
 Programming languages and compilers:
◦ the method call stack
 Matching up related pairs of things:
◦ checking the correctness of brackets (){}[]
 Sophisticated algorithms:
◦ the undo stack
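The bracket-checking application can be sketched with java.util.ArrayDeque used as a stack (the recommended stack class in the modern Java library): push every opening bracket, and on every closing bracket pop and check that the pair matches.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BracketChecker {
    static boolean balanced(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : s.toCharArray()) {
            if (c == '(' || c == '[' || c == '{') {
                stack.push(c);                       // remember the opener
            } else if (c == ')' || c == ']' || c == '}') {
                if (stack.isEmpty()) return false;   // closer with no opener
                char open = stack.pop();
                if ((c == ')' && open != '(') ||
                    (c == ']' && open != '[') ||
                    (c == '}' && open != '{')) return false;  // mismatched pair
            }
        }
        return stack.isEmpty();  // every opener must have been closed
    }

    public static void main(String[] args) {
        System.out.println(balanced("a(b[c]{d})"));  // true
        System.out.println(balanced("(]"));          // false
    }
}
```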
 queue: retrieves elements in the order they were added
 basic queue operations:
◦ add (enqueue): add an element to the back
◦ remove (dequeue): remove the front element
◦ peek: examine the front element
 Operating systems:
◦ the queue of print jobs to send to the printer
 Programming:
◦ modeling a line of customers or clients
 Real-world examples:
◦ people on an escalator or waiting in a line
◦ cars at a gas station
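The print-job example above maps directly onto java.util.Queue, again backed by ArrayDeque; jobs come out in the same order they went in:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class QueueDemo {
    public static void main(String[] args) {
        Queue<String> printJobs = new ArrayDeque<>();
        printJobs.add("report.pdf");   // enqueue at the back
        printJobs.add("photo.png");
        printJobs.add("letter.doc");

        System.out.println(printJobs.peek());    // report.pdf (front, not removed)
        System.out.println(printJobs.remove());  // report.pdf (dequeued first)
        System.out.println(printJobs.remove());  // photo.png
    }
}
```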
 A data structure optimized for a very specific kind of search/access
 In a map we access a value by asking “give me the value associated with this key”
◦ e.g. the key "A" maps to the value 65
 capacity, load factor
     Implements Map Fast put, get operations  hashCode(), equals()
  • 148.
  • 149.
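A short HashMap sketch showing the map contract from the previous slides: each key maps to at most one value, so a second put with the same key overwrites the first.

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapDemo {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);  // put and get are O(1) on average
        ages.put("bob", 25);
        ages.put("alice", 31);  // same key: overwrites the value 30

        System.out.println(ages.get("alice"));               // 31
        System.out.println(ages.containsKey("carol"));       // false
        System.out.println(ages.getOrDefault("carol", -1));  // -1
    }
}
```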
 What to do when inserting an element and something is already present in that slot (a collision)?
 Open addressing: search forward or backward for an open spot
 Linear probing
◦ move forward 1 spot; occupied? try 2 spots, then 3 spots, …
 Quadratic probing
◦ try 1 spot, then 4, 9, 16, … spots away (i²)
 Resize when the load factor reaches some limit
 Separate chaining: each element of the hash table can be another data structure
◦ a LinkedList
◦ a balanced binary tree
 Resize at a given load factor, or when any chain reaches some limit
     Implements Map Sorted  Easy access to the biggest  logarithmic put, get  Comparable or Comparator
  • 153.
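A TreeMap sketch showing the sorted-map behavior: keys are kept in sorted order (String is Comparable), and the smallest and largest keys are directly accessible.

```java
import java.util.TreeMap;

public class TreeMapDemo {
    public static void main(String[] args) {
        // put and get are O(log n); iteration follows key order
        TreeMap<String, Integer> scores = new TreeMap<>();
        scores.put("charlie", 70);
        scores.put("alice", 90);
        scores.put("bob", 80);

        System.out.println(scores.firstKey());  // alice (smallest key)
        System.out.println(scores.lastKey());   // charlie (biggest key)
        System.out.println(scores.keySet());    // [alice, bob, charlie] - sorted order
    }
}
```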
 0, 1, or 2 children per node
 Binary search tree property:
◦ node.left < node.value
◦ node.right >= node.value
 A priority queue stores a collection of entries
 Main methods of the priority queue ADT:
◦ insert(k, x) inserts an entry with key k and value x
◦ removeMin() removes and returns the entry with the smallest key
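java.util.PriorityQueue implements this ADT as a binary min-heap: remove() returns the smallest element rather than the one added first. A sketch with the keys 15 and 4 from the slide:

```java
import java.util.PriorityQueue;

public class PriorityQueueDemo {
    public static void main(String[] args) {
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        pq.add(15);
        pq.add(4);
        pq.add(9);

        System.out.println(pq.peek());    // 4 (smallest key, not removed)
        System.out.println(pq.remove());  // 4 (this is removeMin)
        System.out.println(pq.remove());  // 9
        System.out.println(pq.remove());  // 15
    }
}
```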
 A heap can be seen as a complete binary tree:
16
14 10
8 7 9 3
2 4 1
 (the missing nodes on the last level can be thought of as filled in with dummy leaves)
 In practice, heaps are usually implemented as arrays:
A = 16 14 10 8 7 9 3 2 4 1
 To represent a complete binary tree as an array:
◦ the root node is A[1]
◦ node i is A[i]
◦ the parent of node i is A[i/2] (note: integer division)
◦ the left child of node i is A[2i]
◦ the right child of node i is A[2i + 1]
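The index arithmetic above can be sketched directly; here slot 0 of the Java array is left unused so the root sits at A[1], matching the 1-based formulas (HeapIndexDemo and the helper names are illustrative, not library API):

```java
public class HeapIndexDemo {
    // 1-based index arithmetic from the slides
    static int parent(int i) { return i / 2; }      // integer division
    static int left(int i)   { return 2 * i; }
    static int right(int i)  { return 2 * i + 1; }

    public static void main(String[] args) {
        // A[0] is a dummy slot so that the root is A[1]
        int[] heap = {0, 16, 14, 10, 8, 7, 9, 3, 2, 4, 1};

        System.out.println(heap[left(1)]);    // 14 (left child of the root 16)
        System.out.println(heap[right(1)]);   // 10 (right child of the root)
        System.out.println(heap[parent(5)]);  // 14 (parent of A[5] = 7)
    }
}
```

An alternative convention stores the root at index 0, with children at 2i+1 and 2i+2 and parent at (i-1)/2.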
 Example: restoring the heap property (sift-down) in
A = 16 4 10 14 7 9 3 2 8 1
◦ A[2] = 4 is smaller than its larger child A[4] = 14 → swap them:
A = 16 14 10 4 7 9 3 2 8 1
◦ now A[4] = 4 is smaller than its larger child A[9] = 8 → swap them:
A = 16 14 10 8 7 9 3 2 4 1
◦ the value has sifted down to a leaf, and the array is a valid max-heap again
 java.util.Collections (and java.util.Arrays, for arrays) exports these basic operations:
binarySearch(list, key): finds key in a sorted list using binary search
sort(list): sorts a list into ascending order
min(list): returns the smallest value in a list
max(list): returns the largest value in a list
reverse(list): reverses the order of elements in a list
shuffle(list): randomly rearranges the elements in a list
swap(list, p1, p2): exchanges the elements at index positions p1 and p2
replaceAll(list, x1, x2): replaces all elements matching x1 with x2
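Most of the operations listed above can be seen in one short sketch:

```java
import java.util.*;

public class CollectionsUtilDemo {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(Arrays.asList(5, 3, 8, 1));

        Collections.sort(list);
        System.out.println(list);                               // [1, 3, 5, 8]
        System.out.println(Collections.binarySearch(list, 5));  // 2 (list must be sorted)
        System.out.println(Collections.min(list));              // 1
        System.out.println(Collections.max(list));              // 8

        Collections.reverse(list);
        System.out.println(list);                               // [8, 5, 3, 1]
        Collections.swap(list, 0, 3);
        System.out.println(list);                               // [1, 5, 3, 8]
    }
}
```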