Algorithm
An algorithm is a finite set of precise instructions for performing a
computation or for solving a problem.
Characteristics:
• Has a clear starting and ending point.
• Consists of clear, unambiguous, finite steps with defined input and output.
• Algorithms can be expressed in natural language, pseudocode, or
programming languages.
Example: find the maximum number in a list:
1. Start
2. Initialize max to the first element of the list.
3. For each element num in the list:
   a. If num is greater than max, update max to num.
4. Return max.
5. End
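The numbered steps above translate directly into Java; a minimal sketch, assuming the "list" is a non-empty int array:

```java
public class FindMax {
    static int findMax(int[] list) {
        int max = list[0];            // Step 2: initialize max to the first element
        for (int num : list) {        // Step 3: examine each element
            if (num > max) {          // Step 3a: found a larger element
                max = num;
            }
        }
        return max;                   // the maximum value in the list
    }

    public static void main(String[] args) {
        System.out.println(findMax(new int[]{3, 41, 7, 19})); // prints 41
    }
}
```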
Pseudocode
Pseudocode is a simplified, high-level representation of an algorithm that uses
a mix of natural language and programming syntax.
Characteristics:
• Not bound by the syntax of any specific programming language, making it
easier to read and write.
• Easy to understand.
• Focuses on logic rather than syntax.
Example: find the maximum number in a list:
FUNCTION findMax(list):
    max ← list[0]
    FOR each num IN list:
        IF num > max THEN
            max ← num
    RETURN max
Flowchart
A flowchart is a visual representation of an algorithm using shapes and arrows
to depict the flow of control.
Characteristics:
• It uses standardized symbols (like ovals for start/end, rectangles for
processes, diamonds for decisions) to show the steps and the order in
which they occur.
• Flowcharts are useful for visualizing complex processes.
Summary
• Algorithm: A clear set of steps to solve a problem.
• Pseudocode: A text-based version of an algorithm that is easier to read
and understand.
• Flowchart: A visual representation that illustrates the steps and flow of
the algorithm.
Suppose, for example, we have four algorithms to solve a problem P: A1, A2,
A3, and A4, with execution times T1, T2, T3, and T4. Which one will you
choose? Of course, the one with the shortest execution time.
How do we compare algorithms?
1. Compare execution times?
Experimental Studies
Write a program implementing the algorithm. Run the program with inputs of
varying size and composition. Use a method like System.currentTimeMillis to
get an accurate measure of the actual running time. Plot the results.
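As a sketch of such an experimental study, the following times the array-initialization loop used later in these notes for several input sizes, using System.currentTimeMillis as suggested (the input sizes and the zeroArray workload are illustrative choices, not from the notes):

```java
public class TimingExperiment {
    // The algorithm under test: set every element of arr to 0.
    static void zeroArray(int[] arr) {
        for (int i = 0; i < arr.length; i++) {
            arr[i] = 0;
        }
    }

    // Returns the elapsed wall-clock time in milliseconds.
    static long timeIt(int n) {
        int[] arr = new int[n];
        long start = System.currentTimeMillis();
        zeroArray(arr);
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        // Run with inputs of varying size and print the results
        // (these would be the points to plot).
        for (int n = 1000; n <= 1000000; n *= 10) {
            System.out.println("n = " + n + ", time = " + timeIt(n) + " ms");
        }
    }
}
```

Note that the measured times depend on the hardware and software environment, which is exactly the weakness of experimental studies discussed below.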
Not good:
a) It is necessary to implement the algorithm, which may be difficult.
b) Results may not be indicative of the running time on other inputs
not included in the experiment.
c) To compare two algorithms, the same hardware and software
environment and the same inputs must be used.
2. Count the number of statements executed?
Not good: the number of statements varies with the programming language as
well as the style of the individual programmer.

Algorithm 1:
arr[0] = 0;
arr[1] = 0;
arr[2] = 0;
...
arr[N-1] = 0;

Algorithm 2:
for (i = 0; i < N; i++)
    arr[i] = 0;
So, we need an analysis of algorithms that is independent of machine time,
programming style, etc.
What is the goal of analysis of algorithms?
To compare different algorithms for efficiency, we look at their time and space
complexity. We compare algorithms mainly in terms of running time, but also
in terms of other factors (e.g., memory requirements, programmer's effort,
etc.).
What do we mean by running time analysis?
Determine how running time increases as the size of the problem increases.
Time complexity
The time complexity of an algorithm is defined as the number of operations
done by the algorithm to solve a problem as a function of the problem size.
Space complexity
The space complexity of an algorithm is defined as the amount of storage used
by the algorithm to solve a problem as a function of the problem size.
Notice that the term complexity here has nothing to do with an algorithm being
simple or complicated.
We consider what is called worst case, average case, and best-case complexity.
Worst case complexity
It is the maximum number of operations done by an algorithm to solve a
problem as a function of the problem size, for all inputs of size n.
Average case complexity
It is the average number of operations done by an algorithm to solve a problem
as a function of the problem size, for all inputs of size n.
Best case complexity
It is the minimum number of operations done by an algorithm to solve a
problem as a function of the problem size, for all inputs of size n.
Theoretical Analysis
• Uses a high-level description of the algorithm instead of an
implementation.
• Characterizes running time as a function of the input size, n.
• Considers all possible inputs.
• Allows us to evaluate the speed of an algorithm independent of the
hardware/software environment.
Primitive Operations:
Primitive operations are the basic computations performed by an algorithm
Examples:
• Evaluating an expression
• Assigning a value to a variable
• Indexing into an array
• Calling a method
• Returning from a method
For theoretical analysis, we assume that each primitive operation takes a
constant amount of time.
Counting Primitive Operations
We can determine the maximum number of primitive operations executed by an
algorithm as a function of the input size.
Computing running time

Algorithm 1:                        Cost
arr[0] = 0;                         c1
arr[1] = 0;                         c1
arr[2] = 0;                         c1
...
arr[N-1] = 0;                       c1
                                    -----------
Total:                              c1 + c1 + ... + c1 = c1 x N

Algorithm 2:                        Cost
for (i = 0; i < N; i++)             c2  (executed N+1 times)
    arr[i] = 0;                     c1  (executed N times)
                                    -------------
Total:                              (N+1) x c2 + N x c1 = (c2 + c1) x N + c2
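The (N+1) × c2 + N × c1 count for Algorithm 2 can be checked empirically; a minimal sketch with explicit counters (the counter names are illustrative):

```java
// The loop condition is evaluated N+1 times (N passing tests plus the
// final failing one), and the body runs N times, matching the cost
// formula (N+1)*c2 + N*c1 above.
public class LoopCost {
    static long headerEvals, bodyEvals;

    static void zeroArray(int[] arr) {
        headerEvals = 0;
        bodyEvals = 0;
        int i = 0;
        while (true) {
            headerEvals++;                  // one evaluation of i < N
            if (!(i < arr.length)) break;   // the (N+1)-th evaluation fails
            arr[i] = 0;                     // loop body, cost c1
            bodyEvals++;
            i++;
        }
    }
}
```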
Example: Algorithm arrayMax
Note that algorithm arrayMax executes 7n - 2 primitive operations in the worst
case.
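The arrayMax pseudocode itself does not survive in this text, so the following is one conventional reconstruction, instrumented to count primitive operations under one plausible accounting scheme (each array access, assignment, comparison, and arithmetic step counts as one operation). In the worst case, an increasing array where the maximum is updated on every iteration, the counts sum to exactly 7n - 2:

```java
public class ArrayMaxCount {
    static long ops; // primitive-operation counter

    static int arrayMax(int[] A) {
        ops = 0;
        int currentMax = A[0];              // 2 ops: index + assignment
        ops += 2;
        int i = 1;                          // 1 op: assignment
        ops += 1;
        while (true) {
            ops += 1;                       // 1 op: comparison i < n (runs n times)
            if (!(i < A.length)) break;
            ops += 2;                       // 2 ops: index A[i] + comparison
            if (A[i] > currentMax) {
                ops += 2;                   // 2 ops: index A[i] + assignment
                currentMax = A[i];          // (worst case: every iteration)
            }
            ops += 2;                       // 2 ops: addition + assignment in i++
            i++;
        }
        ops += 1;                           // 1 op: return
        return currentMax;
    }
}
```

With this accounting: 3 setup ops, 7 ops per full iteration of the n-1 body executions plus the n-th failing test, and 1 for the return, giving 7n - 2 in total.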
The concepts of worst-case, average-case, and best-case complexity will be
clarified by a simple example: sequential search of an array.
Write an algorithm to find the index of a given key in an array of n elements
and find the complexity of your algorithm.
input:
a: an array to search in (sorted or unsorted)
n: the array size (input size)
key: the element to search for
output:
index: the location of the key in the array, -1 if not found
We use the well-known sequential search algorithm.
found = false
i = 1
while (i <= n) and (not found)
    if key = a[i]
        found = true
    else
        i = i + 1
if found
    index = i
else
    index = -1
input size: n
operation to count: the comparison between the key and an array element.
space complexity: n (the array itself; only constant extra space is used)
Time complexity:
The best case is 1 comparison; it happens if the key equals the first array
element.
The worst case is n comparisons; it happens if the key equals the last array
element, or if the key is not found.
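A counting version of the sequential search makes the best and worst cases concrete (note this sketch uses Java's 0-based indexing, while the pseudocode above is 1-based):

```java
public class SeqSearch {
    static int comparisons; // number of key-vs-element comparisons

    static int indexOf(int[] a, int key) {
        comparisons = 0;
        for (int i = 0; i < a.length; i++) {
            comparisons++;          // one comparison between key and a[i]
            if (a[i] == key) {
                return i;           // found: best case is 1 comparison
            }
        }
        return -1;                  // not found: worst case is n comparisons
    }
}
```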
To find the average complexity, assume the key is equally likely to be at any of
the n positions. If the key is at position i, the search makes i comparisons, so
average number of comparisons = (1 + 2 + ... + n) / n = (n + 1) / 2.
Remark: We usually use the worst case as a measure of the complexity of an
algorithm.
In the worst-case analysis, we calculate the upper bound on the running time of
an algorithm.
We must know the case that causes a maximum number of operations to be
executed.
For linear search, the worst case happens when the element to be searched for
(x) is not present in the array.
Motivation
• Suppose program A takes f(n) = 30n + 8 microseconds to process n
records, while program B takes g(n) = n² + 1 microseconds to process the
same n records. Which program would you choose, knowing you’ll want to
support millions of users?
It is easy to verify (numerically, graphically, or otherwise) that for n large
enough (n >> 1), f(n) will be smaller than g(n). Hence, algorithm A is more
efficient (faster) than algorithm B for large values of n. This is true regardless
of the actual values of the constants.
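The crossover point can be located numerically; a small sketch (the function names f and g mirror the text):

```java
public class Crossover {
    static long f(long n) { return 30 * n + 8; } // program A
    static long g(long n) { return n * n + 1; }  // program B

    // Smallest n at which f(n) drops below g(n).
    static long firstNWhereFBeatsG() {
        long n = 1;
        while (f(n) >= g(n)) n++;   // advance until 30n+8 < n^2+1
        return n;
    }
}
```

For these particular constants the crossover is at n = 31; beyond that, f(n) stays below g(n) forever.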
Consider the example of buying elephants and fish:
Cost = cost_of_elephants + cost_of_fish
Cost ≈ cost_of_elephants (approximation: the elephants dominate the total cost)
Asymptotic Analysis of an algorithm: The asymptotic analysis of an algorithm
determines the running time in big-Oh notation. To perform the asymptotic
analysis, we find the worst-case number of primitive operations executed as a
function of the input size, and we express this function in big-Oh notation.
Order of magnitude (asymptotic) notation
(O, Θ, Ω, and o: big O, big theta, big omega, little o)
Definition O
Let f(n) and g(n) be two functions of the integer n. We say that f(n) is O(g(n))
if there exist a positive constant c and a positive integer n0 such that:
f(n) ≤ c·g(n) for all n ≥ n0. This means that the order of the function f(n) ≤
that of g(n).
Equivalent Definition
f(n) is O(g(n)) if lim_{n→∞} f(n)/g(n) = c ≠ ∞ (i.e., c = 0 or a constant).
• This means that the order of the function f(n) ≤ that of g(n) (i.e., g(n)
grows at least as fast as f(n)).
• This means that an algorithm with running time f(n) is, for large n, at
least as fast as an algorithm with running time g(n).
Example
a) Show that 30n + 8 is O(n).
We need c, n0 such that 30n + 8 ≤ cn for all n > n0. Let c = 31, n0 = 8.
Assume n > n0 = 8. Then cn = 31n = 30n + n > 30n + 8, so 30n + 8 < cn.
Note that 30n + 8 isn’t less than n anywhere (n > 0), and it isn’t even less
than 31n everywhere. But it is less than 31n everywhere to the right of n = 8.
b) Show that n² + 1 is O(n²).
We need c, n0 such that n² + 1 ≤ cn² for all n > n0. Let c = 2, n0 = 1.
Assume n > 1. Then cn² = 2n² = n² + n² > n² + 1, so n² + 1 < cn².
c) Show that 2n + 10 is O(n).
We need c, n0 such that 2n + 10 ≤ cn for all n > n0. Let c = 3, n0 = 10.
Assume n > n0 = 10. Then cn = 3n = 2n + n > 2n + 10, so 2n + 10 < cn.
d) The function n² is not O(n).
n² ≤ cn would require n ≤ c for all n, which cannot be satisfied since c must
be a constant.
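The witness pairs (c, n0) chosen in examples a) to d) can be checked by brute force over a finite range; a small sketch (the interface name Fn and the range limit are illustrative):

```java
public class BigOWitness {
    interface Fn { long apply(long n); }

    // Verifies f(n) <= c*g(n) for every n in (n0, limit].
    static boolean holds(Fn f, Fn g, long c, long n0, long limit) {
        for (long n = n0 + 1; n <= limit; n++) {
            if (f.apply(n) > c * g.apply(n)) return false;
        }
        return true;
    }
}
```

A finite check like this cannot prove the bound for all n, but it is a useful sanity check on a chosen pair (c, n0), and it quickly exposes example d): for any fixed c, n² exceeds cn once n > c.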
Some useful mathematical facts
1. ∑_{i=1}^{n} i = 1 + 2 + ... + n = n(n + 1)/2
2. L’Hôpital’s Rule: lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n) when
f(n) and g(n) both tend to ∞ (or both to 0).
3. Stirling’s formula: n! ≈ √(2πn) (n/e)ⁿ
Example: Show that (log n)² is o(n).
Applying L’Hôpital’s Rule twice:
lim_{n→∞} (log n)²/n = lim_{n→∞} 2(log n)/n = lim_{n→∞} 2/n = 0.
Hence, (log n)² is o(n).
Example: Show that 2ⁿ is not O(n^k), where k is a positive integer constant,
regardless of how large k may be.
Applying L’Hôpital’s Rule k times gives lim_{n→∞} 2ⁿ/n^k =
lim_{n→∞} (ln 2)^k 2ⁿ / k! = ∞, so no constant c can satisfy 2ⁿ ≤ c·n^k.
You can easily verify that the following statements are true:
6n + 5 is O(n)
3n² + 2n is O(n²)
10⁶ n² is O(n²)
log n is O(n)
10 n log n + 100 n is O(n²)
log log n is O(log n)
2ⁿ is not O(n¹⁰⁰⁰)
Useful Facts about Big O
❖ For any c > 0, O(cf) = O(f + c) = O(f − c) = O(f)
❖ If g ∈ O(f) and h ∈ O(f), then g + h ∈ O(f).
❖ If g ∈ O(f1) and h ∈ O(f2), then g + h ∈ O(f1 + f2) = O(max(f1, f2))
(Very useful!)
❖ If g ∈ O(f1) and h ∈ O(f2), then gh ∈ O(f1·f2)
❖ If f ∈ O(g) and g ∈ O(h), then f ∈ O(h)
❖ The big-Oh notation gives an upper bound on the growth rate of a
function.
❖ The statement “f(n) is O(g(n))” means that the growth rate of f(n) is no
more than the growth rate of g(n).
❖ We can use the big-Oh notation to rank functions according to their
growth rate.
Example: The following are some of the common functions listed in increasing
order of magnitude:
1, log n, n, n log n, n², n³, 2ⁿ, n!
Big O Rules
➢ If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e.:
1) Drop lower-order terms
2) Drop constant factors
3) Use the smallest possible class of functions
Say “2n is O(n)” instead of “2n is O(n²)”
4) Use the simplest expression of the class
Say “3n + 5 is O(n)” instead of “3n + 5 is O(3n)”
Some examples
• We say that n⁴ + 100n² + 10n + 50 is of the order of n⁴, or O(n⁴)
• We say that 10n³ + 2n² is O(n³)
• We say that n³ − n² is O(n³)
• We say that 10 is O(1)
• We say that 1273 is O(1)
Relatives of Big O
Definition Θ
• If f ∈ O(g) and g ∈ O(f), then we say “g and f are of the same order” or
“f is (exactly) order g” and write f ∈ Θ(g).
• Another equivalent definition:
∃c1, c2, k: c1·g(x) ≤ f(x) ≤ c2·g(x), for all x > k
• f(n) is Θ(g(n)) if lim_{n→∞} f(n)/g(n) = c, with 0 < c < ∞.
This means that the order of the function f(n) = that of g(n).
Example: Show that ∑_{i=1}^{n} i is Θ(n²).
∑_{i=1}^{n} i = 1 + 2 + 3 + ... + (n − 1) + n = n(n + 1)/2, and
lim_{n→∞} (n(n + 1)/2) / n² = 1/2, a finite nonzero constant.
Definition Ω
f(n) is Ω(g(n)) if there exist a positive constant c and a positive integer n0
such that f(n) ≥ c·g(n) for all n ≥ n0. This means that the order of the
function f(n) ≥ that of g(n).
Equivalently, f(n) is Ω(g(n)) if lim_{n→∞} f(n)/g(n) = c > 0 or = ∞.
Definition o
f(n) is o(g(n)) if for every constant c > 0 there exists an n0 such that
f(n) < c·g(n) for all n ≥ n0. This means that the order of the function f(n) <
that of g(n).
Equivalently, f(n) is o(g(n)) if lim_{n→∞} f(n)/g(n) = 0.
We determined that algorithm arrayMax executes at most 7n − 2 primitive
operations, so we say that algorithm arrayMax runs in O(n) time. Since
constant factors and lower-order terms are eventually dropped anyhow, we can
ignore them when counting primitive operations.
Running time of various statements
Example 1
i = 0;
while (i < N) {
    X = X + Y;              // O(1)
    result = mystery(X);    // O(N), just an example...
    i++;                    // O(1)
}
• The body of the while loop: O(N)
• The loop is executed: N times
N x O(N) = O(N²)
Example 2
if (i < j)
    for (i = 0; i < N; i++)
        X = X + i;
else
    X = 0;
max(O(N), O(1)) = O(N)
Example 3
for (i = 1; i ≤ n; i++)
    for (j = 1; j ≤ n; j++)
        print “hello”
O(n) x O(n) = O(n²)
Example 4
for (i = 1; i ≤ n; i++)
    for (j = 1; j ≤ i; j++)
        print “hello”
The inner loop runs i times, so “hello” is printed 1 + 2 + ... + n = n(n + 1)/2
times, which is still O(n²).
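Assuming the inner bound in Example 4 is j ≤ i, the number of prints can be counted directly:

```java
// Counts body executions of the dependent nested loop: the inner loop
// runs i times on pass i, so the total is 1 + 2 + ... + n = n(n+1)/2,
// which is O(n^2) even though it is half of Example 3's n^2 count.
public class TriangularLoop {
    static long countPrints(int n) {
        long count = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= i; j++) {
                count++;            // one "hello" per inner iteration
            }
        }
        return count;
    }
}
```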
Exercises: compute the worst-case running time of two algorithms.