
Fundamentals of Algorithms

UNIT I

Introduction - Algorithm Specification, Pseudocode for expressing algorithms, Performance Analysis - Space complexity, Time complexity, Asymptotic Notation - Big oh notation, Omega notation, Theta notation and Little oh notation, Performance Measurement, Randomized algorithms.

An algorithm is a well-defined, step-by-step procedure for solving a computational problem. It takes some input, performs a sequence of operations, and produces an output. The goal of an algorithm is to transform the input into the desired output in an efficient and correct manner.

Algorithm Specification

Algorithm specification involves clearly defining:

* Input: The type, range, and structure of the data the algorithm expects.

* Output: The desired result the algorithm should produce, also with its type, range, and
structure.

* Correctness: A statement or proof that the algorithm will always produce the correct output
for valid inputs.

* Constraints: Any limitations on resources (e.g., memory, time) or input size.
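
For example (an illustrative specification of our own, written for the FIND_MAX algorithm developed later in this unit), a complete specification might read:

* Input: An array A of N integers, with N \ge 1.

* Output: A single integer equal to the largest element of A.

* Correctness: The returned value occurs in A, and no element of A exceeds it.

* Constraints: The algorithm must run in O(n) time using O(1) auxiliary space.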


Pseudocode for Expressing Algorithms
Pseudocode is an informal high-level description of an algorithm. It uses a blend of natural
language and programming constructs, allowing for clarity without the strict syntax rules of a
specific programming language. This makes it easy for humans to understand and translate
into various programming languages.

Common pseudocode constructs include:


* Variables: DECLARE variable_name : data_type

* Assignment: variable_name ← expression

* Conditional statements:
IF condition THEN
    // statements
ELSE
    // statements
END IF

* Loops:
WHILE condition DO
    // statements
END WHILE

FOR variable FROM start TO end DO
    // statements
END FOR

* Functions/Procedures:
FUNCTION function_name(parameters) RETURNS return_type
    // statements
    RETURN value
END FUNCTION

* Input/Output: READ variable, PRINT expression

Example (Finding the maximum element in an array):

FUNCTION FIND_MAX(ARRAY A, N) RETURNS INTEGER
    IF N = 0 THEN
        RETURN UNDEFINED // Or handle the error appropriately
    END IF

    max_element ← A[0]
    FOR i FROM 1 TO N-1 DO
        IF A[i] > max_element THEN
            max_element ← A[i]
        END IF
    END FOR
    RETURN max_element
END FUNCTION
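
Because pseudocode deliberately mirrors real programming constructs, translating it is mechanical. The following is an illustrative Python translation of FIND_MAX (Python is used for all concrete sketches in these notes; the function name and the error handling are our own choices):

def find_max(a):
    # Translation of FIND_MAX; an exception stands in for the pseudocode's UNDEFINED case.
    if len(a) == 0:
        raise ValueError("find_max requires a non-empty array")
    max_element = a[0]
    for i in range(1, len(a)):  # i FROM 1 TO N-1
        if a[i] > max_element:
            max_element = a[i]
    return max_element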

Performance Analysis

Performance analysis is the process of evaluating the efficiency of an algorithm in terms of the resources it consumes. The primary resources considered are time and space.

Space Complexity

Space complexity refers to the amount of memory space an algorithm requires to run to
completion. It typically includes:

* Fixed Part: Space required for the algorithm itself (instructions, simple variables,
constants). This part is independent of the input size.

* Variable Part: Space required by variables whose size depends on the input (e.g., input
array, auxiliary data structures created during execution, recursion stack space).
Space complexity is often expressed as a function of the input size n.
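
As an illustrative sketch (both functions below are our own examples, not from the unit text), the two functions traverse an array of size n, but the first needs only O(1) auxiliary space while the second needs O(n):

def total(a):
    # Variable part is O(1): a single accumulator, regardless of len(a).
    s = 0
    for x in a:
        s += x
    return s

def prefix_sums(a):
    # Variable part is O(n): the auxiliary list grows with the input size.
    sums = []
    running = 0
    for x in a:
        running += x
        sums.append(running)
    return sums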

Time Complexity
Time complexity refers to the amount of time an algorithm takes to run to completion. It's
usually measured by counting the number of elementary operations (e.g., comparisons,
arithmetic operations, assignments) an algorithm performs. It's also expressed as a function
of the input size n.

We are generally interested in how the time taken grows as the input size grows, rather than
the exact time in seconds, which can vary based on hardware, programming language, and
compiler.
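
Continuing the Python sketch from earlier (an instrumented variant of our own, added for illustration), we can count the elementary comparisons find_max performs:

def find_max_counted(a):
    # Returns the maximum and the number of element comparisons performed.
    comparisons = 0
    max_element = a[0]
    for i in range(1, len(a)):
        comparisons += 1  # one comparison per loop iteration
        if a[i] > max_element:
            max_element = a[i]
    return max_element, comparisons

# For any input of size n >= 1 the count is exactly n - 1,
# so the running time grows linearly with the input size.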

Asymptotic Notation

Asymptotic notation is a mathematical tool used to describe the limiting behavior of functions
when the argument tends towards a particular value or infinity. In the context of algorithms,
it's used to describe the "growth rate" of an algorithm's time or space complexity as the input
size n approaches infinity. This allows us to compare algorithms independent of constant
factors and lower-order terms.

Big O Notation (O-notation)

Definition: O(g(n)) (Big O of g of n) is the set of functions f(n) such that there exist positive
constants c and n_0 where 0 \le f(n) \le c \cdot g(n) for all n \ge n_0.

Meaning: Big O notation provides an upper bound for the growth rate of a function. If an
algorithm's time complexity is O(g(n)), it means that for sufficiently large input sizes, the
running time will be no more than a constant multiple of g(n). It describes the worst-case
scenario or the maximum amount of time an algorithm will take.

Example: If an algorithm has T(n) = 3n^2 + 2n + 5 operations, we can say T(n) = O(n^2). This is because for large n, 3n^2 dominates the other terms; concretely, c = 4 and n_0 = 4 work, since 3n^2 + 2n + 5 \le 4n^2 whenever 2n + 5 \le n^2, which holds for all n \ge 4.

Omega Notation (\Omega-notation)


Definition: \Omega(g(n)) (Omega of g of n) is the set of functions f(n) such that there exist
positive constants c and n_0 where 0 \le c \cdot g(n) \le f(n) for all n \ge n_0.

Meaning: Omega notation provides a lower bound for the growth rate of a function. If an
algorithm's time complexity is \Omega(g(n)), it means that for sufficiently large input sizes,
the running time will be at least a constant multiple of g(n). It describes the best-case
scenario or the minimum amount of time an algorithm will take.

Example: If an algorithm has T(n) = 3n^2 + 2n + 5 operations, we can say T(n) = \Omega(n^2), since 3n^2 + 2n + 5 \ge 3n^2 for all n \ge 1 (take c = 3 and n_0 = 1).

Theta Notation (\Theta-notation)


Definition: \Theta(g(n)) (Theta of g of n) is the set of functions f(n) such that there exist
positive constants c_1, c_2, and n_0 where 0 \le c_1 \cdot g(n) \le f(n) \le c_2 \cdot g(n) for
all n \ge n_0.

Meaning: Theta notation provides a tight bound (both upper and lower bound) for the growth
rate of a function. If an algorithm's time complexity is \Theta(g(n)), it means that for
sufficiently large input sizes, the running time will be bounded both above and below by
constant multiples of g(n). It describes the exact asymptotic behavior.

Example: If an algorithm has T(n) = 3n^2 + 2n + 5 operations, we can say T(n) = \Theta(n^2), since T(n) is both O(n^2) and \Omega(n^2). This is the most precise asymptotic notation for describing the algorithm's performance.

Little O Notation (o-notation)

Definition: o(g(n)) (Little o of g of n) is the set of functions f(n) such that \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0.

Meaning: Little o notation denotes an upper bound that is not asymptotically tight. If f(n) = o(g(n)), it means that f(n) grows strictly slower than g(n). In contrast, Big O permits f(n) and g(n) to grow at the same rate, so f(n) = o(g(n)) is a strictly stronger statement than f(n) = O(g(n)).

Example: 2n = o(n^2) because \lim_{n \to \infty} \frac{2n}{n^2} = \lim_{n \to \infty} \frac{2}{n} =
0.

Performance Measurement

While asymptotic analysis provides a theoretical understanding of an algorithm's performance, performance measurement (or empirical analysis) involves actually running the algorithm with various input sizes and measuring its execution time and memory usage.

Steps involved:

* Implement the algorithm: Write the algorithm in a chosen programming language.

* Choose test cases: Select a range of input sizes and characteristics (e.g., best-case,
worst-case, average-case inputs if applicable).

* Measure time: Use system utilities or programming language features (e.g., System.nanoTime() in Java, time.perf_counter() in Python, std::chrono in C++) to record the start and end times of the algorithm's execution (see the timing sketch after this list).

* Measure space: Monitor memory consumption using profiling tools or by analyzing data
structure sizes.

* Plot results: Graph the measured time/space against the input size to observe the empirical growth rate and compare it with the theoretical predictions.

* Analyze and interpret: Identify bottlenecks, compare different implementations, and validate the theoretical complexity.
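
The sketch below is a minimal Python timing harness using time.perf_counter() as mentioned above; the helper names measure and make_input are our own, and reporting the best of several runs is one common way to reduce scheduler noise:

import time

def measure(algorithm, make_input, sizes, repeats=5):
    # Time `algorithm` on inputs of the given sizes, reporting the best of
    # `repeats` runs to reduce noise from the operating system.
    for n in sizes:
        data = make_input(n)
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            algorithm(data)
            elapsed = time.perf_counter() - start
            best = min(best, elapsed)
        print(f"n = {n:>8}: {best:.6f} s")

# Example usage: observe how the running time of sorting grows with n.
measure(sorted, lambda n: list(range(n, 0, -1)), [1000, 10000, 100000])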

Limitations of Performance Measurement:

* Dependent on hardware, operating system, compiler, and programming language.

* Difficult to get precise measurements for very fast algorithms.

* Only provides insights for the tested input sizes.

* Difficult to generalize to all possible inputs.

Randomized Algorithms

A randomized algorithm is an algorithm that uses a degree of randomness as part of its logic. This means that its behavior is determined not only by the input but also by a sequence of random bits that it generates.

Key characteristics:

* Random Choices: The algorithm makes random choices during its execution, typically by
using a random number generator.

* Performance Variability: For the same input, a randomized algorithm might have different
execution paths and thus different running times on different runs.

* Types:

* Las Vegas Algorithms: Always produce the correct output, but their running time is a random variable. The worst-case running time might be very high, but the expected running time is usually good. (e.g., Randomized Quicksort)

* Monte Carlo Algorithms: Always run within a guaranteed time bound, but there is a small probability that they produce an incorrect output (or no output at all). The probability of error can often be reduced by repeating the algorithm multiple times. (e.g., the Miller-Rabin primality test, sketched below)
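
A compact sketch of the Miller-Rabin test illustrates the Monte Carlo idea (this implementation is written from the standard description of the algorithm, not taken from the unit text):

import random

def is_probably_prime(n, k=20):
    # Monte Carlo primality test: a "prime" answer may be wrong with
    # probability at most 4**(-k); a "composite" answer is always correct.
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(k):  # k independent random trials shrink the error
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is certainly composite
    return True  # probably prime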

Advantages:

* Simplicity: Can be simpler to design and implement than deterministic algorithms for some problems.

* Efficiency: Can achieve better average-case or expected-case performance than the best-known deterministic algorithms for certain problems.

* Avoids Worst-Case: Often avoids the worst-case behavior that can plague deterministic
algorithms (e.g., Quicksort's O(n^2) worst-case is highly unlikely with randomization).

* Useful for specific problem types: Especially effective for problems like primality testing, cryptography, and load balancing.

Disadvantages:

* No guaranteed worst-case performance: For Las Vegas algorithms, the worst-case time might still be very poor, even if it's rare.

* Probability of error: Monte Carlo algorithms have a non-zero chance of error, which might
be unacceptable for some applications.

* Need for good random number generator: Reliance on a truly random or pseudo-random
number generator, which can impact security or effectiveness.

Example: Randomized Quicksort

Instead of picking the pivot deterministically (e.g., always the first element), Randomized Quicksort picks a pivot uniformly at random from the array. This significantly reduces the likelihood of encountering the O(n^2) worst-case time complexity, making its expected time complexity O(n \log n).
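
A minimal Python sketch of this idea follows (the Lomuto partition scheme is our implementation choice; the text does not prescribe one):

import random

def randomized_quicksort(a, lo=0, hi=None):
    # Las Vegas algorithm: always sorts correctly; the running time varies
    # with the random pivot choices but is O(n log n) in expectation.
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    p = random.randint(lo, hi)      # pivot chosen uniformly at random
    a[p], a[hi] = a[hi], a[p]       # move pivot to the end
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):         # Lomuto partition
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]       # place pivot between the two halves
    randomized_quicksort(a, lo, i - 1)
    randomized_quicksort(a, i + 1, hi)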
