7. Limitations of Algorithm Power
Ms. D. Retz Mahima
Research Scholar
Dept. of CSE, Andhra University
Contents
• Lower-Bound Arguments
• Decision Trees
• P, NP, and NP-complete problems
• Challenges of Numerical Algorithms
• Coping with the Limitations of Algorithm Power
• Backtracking
• Branch-and-Bound
• Approximation Algorithms for NP-hard Problems
• Algorithms for Solving Nonlinear Equations
1. Lower Bound Arguments
• To ascertain the efficiency of an algorithm with respect to other algorithms for
the same problem, it is desirable to know the best possible efficiency any
algorithm solving the problem may have.
• Knowing a lower bound tells us how much improvement we can hope to achieve when designing a better algorithm.
• If such a bound is tight, i.e., we already know an algorithm in the same
efficiency class as the lower bound, we can hope for a constant-factor
improvement at best.
• There are several methods for establishing lower bounds:
1. Trivial Lower Bounds
2. Information-Theoretic Arguments
3. Adversary Arguments
4. Problem Reduction
Lower Bound Arguments
1. Trivial Lower Bounds
• Based on counting the number of items in the problem’s input that must be
processed and the number of output items that need to be produced.
• Since any algorithm must at least “read” all the items it needs to process and “write”
all its outputs, such a count yields a trivial lower bound.
• Example: a trivial lower bound for computing the product of two n × n matrices is Ω(n²), because any such algorithm has to process 2n² elements in the input matrices and generate n² elements of the product.
2. Information-Theoretic Arguments
• Seek to establish a lower bound based on the amount of information an algorithm has to produce.
• Example: deducing a positive integer between 1 and n selected by somebody by asking that person questions with yes/no answers. With n possibilities, any algorithm needs at least ⌈log₂ n⌉ such questions, the number of bits needed to specify the selected number (see the sketch below).
• Information-theoretic lower bounds are known for many problems involving comparisons, including sorting and searching.
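The halving strategy that meets this bound is easy to sketch; below is a minimal, illustrative Python version (ours, not part of the original slides) of the number-guessing game, which locates any number in 1..n with at most ⌈log₂ n⌉ yes/no questions.

```python
import math

def guess_number(n, secret):
    """Locate `secret` in 1..n using only yes/no questions of the form
    "is your number greater than m?" -- each answer yields one bit."""
    lo, hi = 1, n
    questions = 0
    while lo < hi:
        mid = (lo + hi) // 2
        questions += 1
        if secret > mid:          # the yes/no question
            lo = mid + 1
        else:
            hi = mid
    return lo, questions

value, asked = guess_number(1_000_000, 1)
print(value, asked, math.ceil(math.log2(1_000_000)))  # 1 20 20
```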
Lower Bound Arguments
3. Adversary Arguments
• The adversary method for establishing lower bounds involves imagining a "malevolent
but honest" adversary.
• This adversary's goal is to force an algorithm to perform the maximum possible amount
of work (e.g., questions, comparisons) by making choices that always lead the algorithm
down the most time-consuming path. The adversary remains consistent with all
previous choices made.
• The lower bound is then determined by the amount of work required to narrow down
the set of potential inputs to a single input along this most time-consuming path.
• Example: proving a 2n − 1 lower bound for merging two sorted lists of size n.
• An adversary can reply to a comparison a_i < b_j with true if and only if i < j, forcing the merging algorithm to produce the sorted list b₁ < a₁ < b₂ < a₂ < ⋯ < bₙ < aₙ. To generate this list correctly, any algorithm must explicitly compare all 2n − 1 pairs of its adjacent elements. If any of these comparisons is skipped (e.g., a₁ is never compared to b₂), those keys could be transposed, yielding an indistinguishable but incorrect merged list; hence 2n − 1 is a lower bound.
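A small illustrative sketch (ours, in Python): a standard two-way merge instrumented to count key comparisons performs exactly 2n − 1 comparisons on the adversary's interleaved ordering.

```python
def merge_count(a, b):
    """Standard two-way merge that also counts key comparisons."""
    result, comparisons = [], 0
    i = j = 0
    while i < len(a) and j < len(b):
        comparisons += 1
        if b[j] < a[i]:
            result.append(b[j]); j += 1
        else:
            result.append(a[i]); i += 1
    result.extend(a[i:]); result.extend(b[j:])
    return result, comparisons

n = 5
a = [2, 4, 6, 8, 10]   # a_1 .. a_n
b = [1, 3, 5, 7, 9]    # b_1 .. b_n, interleaved as b1 < a1 < b2 < a2 < ...
merged, c = merge_count(a, b)
print(c)               # 2n - 1 = 9 comparisons on the adversary's ordering
```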
Lower Bound Arguments
4. Problem Reduction
• The problem reduction approach establishes a lower bound for a problem P by relating
it to another problem Q for which a lower bound is already known.
• To do this, you must demonstrate that an arbitrary instance of problem Q can be
transformed efficiently into an instance of problem P.
• An arbitrary instance of problem Q can be transformed (in a reasonably efficient
fashion) to an instance of problem P, so any algorithm solving P would solve Q as well.
Then a lower bound for Q will be a lower bound for P. If an algorithm can solve P, it
can then also solve Q. Therefore, the known lower bound for problem Q also serves as
a lower bound for problem P.
• The reduction technique is often used to compare the relative complexity of problems. For example, the formulas
x · y = ((x + y)² − (x − y)²)/4 and x² = x · x
• show that the problems of computing the product of two n-digit integers and squaring an n-digit integer belong to the same complexity class, despite the latter being seemingly simpler than the former.
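As an illustration (our sketch, in Python), the first formula really does reduce multiplication to squaring: any squaring routine can be used to multiply.

```python
def square(x):
    return x * x    # stands in for any algorithm that squares an n-digit integer

def multiply_via_squaring(x, y):
    """Reduce multiplication to squaring: xy = ((x + y)^2 - (x - y)^2) / 4."""
    return (square(x + y) - square(x - y)) // 4

print(multiply_via_squaring(1234, 5678))   # 7006652, same as 1234 * 5678
```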
2. Decision Trees
• Many important algorithms, especially those for sorting and searching, work
by comparing items of their inputs.
• We can study the performance of such algorithms with a device called a
decision tree.
• As an example, Figure presents a decision tree of an algorithm for finding a
minimum of three numbers.
• Each internal node of a binary decision tree represents a key comparison
indicated in the node, e.g., k<k’.
• The node’s left subtree contains the information about subsequent
comparisons made if k<k’, and its right subtree does the same for the case of
k>k’.
• Each leaf represents a possible outcome of the algorithm’s run on some input
of size n.
Decision Trees
• The number of comparisons made by the algorithm on a particular run is equal to the length of the path from the root to the corresponding leaf.
• Hence, the number of comparisons in the worst case is equal to the height of
the algorithm’s decision tree.
• For any binary tree with l leaves and height h, h ≥ ⌈log₂ l⌉.
Decision Trees for two important problems
• Decision Trees for sorting and
• Decision Trees for searching in a sorted array
Decision Trees for Sorting
• Most sorting algorithms are comparison based, i.e., they work by comparing
elements in a list to be sorted.
• By studying properties of decision trees for such algorithms, we can derive
important lower bounds on their time efficiencies.
Consider, as an example, a three-element list a, b, c of orderable items such as real numbers or strings. For the outcome a < c < b obtained by sorting this list (see the Figure), the permutation in question is 1, 3, 2.
• In general, the number of possible outcomes for sorting an arbitrary n-element
list is equal to n!.
• The height of a binary decision tree for any comparison-based sorting algorithm, and hence the worst-case number of comparisons made by such an algorithm, cannot be less than ⌈log₂ n!⌉:
Cworst(n) ≥ ⌈log₂ n!⌉ ≈ n log₂ n.
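The bound is easy to evaluate; the following illustrative Python snippet (ours) computes ⌈log₂ n!⌉ for a few values of n. Mergesort's worst-case count, about n log₂ n, comes close to it, which is why the bound is considered tight.

```python
import math

# Lower bound ceil(log2 n!) on worst-case comparisons of any comparison sort.
for n in (4, 10, 100):
    print(n, math.ceil(math.log2(math.factorial(n))))
# n = 4 -> 5: no comparison-based algorithm can always sort 4 items
# with fewer than 5 comparisons in the worst case.
```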
Decision Tree for the three-element selection sort
Decision Trees for Searching a Sorted array
• Decision trees can be used for establishing lower bounds on the number of key comparisons in
searching a sorted array of n keys: A[0] < A[1] < ... < A[n − 1]. The principal algorithm for this
problem is binary search.
• We can represent any algorithm for searching a sorted array by three-way comparisons with a
ternary decision tree similar to that in Figure. For an array of n elements, all such decision trees
will have 2n + 1 leaves. Since the tree is ternary, its height must satisfy h ≥ ⌈log₃(2n + 1)⌉, giving the lower bound Cworst(n) ≥ ⌈log₃(2n + 1)⌉.
Decision Trees for Searching a Sorted array
• To obtain a better lower bound, consider binary rather than ternary decision trees, such as the one in the Figure. The binary-tree bound immediately yields Cworst(n) ≥ ⌈log₂(n + 1)⌉, which binary search achieves.
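An illustrative check (our Python sketch): binary search instrumented to count its three-way comparisons attains the bound, making it optimal for this problem in the worst case.

```python
import math

def binary_search_count(arr, key):
    """Binary search that counts three-way comparisons."""
    lo, hi, comparisons = 0, len(arr) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1                 # one three-way comparison with arr[mid]
        if key == arr[mid]:
            return mid, comparisons
        elif key < arr[mid]:
            hi = mid - 1
        else:
            lo = mid + 1
    return -1, comparisons

n = 1000
arr = list(range(n))
_, c = binary_search_count(arr, n)       # worst case: key larger than every element
print(c, math.ceil(math.log2(n + 1)))    # 10 10
```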
3. P, NP, and NP-Complete Problems
• In the study of the computational complexity of problems, the first concern is whether a given problem can be solved in polynomial time by some algorithm, i.e., in O(nᵏ) time for some constant k.
Examples
• Selection Sort
• Basic arithmetic operations
• Determining if a number is prime
DEFINITION 1
• We say that an algorithm solves a problem in polynomial time if its worst-case
time efficiency belongs to O(p(n)), where p(n) is a polynomial of the problem’s
input size n.
• Problems that can be solved in polynomial time are called tractable, and
problems that cannot be solved in polynomial time are called intractable.
P and NP Problems
• The class of problems that can be solved in polynomial time is generally called P.
• A more formal definition includes in P only decision problems - which are
problems with yes/no answers.
DEFINITION 2
Class P is a class of decision problems that can be solved in polynomial
time by (deterministic) algorithms. This class of problems is called
polynomial.
• Some decision problems cannot be solved at all by any algorithm. Such
problems are called undecidable.
• A famous example of an undecidable problem was given by Alan Turing. The
problem is called the halting problem: given a computer program and an input
to it, determine whether the program will halt on that input or continue
working indefinitely on it.
P and NP Problems
• Assume that A is an algorithm that solves the halting problem. That is, for any program P and input I,
A(P, I) = 1 if program P halts on input I;
A(P, I) = 0 if program P does not halt on input I.
• We can consider program P as an input to itself and use the output of algorithm A for the pair (P, P) to construct a program Q as follows:
Q(P) halts if A(P, P) = 0, i.e., if program P does not halt on input P;
Q(P) does not halt if A(P, P) = 1, i.e., if program P halts on input P.
• Then, substituting Q for P, we obtain
Q(Q) halts if A(Q, Q) = 0, i.e., if program Q does not halt on input Q;
Q(Q) does not halt if A(Q, Q) = 1, i.e., if program Q halts on input Q.
• This is a contradiction because neither of the two outcomes for program Q is possible, which completes the proof.
P and NP Problems
• There are many important problems, however, for which no polynomial-time algorithm has been found:
• Hamiltonian circuit problem, Traveling salesman problem, Knapsack problem,
Integer linear programming problem.
• What all these problems have in common is an exponential (or worse) growth of choices, as a function of input size, from which a solution needs to be found.
• Another common feature of a vast majority of decision problems is the fact that
although solving such problems can be computationally difficult, checking
whether a proposed solution actually solves the problem is computationally
easy, i.e., it can be done in polynomial time.
• This general observation about decision problems has led to the notion of a
nondeterministic algorithm
P and NP Problems
DEFINITION 3
• A nondeterministic algorithm is a two-stage procedure that takes as its input an
instance I of a decision problem and does the following.
1. Nondeterministic (“guessing”) stage: An arbitrary string S is generated that
can be thought of as a candidate solution to the given instance I.
2. Deterministic (“verification”) stage: A deterministic algorithm takes both I
and S as its input and outputs yes if S represents a solution to instance I. (If
S is not a solution to instance I , the algorithm either returns no or is
allowed not to halt at all.)
• A nondeterministic algorithm solves a decision problem if and only if for every
yes instance of the problem it returns yes on some execution.
• Finally, a nondeterministic algorithm is said to be nondeterministic polynomial if
the time efficiency of its verification stage is polynomial.
NP Problems
DEFINITION 4
• Class NP is the class of decision problems that can be solved by non-
deterministic polynomial algorithms. This class of problems is called non-
deterministic polynomial.
• Most decision problems are in NP. First of all, this class includes all the problems
in P:
P ⊆ NP.
• This is true because, if a problem is in P, we can use the deterministic
polynomial time algorithm that solves it in the verification-stage of a
nondeterministic algorithm that simply ignores string S generated in its
nondeterministic (“guessing”) stage.
• NP also contains the Hamiltonian circuit problem, the partition problem,
decision versions of the traveling salesman, the knapsack, graph coloring, and
many hundreds of other difficult combinatorial optimization problems.
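For instance, here is a hedged sketch (ours, in Python; the adjacency-set representation is an assumption) of the deterministic verification stage for the Hamiltonian circuit problem: a candidate certificate is checked in linear time, which is exactly what places the problem in NP.

```python
def verify_hamiltonian_circuit(adj, candidate):
    """Verification stage: check in O(n) time whether `candidate` (a sequence
    of vertices) is a Hamiltonian circuit of the graph given by adjacency sets."""
    n = len(adj)
    if len(candidate) != n or set(candidate) != set(range(n)):
        return False                      # must visit every vertex exactly once
    return all(candidate[(i + 1) % n] in adj[candidate[i]] for i in range(n))

adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}    # the 4-cycle 0-1-2-3-0
print(verify_hamiltonian_circuit(adj, [0, 1, 2, 3]))  # True
print(verify_hamiltonian_circuit(adj, [0, 2, 1, 3]))  # False: 0-2 is not an edge
```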
NP Problems
• The halting problem, on the other hand, is among the rare examples of decision
problems that are known not to be in NP.
• Is P a proper subset of NP, or are these two classes the same? This is the famous open question
P =? NP
• Note that P = NP would imply that each of many hundreds of difficult combinatorial decision problems could be solved by a polynomial-time algorithm.
NP Complete Problems
• Informally, an NP-complete problem is a problem in NP that is as difficult as any
other problem in this class because, by definition, any other problem in NP can
be reduced to it in polynomial time.
DEFINITION 5
• A decision problem D1 is said to be polynomially reducible to a decision
problem D2, if there exists a function t that transforms instances of D1 to
instances of D2 such that:
1. t maps all yes instances of D1 to yes instances of D2 and all no instances of
D1 to no instances of D2
2. t is computable by a polynomial time algorithm.
• This definition immediately implies that if a problem D1 is polynomially
reducible to some problem D2 that can be solved in polynomial time, then
problem D1 can also be solved in polynomial time.
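As an illustration of such a transformation t, here is our sketch (in Python) of the classic reduction from the Hamiltonian circuit problem to the decision version of the traveling salesman problem: give every edge weight 1 and every non-edge weight 2; the graph then has a Hamiltonian circuit if and only if the resulting instance has a tour of length at most n. The mapping clearly runs in polynomial, O(n²), time.

```python
def reduce_hc_to_tsp(adj):
    """Map a Hamiltonian-circuit instance (adjacency sets over vertices
    0..n-1) to a decision-TSP instance: a cost matrix and a length bound."""
    n = len(adj)
    weights = [[0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            if u != v:
                weights[u][v] = 1 if v in adj[u] else 2   # edge -> 1, non-edge -> 2
    return weights, n   # yes-instance of HC  <=>  some tour has length <= n

adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
costs, bound = reduce_hc_to_tsp(adj)
print(costs, bound)
```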
NP Complete Problems
DEFINITION 6
• A decision problem D is said to be NP-
complete if:
• 1. it belongs to class NP
• 2. every problem in NP is polynomially
reducible to D.
FIGURE: Notion of an NP-complete problem.
Polynomial-time reductions of NP problems to an NP-
complete problem are shown by arrows.
NP Complete Problems
• The notion of NP-completeness requires polynomial reducibility of all problems in NP, both known and unknown, to the problem in question.
• Given the bewildering variety of decision problems, it is remarkable that specific examples of NP-complete problems have actually been found.
• Cook showed that the CNF-satisfiability problem is NP-complete.
• The CNF-satisfiability problem deals with boolean expressions.
• Each boolean expression can be represented in conjunctive normal form, such as the following expression involving three boolean variables x1, x2, and x3 and their negations denoted x̄1, x̄2, and x̄3, respectively:
(x1 ∨ x̄2 ∨ x̄3) & (x̄1 ∨ x2) & (x̄1 ∨ x̄2 ∨ x̄3).
• The CNF-satisfiability problem asks whether or not one can assign values true
and false to variables of a given boolean expression in its CNF form to make the
entire expression true. (if x1 = true, x2 = true, and x3 = false, the entire
expression is true.)
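A brute-force satisfiability checker is a few lines of Python (our illustrative sketch; its 2ⁿ running time is the whole point): it confirms that the expression above is satisfiable.

```python
from itertools import product

def satisfiable(clauses, num_vars):
    """Try all 2^n truth assignments. A clause is a list of literals:
    the literal k stands for x_k, and -k for its negation."""
    for assignment in product([False, True], repeat=num_vars):
        def holds(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return True, assignment
    return False, None

# (x1 v ~x2 v ~x3) & (~x1 v x2) & (~x1 v ~x2 v ~x3)
clauses = [[1, -2, -3], [-1, 2], [-1, -2, -3]]
print(satisfiable(clauses, 3))   # satisfiable; x1 = x2 = true, x3 = false also works
```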
NP Complete Problems
• The well-known problems (or their decision versions) mentioned above—
Hamiltonian circuit, traveling salesman, partition, bin packing, and graph
coloring—are all NP-complete.
• Showing that a decision problem is NP-complete can be done in two steps.
• First, one needs to show that the problem in question is in NP; i.e., a randomly
generated string can be checked in polynomial time to determine whether or
not it represents a solution to the problem.
• The second step is to show that every problem in NP is reducible to the problem
in question in polynomial time. Because of the transitivity of polynomial
reduction, this step can be done by showing that a known NP-complete
problem can be transformed to the problem in question in polynomial time
NP Complete Problems
• The definition of NP-completeness immediately implies that if there exists a
deterministic polynomial-time algorithm for just one NP-complete problem,
then every problem in NP can be solved in polynomial time by a deterministic
algorithm, and hence P = NP
4. Challenges of Numerical Algorithms
• A variety of difficulties is posed by modeling, the task of describing a real-life phenomenon in mathematical terms.
• What principal obstacles do we face in solving a mathematical problem?
• The first major obstacle is the fact that most numerical analysis problems
cannot be solved exactly.
• They have to be solved approximately, and this is usually done by replacing an
infinite object by a finite approximation.
• For example, the value of e^x at a given point x can be computed by approximating its infinite Taylor's series about x = 0 by a finite sum of its first terms, called the nth-degree Taylor polynomial:
e^x ≈ 1 + x + x²/2! + ⋯ + xⁿ/n!
• As another example, the definite integral of a function can be approximated by a finite weighted sum of its values, as in the composite trapezoidal rule:
∫ₐᵇ f(x) dx ≈ (h/2)[f(x₀) + 2(f(x₁) + ⋯ + f(xₙ₋₁)) + f(xₙ)],
where h = (b − a)/n, xᵢ = a + ih for i = 0, 1, ..., n.
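Both approximations are straightforward to sketch in Python (illustrative code, ours):

```python
import math

def exp_taylor(x, n):
    """nth-degree Taylor polynomial of e^x about 0."""
    return sum(x**i / math.factorial(i) for i in range(n + 1))

def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n subintervals, h = (b - a)/n."""
    h = (b - a) / n
    return h * ((f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n)))

print(exp_taylor(1.0, 8), math.exp(1.0))            # truncation error shrinks with n
print(trapezoidal(math.sin, 0, math.pi, 100), 2.0)  # integral of sin over [0, pi] is 2
```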
Challenges of Numerical Algorithms
• The errors of such approximations are called truncation errors.
• One of the major tasks in numerical analysis is to estimate the magnitudes of
truncation errors.
• This is done by using calculus tools, from elementary to quite advanced. For example, for the Taylor approximation of e^x,
|e^x − (1 + x + x²/2! + ⋯ + xⁿ/n!)| ≤ (M/(n + 1)!) |x|ⁿ⁺¹,
where M = max e^ξ on the segment with the endpoints at 0 and x.
• This formula makes it possible to determine the degree of Taylor's polynomial needed to guarantee a predefined accuracy level of approximation.
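For example, the following illustrative sketch (ours) uses the bound to find the smallest degree n that guarantees a target accuracy ε, taking M ≤ max(1, e^x):

```python
import math

def degree_for_accuracy(x, eps):
    """Smallest n with M/(n+1)! * |x|^(n+1) <= eps, where M = max(1, e^x)
    bounds e^xi on the segment between 0 and x."""
    M = max(1.0, math.exp(x))
    n = 0
    while M * abs(x)**(n + 1) / math.factorial(n + 1) > eps:
        n += 1
    return n

print(degree_for_accuracy(1.0, 1e-6))   # 9: a 9th-degree polynomial suffices at x = 1
```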
Challenges of Numerical Algorithms
• The accuracy of the floating-point representation depends on the number of significant digits p in the representation.
• Most computers permit two or even three levels of precision: single precision
(equivalent to between 6 and 7 significant decimal digits), double precision (13
to 14 significant decimal digits), and extended precision (19 to 20 significant
decimal digits).
• Using higher-precision arithmetic slows computations but may help to overcome some of the problems caused by round-off errors; it may be used for only a particular step of the algorithm in question.
• As with an approximation of any kind, it is important to distinguish between the absolute error and the relative error of representing a number α* by its approximation α:
absolute error = |α − α*|,  relative error = |α − α*| / |α*| (for α* ≠ 0).
Challenges of Numerical Algorithms
• Very large and very small numbers cannot be represented in floating-point
arithmetic because of the phenomena called overflow and underflow,
respectively.
• An overflow happens when an arithmetic operation yields a result outside the
range of the computer’s floating-point numbers.
• Examples of overflow arise from the multiplication of large numbers or division
by a very small number.
• Overflow can often be eliminated by changing the order of evaluation (e.g., (10²⁹ · 11³⁰)/12³⁰ = 10²⁹ · (11/12)³⁰) or by computing a logarithm of the expression.
• Underflow occurs when the result of an operation is a nonzero fraction of such
a small magnitude that it cannot be represented as a nonzero floating-point
number.
• Underflowed numbers are normally replaced by zero, although a special signal is generated by the hardware to indicate that underflow has occurred.
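Both phenomena are easy to demonstrate (an illustrative Python sketch; the particular numbers are our choice):

```python
import math

# Overflow: evaluated left to right, the intermediate product exceeds the
# float range, and the final result is lost ...
print(math.exp(500) * math.exp(500) / math.exp(600))   # inf
# ... while reordering the computation keeps every value in range.
print(math.exp(500 + 500 - 600))                        # exp(400) ~ 5.2e173

# Underflow: a nonzero result too small for a float is replaced by zero.
print(1e-200 * 1e-200)   # 0.0, although the true value is 1e-400
```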
Challenges of Numerical Algorithms
• In addition to inaccurate representation of numbers, the arithmetic operations
performed in a computer are not always exact, either.
• In particular, subtracting two nearly equal floating-point numbers may cause a
large increase in relative error.
• This phenomenon is called subtractive cancellation.
Example 1
Consider two irrational numbers α* = π = 3.14159265... and β* = π − 6 · 10⁻⁷ = 3.14159205..., represented by the floating-point numbers α = 0.3141593 · 10¹ and β = 0.3141592 · 10¹, respectively.
• The relative errors of these approximations are small: about 1.1 · 10⁻⁷ for α and 1.6 · 10⁻⁸ for β.
• The relative error of representing the difference γ* = α* − β* = 6 · 10⁻⁷ by the difference of the floating-point representations γ = α − β = 10⁻⁶ is
|γ − γ*| / γ* = (10⁻⁶ − 6 · 10⁻⁷) / (6 · 10⁻⁷) ≈ 0.67,
which is very large (67%) for a relative error despite quite accurate approximations for both α and β.
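The example is easy to reproduce (an illustrative Python sketch using the 7-significant-digit values above):

```python
alpha_star = 3.14159265            # pi (to the digits shown above)
beta_star  = 3.14159205            # pi - 6e-7
alpha, beta = 3.141593, 3.141592   # their 7-significant-digit representations

# Representation errors are tiny ...
print(abs(alpha - alpha_star) / alpha_star)   # ~1.1e-7
print(abs(beta - beta_star) / beta_star)      # ~1.6e-8

# ... but the relative error of the computed difference is about 67%.
gamma_star = alpha_star - beta_star           # 6e-7
gamma = alpha - beta                          # 1e-6
print(abs(gamma - gamma_star) / gamma_star)   # ~0.67
```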
Challenges of Numerical Algorithms
• Many numerical algorithms involve thousands or even millions of arithmetic
operations for typical inputs.
• For such algorithms, the propagation of round-off errors becomes a major
concern from both the practical and theoretical standpoints.
• For some algorithms, round-off errors can propagate through the algorithm’s
operations with increasing effect.
• This highly undesirable property of a numerical algorithm is called instability.
• Some problems exhibit such a high level of sensitivity to changes in their input
that it is all but impossible to design a stable algorithm to solve them.
• Such problems are called ill-conditioned.
Example 2
• Consider the following system of two linear equations in two unknowns:
1.001x + 0.999y = 2
0.999x + 1.001y = 2.
• Its only solution is x = 1, y = 1.
• Consider the system with the same coefficient matrix but slightly different right-
hand side values:
1.001x + 0.999y = 2.002
0.999x + 1.001y = 1.998.
• The only solution to this system is x = 2, y = 0, which is quite far from the
solution to the previous system.
• The coefficient matrix of this system is close to being singular. Hence a minor change in its coefficients may yield a system with either no solutions or infinitely many solutions, depending on its right-hand-side values.
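An illustrative check, assuming NumPy is available (the snippet is ours, not from the slides); the condition number of about 1000 quantifies the sensitivity:

```python
import numpy as np

A = np.array([[1.001, 0.999],
              [0.999, 1.001]])

print(np.linalg.solve(A, [2.0, 2.0]))      # [1. 1.]
print(np.linalg.solve(A, [2.002, 1.998]))  # [2. 0.] -- tiny input change, big swing
print(np.linalg.cond(A))                   # ~1000: the system is ill-conditioned
```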
5. Coping with the Limitations of Algorithm Power
• Two algorithm design techniques—backtracking and branch-and-bound—that
often solve at least some large instances of difficult combinatorial problems.
• Both backtracking and branch-and-bound are based on the construction of a
state-space tree whose nodes reflect specific choices made for a solution’s
components.
• Both techniques terminate a node as soon as it can be guaranteed that no
solution to the problem can be obtained by considering choices that correspond
to the node’s descendants.
• The techniques differ in the nature of problems they can be applied to.
• Branch-and-bound is applicable only to optimization problems because it is
based on computing a bound on possible values of the problem’s objective
function.
• Backtracking is not constrained in this way: it applies to non-optimization problems as well.
Coping with the Limitations of Algorithm Power
• The other distinction between backtracking and branch-and-bound lies in the
order in which nodes of the state-space tree are generated.
• For backtracking, this tree is usually developed depth-first (i.e., similar to DFS).
• Branch-and-bound can generate nodes according to several rules: the most
natural one is the so-called best-first rule
Coping with the Limitations of Algorithm Power
1. Backtracking
• A "smarter" version of exhaustive search for problems with many choices.
• How it Works:
• Builds solutions piece by piece.
• Evaluates partial solutions: if a path won't lead to a valid solution, it "backtracks"
(gives up on that path) and tries another option.
• Uses a "state-space tree" to explore choices, typically in a depth-first manner.
• Goal: To find exact solutions by systematically exploring possibilities while pruning
unpromising branches.
• Examples: N-Queens Problem, Hamiltonian Circuit Problem, Subset-Sum Problem.
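A compact backtracking sketch for the n-queens problem (illustrative Python, ours): one queen per row, with non-promising columns pruned before the search descends.

```python
def solve_n_queens(n):
    """Backtracking over a state-space tree: entry i of `board` holds the
    column of the queen in row i; non-promising nodes are pruned."""
    solutions = []

    def promising(board, row, col):
        # No queen in the same column or on either diagonal.
        return all(board[r] != col and abs(board[r] - col) != row - r
                   for r in range(row))

    def place(board, row):
        if row == n:
            solutions.append(board[:])      # a leaf: complete solution
            return
        for col in range(n):
            if promising(board, row, col):  # extend only promising nodes
                board.append(col)
                place(board, row + 1)
                board.pop()                 # backtrack

    place([], 0)
    return solutions

print(len(solve_n_queens(8)))   # 92 solutions for the 8-queens problem
```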
Coping with the Limitations of Algorithm Power
Backtracking: The State-Space Tree
• Visualizing Choices: Backtracking is often implemented by building a "state-space tree."
• Root: Represents the initial state before any choices are made.
• Nodes: Each node represents a partially constructed solution.
• Nodes at level 1 represent choices for the first component.
• Nodes at level 2 represent choices for the second component, given the first, and so on.
• Promising Nodes: A node is "promising" if the partial solution it represents might still lead to
a complete, valid solution.
• Non-Promising Nodes (Dead Ends): If a partial solution violates constraints or cannot be
extended, the node is "non-promising," and the algorithm stops exploring that path
(pruning).
• Traversal: The tree is typically explored using a Depth-First Search (DFS) approach, going as
deep as possible before backtracking.
Coping with the Limitations of Algorithm Power
General Remarks on Backtracking
• Applicability: Primarily used for difficult combinatorial problems where efficient
exact algorithms are unlikely to exist.
• Performance: While it can solve some instances of non-trivial sizes in
acceptable time, worst-case performance can still be exponential.
• Pruning is Key: The effectiveness of backtracking heavily relies on its ability to
"prune" (eliminate) large portions of the state-space tree early.
• Improvements: Performance can often be enhanced by:
• Exploiting problem symmetries.
• Pre-assigning values to certain solution components.
• Pre-sorting input data.
• Alternative: even when no pruning occurs, backtracking at least provides a systematic way to generate all possible solution candidates.
Coping with the Limitations of Algorithm Power
2. Branch-and-Bound
• Concept: An enhancement of Backtracking, specifically for optimization
problems (finding best/worst solutions).
• How it Works:
• Similar to backtracking, it builds solutions in a state-space tree.
• Key Addition: For each partial solution (node), it calculates a bound (an estimate) on the
best possible outcome from that point.
• If a node's bound is worse than the best solution already found, that entire branch is
"pruned" (ignored).
• Often uses a "best-first" strategy, exploring the most promising branches first.
• Goal: To find optimal solutions more efficiently than exhaustive search by
intelligently cutting off large parts of the search space.
• Examples: Assignment Problem, Knapsack Problem, Traveling Salesman
Problem.
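A best-first branch-and-bound sketch for the 0/1 knapsack problem (illustrative Python, ours; the upper bound used is the simple "current value plus remaining capacity times the best remaining value-to-weight ratio" estimate):

```python
import heapq

def knapsack_bb(items, capacity):
    """Best-first branch-and-bound for 0/1 knapsack; items are (weight, value)."""
    items = sorted(items, key=lambda it: it[1] / it[0], reverse=True)

    def bound(i, weight, value):
        # Upper bound on what any completion of this node can achieve.
        if i < len(items):
            return value + (capacity - weight) * items[i][1] / items[i][0]
        return value

    best = 0
    heap = [(-bound(0, 0, 0), 0, 0, 0)]         # max-heap on bound: (-ub, i, w, v)
    while heap:
        neg_ub, i, weight, value = heapq.heappop(heap)
        if -neg_ub <= best or i == len(items):  # prune: cannot beat best so far
            continue
        w, v = items[i]
        if weight + w <= capacity:              # branch 1: take item i
            best = max(best, value + v)
            heapq.heappush(heap, (-bound(i + 1, weight + w, value + v),
                                  i + 1, weight + w, value + v))
        # branch 2: skip item i
        heapq.heappush(heap, (-bound(i + 1, weight, value), i + 1, weight, value))
    return best

items = [(4, 40), (7, 42), (5, 25), (3, 12)]   # (weight, value)
print(knapsack_bb(items, 10))                  # optimal value 65
```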
Coping with the Limitations of Algorithm Power
General Remarks on Branch-and-Bound
• Pruning Conditions: Search paths are terminated if:
• A node's bound is not better than the current best solution.
• The node represents no feasible solutions (constraints are already violated).
• The node represents a complete feasible solution (update the best solution found so far).
• Bounding Function Design:
• A crucial element: must be easy to compute, yet "tight" enough to effectively prune
branches.
• Striking a balance between computational ease and pruning power often requires
experimentation.
• Node Generation Order: Offers flexibility (e.g., best-first, depth-first, breadth-
first), which can significantly impact performance.
• Initial Solution: Providing a good initial feasible solution (if available) can
accelerate the process by allowing earlier pruning.
Coping with the Limitations of Algorithm Power
General Remarks on Backtracking and Branch-and-Bound
• Effectiveness: Both techniques can solve many large instances of difficult combinatorial
problems.
• Worst-Case: In the worst case, they may still explore an exponential number of
possibilities.
• Pruning: Success heavily depends on effectively "pruning" (eliminating) unpromising
branches of the state-space tree.
• Heuristics: Incorporating problem-specific heuristics or exploiting symmetry can
significantly improve performance.
Coping with the Limitations of Algorithm Power
Key Differences:
• Problem Type:
• Backtracking: More general, applicable to both optimization problems (finding the best solution) and non-
optimization problems (finding any valid solution, or all valid solutions).
• Branch-and-Bound: Specifically designed for optimization problems.
• Bounding Function:
• Backtracking: Does not use a bounding function. It only checks if a partial solution is "promising" (can
potentially lead to a solution) or "non-promising" (violates constraints).
• Branch-and-Bound: Critically relies on a bounding function that estimates the best possible objective
function value achievable from a given partial solution. This allows it to compare against the best solution
found so far and prune branches that cannot yield a better result.
• Node Generation Order:
• Backtracking: Typically uses a depth-first search strategy, going deep into one path before exploring others.
• Branch-and-Bound: Often uses a best-first search strategy, expanding the "most promising" node (the one
with the best bound) among all live nodes, though other strategies are possible.
6. Approximation Algorithms for NP-hard Problems
For problems that are computationally too hard to solve exactly in a reasonable time (NP-hard).
• Goal: To find "good enough" (approximate) solutions quickly (in polynomial time).
Key Metrics:
• Accuracy Ratio: How close the approximate solution is to the optimal one.
• Performance Ratio: The worst-case accuracy guarantee for an algorithm.
Approaches for Traveling Salesman Problem (TSP):
• Greedy: Simple (e.g., Nearest-Neighbor), but can be very inaccurate.
• MST-Based: Uses a minimum spanning tree (e.g., twice-around-the-tree, Christofides algorithm) for better guarantees on accuracy.
• Local Search: Iteratively improves a tour by small changes (e.g., 2-opt, Lin-Kernighan),
often yielding excellent practical results.
Approaches for Knapsack Problem:
• Greedy: Based on value-to-weight ratio, can be enhanced for better accuracy.
• Approximation Schemes: Algorithms that can achieve any desired accuracy level, though
they might get slower for higher accuracy.
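As a concrete instance, here is a greedy value-to-weight sketch for the knapsack problem (illustrative Python, ours; on this particular instance it happens to hit the optimum, but in general it only approximates):

```python
def greedy_knapsack(items, capacity):
    """Greedy 0/1 knapsack: take items in decreasing value-to-weight
    ratio while they fit. Polynomial time, but not always optimal."""
    total_w = total_v = 0
    taken = []
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if total_w + w <= capacity:
            taken.append((w, v))
            total_w += w
            total_v += v
    return total_v, taken

items = [(7, 42), (3, 12), (4, 40), (5, 25)]   # (weight, value)
print(greedy_knapsack(items, 10))              # (65, [(4, 40), (5, 25)])
```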
7. Algorithms for Solving Nonlinear Equations
• Numerical methods to find approximate solutions (roots or zeros) for equations
like f(x)=0.
• Most nonlinear equations don't have simple, exact analytical formulas for their
roots.
Common Methods:
• Bisection Method:
• Finds a root by repeatedly halving an interval where the function's sign changes.
• Guaranteed to converge, but relatively slow.
• Method of False Position:
• Similar to bisection, but uses a straight line connecting two points to estimate the next root.
• Can be faster than bisection.
• Newton's Method:
• Uses tangent lines to the function's graph to iteratively refine an approximation.
• Very fast convergence if the initial guess is good, but may diverge or fail if conditions aren't
met.
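Illustrative Python sketches of the first and third methods (the stopping rules and the sample equation x² − 2 = 0 are our choices):

```python
def bisection(f, a, b, eps=1e-10):
    """Bisection: repeatedly halve an interval on which f changes sign.
    Guaranteed to converge, gaining about one bit of accuracy per step."""
    assert f(a) * f(b) < 0
    while b - a > eps:
        mid = (a + b) / 2
        if f(a) * f(mid) <= 0:
            b = mid
        else:
            a = mid
    return (a + b) / 2

def newton(f, df, x0, eps=1e-10, max_iter=50):
    """Newton's method: follow tangent lines. Very fast near a root,
    but may diverge or fail for a poor starting point."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < eps:
            break
    return x

f  = lambda x: x * x - 2           # root: sqrt(2)
df = lambda x: 2 * x
print(bisection(f, 1, 2))          # 1.41421356..., ~34 iterations
print(newton(f, df, 1.0))          # 1.41421356..., a handful of iterations
```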
Challenges of Numerical Algorithms
• Approximation Necessity: Most continuous mathematical problems cannot be
solved exactly; they require approximate solutions.
• Types of Errors:
• Truncation Errors: Occur from replacing infinite mathematical objects (like series or
integrals) with finite approximations.
• Round-off Errors: Result from the limited precision of representing real numbers in
computers.
• Subtractive Cancellation: A significant issue where subtracting two nearly equal floating-point numbers can lead to a large increase in relative error.
• Instability & Ill-Conditioned Problems: Some algorithms are "unstable" (errors
propagate with increasing effect), and some problems are "ill-conditioned"
(highly sensitive to small input changes), making stable solutions difficult.
Thank You!