Visit: https://www.rprogrammingassignmenthelp.com/
Email: support@rprogrammingassignmenthelp.com
Phone: +1(707)470-2530
Topic: Advanced Algorithms in R Programming
Welcome to our sample assignment presentation provided by
RProgrammingAssignmentHelp.com, your dedicated partner in
mastering Advanced Algorithms in R Programming. At
RProgrammingAssignmentHelp.com, we specialize in assisting
students with complex algorithmic concepts, ensuring clarity and
understanding through expert guidance. This presentation
showcases comprehensive solutions to intricate problems involving
advanced algorithms, emphasizing our proficiency in analytical
methods and problem-solving strategies.
Explore the topics covered in this presentation, including Fibonacci
heaps, amortized analysis of priority queues, and offline and persistent
algorithms for least common ancestors. We illustrate our expertise in handling
complex algorithms with precision, offering step-by-step solutions
that demystify theoretical concepts and enhance practical
application skills.
Advanced Algorithms in R Programming
Problem 1
Unlike regular heaps, Fibonacci heaps do not achieve their good
performance by keeping the depth of the heap small. Demonstrate this
by exhibiting a sequence of Fibonacci heap operations on n items that
produce a heap-ordered tree of depth Ω(n).
Solution:
Suppose we have a Fibonacci heap that is a single chain of k − 1
nodes. The following operations extend it to a chain of length k. Let
min be the current minimum of the Fibonacci heap:
1. Insert items x1 < x2 < x3 < min in an arbitrary order.
2. Delete the minimum, which is x1.
3. Decrease the key of x3 to −∞, i.e. the minimum.
4. Delete the minimum, which is x3 = −∞.
The second step is the key one: it removes x1, joins x2 and x3 into a
chain, and then joins the original chain with the chain containing x2
and x3, obtaining a tree where x2 is the root, with x3 and the
original (k − 1)-node chain as children. The third step just removes
x3 from the chain, and the last step deletes it entirely. The result
is that x2 is now the root of the original chain, so we have
constructed a chain of length k. For the base case, just insert a
single node. Thus, we obtain a k-node chain with O(k) operations;
therefore, we can construct an Ω(n)-node chain with n operations.
Note that the decrease-key operation was essential for obtaining Ω(n)
depth: without it, you can only obtain binomial heaps (which have
logarithmic depth).
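To make the construction concrete, here is a runnable sketch in R (the
language of this presentation). It is a deliberately minimal model of a
Fibonacci heap, with all names ours: roots live in a list, delete_min
consolidates by linking equal-degree roots, and decrease_key cuts a node
to the root list. Marks and cascading cuts are omitted because the gadget
below never triggers them, and the exact linking order inside
consolidation may differ from the narration above, but each pass still
leaves a single chain that is one node longer.

```r
new_node <- function(key) {
  n <- new.env()
  n$key <- key; n$children <- list(); n$parent <- NULL
  n
}

new_heap <- function() { h <- new.env(); h$roots <- list(); h }

insert <- function(h, key) {
  n <- new_node(key)
  h$roots <- c(h$roots, list(n))
  n
}

link <- function(a, b) {                  # larger-key root becomes a child
  if (a$key > b$key) { t <- a; a <- b; b <- t }
  b$parent <- a; a$children <- c(a$children, list(b))
  a
}

delete_min <- function(h) {
  i <- which.min(sapply(h$roots, function(r) r$key))
  m <- h$roots[[i]]; h$roots[[i]] <- NULL
  for (ch in m$children) { ch$parent <- NULL; h$roots <- c(h$roots, list(ch)) }
  repeat {                                # consolidate equal-degree roots
    degs <- sapply(h$roots, function(r) length(r$children))
    dup <- degs[duplicated(degs)]
    if (length(dup) == 0) break
    pair <- which(degs == dup[1])[1:2]
    merged <- link(h$roots[[pair[1]]], h$roots[[pair[2]]])
    h$roots[pair] <- NULL
    h$roots <- c(h$roots, list(merged))
  }
  m$key
}

decrease_key <- function(h, n, key) {
  n$key <- key
  p <- n$parent
  if (!is.null(p)) {                      # cut n and move it to the root list
    p$children <- Filter(function(ch) !identical(ch, n), p$children)
    n$parent <- NULL
    h$roots <- c(h$roots, list(n))
  }
}

depth <- function(n)
  if (length(n$children) == 0) 1 else 1 + max(sapply(n$children, depth))

h <- new_heap()
insert(h, 0)                              # base case: a single node
for (i in 1:25) {                         # each pass grows the chain by one
  b <- -3 * i                             # three keys below the current min
  insert(h, b)                            # x1
  insert(h, b + 1)                        # x2
  x3 <- insert(h, b + 2)                  # x3
  delete_min(h)                           # removes x1; consolidation links
  decrease_key(h, x3, -Inf)               # x3 becomes the minimum
  delete_min(h)                           # removes x3; chain one node longer
}
cat("roots:", length(h$roots), "depth:", depth(h$roots[[1]]), "\n")
# prints: roots: 1 depth: 26 -- depth linear in the number of operations
```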
Problem 2.
Suppose that Fibonacci heaps were modified so that a node was cut
only after losing k children. Show that this improves the amortized
cost of decrease-key (to a better constant) at the cost of a worse
bound for delete-min (by a constant factor).
Solution:
For each node, we store a counter of how many of its children have
been removed (call count_i the counter of node i); a node is cut from
its parent only when its counter reaches k. To analyze the running
time of the operations, take the potential function
Φ(H) = t(H) + λ Σ_i count_i, where t(H) is the number of trees in the
heap and λ = 2/(k − 1). A decrease-key performing c cuts (one initial
cut plus c − 1 cascading cuts) does c units of real work, increases
t(H) by c, and changes Σ_i count_i by c − k(c − 1), since each cut
increments one parent's counter and each cascading cut resets a
counter from k down to 0.
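Written out (our notation, following the standard Fibonacci-heap
accounting):

```latex
% Potential: \Phi(H) = t(H) + \lambda \sum_i \mathrm{count}_i, with \lambda = 2/(k-1).
% Amortized cost of a decrease-key that performs c cuts:
\[
\hat{c}
  \;=\; \underbrace{c}_{\text{real work}}
  \;+\; \underbrace{c}_{\Delta t(H)}
  \;+\; \lambda \underbrace{\bigl(c - k(c-1)\bigr)}_{\Delta \sum_i \mathrm{count}_i}
  \;=\; c\bigl(2 + \lambda(1-k)\bigr) + \lambda k
  \;=\; 2 + \frac{2}{k-1}.
\]
```

For k = 2 this recovers the usual amortized constant 4 for
decrease-key, and the bound approaches 2 as k grows, which is the
promised better constant.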
Conversely, the delete-min operation will have a higher amortized
cost. The analysis is the same as in the case of the original
Fibonacci heaps: the amortized cost of delete-min is bounded by the
maximum degree of a node in our data structure. Under the
cut-after-k rule, the i-th child of any node has degree at least
i − k (instead of i − 2), so the minimum size of a subtree of a given
degree grows more slowly, and the maximum degree is still O(log n)
but with a constant factor that grows with k. Hence delete-min
becomes worse by a constant factor.
Problem 3
On tradeoffs in the heap operations.
(a) Let P be a priority queue that performs insert, delete-min, and
merge in O(log n) time, and performs make-heap in O(n) time, where n
is the size of the resulting priority queue. Show that P can be
modified to perform insert in O(1) amortized time, without affecting
the cost of delete-min or merge (i.e., O(log n) amortized time).
Assume that the priority queue does not support an efficient
decrease-key operation.
Solution:
We can augment the priority queue P with a linked list l. We modify
the insert operation so that it just puts the element in the linked
list l. Now we define a consolidate operation that adds the elements
of the linked list to the priority queue.
We do this by creating a new priority queue P′ containing only the
items in the linked list l, using make-heap. This takes O(m) time,
where m is the size of the linked list. We then merge the two queues
P and P′ in O(log n) time. Therefore, the total consolidation time is
O(m + log n). We modify delete-min to first consolidate and then call
the original delete-min. We modify merge to first consolidate each of
the augmented priority queues and then call the original merge.
Consider a set of initially empty augmented priority queues {P′}
(that may be merged later) on which all operations are performed. The
potential function φ′ is defined as the sum of the sizes of the lists
of the augmented priority queues P′. Note that inserting into a
particular priority queue takes O(1) amortized time. Delete-min on
any particular priority queue also takes only O(log n_h) amortized
time, where n_h is the size of that priority queue, since the
O(m_h + log n_h) real work to consolidate is reduced to amortized
O(log n_h) by the potential that queue held before the delete-min was
processed. Now consider the amortized time to merge two of the
augmented priority queues. We spend amortized time
O(log n_h) + O(log n_h′) to consolidate each one, plus the real work
of merging the two priority queues, which takes O(log n_h) time,
assuming n_h > n_h′. The total amortized time, then, is O(log n_h).
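Here is a small R sketch of this wrapper. The underlying priority
queue is faked with a sorted vector (a stand-in; a real P would be,
e.g., a binomial or leftist heap achieving the stated O(n) make-heap
and O(log n) merge), and all function names are ours. The point is the
buffer-then-consolidate mechanics, not the stand-in's asymptotics.

```r
pq_make_heap <- function(xs) sort(xs)        # stands in for O(n) make-heap
pq_merge    <- function(a, b) sort(c(a, b))  # stands in for O(log n) merge

new_apq <- function() {                      # augmented priority queue
  q <- new.env()
  q$pq  <- numeric(0)                        # the underlying priority queue P
  q$buf <- numeric(0)                        # the linked list l
  q
}

apq_insert <- function(q, x) {               # O(1): just append to the list
  q$buf <- c(q$buf, x)
}

apq_consolidate <- function(q) {             # O(m + log n): make-heap + merge
  if (length(q$buf) > 0) {
    q$pq  <- pq_merge(q$pq, pq_make_heap(q$buf))  # P' built from l, merged in
    q$buf <- numeric(0)
  }
}

apq_delete_min <- function(q) {              # consolidate, then delete-min
  apq_consolidate(q)
  m <- q$pq[1]
  q$pq <- q$pq[-1]
  m
}

apq_merge <- function(a, b) {                # consolidate both, then merge
  apq_consolidate(a); apq_consolidate(b)
  a$pq <- pq_merge(a$pq, b$pq)
  b$pq <- numeric(0)
  a
}

q <- new_apq()
for (x in c(5, 3, 8, 1)) apq_insert(q, x)    # four O(1) inserts
apq_delete_min(q)                            # consolidates and returns 1
```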
(b) Using the above technique, show that even binary heaps can be
modified to support insert in O(1) amortized time while maintaining
an O(log n) time bound for delete-min. Note that binary heaps do not
support merge in O(log n) time.
Solution:
The basic idea is to use a heap of heaps (together with a list as in
part (a)). The data structure is composed of several binary heaps
P1, ..., Pk and a "master" binary heap M.
The heaps P1, ..., Pk contain the elements of the data structure.
The heap M contains as its elements the heaps P1, ..., Pk, which are
keyed (compared) according to the values of their roots. To insert an
element into the data structure, we just add it to the linked list l.
Delete-min first does a consolidation (which takes O(m + log n) time,
where m is the length of l). Then, delete-min retrieves the
"smallest" heap Pi from M. The root of Pi is the minimum element in
the data structure. Remove the minimum from Pi (a usual heap
operation). If Pi is not empty, insert the modified Pi back into M.
In the consolidation step, we construct a binary heap Pk+1 from l (if
l is non-empty) and empty the list l. This can be done in O(m) time
using a standard heap construction algorithm. To finish the
consolidation step, we insert Pk+1 into M.
To analyze the running time, let the potential function be the length
of the list l. Then insert takes O(1). Consolidation takes
O(m + log n) real time, but O(log n) amortized time (since the length
of the list decreases by m). Delete-min takes O(log n) amortized time
(note that the depth of every heap M, P1, ..., Pk is always O(log n)).
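A matching R sketch of the heap of heaps (same sorted-vector stand-ins
and our own names; the scan over sub-heap roots stands in for the
O(log n) master-heap operations on M):

```r
new_hoh <- function() {
  h <- new.env()
  h$subheaps <- list()    # P1, ..., Pk: each a sorted numeric vector
  h$buf <- numeric(0)     # the list l of pending inserts
  h
}

hoh_insert <- function(h, x) h$buf <- c(h$buf, x)   # O(1): append to l

hoh_consolidate <- function(h) {
  if (length(h$buf) > 0) {
    # build P_{k+1} from l (O(m) with real heap construction), add it to M
    h$subheaps <- c(h$subheaps, list(sort(h$buf)))
    h$buf <- numeric(0)
  }
}

hoh_delete_min <- function(h) {
  hoh_consolidate(h)
  i <- which.min(sapply(h$subheaps, function(p) p[1]))  # "smallest" heap Pi
  m <- h$subheaps[[i]][1]                 # its root is the global minimum
  rest <- h$subheaps[[i]][-1]
  if (length(rest) == 0) {
    h$subheaps[[i]] <- NULL               # Pi became empty: drop it from M
  } else {
    h$subheaps[[i]] <- rest               # otherwise reinsert Pi into M
  }
  m
}

h <- new_hoh()
for (x in c(7, 2, 9)) hoh_insert(h, x)
hoh_delete_min(h)                         # 2
hoh_insert(h, 1)
hoh_delete_min(h)                         # 1, found in the freshly built P2
```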
Problem 4.
The least common ancestor (sometimes called lowest common ancestor)
of nodes v and w in an n-node rooted tree T is the node furthest from
the root that is an ancestor of both v and w. The following algorithm
solves the offline problem. That is, given a set of query pairs, it
computes all of the answers quickly. It makes use of a union-find
data structure. You may want to review this data structure (cf.
[CLRS], Chapter 21), and in particular the union-by-rank heuristic.
Offline LCA: Associate with each node an extra field "name". Process
the nodes of T in post order. To process a node, consider all of the
query pairs it is a member of. For each pair, if the other endpoint
has not yet been processed, do nothing. If the other endpoint has
been processed, do a find on it, and record the "name" of the result
as the LCA of this pair. After considering all of the pairs, union
the node with its parent, and set the "name" of the set
representative to be the parent.
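The offline algorithm fits in a few lines of R. In this sketch (all
names ours) the tree is given as a list of children vectors; plain
union-find with path halving is fine here, since persistence is only
needed for the online version below. Query pairs are assumed to have
two distinct endpoints, and the recursion assumes a tree shallow
enough for R's call stack.

```r
offline_lca <- function(children, root, queries) {
  n    <- length(children)
  uf   <- seq_len(n)                # union-find parent pointers (self = root)
  name <- seq_len(n)                # the "name" field of each representative
  done <- rep(FALSE, n)
  ans  <- rep(NA_integer_, length(queries))

  find <- function(x) {             # find with path halving
    while (uf[x] != x) { uf[x] <<- uf[uf[x]]; x <- uf[x] }
    x
  }

  qs_at <- vector("list", n)        # which query pairs touch each node
  for (qi in seq_along(queries))
    for (v in queries[[qi]]) qs_at[[v]] <- c(qs_at[[v]], qi)

  process <- function(v, par) {
    for (ch in children[[v]]) process(ch, v)     # post order
    for (qi in qs_at[[v]]) {                     # pairs v is a member of
      q <- queries[[qi]]
      other <- if (q[1] == v) q[2] else q[1]
      if (done[other]) ans[qi] <<- name[find(other)]
    }
    done[v] <<- TRUE
    if (!is.na(par)) {              # union v with its parent; name = parent
      r <- find(par)
      uf[find(v)] <<- r
      name[r] <<- par
    }
  }
  process(root, NA_integer_)
  ans
}

# node 1 is the root with children 2 and 3; node 2 has children 4 and 5
children <- list(c(2L, 3L), c(4L, 5L), integer(0), integer(0), integer(0))
offline_lca(children, 1L, list(c(4L, 5L), c(4L, 3L)))   # returns 2 1
```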
We leave it as an exercise (not to be turned in) that this algorithm
is correct and takes O((n + m)α(n)) time. (If you haven't met α
before, it is an inverse of Ackermann's function and grows VERY, VERY
slowly, even slower than log∗ n. It is only 4 on the number of
particles in the universe.) Of course, in some instances we would
like to find least common ancestors online. That is, we aren't told
all of the pairs up front; we get queries one at a time.
(a) Show how to use the techniques of persistent data structures to
preprocess a tree in O(n log n) time so as to allow LCA queries to be
answered in O(log n) time. Aim for a simple solution here, even if
you solve part (b). Hint: path compression is messy for the
persistent data structure, and is not necessary to achieve O(log n)
time for union and find operations. Note also that nodes can have
arbitrary in-degree, so path copying won't work.
Solution:
Consider the offline algorithm: we process nodes in post order (i.e.,
we traverse the nodes using DFS, and process a node only after
processing all of its children). When we process a node a, we answer
each query (a, b) such that b was processed earlier than a by doing a
find on b in our union-find data structure D; the "name" of the
result is the answer to the query. Then we union a with the parent of
a, and set the name of the set representative to be the name of the
parent.
The relationship to persistent data structures is as follows. We view
the order in which we process the nodes as time. Note that changes to
the union-find data structure D occur exactly at the times the nodes
are processed, so we can think of the data structure as changing over
time: D1, D2, ..., Dn. Suppose we run the above algorithm, but each
time t we process a node, we save the state of Dt.
Now, suppose we wish to answer a query of the form (a, b).
Suppose b was processed after a at time t. Revert to the data
structure Dt, and do a “find” of a. This would answer the query
(a, b). The goal, then, is to design a persistent version of the
union-find data structure to support the following two operations:
• find(x,t): Find the name of x’s component at time t.
• union(w,p,t): Union the component with name w and the
component with name p at time t.
We use the disjoint-forest implementation of the union-find data
structure with the union-by-rank heuristic. For each node, the parent
pointer will also store the timestamp t at which the parent pointer
became non-null (note that this happens exactly once for each node in
the tree).
Therefore, to do find(x,t), we walk up the parent pointers until we
reach a node whose parent pointer became non-null at a time later
than t (or is still null). However, we also need the name of this
component. To do this, we create a log of the operations done on the
union-find data structure. The log is an array mapping timestamps to
the names of the components unioned. To compute the name at the root,
we can look up the name of the component corresponding to the
timestamp of the last edge traversed. Following parent pointers takes
O(log n) time due to union by rank, so the find operation takes
O(log n) time. To do a union(w,p,t), we first do a find(w,t) and a
find(p,t). Then we do a union by rank and timestamp the added edge
with t. Now we need to update the log: a log entry (w, p) is added to
the t-th element of the log array. It is clear that the union
operation takes O(log n) time, so the total preprocessing takes
O(n log n) time.
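A compact R sketch of this persistent union-find (all identifiers
ours). Parent pointers are written at most once and carry a timestamp,
so no copying is needed, and path compression is deliberately absent.
One liberty: instead of a single global log array, each root keeps its
own (timestamp, name) history, which plays the same role and makes the
time-t name lookup direct; a real implementation would binary-search
the history to keep find at O(log n).

```r
uf_new <- function(n) {
  uf <- new.env()
  uf$parent <- rep(NA_integer_, n)   # parent pointer, set at most once ever
  uf$stamp  <- rep(Inf, n)           # time the parent pointer became non-null
  uf$rank   <- rep(0L, n)
  uf$htime  <- lapply(seq_len(n), function(i) 0)               # name history:
  uf$hname  <- lapply(seq_len(n), function(i) as.character(i)) # times, names
  uf
}

uf_root <- function(uf, x, t) {
  # follow only parent pointers created at or before time t; old pointers
  # stay valid forever because nothing is ever overwritten or compressed
  while (!is.na(uf$parent[x]) && uf$stamp[x] <= t) x <- uf$parent[x]
  x
}

uf_find <- function(uf, x, t) {        # name of x's component at time t
  r <- uf_root(uf, x, t)
  i <- max(which(uf$htime[[r]] <= t))  # latest name on record at time <= t
  uf$hname[[r]][i]
}

uf_union <- function(uf, x, y, t, name) {
  a <- uf_root(uf, x, t); b <- uf_root(uf, y, t)
  if (a == b) return(invisible(a))
  if (uf$rank[a] < uf$rank[b]) { tmp <- a; a <- b; b <- tmp }  # union by rank
  uf$parent[b] <- a
  uf$stamp[b]  <- t                    # timestamp the new edge
  if (uf$rank[a] == uf$rank[b]) uf$rank[a] <- uf$rank[a] + 1L
  uf$htime[[a]] <- c(uf$htime[[a]], t)      # log the representative's
  uf$hname[[a]] <- c(uf$hname[[a]], name)   # new name at time t
  invisible(a)
}

uf <- uf_new(4)
uf_union(uf, 1, 2, t = 1, name = "A")
uf_union(uf, 1, 3, t = 2, name = "B")
uf_find(uf, 2, 1)   # "A": the state as of time 1
uf_find(uf, 2, 2)   # "B": the same query against the later version
uf_find(uf, 4, 2)   # "4": node 4 was never unioned
```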