Multithreaded and
Distributed Algorithms
Prof. Shashikant V. Athawale
Assistant Professor | Computer Engineering Department | AISSMS College of Engineering,
Kennedy Road, Pune, MH, India - 411001
Multithreaded Algorithms:
● Algorithms designed to achieve concurrent execution of two or more parts of a
program, for maximum utilization of the CPU.
● A multithreaded program contains two or more parts that can run
concurrently. Each part of such a program is called a thread, and each
thread defines a separate path of execution.
Multithreaded Algorithms Key Points:
● Communication models: shared memory and distributed memory.
● Types of threading: static threading and dynamic threading.
● Dynamic multithreading: the parallel, spawn, and sync keywords.
Performance Measures:
● Work: the total time taken by the entire computation on a single processor.
● Span: the longest time to execute the strands along any path of the computation (the critical-path length).
● Tp: the running time using P processors.
● Ti: the work done on the ith processor.
● T∞: the span, i.e. the running time with an unlimited number of processors.
● In one step, P processors can perform at most P units of work, so in time Tp they complete at most P·Tp work.
● T1: the total work to be done.
● Work law: Tp ≥ T1/P
● Span law: Tp ≥ T∞
● Speedup = T1/Tp
● Perfect linear speedup: T1/Tp = P
● Parallelism = T1/T∞
● Slackness = T1/(P·T∞)
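As a quick worked example of these measures (the numbers T1 = 100, T∞ = 10 and P = 4 are hypothetical, not from the slides), a short Python sketch:

# Hypothetical computation: T1 = 100 units of work, T_inf = 10 units of span.
T1, T_inf, P = 100, 10, 4

lower_bound_work = T1 / P            # work law:  Tp >= T1/P  -> 25.0
lower_bound_span = T_inf             # span law:  Tp >= T_inf -> 10
parallelism      = T1 / T_inf        # T1/T_inf               -> 10.0
slackness        = T1 / (P * T_inf)  # T1/(P*T_inf)           -> 2.5

print(lower_bound_work, lower_bound_span, parallelism, slackness)
# Slackness above 1 means there is more parallelism than processors,
# so near-perfect linear speedup (T1/Tp close to P) is achievable.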
Dynamic Multithreading Keywords:
● spawn
● sync
function Fib(n)
    if n ≤ 1 then
        return n
    else
        x = spawn Fib(n − 1)
        y = Fib(n − 2)
        sync
        return x + y
    end if
end function
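A minimal Python sketch of the same spawn/sync control flow, with threading.Thread standing in for spawn and join() for sync (because of Python's GIL this illustrates the structure rather than real CPU parallelism):

import threading

def fib(n, out=None):
    if n <= 1:
        result = n
    else:
        left = {}
        t = threading.Thread(target=fib, args=(n - 1, left))  # spawn Fib(n-1)
        t.start()
        y = fib(n - 2)            # this strand computes Fib(n-2) meanwhile
        t.join()                  # sync: wait for the spawned call to finish
        result = left["value"] + y
    if out is not None:
        out["value"] = result     # hand the result back to the spawning strand
    return result

print(fib(10))   # 55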
Parallel Loops
Mat-Vec(A, x)
1  n = A.rows
2  let y be a new vector of length n
3  parallel for i = 1 to n
4      yi = 0
5  parallel for i = 1 to n
6      for j = 1 to n
7          yi = yi + aij ⋅ xj
8  return y
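A shared-memory Python sketch of MAT-VEC: each row's dot product is independent, which is exactly what the parallel for expresses; here a ThreadPoolExecutor plays the role of the parallel-loop scheduler (an assumption, not part of the pseudocode):

from concurrent.futures import ThreadPoolExecutor

def mat_vec(A, x):
    n = len(A)
    y = [0] * n
    def row(i):                        # body of the parallel for over i
        for j in range(n):
            y[i] += A[i][j] * x[j]
    with ThreadPoolExecutor() as pool:
        list(pool.map(row, range(n)))  # run the n independent row tasks
    return y

print(mat_vec([[1, 2], [3, 4]], [5, 6]))   # [17, 39]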
Race Condition:
RaceCondition()
    x ← 10
    parallel for i ← 1 to 2 do
        x ← x + 5
    end
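The two loop iterations read and write x concurrently, so one update can be lost. A small Python illustration (the time.sleep is only there to widen the race window and make the lost update reproducible):

import threading, time

x = 10
def add_five():
    global x
    local = x            # read the shared value
    time.sleep(0.001)    # widen the window so both threads read the old value
    x = local + 5        # write back; the other thread's addition is lost

threads = [threading.Thread(target=add_five) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(x)   # typically 15, not the intended 20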
Multithreaded Matrix Multiplication (parallel for)
Mat-Vec(A, x)
1  n = A.rows
2  let y be a new vector of length n
3  parallel for i = 1 to n
4      yi = 0
5  parallel for i = 1 to n
6      for j = 1 to n
7          yi = yi + aij ⋅ xj
8  return y
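The slide reuses the MAT-VEC pseudocode; for the matrix-matrix multiplication named in the title, the same nested parallel-for idea looks roughly like the following Python sketch (not taken from the slides): every entry cij of C is an independent task.

from concurrent.futures import ThreadPoolExecutor
from itertools import product

def p_square_matrix_multiply(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    def cell(ij):                      # body of the two nested parallel fors
        i, j = ij
        for k in range(n):
            C[i][j] += A[i][k] * B[k][j]
    with ThreadPoolExecutor() as pool:
        list(pool.map(cell, product(range(n), range(n))))
    return C

print(p_square_matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]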
Multithreaded Matrix Multiplication (spawn, sync)
P-Matrix-Multiply-Recursive(C, A, B)
1   n = A.rows
2   if n == 1
3       c11 = a11 ⋅ b11
4   else let T be a new n×n matrix
5       partition A, B, C, and T into n/2 × n/2 submatrices A11, A12, A21, A22; etc.
6       spawn P-Matrix-Multiply-Recursive(C11, A11, B11)
7       spawn P-Matrix-Multiply-Recursive(C12, A11, B12)
8       spawn P-Matrix-Multiply-Recursive(C21, A21, B11)
9       spawn P-Matrix-Multiply-Recursive(C22, A21, B12)
10      spawn P-Matrix-Multiply-Recursive(T11, A12, B21)
11      spawn P-Matrix-Multiply-Recursive(T12, A12, B22)
12      spawn P-Matrix-Multiply-Recursive(T21, A22, B21)
13      P-Matrix-Multiply-Recursive(T22, A22, B22)
14      sync
15      parallel for i = 1 to n
16          parallel for j = 1 to n
17              cij = cij + tij
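A minimal Python sketch of the same divide-and-conquer scheme, assuming square matrices whose size is a power of two; threading.Thread stands in for spawn and join() for sync, all eight subproducts are spawned for simplicity (the pseudocode computes T22 in the parent strand), and numpy is used only for slicing and reassembly:

import threading
import numpy as np

def p_matrix_multiply_recursive(A, B):
    n = A.shape[0]
    if n == 1:
        return A * B                                   # 1x1 base case
    h = n // 2
    quads = lambda M: (M[:h, :h], M[:h, h:], M[h:, :h], M[h:, h:])
    A11, A12, A21, A22 = quads(A)
    B11, B12, B21, B22 = quads(B)
    pairs = [(A11, B11), (A11, B12), (A21, B11), (A21, B12),   # -> C11..C22
             (A12, B21), (A12, B22), (A22, B21), (A22, B22)]   # -> T11..T22
    results = [None] * 8
    def worker(i, X, Y):
        results[i] = p_matrix_multiply_recursive(X, Y)
    threads = [threading.Thread(target=worker, args=(i, X, Y))
               for i, (X, Y) in enumerate(pairs)]
    for t in threads: t.start()    # spawn the eight recursive subproducts
    for t in threads: t.join()     # sync: wait for all of them
    C11, C12, C21, C22, T11, T12, T21, T22 = results
    return np.block([[C11 + T11, C12 + T12],   # the final parallel-for addition
                     [C21 + T21, C22 + T22]])

A = np.random.rand(4, 4); B = np.random.rand(4, 4)
print(np.allclose(p_matrix_multiply_recursive(A, B), A @ B))   # True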
Parallel Merge Sort
procedure parallelmergesort(id, n, data, newdata)
begin
    data = sequentialmergesort(data)
    for dim = 1 to n
        data = parallelmerge(id, dim, data)
    endfor
    newdata = data
end
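A simpler shared-memory Python sketch of the same idea (sort local blocks sequentially, then merge the sorted runs); this is not the hypercube-style parallelmerge routine named above, just the overall shape, and the worker count of 4 is an arbitrary choice:

from concurrent.futures import ThreadPoolExecutor
from heapq import merge

def parallel_mergesort(data, workers=4):
    size = max(1, (len(data) + workers - 1) // workers)
    blocks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        runs = list(pool.map(sorted, blocks))   # "sequentialmergesort" per block
    result = []
    for run in runs:                            # merge the sorted runs
        result = list(merge(result, run))
    return result

print(parallel_mergesort([5, 3, 8, 1, 9, 2, 7, 4]))   # [1, 2, 3, 4, 5, 7, 8, 9]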
Distributed Algorithms:
● A distributed algorithm is an algorithm, run on a distributed system, that does not
assume the prior existence of a central coordinator. A distributed system is a
collection of processors that do not share memory or a clock.
Distributed Breadth-First Search
States: distance, initially 0 for the initiator and ∞ for all other nodes; internal send buffers
Initiator initialization code: send distance to all neighbors
All processes:
    upon receiving d from p:
        if d + 1 < distance:
            distance := d + 1
            parent := p
            send distance to all neighbors
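A single-process Python simulation of this message-passing BFS, assuming an undirected graph given as an adjacency list; a FIFO queue of (sender, receiver, distance) triples stands in for the real channels and send buffers:

from collections import deque
import math

def distributed_bfs(adj, initiator):
    distance = {v: math.inf for v in adj}
    parent = {v: None for v in adj}
    distance[initiator] = 0
    msgs = deque((initiator, nbr, 0) for nbr in adj[initiator])  # initiator sends
    while msgs:
        sender, receiver, d = msgs.popleft()
        if d + 1 < distance[receiver]:         # a shorter route was found
            distance[receiver] = d + 1
            parent[receiver] = sender
            for nbr in adj[receiver]:          # forward the new distance
                msgs.append((receiver, nbr, distance[receiver]))
    return distance, parent

adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
print(distributed_bfs(adj, 1))   # distances {1: 0, 2: 1, 3: 1, 4: 2}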
Distributed Minimum Spanning Tree
1:  function EdgePartition(G(V, E))
2:      η = memory of each machine
3:      e = E
4:      while |e| > η do
5:          l = Θ(|E|/η)
6:          Split e into e1, e2, e3, ..., el using a universal hash function
7:          Compute T*i = KRUSKAL(G(V, ei))    ▷ in parallel
8:          e = ∪i T*i
9:      end while
10:     A = KRUSKAL(G(V, e))
11:     return A
12: end function
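A sequential Python sketch of the edge-partition filtering loop, with the "machines" simulated by looping over the buckets; edges are (weight, u, v) tuples over vertices 0..n−1, Python's hash() stands in for a universal hash function, and eta is the per-machine memory budget in edges:

import math

def kruskal(n, edges):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(edges):              # scan edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

def edge_partition_mst(n, edges, eta):
    e = list(edges)
    while len(e) > eta:
        l = math.ceil(len(e) / eta)                    # Theta(|E|/eta) buckets
        buckets = [[] for _ in range(l)]
        for edge in e:
            buckets[hash(edge) % l].append(edge)
        filtered = [t for b in buckets for t in kruskal(n, b)]  # local MSTs, "in parallel"
        if len(filtered) >= len(e):                    # eta too small to make progress
            break
        e = filtered
    return kruskal(n, e)                               # final MST on the surviving edges

edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2), (6, 1, 3)]
print(edge_partition_mst(4, edges, eta=3))   # [(1, 0, 1), (2, 1, 2), (3, 2, 3)]

Discarding each bucket's non-tree edges is safe because, by the cycle property, an edge that is the heaviest on some cycle can never belong to the minimum spanning tree.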
Naive String Matching Algorithm:
● Slide the pattern over the text one position at a time and check for a match; after
each comparison, slide by 1 again to check for subsequent matches.
NAIVE-STRING-MATCHER(T, P)
1. n ← length[T]
2. m ← length[P]
3. for s ← 0 to n − m
4.     do if P[1..m] = T[s + 1..s + m]
5.         then print "Pattern occurs with shift" s
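A direct Python rendering of the naive matcher, with T and P as ordinary strings; shifts are reported 0-based rather than the 1-based indexing of the pseudocode:

def naive_string_matcher(T, P):
    n, m = len(T), len(P)
    for s in range(n - m + 1):
        if T[s:s + m] == P:                    # compare the window at shift s
            print("Pattern occurs with shift", s)

naive_string_matcher("abcabcabc", "abc")       # shifts 0, 3, 6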
Rabin-Karp Algorithm:
The Rabin–Karp algorithm (or Karp–Rabin algorithm) is a string-searching algorithm
created by Richard M. Karp and Michael O. Rabin (1987) that uses hashing to find any
one of a set of pattern strings in a text. For text of length n and p patterns of combined
length m, its average and best-case running time is O(n + m), but its worst-case time is
O(nm).
RABIN-KARP-MATCHER(T, P, d, q)
1. n ← length[T]
2. m ← length[P]
3. h ← d^(m−1) mod q
4. p ← 0
5. t0 ← 0
6. for i ← 1 to m
7.     do p ← (d·p + P[i]) mod q
8.        t0 ← (d·t0 + T[i]) mod q
9. for s ← 0 to n − m
10.    do if p = ts
11.        then if P[1..m] = T[s + 1..s + m]
12.            then print "Pattern occurs with shift" s
13.       if s < n − m
14.           then ts+1 ← (d·(ts − T[s + 1]·h) + T[s + m + 1]) mod q
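A compact Python version of RABIN-KARP-MATCHER, assuming character codes as digits with radix d = 256 and a small prime modulus q = 101 (both arbitrary choices); shifts are again reported 0-based:

def rabin_karp_matcher(T, P, d=256, q=101):
    n, m = len(T), len(P)
    h = pow(d, m - 1, q)            # d^(m-1) mod q, weight of the leading digit
    p = t = 0
    for i in range(m):              # preprocessing: hash of P and of T[0..m-1]
        p = (d * p + ord(P[i])) % q
        t = (d * t + ord(T[i])) % q
    for s in range(n - m + 1):
        if p == t and T[s:s + m] == P:     # verify on a hash hit (spurious hits possible)
            print("Pattern occurs with shift", s)
        if s < n - m:                      # roll the window one character to the right
            t = (d * (t - ord(T[s]) * h) + ord(T[s + m])) % q

rabin_karp_matcher("3141592653589793", "26535")   # shift 6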
Thank you
