Unit-2
Distributed Mutual Exclusion
Mutual exclusion
ensures that concurrent processes have serialized access to
shared resources - the critical section problem.
At any point in time, only one process can be executing in its
critical section.
Shared variables (semaphores) cannot be used in a distributed system
Mutual exclusion must be based on message passing, in the
context of unpredictable delays and incomplete knowledge
In some applications (e.g. transaction processing) the resource is
managed by a server which implements its own lock along with
mechanisms to synchronize access to the resource.
Approaches to Distributed Mutual Exclusion
Central coordinator based approach
A centralized coordinator determines who enters the CS
Distributed approaches to mutual exclusion
Token based approach
A unique token is shared among the sites. A site is allowed to
enter its CS if it possesses the token.
Mutual exclusion is ensured because the token is unique.
Non-token based approach
Two or more successive rounds of messages are exchanged
among the sites to determine which site will enter the CS next.
Quorum based approach
Each site requests permission to execute the CS from a subset of
sites (called a quorum).
Any two quorums contain a common site. This common site is
responsible to make sure that only one request executes the CS at
any time.
Requirements/Conditions
Safety Property (Mutual Exclusion)
At any instant, only one process can execute the critical section.
Liveness Property (Progress)
This property states the absence of deadlock and starvation. Two or
more sites should not endlessly wait for messages which will never
arrive.
Fairness (Bounded Waiting)
Each process gets a fair chance to execute the CS. Fairness generally means
that CS execution requests are executed in the order of their arrival in the
system (arrival time is determined by a logical clock).
Performance Metrics for Mutual Exclusion Algorithms
Message complexity
The number of messages required per CS execution by a site.
Synchronization delay
After a site leaves the CS, it is the time required before the next site
enters the CS
Response time
The time interval a request waits for its CS execution to be over after its
request messages have been sent out
System throughput
The rate at which the system executes requests for the CS.
System throughput = 1 / (SD + E)
where SD is the synchronization delay and E is the average critical section
execution time.
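For example, if the synchronization delay SD is 2 time units and the average CS
execution time E is 8 time units, the system throughput is 1/(2 + 8) = 0.1 CS
executions per time unit.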
Mutual Exclusion Techniques Covered
Central Coordinator Algorithm
In a distributed environment it seems more natural to implement mutual
exclusion based upon distributed agreement, not on a central coordinator.
Distributed Non-token based (Timestamp-Based Algorithms)
Lamport’s Algorithm
Ricart-Agrawala (First) Algorithm
Variation – Quorum based (Maekawa’s Algorithm)
Distributed Token Based
Ricart-Agrawala Second Algorithm
Token Ring Algorithm
Lamport’s Algorithm
Basic Idea
Requests for CS are executed in the increasing order of timestamps and
time is determined by logical clocks.
Every site S_i keeps a queue, request_queue_i, which contains mutual
exclusion requests ordered by their timestamps.
This algorithm requires communication channels to deliver messages in
FIFO order.
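The following is a minimal single-site sketch of Lamport's algorithm in Python; the
send(dest, msg) transport primitive, the message tuples, and the class and field
names are illustrative assumptions, not part of any particular library.

```python
import heapq

class LamportMutex:
    """Sketch of one site S_i in Lamport's algorithm (FIFO channels assumed)."""
    def __init__(self, my_id, all_ids, send):
        self.my_id = my_id
        self.others = [p for p in all_ids if p != my_id]
        self.send = send                  # send(dest_id, message): assumed transport
        self.clock = 0
        self.queue = []                   # request queue ordered by (timestamp, site id)
        self.replied = set()              # sites that have replied to our request

    def request_cs(self):
        self.clock += 1
        self.my_request = (self.clock, self.my_id)
        heapq.heappush(self.queue, self.my_request)
        self.replied.clear()
        for p in self.others:
            self.send(p, ("REQUEST", self.my_request))

    def on_message(self, kind, payload):
        ts, sender = payload
        self.clock = max(self.clock, ts) + 1
        if kind == "REQUEST":
            heapq.heappush(self.queue, (ts, sender))
            self.send(sender, ("REPLY", (self.clock, self.my_id)))
        elif kind == "REPLY":
            self.replied.add(sender)
        elif kind == "RELEASE":
            self.queue = [r for r in self.queue if r[1] != sender]
            heapq.heapify(self.queue)

    def can_enter_cs(self):
        # Enter when our request is at the head of the queue and every other
        # site has sent us a message (reply) with a larger timestamp.
        return (self.queue and self.queue[0] == self.my_request
                and len(self.replied) == len(self.others))

    def release_cs(self):
        self.queue = [r for r in self.queue if r[1] != self.my_id]
        heapq.heapify(self.queue)
        self.clock += 1
        for p in self.others:
            self.send(p, ("RELEASE", (self.clock, self.my_id)))
```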
Performance – Lamport’s Algorithm
For each CS execution Lamport’s algorithm requires
(N − 1) REQUEST messages, (N − 1) REPLY messages, and (N − 1)
RELEASE messages.
Thus, Lamport’s algorithm requires 3(N − 1) messages per CS
invocation.
Optimization
In Lamport’s algorithm, REPLY messages can be omitted in certain
situations.
For example, if site Sj receives a REQUEST message from site Si after it
has sent its own REQUEST message with timestamp higher than the
timestamp of site Si ’s request, then site Sj need not send a REPLY
message to site Si .
This is because when site Si receives site Sj ’s request with timestamp
higher than its own, it can conclude that site Sj does not have any
smaller timestamp request which is still pending.
With this optimization, Lamport’s algorithm requires between 2(N − 1)
and 3(N − 1) messages per CS execution.
Ricart-Agrawala Algorithm
Uses only two types of messages – REQUEST and REPLY.
It is assumed that all processes keep a (Lamport’s) logical clock which is
updated according to the clock rules.
The algorithm requires a total ordering of requests. Requests are
ordered according to their global logical timestamps; if timestamps are
equal, process identifiers are compared to order them.
The process that requires entry to a CS multicasts the request message to all
other processes competing for the same resource.
A process is allowed to enter the CS when all other processes have replied
to this message.
The request message consists of the requesting process’ timestamp
(logical clock) and its identifier.
Each process keeps its state with respect to the CS: released, requested, or
held.
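Below is a hedged Python sketch of one process in the Ricart-Agrawala algorithm,
showing how replies are deferred while the process holds the CS or has an earlier
pending request; the send(dest, msg) primitive and all names are assumptions for
illustration.

```python
class RicartAgrawala:
    """Sketch of one process; state is RELEASED, REQUESTED, or HELD."""
    def __init__(self, my_id, all_ids, send):
        self.my_id, self.send = my_id, send
        self.others = [p for p in all_ids if p != my_id]
        self.clock = 0
        self.state = "RELEASED"
        self.deferred = []        # requests queued while we are in (or ahead of) the CS
        self.replies = 0
        self.my_ts = None

    def request_cs(self):
        self.state = "REQUESTED"
        self.clock += 1
        self.my_ts = self.clock
        self.replies = 0
        for p in self.others:
            self.send(p, ("REQUEST", self.my_ts, self.my_id))

    def on_request(self, ts, sender):
        self.clock = max(self.clock, ts) + 1
        # Defer the reply if we hold the CS, or if our own pending request
        # precedes the incoming one in (timestamp, identifier) order.
        mine_first = (self.state == "REQUESTED"
                      and (self.my_ts, self.my_id) < (ts, sender))
        if self.state == "HELD" or mine_first:
            self.deferred.append(sender)
        else:
            self.send(sender, ("REPLY", self.my_id))

    def on_reply(self, sender):
        self.replies += 1
        if self.state == "REQUESTED" and self.replies == len(self.others):
            self.state = "HELD"   # all N-1 replies received: enter the CS

    def release_cs(self):
        self.state = "RELEASED"
        for p in self.deferred:  # now answer everything we deferred
            self.send(p, ("REPLY", self.my_id))
        self.deferred.clear()
```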
Quorum-Based Consensus – Maekawa’s Algorithm
A site obtains permission to enter the CS only from a subset of sites
It multicasts request messages to a voting subset of processes
Each process pi is associated with a voting set vi (of processes)
Each process belongs to its own voting set
The intersection of any two voting sets is non-empty
Each voting set is of size K
Each process belongs to M other voting sets
To access a critical section, pi requests permission from all other
processes in its own voting set vi
Voting set member gives permission to only one requestor at a
time, and queues all other requests
Guarantees safety
May not guarantee liveness (may deadlock)
Maekawa showed that K = M ≈ √N works best
One way of doing this is to place the N processes in a √N by √N
matrix and take the union of the row and column containing pi as its
voting set.
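A small sketch of the row-union-column construction described above, assuming N
is a perfect square; the function name and 0-based identifiers are illustrative.

```python
import math

def grid_voting_set(i, n):
    """Voting set for process i (0..n-1): the union of its row and column
    in a k x k grid, where k = sqrt(n); assumes n is a perfect square."""
    k = math.isqrt(n)
    assert k * k == n, "sketch assumes N is a perfect square"
    row, col = divmod(i, k)
    row_members = {row * k + c for c in range(k)}
    col_members = {r * k + col for r in range(k)}
    return row_members | col_members   # size 2*sqrt(N) - 1, includes i itself

# Any two voting sets intersect: v(3) and v(10) in a 4x4 grid of 16 processes
v3, v10 = grid_voting_set(3, 16), grid_voting_set(10, 16)
print(sorted(v3 & v10))   # non-empty set of common sites, e.g. [2, 11]
```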
Ricart-Agrawala Second Algorithm
A process is allowed to enter the critical section when it gets the token.
Initially the token is assigned arbitrarily to one of the processes.
In order to get the token it sends a request to all other processes competing
for the same resource.
The request message consists of the requesting process’ timestamp
(logical clock) and its identifier.
When a process Pi leaves a critical section
it passes the token to one of the processes that are waiting for it; this
will be the first process Pj, where j is searched in the order [i+1, i+2, ..., n,
1, 2, ..., i-2, i-1], for which there is a pending request.
If no process is waiting, Pi retains the token (and is allowed to enter the
CS if it needs to); it will pass on the token as a result of an incoming
request.
How does Pi find out if there is a pending request?
Each process Pi records the timestamp corresponding to the last
request it got from process Pj in requestPi[j]. In the token itself,
token[j] records the timestamp (logical clock) of Pj’s last holding of the
token. If requestPi[j] > token[j] then Pj has a pending request.
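A sketch of this hand-off test, using 0-based process indices rather than the
1-based notation above; the array names follow the text but the function itself is
illustrative.

```python
def next_token_holder(i, n, request, token):
    """Hand-off in the Ricart-Agrawala token algorithm (sketch).
    request[j] = timestamp of the last REQUEST Pi received from Pj,
    token[j]  = timestamp of Pj's last holding of the token (carried in the token).
    Pj has a pending request iff request[j] > token[j]."""
    for offset in range(1, n):          # scan j = i+1, i+2, ..., i-1 (circularly)
        j = (i + offset) % n
        if request[j] > token[j]:
            return j                    # first waiting process gets the token
    return None                         # nobody is waiting: Pi keeps the token
```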
Suzuki-Kasami Broadcast Algorithm
If a site wants to enter the CS and it does not have the token, it broadcasts a
REQUEST message for the token to all other sites.
A site which possesses the token sends it to the requesting site upon the
receipt of its REQUEST message.
If a site receives a REQUEST message when it is executing the CS, it sends the
token only after it has completed the execution of the CS.
Two Issues
Outdated Requests: Due to variable message delays, a site may receive a
token request message after the request has been satisfied. Sending the
token to an outdated requestor results in poor performance
When a process is done, which of the outstanding requests should it
satisfy?
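One common way both issues are resolved (a sketch, assuming the usual Suzuki-Kasami
bookkeeping: RN[j] is the highest request sequence number site i has seen from j,
and the token carries LN[j], the sequence number of j's last satisfied request, plus
a queue of waiting sites):

```python
def update_token_after_cs(i, RN, token_LN, token_queue):
    """Run by site i (the token holder) after it leaves the CS.
    A REQUEST(j, n) with n <= token_LN[j] is outdated and is ignored."""
    token_LN[i] = RN[i]                       # our own latest request is now satisfied
    for j in range(len(RN)):
        if j not in token_queue and RN[j] == token_LN[j] + 1:
            token_queue.append(j)             # j has an outstanding, non-outdated request
    # The site at the head of the queue (if any) receives the token next.
    return token_queue.pop(0) if token_queue else None
```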
Election Algorithms
It doesn’t matter which process is elected.
What is important is that one and only one process is chosen (we call
this process the coordinator) and all processes agree on this decision.
Assume that each process has a unique number (identifier).
In general, election algorithms attempt to locate the process with the
highest number, among those which currently are up.
Election is typically started after a failure occurs.
The detection of a failure (e.g. the crash of the current coordinator) is
normally based on a time-out: a process that gets no response for a
period of time suspects a failure and initiates an election process.
An election process is typically performed in two phases:
Select a leader with the highest priority.
Inform all processes about the winner.
The Bully Algorithm
A process has to know the identifiers of all other processes
(it doesn’t know, however, which ones are still up); the process with the
highest identifier, among those which are up, is selected.
Any process could fail during the election procedure.
When a process Pi detects a failure and a coordinator has to be elected
it sends an election message to all the processes with a higher identifier
and then waits for an answer message:
If no response arrives within a time limit
Pi becomes the coordinator (all processes with higher identifier
are down)
it broadcasts a coordinator message to all processes to let them
know.
If an answer message arrives,
Pi knows that another process has to become the coordinator
it waits in order to receive the coordinator message.
If this message fails to arrive within a time limit (which means
that a potential coordinator crashed after sending the answer
message) Pi resends the election message.
When receiving an election message from Pi,
a process Pj replies with an answer message to Pi and
then starts an election procedure itself (unless it has already started
one): it sends an election message to all processes with higher
identifiers.
Finally all processes get an answer message, except the one which becomes
the coordinator.
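A compressed sketch of the election outcome from the initiator's point of view,
assuming no further crashes occur during the election; answers(q) is a hypothetical
primitive modelling whether process q replies with an answer message in time.

```python
def bully_election(initiator, all_ids, answers):
    """Bully algorithm, initiator's view (sketch).
    all_ids: identifiers of every process (known to the initiator).
    answers(q): True iff q responds to the ELECTION message within the time limit."""
    higher = [q for q in all_ids if q > initiator]
    responders = [q for q in higher if answers(q)]   # send ELECTION to higher ids
    if not responders:
        return initiator        # nobody higher is up: initiator becomes coordinator
    # Otherwise the initiator waits for a COORDINATOR message; with no further
    # failures, the highest process that answered ends up winning the election.
    return max(responders)

# Example: processes 1..5, process 5 (the old coordinator) has crashed
up = {1, 2, 3, 4}
print(bully_election(2, [1, 2, 3, 4, 5], lambda q: q in up))   # -> 4
```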
The Ring-based Algorithm
We assume that the processes are arranged in a logical ring
Each process knows the address of one other process, which is its
neighbor in the clockwise direction.
The algorithm elects a single coordinator, which is the process with the
highest identifier.
Election is started by a process which has noticed that the current
coordinator has failed.
The process places its identifier in an election message that is passed to
the following process.
When a process receives an election message
It compares the identifier in the message with its own.
If the arrived identifier is greater, it forwards the received
election message to its neighbor
If the arrived identifier is smaller it substitutes its own identifier in
the election message before forwarding it.
If the received identifier is that of the receiver itself, then this process
becomes the coordinator.
The new coordinator sends an elected message through the ring.
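A sketch of a single election pass around the ring, assuming all processes stay up;
representing the ring as a Python list in clockwise order is an illustration only.

```python
def ring_election(ids, start):
    """ids: process identifiers in clockwise ring order;
    start: index of the process that noticed the coordinator failure."""
    n = len(ids)
    message = ids[start]                 # initiator puts its own id in the message
    pos = (start + 1) % n
    while True:
        if message == ids[pos]:          # the message carries the receiver's own id:
            return ids[pos]              # this process is the new coordinator
        message = max(message, ids[pos]) # forward the larger of the two identifiers
        pos = (pos + 1) % n

print(ring_election([9, 4, 12, 7, 3], start=1))   # -> 12
```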
Distributed Deadlocks
Deadlock is a fundamental problem in distributed systems.
A process may request resources in any order, which may not be known a
priori, and a process can request a resource while holding others.
If the sequence of the allocations of resources to the processes is not
controlled, deadlocks can occur.
A deadlock is a state where a set of processes request resources that are
held by other processes in the set.
Conditions for a deadlock
Mutual exclusion, hold-and-wait, No-preemption and circular wait.
Modeling Deadlocks
In addition to the standard assumptions (no shared memory, no global clock,
no failures), we make the following assumptions:
The systems have only reusable resources.
Processes are allowed to make only exclusive access to resources.
There is only one copy of each resource.
A process can be in two states: running or blocked.
In the running state (also called active state), a process has all the
needed resources and is either executing or is ready for execution.
In the blocked state, a process is waiting to acquire some resource.
The state of the system can be modeled by a directed graph, called a wait-for
graph (WFG).
In a WFG, nodes are processes and there is a directed edge from node
P1 to node P2 if P1 is blocked and is waiting for P2 to release some
resource.
A system is deadlocked if and only if there exists a directed cycle or knot in
the WFG.
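As an illustration, a deadlocked state under the single-request (AND) model can be
recognized by searching the WFG for a cycle; the sketch below represents the WFG as
a plain dictionary and is not a distributed detector.

```python
def has_cycle(wfg):
    """Detect a directed cycle in a wait-for graph given as
    {process: set of processes it waits for}."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {p: WHITE for p in wfg}

    def visit(p):
        colour[p] = GREY
        for q in wfg.get(p, ()):
            if colour.get(q, WHITE) == GREY:
                return True                      # back edge: cycle found
            if colour.get(q, WHITE) == WHITE and visit(q):
                return True
        colour[p] = BLACK
        return False

    return any(colour[p] == WHITE and visit(p) for p in wfg)

# P1 waits for P2, P2 waits for P3, P3 waits for P1: deadlock
print(has_cycle({"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}))   # True
```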
Techniques for Handling Deadlocks
Note: No site has accurate knowledge of the current state of the system
Techniques
Deadlock Prevention (collective/ordered requests, preemption)
Inefficient, impractical
Deadlock Avoidance
A resource is granted to a process if the resulting global system
state is safe
Requires advance knowledge of processes and their resource
requirements
Impractical
Deadlock Detection and Recovery
Maintenance of local/global WFG and searching of the WFG for
the presence of cycles (or knots), local/centralized deadlock
detectors
Recovery by operator intervention, break wait-for dependencies,
termination and rollback
Classes of Deadlock Detection Algorithms
Path-pushing
distributed deadlocks are detected by maintaining an explicit
global WFG (constructed locally and pushed to neighbors)
Edge-chasing (single resource model, AND model)
the presence of a cycle in a distributed graph structure is
verified by propagating special messages, called probes, along the
edges of the graph. The formation of a cycle is detected by a
site if it receives back the matching probe it sent out previously (see
the sketch after this list).
Diffusion computation (OR model, AND-OR model)
deadlock detection computation is diffused through the WFG of
the system.
Global state detection (Unrestricted, P-out-of-Q model)
Take a snapshot of the system and examine it for the condition
of a deadlock.
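A rough sketch of the probe propagation used by edge-chasing detectors (in the style
of the Chandy-Misra-Haas algorithm for the AND model), simulated here over an
in-memory WFG; real detectors send the probes as messages between sites.

```python
def edge_chasing_deadlock(wfg, initiator):
    """The blocked initiator sends probes along its wait-for edges; each blocked
    process forwards the probe along its own outgoing edges. If a probe started
    by the initiator comes back to it, a cycle (deadlock) is declared."""
    frontier = [(initiator, dep) for dep in wfg.get(initiator, ())]
    forwarded = set()
    while frontier:
        sender, receiver = frontier.pop()
        if receiver == initiator:
            return True                  # matching probe returned: deadlock
        if receiver in forwarded:
            continue                     # this site already forwarded the probe
        forwarded.add(receiver)
        frontier.extend((receiver, dep) for dep in wfg.get(receiver, ()))
    return False

print(edge_chasing_deadlock({"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}, "P1"))  # True
```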
Process Management
Process migration
Freeze the process on the source node and restart it at the destination
node
Transfer of the process address space
Forwarding messages meant for the migrant process
Handling communication between cooperating processes separated as
a result of migration
Handling child processes
Process migration in heterogeneous systems
Mosix: File Access
Dynamic Load Balancing
Dynamic Load Balancing on Highly Parallel Computers
Seek to minimize total execution time of a single application running in
parallel on a multiprocessor system
Sender Initiated Diffusion (SID), Receiver Initiated Diffusion(RID),
Hierarchical Balancing Method (HBM), Gradient Model (GM),
Dynamic Exchange method (DEM)
Dynamic Load Balancing on Web Servers
Seek to improve response time using distributed web-server
architectures, by scheduling client requests among multiple nodes in a
transparent way
Client-based approach, DNS-Based approach, Dispatcher-based
approach, Server-based approach
Dynamic Load Balancing on Multimedia Servers
Aim to maximize requests and preserve QoS for admitted requests by
adaptively scheduling requests given knowledge of where objects are
placed
Adaptive Scheduling of Video Objects, Predictive Placement of
Video Objects
Distributed File Systems (DFS)
DFS is a distributed implementation of the classical file system model
Issues - File and directory naming, semantics of file sharing
Important features of DFS
Transparency, Fault Tolerance
Implementation considerations
caching, replication, update protocols
The general principles of DFS design: know that clients have cycles to burn,
cache whenever possible, exploit usage properties, minimize system-wide
change, trust the fewest possible entities, and batch if possible.
File Sharing Semantics
One-copy semantics
Updates are written to the single copy and are available
immediately
Serializability
Transaction semantics (file locking protocols implemented - share
for read, exclusive for write).
Session semantics
Copy file on open, work on local copy and copy back on close
Example: Sun-NFS
Supports heterogeneous systems
Architecture
Server exports one or more directory trees for access by
remote clients
Clients access exported directory trees by mounting them to
the client local tree
Diskless clients mount exported directory to the root
directory
Protocols
Mounting protocol
Directory and file access protocol - stateless, no open-close
messages, full access path on read/write
Semantics - no way to lock files