SHRI RAMSWAROOP MEMORIAL
UNIVERSITY
MOBILE COMPUTING REPORT
Session: 2022-2023
Submitted to: Mr. Neelesh Mishra
Submitted by: Shivend Singh
201910101110037
B. Tech CS 72
Mutual Exclusion
Mutual exclusion is a concurrency control property introduced to prevent race conditions. It is the requirement that a process cannot enter its critical section while another concurrent process is currently executing in its own critical section, i.e. only one process is allowed to execute the critical section at any given instant of time.
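As an illustration of the shared-memory case, the following is a minimal Python sketch (the variable and function names are only illustrative) in which a lock guarantees that exactly one thread executes the critical section at a time:

import threading

counter = 0
lock = threading.Lock()   # only one thread may hold the lock at any instant

def increment_many(times):
    global counter
    for _ in range(times):
        with lock:        # critical section: exactly one thread executes this block at a time
            counter += 1

t1 = threading.Thread(target=increment_many, args=(100000,))
t2 = threading.Thread(target=increment_many, args=(100000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)            # always 200000; without the lock, a race condition could lose updates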
Mutual Exclusion in Distributed Systems
In distributed systems, we have neither shared memory nor a common physical clock, and therefore we cannot solve the mutual exclusion problem using shared variables. Instead, an approach based on message passing is used to solve the mutual exclusion problem in distributed systems.
A site in a distributed system does not have complete information about the state of the system because of this lack of shared memory and a common physical clock.
Requirements of Mutual Exclusion Algorithm
• No Deadlock: Two or more sites should not endlessly wait for messages that will never arrive.
• No Starvation: Every site that wants to execute the critical section should get an opportunity to do so in finite time. No site should wait indefinitely to execute the critical section while other sites repeatedly execute it.
• Fairness: Each site should get a fair chance to execute the critical section. Requests to execute the critical section must be granted in the order they are made, i.e. critical section execution requests should be served in the order of their arrival in the system.
• Fault Tolerance: In case of a failure, the algorithm should be able to detect it on its own and continue functioning without any disruption.
Solution to Distributed Mutual Exclusion
As noted above, shared variables or a local kernel cannot be used to implement mutual exclusion in distributed systems; message passing is the way to do it. Below are the three message-passing approaches used to implement mutual exclusion in distributed systems:
a. Token Based Algorithm
• A unique token is shared among all the sites.
• A site is allowed to enter its critical section only if it possesses the token.
• This approach uses sequence numbers to order requests for the critical section.
• Each request for the critical section carries a sequence number, which is used to distinguish old requests from current ones (see the sketch after this list).
• This approach ensures mutual exclusion because the token is unique.
• Example: Suzuki-Kasami’s Broadcast Algorithm
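The following is a minimal, single-process Python sketch of the token-holding idea, loosely modelled on Suzuki-Kasami; the class and function names are illustrative and real message passing is omitted, so this is not the full algorithm:

class Token:
    def __init__(self, n):
        self.ln = [0] * n          # sequence number of the last granted request of each site
        self.queue = []            # sites still waiting for the token

class Site:
    def __init__(self, sid, n):
        self.id = sid
        self.rn = [0] * n          # highest request sequence number seen from each site
        self.token = None          # a site may enter its critical section only while it holds the token

def broadcast_request(requester, sites):
    # The requester issues a new numbered request; every site records the number,
    # ignoring anything older (smaller) that may still be in transit.
    requester.rn[requester.id] += 1
    seq = requester.rn[requester.id]
    for s in sites:
        s.rn[requester.id] = max(s.rn[requester.id], seq)
    return seq

def release_token(holder, sites, token):
    # On leaving the critical section, mark the holder's request as satisfied and
    # pass the unique token to some site whose latest request is still outstanding.
    token.ln[holder.id] = holder.rn[holder.id]
    for s in sites:
        if s is not holder and s.rn[s.id] > token.ln[s.id]:
            holder.token, s.token = None, token
            return s
    return holder                  # nobody is waiting, so the holder keeps the token

# Usage: site 0 starts with the token, site 1 requests it and receives it.
sites = [Site(i, 2) for i in range(2)]
token = Token(2)
sites[0].token = token
broadcast_request(sites[1], sites)
new_holder = release_token(sites[0], sites, token)
print(new_holder.id)               # 1 -- only site 1 may enter the critical section now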
b. Non-token Based Approach
• A site communicates with the other sites to determine which site should execute the critical section next. This requires the exchange of two or more successive rounds of messages among the sites.
• This approach uses timestamps instead of sequence numbers to order requests for the critical section.
• Whenever a site makes a request for the critical section, it obtains a timestamp. The timestamp is also used to resolve any conflict between critical section requests.
• All algorithms that follow the non-token based approach maintain a logical clock. Logical clocks are updated according to Lamport’s scheme (a sketch of these clock rules follows this list).
• Example: Lamport’s algorithm, Ricart–Agrawala algorithm.
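A minimal Python sketch of Lamport’s logical clock rules, which these algorithms rely on for timestamping requests, is given below (the names are illustrative):

class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        # Rule 1: increment before any local event, e.g. issuing a request.
        self.time += 1
        return self.time

    def update(self, received_timestamp):
        # Rule 2: on receiving a message, advance past the sender's timestamp.
        self.time = max(self.time, received_timestamp) + 1
        return self.time

# Usage: site A issues a request at time 1; when B receives it, B's clock jumps ahead.
a, b = LamportClock(), LamportClock()
ts_a = a.tick()
print(ts_a, b.update(ts_a))        # 1 2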
c. Quorum Based Approach
• Instead of requesting permission to execute the critical section from all other sites, each site requests it only from a subset of sites, called a quorum.
• Any two quorums contain at least one common site.
• This common site is responsible for ensuring mutual exclusion (a small illustration follows this list).
• Example: Maekawa’s Algorithm
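The following small Python illustration uses majority quorums to show the key property that any two quorums share at least one site; the site set and quorum construction here are only an example, not Maekawa’s actual quorum scheme:

from itertools import combinations

sites = {1, 2, 3, 4, 5}
quorum_size = len(sites) // 2 + 1                     # majority quorum, size 3 here
quorums = [set(q) for q in combinations(sites, quorum_size)]

# Every pair of majority quorums overlaps in at least one site, and that common
# site can serialize conflicting critical section requests.
print(all(q1 & q2 for q1, q2 in combinations(quorums, 2)))   # True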
Lamport’s Algorithm for Mutual Exclusion
Lamport’s distributed mutual exclusion algorithm is a permission-based algorithm proposed by Lamport as an illustration of his clock synchronization scheme for distributed systems.
In permission-based algorithms, a timestamp is used to order critical section requests and to resolve any conflict between requests.
In Lamport’s algorithm, critical section requests are executed in increasing order of timestamps, i.e. a request with a smaller timestamp is given permission to execute the critical section before a request with a larger timestamp.
In this algorithm:
• Three types of messages (REQUEST, REPLY and RELEASE) are used, and communication channels are assumed to follow FIFO order.
• A site sends a REQUEST message to all other sites to get their permission to enter the critical section.
• A site sends a REPLY message to a requesting site to give its permission to enter the critical section.
• A site sends a RELEASE message to all other sites upon exiting the critical section.
• Every site Si keeps a queue to store critical section requests ordered by their timestamps; request_queuei denotes the queue of site Si.
• A timestamp is assigned to each critical section request using Lamport’s logical clock.
• The timestamp is used to determine the priority of critical section requests: a smaller timestamp gets higher priority than a larger one, and critical section requests are always executed in the order of their timestamps.
Algorithm:
To enter the critical section
• When a site Si wants to enter the critical section, it sends a request message REQUEST(tsi, i) to all other sites and places the request on request_queuei. Here, tsi denotes the timestamp of site Si’s request.
• When a site Sj receives the request message REQUEST(tsi, i) from site Si, it returns a timestamped REPLY message to site Si and places the request of site Si on request_queuej.
To execute the critical section
• A site Si can enter the critical section once it has received a message with timestamp larger than (tsi, i) from every other site and its own request is at the top of request_queuei.
To release the critical section
• When a site Si exits the critical section, it removes its own request from the top of its request queue and sends a timestamped RELEASE message to all other sites.
• When a site Sj receives the timestamped RELEASE message from site Si, it removes the request of Si from its request queue.
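The following is a compact, single-process Python simulation of this bookkeeping. Message passing is modelled as direct method calls delivered in order, and the class, method and variable names are illustrative, so this is a sketch of the algorithm’s steps rather than a real networked implementation:

import heapq

class Site:
    def __init__(self, sid, all_ids):
        self.id = sid
        self.clock = 0                                # Lamport logical clock
        self.others = [i for i in all_ids if i != sid]
        self.queue = []                               # request_queue_i: (timestamp, site_id) min-heap
        self.last_seen = {}                           # highest REPLY timestamp received from each site

    def request_cs(self, sites):
        # Send REQUEST(ts_i, i) to all other sites and place it on the local queue.
        self.clock += 1
        req = (self.clock, self.id)
        heapq.heappush(self.queue, req)
        for j in self.others:
            sites[j].on_request(req, sites)
        return req

    def on_request(self, req, sites):
        # On REQUEST: enqueue it and return a timestamped REPLY.
        ts, i = req
        self.clock = max(self.clock, ts) + 1
        heapq.heappush(self.queue, req)
        sites[i].on_reply((self.clock, self.id))

    def on_reply(self, reply):
        ts, j = reply
        self.clock = max(self.clock, ts) + 1
        self.last_seen[j] = max(self.last_seen.get(j, 0), ts)

    def can_enter_cs(self, my_req):
        # Enter when the own request heads the queue and every other site has sent
        # back a message with a larger timestamp (only REPLY messages are tracked here).
        ts, _ = my_req
        heads_queue = bool(self.queue) and self.queue[0] == my_req
        all_replied = all(self.last_seen.get(j, 0) > ts for j in self.others)
        return heads_queue and all_replied

    def release_cs(self, my_req, sites):
        # Remove the own request and send a timestamped RELEASE to all other sites.
        self.queue.remove(my_req)
        heapq.heapify(self.queue)
        self.clock += 1
        for j in self.others:
            sites[j].on_release(my_req, self.clock)

    def on_release(self, req, ts):
        self.clock = max(self.clock, ts) + 1
        self.queue.remove(req)
        heapq.heapify(self.queue)

# Usage: site 0 requests, both replies arrive, it enters and then releases the critical section.
sites = {i: Site(i, [0, 1, 2]) for i in range(3)}
req = sites[0].request_cs(sites)
print(sites[0].can_enter_cs(req))                     # True
sites[0].release_cs(req, sites)
print(len(sites[0].queue), len(sites[1].queue))       # 0 0: the RELEASE removed the request everywhere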
Message Complexity
Lamport’s algorithm requires 3(N – 1) messages per critical section execution. These 3(N – 1) messages comprise:
• (N – 1) REQUEST messages
• (N – 1) REPLY messages
• (N – 1) RELEASE messages
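For example, with N = 5 sites a single critical section execution needs 4 REQUEST, 4 REPLY and 4 RELEASE messages, i.e. 3 × (5 – 1) = 12 messages in total.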
Drawbacks of Lamport’s Algorithm:
• Unreliable approach: the failure of any one process can halt the progress of the entire system.
• High message complexity: the algorithm requires 3(N – 1) messages per critical section invocation.
Performance:
• Synchronization delay is equal to the maximum message transmission time.
• It requires 3(N – 1) messages per CS execution.
• The algorithm can be optimized to 2(N – 1) messages by omitting the REPLY message in some situations, namely when site Sj has already sent its own REQUEST with a timestamp larger than tsi, so that Si can infer Sj’s permission without an explicit REPLY.