
DISTRIBUTED COMPUTING

LABORATORY MANUAL

Lab Manual
Final Year Semester-VII

Odd Semester

1
Institutional Vision, Mission and Quality Policy

Our Vision

To foster and permeate higher and quality education with value-added engineering and technology programs, providing all facilities in terms of technology and platforms for all-round development with societal awareness, and to nurture the youth with international competencies and an exemplary level of employability even in a highly competitive environment, so that they are innovative, adaptable and capable of handling the problems faced by our country and the world at large.

RAIT’s firm belief in a new form of engineering education that lays equal stress on academics and leadership-building extracurricular skills has been a major contributor to the success of RAIT as one of the most reputed institutions of higher learning. The challenges faced by our country and the world in the 21st century need a whole new range of thought and action leaders, which a conventional educational system in the engineering disciplines is ill-equipped to produce. Our reputation for providing good engineering education with additional life skills ensures that high-grade and highly motivated students join us. Our laboratories and practical sessions reflect the latest practices followed in industry. The project work and summer projects make our students adept at handling real-life problems and industry ready. Our students are well placed in industry, and their performance makes reputed companies visit us with renewed demand and vigour.

Our Mission
The Institution is committed to mobilizing the resources and equipping itself with men and materials of excellence, thereby ensuring that the Institution becomes a pivotal center of service to industry, academia and society with the latest technology. RAIT engages different platforms such as technology-enhancing Student Technical Societies, cultural platforms, sports excellence centers, an Entrepreneurial Development Center and a Societal Interaction Cell. The Institute aims to develop into an autonomous institution and deemed university at the earliest, with facilities for advanced research and development programs on par with international standards, and to invite reputed international and national institutions and universities to collaborate with it on issues of common interest in teaching and learning.

RAIT’s Mission is to produce engineering and technology professionals who are innovative and
inspiring thought leaders, adept at solving problems faced by our nation and world by providing
quality education.

The Institute is working closely with all stakeholders, such as industry and academia, to foster knowledge generation, acquisition and dissemination using the best available resources to address the great challenges being faced by our country and the world. RAIT is fully dedicated to providing its students with skills that make them leaders and solution providers, and industry ready when they graduate from the Institution.

We at RAIT assure our main stakeholders, the students, of 100% quality for the programmes we deliver. This quality assurance stems from the teaching and learning processes we have at work on our campus and from the teachers, who are handpicked from reputed institutions (IIT/NIT/MU, etc.) and who inspire the students to be innovative in thinking and practical in approach. We have installed internal procedures to better the skill sets of instructors by sending them to training courses, workshops, seminars and conferences. We also have a full-fledged course curriculum, with deliveries planned in advance for a structured semester-long programme. We have a well-developed feedback system involving employers, alumni, students and parents to fine-tune the learning and teaching processes. These tools help us ensure the same quality of teaching independent of any individual instructor. Each classroom is equipped with Internet access and other digital learning resources.

The effective learning process on the campus comprises a clean and stimulating classroom environment and the availability of lecture notes and digital resources prepared by the instructor, accessible from the comfort of home. In addition, students are provided with a good number of assignments that trigger their thinking process. The testing process involves an objective test paper that gauges the students' understanding of concepts. The quality assurance process also ensures that the learning process is effective. Summer internships and project-based training ensure that the learning process includes practical and industry-relevant aspects. Various technical events, seminars and conferences make the students' learning complete.

3
Our Quality Policy

Our Quality Policy

It is our earnest endeavour to produce high quality engineering professionals who are
innovative and inspiring, thought and action leaders, competent to solve problems
faced by society, nation and world at large by striving towards very high standards in
learning, teaching and training methodologies.

Our Motto: If it is not of quality, it is NOT RAIT!

4
Departmental Vision, Mission

Vision
To impart higher and quality education in computer science with value-added engineering and technology programs, to prepare technically sound, ethically strong engineers with social awareness, and to extend the facilities to meet fast-changing requirements and nurture the youth with international competencies and an exemplary level of employability and research under highly competitive environments.

Mission
To mobilize the resources and equip the institution with men and materials of excellence to provide knowledge and develop technologies in the thrust areas of computer science and engineering. To provide diverse platforms of sports, technical, co-curricular and extracurricular activities for the overall development of students with an ethical attitude. To prepare the students to sustain the impact of computer education for social needs encompassing industry, educational institutions and public service. To collaborate with IITs, reputed universities and industries for the technical and overall upliftment of students and for continued learning and entrepreneurship.

5
Departmental Program Educational Objectives
(PEOs)

1. Learn and Integrate


To provide Computer Engineering students with a strong foundation in the mathematical,
scientific and engineering fundamentals necessary to formulate, solve and analyze
engineering problems and to prepare them for graduate studies.

2. Think and Create


To develop an ability to analyze the requirements of the software and hardware, understand
the technical specifications, create a model, design, implement and verify a computing system
to meet specified requirements while considering real-world constraints to solve real world
problems.

3. Broad Base
To provide broad education necessary to understand the science of computer engineering and
the impact of it in a global and social context.

4. Techno-leader
To provide exposure to emerging cutting edge technologies, adequate training &
opportunities to work as teams on multidisciplinary projects with effective communication
skills and leadership qualities.

5. Practice citizenship
To provide knowledge of professional and ethical responsibility and to contribute to society
through active engagement with professional societies, schools, civic organizations or other
community activities.

6. Clarify Purpose and Perspective


To provide strong in-depth education through electives and to promote student awareness of life-long learning, so that they can adapt to innovation and change and be successful in their professional work or graduate studies.

6
Departmental Program Outcomes (POs)

PO1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering


fundamentals, and an engineering specialization to the solution of complex
engineering problems.
PO2. Problem analysis: Identify, formulate, review research literature, and analyze
complex engineering problems reaching substantiated conclusions using first
principles of mathematics, natural sciences, and engineering sciences.
PO3. Design/development of solutions: Design solutions for complex engineering
problems and design system components or processes that meet the specified needs
with appropriate consideration for the public health and safety, and the cultural,
societal, and environmental considerations.
PO4. Conduct investigations of complex problems: Use research-based knowledge and
research methods including design of experiments, analysis and interpretation of data,
and synthesis of the information to provide valid conclusions.
PO5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modeling to complex
engineering activities with an understanding of the limitations.
PO6. The engineer and society: Apply reasoning informed by the contextual knowledge to
assess societal, health, safety, legal and cultural issues and the consequent
responsibilities relevant to the professional engineering practice.
PO7. Environment and sustainability: Understand the impact of the professional
engineering solutions in societal and environmental contexts, and demonstrate the
knowledge of, and need for sustainable development.
PO8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities
and norms of the engineering practice.
PO9. Individual and team work: Function effectively as an individual, and as a member
or leader in diverse teams, and in multidisciplinary settings.
PO10. Communication: Communicate effectively on complex engineering activities with
the engineering community and with society at large, such as, being able to

comprehend and write effective reports and design documentation, make effective
presentations, and give and receive clear instructions.
PO11. Project management and finance: Demonstrate knowledge and understanding of
the engineering and management principles and apply these to one’s own work, as a
member and leader in a team, to manage projects and in multidisciplinary
environments.
PO12. Life-long learning: Recognize the need for, and have the preparation and ability to
engage in independent and life-long learning in the broadest context of technological
change.

Program Specific Outcomes:

PSO1: To build competencies towards problem solving, with an ability to understand, identify, analyze and design the problem, and to implement and validate the solution, covering both hardware and software.

PSO2: To build an appreciation and knowledge of current computing techniques, with an ability to use the skills and tools necessary for computing practice.

PSO3: To be able to match the industry requirements in the area of computer science and engineering, and to acquire the skills to adopt and imbibe new technologies.

8
Index

Sr. No.  Contents
1.   Experiment No. 1
2.   Experiment No. 2
3.   Experiment No. 3
4.   Experiment No. 4
5.   Experiment No. 5
6.   Experiment No. 6
7.   Experiment No. 7
8.   Experiment No. 8
9.   Experiment No. 9
10.  Experiment No. 10
11.  Experiment No. 11
9
List of Experiments

Sr. No.  Experiment Name                                                                   Course Outcome
1        Program to demonstrate datagram Socket for Chat Application using Java            CO1
2        Program to implement RMI Application using Java                                   CO2
3        Program to demonstrate Bully Election Algorithm using Java                        CO3
4        Program to demonstrate Berkeley Clock Synchronization Algorithm using Java        CO3
5        Program to implement Token Ring Algorithm for distributed mutual exclusion        CO4
6        Program to implement Non-Token based Algorithm for distributed mutual exclusion   CO4
7        Program to Simulate Load Balancing Algorithm using Java                           CO5
8        Program to implement Deadlock management in distributed system                    CO6
9        Case Study (out of syllabus)                                                      --

10
Experiment Plan, Course Objectives &
Course Outcome
Course Objectives:

1. To provide students with contemporary knowledge in distributed systems.

2. To equip students with skills to analyze and design distributed applications.

3. To provide mastery of the skills needed to measure the performance of distributed synchronization algorithms.

Course Outcomes:

CO1  Demonstrate knowledge of the basic elements and concepts related to distributed system technologies.
CO2  Illustrate the middleware technologies that support distributed applications such as RPC, RMI and object-based middleware.
CO3  Analyze the various techniques used for clock synchronization and mutual exclusion.
CO4  Demonstrate the concepts of resource and process management and scheduling algorithms.
CO5  Demonstrate the concepts of consistency and replication management.
CO6  Apply the knowledge of Distributed File Systems to analyze various file systems like NFS and AFS, and gain experience in building large-scale distributed applications.

11
Module No.  Week  Experiment Name                                                                   Course Outcome
1           W1    Program to demonstrate datagram Socket for Chat Application using Java            CO1
2           W2    Program to implement RMI Application using Java                                   CO2
3           W3    Program to demonstrate Bully Election Algorithm using Java                        CO3
4           W4    Program to demonstrate Berkeley Clock Synchronization Algorithm using Java        CO3
5           W5    Program to implement Token Ring Algorithm for distributed mutual exclusion        CO4
6           W6    Program to implement Non-Token based Algorithm for distributed mutual exclusion   CO4
7           W7    Program to Simulate Load Balancing Algorithm using Java                           CO5
8           W8    Program to implement Deadlock management in distributed system                    CO6
9           W9    Case Study (out of syllabus)                                                      --
12
Mapping of Course Outcomes with Program Outcomes

Subject weight and the contribution of each course outcome to the Program Outcomes PO1-PO12:

THEORY (20%)
- Demonstrate knowledge of the basic elements and concepts related to distributed system technologies: 3 3 1 1 2
- Illustrate the middleware technologies that support distributed applications such as RPC, RMI and Object based middleware: 1 1 2 2 2 1 1
- Analyze the various techniques used for clock synchronization and mutual exclusion: 1 1 2 2 1 1 1 1
- Demonstrate the concepts of Resource and Process management and Scheduling algorithms: 1 1 2 2 1 1 1 1
- Demonstrate the concepts of Consistency and Replication Management: 1 1 2 2 1 1 1 1
- Apply the knowledge of Distributed File System to analyze various file systems like NFS, AFS and the experience in building large-scale distributed applications: 1 1 2 2 1 1 1 1

PRACTICAL / ORAL + TERM WORK (80%)
- The same six course outcomes carry the same contribution values as listed above.

13
Mapping of Course outcomes with Program Specific outcomes:

Contribution to the Program Specific Outcomes (PSO1-PSO3):

CO1  Demonstrate knowledge of the basic elements and concepts related to distributed system technologies: 3, 2
CO2  Illustrate the middleware technologies that support distributed applications such as RPC, RMI and Object based middleware: 2, 3, 2
CO3  Analyze the various techniques used for clock synchronization and mutual exclusion: 3, 2, 1
CO4  Demonstrate the concepts of Resource and Process management and Scheduling algorithms: 3, 2, 2
CO5  Demonstrate the concepts of Consistency and Replication Management: 3, 2, 2
CO6  Apply the knowledge of Distributed File System to analyze various file systems like NFS, AFS and the experience in building large-scale distributed applications: 3, 2, 2

14
Mapping Course Outcomes (CO) -
Program Outcomes (PO)
Study and Evaluation Scheme
Course Code: CSL802
Course Name: DISTRIBUTED SYSTEM LAB

Teaching Scheme: Theory --, Practical 02, Tutorial --
Credits Assigned: Theory --, Practical 01, Tutorial --, Total 01

Examination Scheme: Term Work 25, Oral 25, Total 50

Term Work:

Laboratory work will be based on the above syllabus, with a minimum of 10 experiments to be incorporated.

Laboratory work (experiments): 15 marks
Assignments: 05 marks
Attendance (Theory + Practical): 05 marks
TOTAL: 25 marks

Practical Exam:

The exam will be based on the above syllabus.

15
Distributed Computing

Experiment No. : 1

Program to demonstrate datagram


Socket for Chat Application using Java

16
Experiment No. 1
1. Aim: Program to demonstrate datagram Socket for Chat Application using Java

2. Objectives:

 To understand fundamental concepts of computer communication


 To understand sockets and ports

3. Outcomes:

From this experiment, the student will be able to


 Demonstrate knowledge of the basic elements and concepts related to
distributed system technologies
 Understand different models of distributed system and discuss the challenges and
opportunities faced by them.

4. Hardware/Software Required : JDK 1.6

5. Theory:

Socket: an interface between an application process and the transport layer. The application process can send/receive messages to/from another application process (local or remote) via a socket.

17
Client/Server Communication

At a basic level, network-based systems consist of a server, a client, and a medium for communication, as shown in Figure 1. A computer running a program that makes requests for services is called the client machine. A computer running a program that offers the requested services to one or more clients is called the server machine. The medium for communication can be a wired or wireless network.

Figure 1: Client/server communication over a network.

6. Procedure/ Program:

TCP client algorithm :

1. Find IP address and protocol number of server


2. Allocate a socket
3. Specify that the connection needs an arbitrary, unused protocol port on local
machine and allow TCP to select one
4. Connect the socket to the server
5. Communicate with the server using application-level protocol
6. Close the connection

TCP Server Algorithm:

1. Create a socket and bind to the well-known address for the service being offered
2. Place the socket in passive mode
3. Accept the next connection request from the socket, and obtain a new socket for the
connection
4. Repeatedly read a request from the client, formulate a response, and send a reply
back to the client according to the application protocol
5. When finished with a particular client, close the connection and return to step 3 to
accept a new connection
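
The algorithms above describe the connection-oriented (TCP) variant. Since the aim of this experiment is a chat application over a datagram (UDP) socket, the following is a minimal sketch of one chat peer in Java using the standard java.net DatagramSocket API; the host name "localhost" and the port numbers 5000 and 6000 are assumptions, and running a second copy with the two ports swapped gives the other end of the chat.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.util.Scanner;

// Minimal UDP chat peer: listens on LOCAL_PORT and sends typed lines to REMOTE_PORT.
public class UdpChatPeer {
    static final int LOCAL_PORT = 5000;            // assumed: port this peer receives on
    static final int REMOTE_PORT = 6000;           // assumed: port of the other peer
    static final String REMOTE_HOST = "localhost"; // assumed: host of the other peer

    public static void main(String[] args) throws Exception {
        final DatagramSocket socket = new DatagramSocket(LOCAL_PORT);

        // Receiver thread: prints every datagram that arrives.
        Thread receiver = new Thread(new Runnable() {
            public void run() {
                byte[] buf = new byte[1024];
                try {
                    while (true) {
                        DatagramPacket packet = new DatagramPacket(buf, buf.length);
                        socket.receive(packet);
                        System.out.println("Friend: "
                                + new String(packet.getData(), 0, packet.getLength()));
                    }
                } catch (Exception e) {
                    // socket closed; stop receiving
                }
            }
        });
        receiver.setDaemon(true);
        receiver.start();

        // Sender loop: each line typed on the console is sent as one datagram.
        InetAddress remote = InetAddress.getByName(REMOTE_HOST);
        Scanner in = new Scanner(System.in);
        while (in.hasNextLine()) {
            String line = in.nextLine();
            byte[] data = line.getBytes();
            socket.send(new DatagramPacket(data, data.length, remote, REMOTE_PORT));
            if (line.equals("bye")) break;
        }
        socket.close();
    }
}

Because UDP is connectionless, each message stands alone; nothing in this sketch retransmits lost datagrams or reorders them.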

7. Conclusion and Discussion:

Hence we conclude the following about socket programming:

• A TCP (stream) socket is connection-based, provides guaranteed reliable and ordered delivery, automatically breaks your data into packets for you, and applies flow control so that data is not sent faster than the internet connection can handle.
• It is easy to use: you just read and write data much as you would a file.
• A datagram (UDP) socket, as used for this chat application, is connectionless and does not guarantee delivery or ordering; each chat message travels as an independent packet.

8. QUIZ / Viva Questions:


 What is socket programming?
 Explain in brief how chat application is working.

9. References:
 Andrew S. Tanenbaum and Maarten Van Steen, “Distributed Systems: Principles and
Paradigms, 2nd edition, Pearson Education, Inc., 2007, ISBN: 0-13-239227-5.
 George Coulouris, Jean Dollimore, Tim Kindberg, “Distributed Systems: Concepts
and Design” (4th Edition), Addison Wesley/Pearson Education.

 Pradeep K Sinha, “Distributed Operating Systems : Concepts and design”, IEEE
computer society press
 Java: The Complete Reference, Herbert Schildt

20
Distributed Computing

Experiment No. : 2

Program to implement RMI


Application using Java

21
Experiment No. 2
1. Aim: Program to implement RMI Application using Java.

2. Objective: Student will understand

 RMI (Remote Method Invocation) is an API that provides a mechanism to create
distributed applications in Java.
 RMI allows an object to invoke methods on an object running in another JVM.

3. Outcomes:

From this experiment, the student will be able to


 Implement the middleware technologies that support distributed applications
using RPC, RMI and object based middleware.

4. Hardware/Software Required : JDK 1.6

5. Theory:

Java Remote Method Invocation (Java RMI) enables the programmer to create distributed applications in which the methods of remote Java objects can be invoked from other Java virtual machines, possibly on different hosts. RMI uses object serialization to marshal and unmarshal parameters and does not truncate types, supporting true object-oriented polymorphism.

Java RMI is a mechanism that allows one to invoke a method on an object that exists in
another address space. The other address space could be on the same machine or on a
different one. The RMI mechanism is basically an object-oriented RPC mechanism.

There are three processes that participate in supporting RMI:

1. The Client is the process that is invoking a method on a remote object.


2. The Server is the process that owns the remote object. The remote object is an
ordinary object in the address space of the server process.
3. The Object Registry is a name server that relates objects with names. Objects
are registered with the Object Registry. Once an object has been registered, one can

use the Object Registry to obtain access to a remote object using the name of the
object.

6. Procedure/ Program:

Create Registry

1. Create your own interface which extends Remote interface.

2. Declare all the methods signature in it

3. Save and start registry.

Create Sever Program

1. Define a class Server which extends UnicastRemoteObject class and


implements an interface.

2. Define all the methods declared in own interface.

3. Create registry and bind server with it.

Create Client Program

1. Define a class Client.

2. Create instance of an interface.

3. Lookup the registry for a server

4. Make a call to methods using instance of an interface.
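
As a concrete illustration of the steps above, the sketch below shows a minimal RMI interface, server and client in Java; the service name "hello", the method sayHello and the use of port 1099 (the default registry port) are assumptions made for this example.

import java.rmi.Naming;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.server.UnicastRemoteObject;

// Remote interface: extends Remote, and every remote method declares RemoteException.
interface HelloService extends Remote {
    String sayHello(String name) throws RemoteException;
}

// Server: extends UnicastRemoteObject, implements the interface, creates the registry and binds itself.
class HelloServer extends UnicastRemoteObject implements HelloService {
    protected HelloServer() throws RemoteException { super(); }

    public String sayHello(String name) throws RemoteException {
        return "Hello " + name + ", this ran in the server JVM";
    }

    public static void main(String[] args) throws Exception {
        LocateRegistry.createRegistry(1099);                    // start the registry in this JVM
        Naming.rebind("rmi://localhost:1099/hello", new HelloServer());
        System.out.println("HelloService is bound; waiting for remote calls...");
    }
}

// Client: looks the server up in the registry and invokes the remote method.
class HelloClient {
    public static void main(String[] args) throws Exception {
        HelloService service = (HelloService) Naming.lookup("rmi://localhost:1099/hello");
        System.out.println(service.sayHello("RAIT"));           // the method body executes on the server
    }
}

Save the interface and the two classes in separate source files, run HelloServer in one terminal and then HelloClient in another.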

7. Conclusion and Discussion:


• Hence we have studied and implemented Remote Method Invocation, which supports distributed computing in Java.

8. QUIZ / Viva Questions:

 What is the application of RMI in distributed computing?


 Explain RMI mechanism.

23
9. References:
 Andrew S. Tanenbaum and Maarten Van Steen, “Distributed Systems: Principles and
Paradigms, 2nd edition, Pearson Education, Inc., 2007, ISBN: 0-13-239227-5.
 George Coulouris, Jean Dollimore, Tim Kindberg, “Distributed Systems: Concepts
and Design” (4th Edition), Addison Wesley/Pearson Education.
 Pradeep K Sinha, “Distributed Operating Systems : Concepts and design”, IEEE
computer society press
 JAVA Complete Reference, Herbert Schildt.

24
Distributed Computing

Experiment No. : 3

Program to demonstrate Bully Election


Algorithm using Java

25
Experiment No. 3
1. Aim: Program to demonstrate Bully Election Algorithm using Java
2. Objective:

 To understand the election algorithm .


 To understand the principle and application of the bully algorithm

3. Outcomes:

From this experiment, the student will be able to learn


 Analyze the various techniques used for clock synchronization and mutual
exclusion

4. Software Required: JDK 1.6


5. Theory

Communication in networks is implemented by a process on one machine communicating with a process on another machine. A distributed algorithm is an algorithm, run on a distributed system, that does not assume the previous existence of a central coordinator. A distributed system is a collection of processors that do not share memory or a clock; each processor has its own memory, and the processors communicate via communication networks. Two classic problems requiring distributed algorithms are the coordinator election problem and the value agreement problem (the Byzantine generals problem).

Election Algorithms

The coordinator election problem is to choose a process from among a group of processes on different processors in a distributed system to act as the central coordinator. An election algorithm is an algorithm for solving the coordinator election problem; by the nature of the problem, any election algorithm must be a distributed algorithm.

- A group of processes on different machines needs to choose a coordinator.
- Peer-to-peer communication: every process can send messages to every other process.
- Assume that processes have unique IDs, such that one is highest.
- Assume that the priority of process Pi is i.

6. Procedure/Program

Any process Pi sends a message to the current coordinator; if there is no response within T time units, Pi tries to elect itself as leader.

Algorithm for process Pi that detected the lack of a coordinator:

1. Process Pi sends an “Election” message to every process with higher priority.
2. If no other process responds, process Pi starts the coordinator code running and sends a message to all processes with lower priorities saying “Elected Pi”.
3. Else, Pi waits for T’ time units to hear from the new coordinator, and if there is no response it starts again from step (1).

Algorithm for the other processes (each also called Pi):

If Pi is not the coordinator, then Pi may receive either of these messages from Pj:

If Pj sends “Elected Pj” [this message is only received if i < j],
Pi updates its records to say that Pj is the coordinator.

Else, if Pj sends an “Election” message (i > j),
Pi sends a response to Pj saying it is alive, and
Pi then starts an election of its own.
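
A single-JVM simulation of this message exchange is sketched below; the number of processes, which process has crashed and which process initiates the election are assumptions chosen for the demonstration, and messages are modelled as method calls and console output rather than real network traffic.

import java.util.ArrayList;
import java.util.List;

// Simulates the Bully election among processes P1..PN (a higher ID means higher priority).
public class BullyElection {
    static final int N = 5;
    static boolean[] alive = new boolean[N + 1];    // alive[i] is true if process Pi is up

    public static void main(String[] args) {
        for (int i = 1; i <= N; i++) alive[i] = true;
        alive[5] = false;                            // assumed: the old coordinator P5 has crashed

        int initiator = 2;                           // assumed: P2 notices the coordinator is down
        int coordinator = startElection(initiator);
        System.out.println("New coordinator: P" + coordinator);
    }

    static int startElection(int pi) {
        System.out.println("P" + pi + " starts an election");
        List<Integer> responders = new ArrayList<Integer>();
        for (int pj = pi + 1; pj <= N; pj++) {       // ELECTION goes only to higher-priority processes
            if (alive[pj]) {
                System.out.println("P" + pj + " replies to P" + pi + " and takes over the election");
                responders.add(pj);
            }
        }
        if (responders.isEmpty()) {                  // nobody higher answered, so Pi wins
            for (int pj = 1; pj < pi; pj++)
                if (alive[pj]) System.out.println("P" + pi + " -> P" + pj + ": Elected P" + pi);
            return pi;
        }
        // The lowest higher-priority responder restarts the election in turn.
        return startElection(responders.get(0));
    }
}

With P5 marked as crashed, the simulation ends with P4, the highest surviving process, announcing itself as coordinator.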

7. Conclusion

Hence, from the above experiment, the student understood the working of the Bully election algorithm for coordinator election in a distributed system. This method requires at most five stages, and the probability of detecting a crashed process during the execution of the algorithm is lower than in other algorithms.

27
8. QUIZ / Viva Questions:
 Explain Bully algorithm in brief.
 What are different election algorithms?

9. References:
 M.R. Bhujade, “Parallel Computing”, 2nd edition, New Age International
Publishers 2009.
 Andrew S. Tanenbaum and Maarten Van Steen, “Distributed Systems:
Principles and Paradigms, 2nd edition, Pearson Education, Inc., 2007, ISBN:
0-13-239227-5.
 George Coulouris, Jean Dollimore, Tim Kindberg, “Distributed Systems:
Concepts andDesign” (4th Edition), Addison Wesley/Pearson Education.
 Pradeep K Sinha, “Distributed Operating Systems : Concepts and design”,
IEEE computer society press
 Java: The Complete Reference, Herbert Schildt.

28
Distributed Computing

Experiment No. : 4

Program to demonstrate Berkeley


Clock Synchronization Algorithm using
Java

29
Experiment No. 4
1. Aim: Program to demonstrate Berkeley Clock Synchronization Algorithm using
Java
2. Objective: Student will understand

 Clock synchronization deals with understanding the temporal ordering of


events produced by concurrent processes.
3. Outcomes:
From this experiment, the student will be able to learn
 Analyze the various techniques used for clock synchronization and mutual
exclusion

4. Software Required: JDK 1.6


5. Theory

The Berkeley algorithm is a method of clock synchronisation in distributed


computing which assumes no machine has an accurate time source. Computer
systems normally avoid rewinding their clock when they receive a negative
clock alteration from the master. Doing so would break the property of
monotonic time, which is a fundamental assumption in certain algorithms in
the system itself or in programs such as make. A simple solution to this
problem is to halt the clock for the duration specified by the master, but this
simplistic solution can also cause problems, although they are less severe. For
minor corrections, most systems slow the clock (known as "clock slew"),
applying the correction over a longer period of time.

6. Procedure/ Program:

Algorithm

Unlike Cristian's algorithm, the server process in the Berkeley algorithm, called the master, periodically polls the other (slave) processes. Generally speaking, the algorithm is as follows:
1. A master is chosen via an election process such as Chang and Roberts
algorithm.
2. The master polls the slaves who reply with their time in a similar way to
Cristian's algorithm.
3. The master observes the round-trip time (RTT) of the messages and estimates
the time of each slave and its own.
4. The master then averages the clock times, ignoring any values it receives far
outside the values of the others.
5. Instead of sending the updated current time back to the other process, the
master then sends out the amount (positive or negative) that each slave must
adjust its clock. This avoids further uncertainty due to RTT at the slave
processes.
With this method the average cancels out the individual clocks' tendencies to drift, as the sketch below illustrates.
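
A minimal sketch of the averaging and adjustment step follows; the clock readings are assumed sample values, and a full implementation would obtain them by polling the slaves over the network, compensate for the measured round-trip times, and discard outliers before averaging.

import java.util.Arrays;

// The master averages all clocks (its own plus the slaves') and sends each machine
// the adjustment it must apply, rather than an absolute time.
public class BerkeleySync {
    public static void main(String[] args) {
        // Clock readings in seconds past the hour: index 0 is the master, the rest are slaves (assumed values).
        double[] clocks = {3000.0, 3010.0, 2990.0, 3025.0};

        double sum = 0;
        for (double c : clocks) sum += c;
        double average = sum / clocks.length;        // outlier rejection omitted for brevity

        // Send each machine the (positive or negative) amount by which to adjust its clock.
        double[] adjustment = new double[clocks.length];
        for (int i = 0; i < clocks.length; i++) {
            adjustment[i] = average - clocks[i];
            System.out.printf("Machine %d: adjust clock by %+.2f s%n", i, adjustment[i]);
        }
        System.out.println("Agreed time: " + average + ", adjustments: " + Arrays.toString(adjustment));
    }
}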

7. Conclusion
A single physical reference clock is not present in a distributed system for synchronization. To achieve synchronization, the timestamps of all the machines are collected and the master broadcasts the appropriate clock adjustment to each of them using the Berkeley algorithm.

8. QUIZ / Viva Questions:


 What is clock synchronisation?
 Explain in brief Berkeley algorithm.

9. References:
 M.R. Bhujade, “Parallel Computing”, 2nd edition, New Age International Publishers
2009.
 Andrew S. Tanenbaum and Maarten Van Steen, “Distributed Systems: Principles and
Paradigms, 2nd edition, Pearson Education, Inc., 2007, ISBN: 0-13-239227-5.
 George Coulouris, Jean Dollimore, Tim Kindberg, “Distributed Systems: Concepts
and Design” (4th Edition), Addison Wesley/Pearson Education.
 Pradeep K Sinha, “Distributed Operating Systems : Concepts and design”, IEEE
computer society press
 Java: The Complete Reference, Herbert Schildt.

31
Distributed Computing

Experiment No. : 5

Program to implement Token Ring Algorithm for


distributed mutual exclusion

32
Experiment No. 5
1. Aim: Program to implement Token Ring Algorithm for distributed mutual
exclusion.

2. Objective:
Understand the different approach to achieving mutual exclusion in a distributed
system.

3. Outcomes:
From this experiment, the student will be able to learn
 Analyze the various techniques used for clock synchronization and mutual
exclusion
4. Software Required: JDK 1.6

5. Theory
Mutual exclusion:
 Concurrent access of processes to a shared resource or data is executed in mutually
exclusive manner.
 Only one process is allowed to execute the critical section (CS) at any given time.
 In a distributed system, shared variables (semaphores) or a local kernel cannot be
used to implement mutual exclusion.
 Two basic approaches for distributed mutual exclusion:
1. Token based approach
2. Non-token based approach

Token-based approach:

 A unique token is shared among the sites.


 A site is allowed to enter its CS if it possesses the token.
 Mutual exclusion is ensured because the token is unique
Non-token based approach:
 Two or more successive rounds of messages are exchanged among the sites to
determine which site will enter the CS next.

Suzuki Kasami Algorithm:

If a site wants to enter the CS and it does not have the token, it broadcasts a REQUEST message for
the token to all other sites. A site which possesses the token sends it to the requesting site upon the
receipt of its REQUEST message. If a site receives a REQUEST message when it is executing the
CS, it sends the token only after it has completed the execution of the CS.
This algorithm must efficiently address the following two design issues:
(1) How to distinguish an outdated REQUEST message from a current REQUEST message: Due to variable message delays, a site may receive a token request message after the corresponding request has been satisfied. If a site cannot determine whether the request corresponding to a token request has been satisfied, it may dispatch the token to a site that does not need it. This does not violate correctness; however, it may seriously degrade performance.
(2) How to determine which site has an outstanding request for the CS: After a site has finished the execution of the CS, it must determine which sites have an outstanding request for the CS so that the token can be dispatched to one of them.

6. Procedure/ Program:

Requesting the critical section
1. If the requesting site Si does not have the token, it increments its sequence number RNi[i] and sends a REQUEST(i, sn) message to all other sites (‘sn’ is the updated value of RNi[i]).
2. When a site Sj receives this message, it sets RNj[i] to max(RNj[i], sn). If Sj holds the idle token, it sends the token to Si if RNj[i] = LN[i] + 1.

Executing the critical section
3. Site Si executes the CS after it has received the token.

Releasing the critical section
Having finished the execution of the CS, site Si takes the following actions:
4. It sets the LN[i] element of the token array equal to RNi[i].
5. For every site Sj whose id is not in the token queue, it appends Sj's id to the token queue if RNi[j] = LN[j] + 1.
6. If the token queue is nonempty after the above update, Si deletes the top site id from the token queue and sends the token to the site indicated by that id.
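
The sketch below simulates the data structures of this algorithm (the RN array kept at each site, and the LN array plus waiting queue carried by the token) inside a single JVM; the number of sites, the initial token holder and the request order are assumptions, and message passing is replaced by direct updates of the shared arrays.

import java.util.ArrayDeque;
import java.util.Deque;

// Simplified Suzuki-Kasami simulation: RN[i][j] is the request number of site j as known at site i;
// the token carries LN[] and a queue of waiting site ids. Site 0 holds the token initially.
public class SuzukiKasami {
    static final int N = 3;
    static int[][] RN = new int[N][N];
    static int[] LN = new int[N];                              // carried by the token
    static Deque<Integer> tokenQueue = new ArrayDeque<Integer>(); // carried by the token
    static int tokenHolder = 0;

    public static void main(String[] args) {
        request(1);    // site 1 asks for the CS
        request(2);    // then site 2 asks for the CS
    }

    static void request(int i) {
        RN[i][i]++;                                            // step 1: increment own sequence number
        int sn = RN[i][i];
        for (int j = 0; j < N; j++)                            // "broadcast" REQUEST(i, sn)
            if (j != i) RN[j][i] = Math.max(RN[j][i], sn);

        // Step 2: the idle token holder releases the token if RN[holder][i] == LN[i] + 1.
        if (tokenHolder != i && RN[tokenHolder][i] == LN[i] + 1) {
            tokenHolder = i;
            executeCS(i);
        }
    }

    static void executeCS(int i) {
        System.out.println("Site " + i + " enters the critical section");
        LN[i] = RN[i][i];                                      // step 4: record that i's request is satisfied
        for (int j = 0; j < N; j++)                            // step 5: queue outstanding requesters
            if (!tokenQueue.contains(j) && RN[i][j] == LN[j] + 1) tokenQueue.add(j);
        if (!tokenQueue.isEmpty()) {                           // step 6: pass the token on
            int next = tokenQueue.poll();
            tokenHolder = next;
            executeCS(next);
        }
    }
}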

7. Conclusion:

Here we have studied the working of the Suzuki–Kasami algorithm.

8. QUIZ / Viva Questions:


 What is Mutual Exclusion in distributed operating system?
 Differentiate between Non-Token Based and Token Based algorithm?
 Explain the logic of the Suzuki–Kasami algorithm.

34
9. References:
 M.R. Bhujade, “Parallel Computing”, 2nd edition, New Age International
Publishers 2009.
 Andrew S. Tanenbaum and Maarten Van Steen, “Distributed Systems: Principles
and Paradigms, 2nd edition, Pearson Education, Inc., 2007, ISBN: 0-13-239227-5.
 George Coulouris, Jean Dollimore, Tim Kindberg, “Distributed Systems: Concepts
andDesign” (4th Edition), Addison Wesley/Pearson Education.
 Pradeep K Sinha, “Distributed Operating Systems : Concepts and design”, IEEE
computer society press

35
Distributed Computing

Experiment No. : 6

Program to implement Non-Token


Based Algorithm for distributed mutual
exclusion

36
Experiment No. 6
1. Aim: Program to implement Non-Token Based Algorithm for distributed mutual
exclusion.

2. Objective:
Understand the non-token based (permission based) approach to achieving mutual
exclusion in a distributed system.

3. Outcomes:
From this experiment, the student will be able to learn
 Analyze the various techniques used for clock synchronization and mutual
exclusion
4. Software Required: JDK 1.6

5. Theory
Mutual exclusion:
 Concurrent access of processes to a shared resource or data is executed in mutually
exclusive manner.
 Only one process is allowed to execute the critical section (CS) at any given time.
 In a distributed system, shared variables (semaphores) or a local kernel cannot be
used to implement mutual exclusion.
 Two basic approaches for distributed mutual exclusion:
1. Token based approach
2. Non-token based approach

Token-based approach:

 A unique token is shared among the sites.


 A site is allowed to enter its CS if it possesses the token.
 Mutual exclusion is ensured because the token is unique
Non-token based approach:
 Two or more successive rounds of messages are exchanged among the sites to
determine which site will enter the CS next.

The Ricart–Agrawala algorithm is an algorithm for mutual exclusion in a distributed system, proposed by Glenn Ricart and Ashok Agrawala. It is an extension and optimization of Lamport’s non-token based distributed mutual exclusion algorithm. Like Lamport’s algorithm, it follows a permission-based approach to ensure mutual exclusion. In this algorithm:

 Two types of messages (REQUEST and REPLY) are used, and communication channels are assumed to follow FIFO order.
 A site sends a REQUEST message to all other sites to get their permission to enter the critical section.
 A site sends a REPLY message to another site to give its permission to enter the critical section.
 A timestamp is given to each critical section request using Lamport’s logical clock.
 The timestamp is used to determine the priority of critical section requests: a smaller timestamp gets higher priority than a larger timestamp. The execution of critical section requests is always in the order of their timestamps.

6. Procedure/ Program:

 To enter the critical section:
 When a site Si wants to enter the critical section, it sends a timestamped REQUEST message to all other sites.
 When a site Sj receives a REQUEST message from site Si, it sends a REPLY message to site Si if and only if:
 site Sj is neither requesting nor currently executing the critical section, or
 site Sj is requesting, but the timestamp of site Si’s request is smaller than that of its own request.
Otherwise the request is deferred by site Sj.
 To execute the critical section:
 Site Si enters the critical section once it has received a REPLY message from all other sites.
 To release the critical section:
 Upon exiting the critical section, site Si sends a REPLY message to all the deferred requests.
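
The decision a site makes when a REQUEST arrives, and the release of deferred replies on exit, can be sketched as follows; the site ids, timestamps and the scenario in main are assumptions, and the printed messages stand in for real REQUEST/REPLY messages sent over the network.

import java.util.ArrayList;
import java.util.List;

// One Ricart-Agrawala site: decides whether to REPLY immediately or defer an incoming request.
public class RicartAgrawalaSite {
    final int id;
    boolean requesting = false;        // true while this site has an outstanding CS request
    boolean inCS = false;              // true while this site is executing the CS
    int myTimestamp = Integer.MAX_VALUE;
    final List<Integer> deferred = new ArrayList<Integer>();   // sites whose REPLY is postponed

    RicartAgrawalaSite(int id) { this.id = id; }

    // Called when REQUEST(timestamp, fromId) arrives at this site.
    void onRequest(int timestamp, int fromId) {
        boolean replyNow =
                !inCS && (!requesting
                          || timestamp < myTimestamp
                          || (timestamp == myTimestamp && fromId < id));   // ties broken by site id
        if (replyNow) {
            System.out.println("Site " + id + " sends REPLY to site " + fromId);
        } else {
            deferred.add(fromId);
            System.out.println("Site " + id + " defers the request from site " + fromId);
        }
    }

    // Called when this site leaves the CS: answer everything that was deferred.
    void releaseCS() {
        inCS = false;
        requesting = false;
        for (int fromId : deferred)
            System.out.println("Site " + id + " sends deferred REPLY to site " + fromId);
        deferred.clear();
    }

    public static void main(String[] args) {
        RicartAgrawalaSite s1 = new RicartAgrawalaSite(1);
        s1.requesting = true; s1.myTimestamp = 5; s1.inCS = true;   // assume S1 is inside the CS
        s1.onRequest(7, 2);    // S2's later-timestamped request is deferred
        s1.releaseCS();        // leaving the CS releases the deferred REPLY
    }
}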

7. Conclusion:

In a distributed environment there is no centralized control over the distributed resources. In a token-based approach, the machine that wants to acquire a resource must hold the token, and a lost token must be regenerated using an election algorithm. The Ricart–Agrawala algorithm studied here instead takes the permission-based (non-token) approach and requires the invocation of 2(N – 1) messages per critical section execution: (N – 1) REQUEST messages and (N – 1) REPLY messages.

8. QUIZ / Viva Questions:


 What is Mutual Exclusion in distributed operating system?
 Differentiate between Non-Token Based and Token Based algorithm?
 Explain the Logic of Ricart-Agrawala algorithm

38
9. References:
 M.R. Bhujade, “Parallel Computing”, 2nd edition, New Age International
Publishers 2009.
 Andrew S. Tanenbaum and Maarten Van Steen, “Distributed Systems: Principles
and Paradigms, 2nd edition, Pearson Education, Inc., 2007, ISBN: 0-13-239227-5.
 George Coulouris, Jean Dollimore, Tim Kindberg, “Distributed Systems: Concepts
andDesign” (4th Edition), Addison Wesley/Pearson Education.
 Pradeep K Sinha, “Distributed Operating Systems : Concepts and design”, IEEE
computer society press

39
Distributed Computing

Experiment No. : 7

Program to Simulate Load Balancing


Algorithm using Java

40
Experiment No. 7
1. Aim: Program to Simulate Load Balancing Algorithm using Java

2. Objective:
Understand the different approaches to load balancing in a distributed system.

3. Outcomes:
From this experiment, the student will be able to learn
 Analyze the various techniques used for load balancing in a distributed system
4. Software Required: JDK 1.6, Python
5. Theory

Load balancing is the distribution of load units (jobs or tasks) across a set of processors connected by a network, which may be spread across the globe. The excess or remaining unexecuted load of a processor is migrated to other processors whose load is below the threshold load. The threshold load is the amount of load up to which a processor can still accept further work; beyond it, no more load should be assigned to that processor. In a system with multiple nodes there is a very high chance that some nodes will be idle while others are overloaded. The processors in a system can therefore be classified according to their present load as heavily loaded processors (enough jobs are waiting for execution), lightly loaded processors (fewer jobs are waiting) and idle processors (no job to execute). A load balancing strategy makes it possible to keep every processor equally busy and to finish the work at approximately the same time. A load balancing operation consists of three rules: the location rule, the distribution rule and the selection rule.

Benefits of load balancing


a) Load balancing improves the performance of each node and hence the overall system
performance
b) Load balancing reduces the job idle time
c) Small jobs do not suffer from long starvation
d) Maximum utilization of resources
e) Response time becomes shorter
f) Higher throughput
g) Higher reliability
h) Low cost but high gain
i) Extensibility and incremental growth

6. Procedure/ Program:

routine Load_balance(n, p)
// Returns a list of n nodes, initialized to 0 processes each, with the p processes distributed over them.
// The Round Robin policy is used to balance the load, with a time quantum of 1 process.
 Create a list of n nodes, with each node currently having 0 processes allocated.
 Set i <- p (processes still to place) and j <- 0
 while i not equal to 0, do
add a process to the jth node (1 process being the Round Robin time quantum)
j <- (j + 1) % n
decrement i
 Return the list.

Main routine:
 The user inputs the n nodes and p processes
 Call routine Load_balance(n, p) to get the balanced list
 MENU
add a new node: call routine Load_balance(n+1, p)
remove a node: call routine Load_balance(n-1, p)
add a new process: call routine Load_balance(n, p+1)
remove a process: call routine Load_balance(n, p-1)
 QUIT
 Display the returned list
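
A runnable Java version of the Load_balance routine above is sketched here; the node and process counts used in main are assumptions.

import java.util.Arrays;

// Round-robin load balancing: p processes are dealt out one at a time across n nodes.
public class LoadBalance {
    // Returns an array of n nodes, where element j holds the number of processes placed on node j.
    static int[] loadBalance(int n, int p) {
        int[] nodes = new int[n];           // every node starts with 0 processes
        int j = 0;
        for (int i = p; i != 0; i--) {      // one process per "time quantum", dealt in turn
            nodes[j]++;
            j = (j + 1) % n;
        }
        return nodes;
    }

    public static void main(String[] args) {
        int n = 4, p = 10;                  // assumed initial configuration
        System.out.println("Initial:        " + Arrays.toString(loadBalance(n, p)));
        System.out.println("Add a node:     " + Arrays.toString(loadBalance(n + 1, p)));
        System.out.println("Remove a node:  " + Arrays.toString(loadBalance(n - 1, p)));
        System.out.println("Add a process:  " + Arrays.toString(loadBalance(n, p + 1)));
        System.out.println("Remove process: " + Arrays.toString(loadBalance(n, p - 1)));
    }
}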

7. Conclusion:
In computing, load balancing is a technique that improves the distribution of workloads across multiple resources such as computers, clusters, servers and disks. Thus we have studied and implemented a load balancing technique to optimize the use of the available resources, maximize throughput, minimize response time, and avoid overloading any single resource.

8. QUIZ / Viva Questions:

 What is Load Balancing?


 Where are load balancers typically placed?

 What are the benefits of using load balancing?

 What are some of the Load Balancing Algorithms?

9. References:
 (2012). The Study on Load Balancing Strategies in Distributed Computing System.
International Journal of Computer Science & Engineering Survey (IJCSES).
Vol.3,. 19-30. 10.5121/ijcses.2012.3203.
 Andrew S. Tanenbaum and Maarten Van Steen, “Distributed Systems: Principles
and Paradigms, 2nd edition, Pearson Education, Inc., 2007, ISBN: 0-13-239227-5.
 George Coulouris, Jean Dollimore, Tim Kindberg, “Distributed Systems: Concepts
andDesign” (4th Edition), Addison Wesley/Pearson Education.
 Pradeep K Sinha, “Distributed Operating Systems : Concepts and design”, IEEE
computer society press

43
Distributed Computing

Experiment No. : 8

Program to implement Deadlock


management in Distributed System

44
Experiment No. 8
1. Aim: Program to implement Deadlock management in distributed system.

2. Objective:
 To equip students with skills to analyze and design distributed applications
 To study deadlock situation in operating system.
 To understand Bankers algorithm for deadlock avoidance and detection.
 Construct a resource allocation graph for a deadlock condition and verify using the
simulator.
3. Outcomes:
From this experiment, the student will be able to learn
 Demonstrate the concepts of Consistency and Replication Management
4. Software Required: JDK 1.6

5. Theory

A deadlock is a condition in a system where a set of processes (or threads) have requests
for resources that can never be satisfied. Essentially, a process cannot proceed because it
needs to obtain a resource held by another process; but, it itself is holding a resource that the
other process needs. There are four conditions to be met for a deadlock to occur in a system:
1. Mutual exclusion: A resource can be held by at most one process.

2. Hold and wait: Processes that already hold resources can wait for another resource.

3. Non-preemption: A resource, once granted, cannot be taken away.

4. Circular wait: Two or more processes are waiting for resources held by one of the other
processes.

The Banker's algorithm is a resource allocation and deadlock avoidance algorithm, developed by Edsger Dijkstra, used in distributed systems. It tests for safety by simulating the allocation of the predetermined maximum possible amounts of all resources, and then makes an "s-state" (safe state) check to test for possible deadlock conditions for all other pending activities before deciding whether the allocation should be allowed to continue. The Banker's algorithm is run by the operating system whenever a process requests resources. The algorithm avoids deadlock by

denying or postponing the request if it determines that accepting the request could put the
system in an unsafe state. When a new process enters a system, it must declare the maximum
number of instances of each resource type that it may ever claim; clearly, that number may
not exceed the total number of resources in the system. Also, when a process gets all its
requested resources it must return them in a finite amount of time.

6. Procedure/ Program:
The Banker’s (safety) algorithm is as follows:
STEP 1: initialize
Work := Available;
for i = 1,2,...,n
Finish[i] := false
STEP 2: find an i such that both
a. Finish[i] is false
b. Need_i <= Work
if no such i exists, goto STEP 4
STEP 3:
Work := Work + Allocation_i
Finish[i] := true
goto STEP 2
STEP 4:
if Finish[i] = true for all i, the system is in a safe state
Procedure:
1) Enter the number of processes along with their current allocation, maximum need and the available resources.
2) Repeatedly find a process whose remaining need can be satisfied by the currently available (work) resources, and release its allocation back to work.
3) If every process can finish in some order, the system is in a safe state; otherwise it is in an unsafe state.
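
A Java sketch of this safety check is given below; the Available, Allocation and Need matrices in main are assumed sample values (five processes, three resource types).

// Banker's safety check: the system is safe if every process can finish in some order.
public class BankersSafety {
    public static void main(String[] args) {
        // Assumed sample data: 5 processes, 3 resource types.
        int[] available    = {3, 3, 2};
        int[][] allocation = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
        int[][] need       = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};
        System.out.println(isSafe(available, allocation, need)
                ? "System is in a safe state" : "System is in an unsafe state");
    }

    static boolean isSafe(int[] available, int[][] allocation, int[][] need) {
        int n = allocation.length, m = available.length;
        int[] work = available.clone();            // STEP 1: Work := Available
        boolean[] finish = new boolean[n];         //         Finish[i] := false
        boolean progress = true;
        while (progress) {                         // STEP 2: find i with Finish[i] false and Need_i <= Work
            progress = false;
            for (int i = 0; i < n; i++) {
                if (!finish[i] && lessOrEqual(need[i], work)) {
                    for (int r = 0; r < m; r++) work[r] += allocation[i][r];   // STEP 3
                    finish[i] = true;
                    progress = true;
                }
            }
        }
        for (boolean f : finish) if (!f) return false;   // STEP 4
        return true;
    }

    static boolean lessOrEqual(int[] a, int[] b) {
        for (int i = 0; i < a.length; i++) if (a[i] > b[i]) return false;
        return true;
    }
}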
7. Conclusion:

 Visualized and investigated deadlocks in a distributed system.
 Applied various techniques for avoiding, preventing or resolving process deadlocks, which allow the system to run as efficiently as possible.

8. QUIZ / Viva Questions:


 What is deadlock?

 What necessary conditions can lead to a deadlock situation in a system?
 What is Concurrency? Explain with example Deadlock and Starvation.
 What are your solution strategies for “Dining Philosophers Problem”?
 What is Bankers Algorithm?

9. References:
 M.R. Bhujade, “Parallel Computing”, 2nd edition, New Age International
Publishers 2009.
 Andrew S. Tanenbaum and Maarten Van Steen, “Distributed Systems: Principles
and Paradigms, 2nd edition, Pearson Education, Inc., 2007, ISBN: 0-13-239227-5.
 George Coulouris, Jean Dollimore, Tim Kindberg, “Distributed Systems: Concepts
andDesign” (4th Edition), Addison Wesley/Pearson Education.
 Pradeep K Sinha, “Distributed Operating Systems : Concepts and design”, IEEE
computer society press

47
Distributed Computing

Experiment No. : 9

Case study of Google file system

48
Experiment No. 9
1. Aim: Case Study of Google File System
2. Objective:
To understand the approaches for designing a Google file system and understand its
architecture.
3. Outcomes:
Student will learn
 Apply the knowledge of Distributed File System to analyse various file systems
and experience in building large-scale distributed applications.
4. Software Required: Microsoft Office, Internet
5. Theory
The Google File System (GFS), developed in the late 1990s, uses thousands of storage systems built from inexpensive commodity components to provide petabytes of storage to a large user community with diverse needs. Some of the most important aspects of this analysis reflected in the GFS design are:
• Scalability and reliability are critical features of the system; they must be considered from the beginning, rather than at later design stages.
• The vast majority of files range in size from a few GB to hundreds of TB.
• The most common operation is to append to an existing file; random write operations to a file are extremely infrequent.
• Sequential read operations are the norm.
• Users process the data in bulk and are less concerned with the response time.
• To simplify the system implementation, the consistency model should be relaxed without placing an additional burden on the application developers.
As a result of this analysis several design decisions were made:
1. Segment a file in large chunks.
2. Implement an atomic file append operation allowing multiple applications
operating concurrently to append to the same file.
3. Build the cluster around a high-bandwidth rather than low-latency
interconnection network. Separate the flow of control from the data flow; schedule
the high-bandwidth data flow by pipelining the data transfer over TCP connections

to reduce the response time. Exploit network topology by sending data to the
closest node in the network.
4. Eliminate caching at the client site; caching increases the overhead for
maintaining consistency among cashed copies at multiple client sites and it is not
likely to improve performance.
5. Ensure consistency by channeling critical file operations through a master
controlling the entire system.
6. Minimize master’s involvement in file access operations to avoid hot-spot
contention and to ensure scalability.
7. Support efficient checkpointing and fast recovery mechanisms.
8. Support efficient garbage collection mechanisms. GFS files are collections of
fixed-size segments called chunks; at the time of file creation each chunk is
assigned a unique chunk handle. A chunk consists of 64 KB blocks and each block
has a 32-bit checksum. Chunks are stored on Linux files systems and are replicated
on multiple sites; a user may change the number of the replicas, from the standard
value of three, to any desired value. The chunk size is 64 MB; this choice is
motivated by the desire to optimize the performance for large files and to reduce
the amount of metadata maintained by the system.
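
A small sketch of the arithmetic implied by these sizes is shown below; the byte offset used is an arbitrary example, and the class is purely illustrative (it is not part of any GFS client library).

// Illustrates how a byte offset within a GFS file maps onto the fixed-size units
// described above: 64 MB chunks made of 64 KB blocks, one 32-bit checksum per block.
public class GfsOffsets {
    static final long CHUNK_SIZE = 64L * 1024 * 1024;   // 64 MB chunks
    static final long BLOCK_SIZE = 64L * 1024;          // 64 KB blocks

    public static void main(String[] args) {
        long offset = 200L * 1024 * 1024 + 5000;         // assumed: an arbitrary byte offset in the file

        long chunkIndex = offset / CHUNK_SIZE;           // which chunk (identified by its chunk handle)
        long offsetInChunk = offset % CHUNK_SIZE;
        long blockIndex = offsetInChunk / BLOCK_SIZE;    // which 64 KB block inside that chunk
        long checksumsPerChunk = CHUNK_SIZE / BLOCK_SIZE;

        System.out.println("Chunk index: " + chunkIndex);
        System.out.println("Block within chunk: " + blockIndex);
        System.out.println("Checksums stored per chunk: " + checksumsPerChunk);
    }
}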

Figure 11.1: Architecture of the Google File System.

The architecture of a GFS cluster is illustrated in Figure 11.1. To access a file, an application first contacts the master, which replies with the chunk handle and the location of the chunk. Then the application communicates directly with the chunk server to carry out the desired file operation. The consistency model is very effective and scalable. Operations, such as file creation, are atomic and are handled by the master. To ensure scalability, the master has minimal involvement in file mutations, i.e., operations such as write or append, which occur frequently. In such
cases the master grants a lease for a particular chunk to one of the chunk servers
called the primary; then, the primary creates a serial order for the updates of that
chunk. When data of a write operation straddles chunk boundary, two operations
are carried out, one for each chunk.
The following steps of a write request illustrate the process which buffers data and
decouples the control flow from the data flow for efficiency:
1. The client contacts the master which assigns a lease to one of the chunk servers
for the particular chunk, if no lease for that chunk exists; then, the master replies
with the Ids of the primary and the secondary chunk servers holding replicas of the
chunk. The client caches this information.
2. The client sends the data to all chunk servers holding replicas of the chunk; each
one of the chunk servers stores the data in an internal LRU buffer and then sends
an acknowledgment to the client.
3. The client sends the write request to the primary chunk server once it has
received the acknowledgments from all chunk servers holding replicas of the
chunk. The primary chunk server identifies mutations by consecutive sequence
numbers.
4. The primary chunk server sends the write requests to all secondaries.
5. Each secondary chunk server applies the mutations in the order of the sequence
number and then sends an acknowledgment to the primary chunk server.
6. Finally, after receiving the acknowledgments from all secondaries, the primary
informs the client.
The system supports an efficient checkpointing procedure based on copy-on-write
to construct system snapshots.
6. Conclusion:
From this experiment we have studied the concept of GFS, which demonstrates the qualities essential for supporting large-scale data processing workloads on commodity hardware.

7. QUIZ / Viva Questions:


 What are the applications of Google file system?

 List the different features of Google file system.

8. References:
 Chapter 7 - Cloud Applications, Editor(s): Dan C. Marinescu, Cloud Computing
(Second Edition), Morgan Kaufmann,2018,Pages 237-279,ISBN 9780128128107.
 M.R. Bhujade, “Parallel Computing”, 2nd edition, New Age International Publishers
2009.
 Andrew S. Tanenbaum and Maarten Van Steen, “Distributed Systems: Principles
and Paradigms, 2nd edition, Pearson Education, Inc., 2007, ISBN: 0-13-239227-5.
 George Coulouris, Jean Dollimore, Tim Kindberg, “Distributed Systems: Concepts
andDesign” (4th Edition), Addison Wesley/Pearson Education.
 Pradeep K Sinha, “Distributed Operating Systems : Concepts and design”, IEEE
computer society press.
