
JOMO KENYATTA UNIVERSITY OF AGRICULTURE AND TECHNOLOGY

JKUAT

CLIENT SERVER SYSTEMS AND COMPUTING


BCT 2312

CAT

NAME: BRIAN MWANGI MAKHEMBU

REG NO: CS282-6772/2013


1. Discuss trends in Client Server architecture (2-3 pages)

The client-server architecture is a part of a distributed system. A distributed system consists of
several computers that communicate over a network to coordinate the actions and processes of a
common application. Well-established techniques such as interprocess communication and
remote invocation, naming services, cryptographic security, distributed file systems, data
replication and distributed transaction mechanisms provide the run-time infrastructure supporting
today's networked applications.

However, application development for distributed systems now relies more and more on
middleware support through the use of software frameworks (e.g. CORBA) that provide higher-
level abstractions such as distributed shared objects, as well as services including secure
communication, authentication, directory ('yellow pages') lookup and persistent storage mechanisms.

In the near future, distributed application frameworks will support mobile code, multimedia data
streams, user and device mobility and spontaneous networking. Scalability, quality of service
and robustness with respect to partial component failures will become key issues.

A shift towards large-scale systems has occurred in recent years: not only the pure Internet with
its basic protocols but also the higher-level World Wide Web is becoming a standard platform
upon which distributed applications are realized. Here, the Internet and its resources
are viewed as the global environment in which the computations take place. Consequently, high-
level protocols and standards, such as XML, enter the focus of distributed system research while
low-level issues (such as operating system peculiarities) become less important.

Rapidly evolving network and computer technology, coupled with the exponential growth of the
services and information available on the Internet, will soon bring us to the point
where hundreds of millions of people will have fast, pervasive access to a phenomenal amount of
information through desktop machines at work, school, and home. The challenge of distributed
system technology is to provide flexible and reliable infrastructures for such large-scale systems
that meet the demands of developers, users and service providers.

Looking further into the future, essential techniques of distributed systems will be incorporated
into an emerging new area called Ubiquitous Computing. The vision of Ubiquitous Computing
(or pervasive computing) is in some sense a projection of the Internet phenomenon and the
mobile-phone proliferation we observe today into the future, envisioning billions of communicating
smart devices forming a world-wide distributed system several orders of magnitude larger than
today’s Internet.
Trends in Communication Paradigms
There are many ways that application software components residing on different machines can
communicate with one another over a network. One low-level technique is to directly use the call
interfaces of the network layer, such as the socket mechanism, together with a custom
communication protocol.
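
To make this concrete, here is a minimal sketch (in Java) of a client that uses the socket interface
directly, together with a custom line-based protocol; the host name, port and the "TIME" command are
hypothetical placeholders.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Minimal sketch: a client talking to a server over a raw TCP socket using a
// hand-rolled, line-based protocol. Host, port and the "TIME" command are
// hypothetical.
public class RawSocketClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("server.example.com", 5000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {

            out.println("TIME");           // send a request in the custom protocol
            String reply = in.readLine();  // block until the server answers
            System.out.println("Server replied: " + reply);
        }
    }
}
```

With this approach the application itself must define, marshal and parse every message, which is
precisely the burden that the higher-level techniques below aim to remove.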

An already classical alternative that fits well with the client-server model is Remote Procedure
Call (RPC). In this model, a component acts as a client when it makes a request of another
component. It acts as a server when it responds to a request from a client. RPC makes calling an
external procedure that resides in a different network node almost as simple as calling a local
procedure.
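
The following hypothetical, hand-written client stub sketches the idea behind RPC: the caller invokes
what looks like an ordinary local method, while the stub marshals the arguments into a request, sends it
to the server and unmarshals the reply. Real RPC systems generate such stubs automatically from an
interface definition; the service, message format and names used here are illustrative only.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Hypothetical hand-written client stub: the application sees an ordinary
// method call, while the stub ships the request over the network.
public class WeatherServiceStub {
    private final String host;
    private final int port;

    public WeatherServiceStub(String host, int port) {
        this.host = host;
        this.port = port;
    }

    // Looks like a local call to the application programmer.
    public int getTemperature(String city) throws Exception {
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("GET_TEMPERATURE " + city);  // marshal the request
            return Integer.parseInt(in.readLine());  // unmarshal the reply
        }
    }
}
```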

While RPC is reasonably well suited for the procedural programming paradigm, it is not directly
applicable to the object-oriented programming style that has gained much popularity in recent
years. Here, Remote Method Invocation (RMI), a newer technique for Java-based systems, enters
the scene. RMI is similar to RPC but integrates the distributed object model into the Java
language in a natural way: remote objects can be passed as parameters in remote method calls, a
feature that RPC systems usually do not have.
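
As a rough illustration, a remote object in Java RMI might be declared and looked up as follows; the
Clock interface, the registry URL and the service name are hypothetical.

```java
import java.rmi.Naming;
import java.rmi.Remote;
import java.rmi.RemoteException;

// A remote interface: every method can throw RemoteException because the
// call travels over the network.
interface Clock extends Remote {
    long getTime() throws RemoteException;
}

// Client side: look the remote object up in the RMI registry and invoke it
// as if it were local.
public class ClockClient {
    public static void main(String[] args) throws Exception {
        Clock clock = (Clock) Naming.lookup("rmi://server.example.com/ClockService");
        System.out.println("Remote time: " + clock.getTime());
    }
}
```

On the server side, an implementation of Clock would typically be exported with UnicastRemoteObject
and registered under the same name using Naming.rebind.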

Trends in Software Infrastructures and Middleware for Distributed Systems


Middleware and software infrastructures for distributed systems (such as DCE, CORBA or Jini)
provide basic communication facilities to application components and handle issues such as
platform heterogeneity that are due to differing hardware systems, operating systems or
programming languages.

An early middleware system was DCE, which is still being used in many large applications.
Communication in DCE is based on RPC, and it essentially provides directory services, security
services, and a distributed file service.

Another widely used infrastructure for distributed systems is CORBA (Common Object Request
Broker Architecture). In contrast to DCE, it is based on an object-oriented model. It was
introduced in 1991 and it has undergone continual and significant revisions ever since.

The central component of the CORBA system is the so-called Object Request Broker (ORB).
The ORB provides a mechanism for transparently communicating client requests to target object
implementations. It simplifies distributed programming by decoupling the client from the details
of the method invocations.
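
The sketch below shows how a CORBA client might obtain an object reference through the ORB and the
standard naming service, using the legacy org.omg.* API that shipped with JDKs up to Java 8; the
service name "ClockService" is an assumption.

```java
import org.omg.CORBA.ORB;
import org.omg.CosNaming.NamingContextExt;
import org.omg.CosNaming.NamingContextExtHelper;

// Sketch of a CORBA client: the ORB hides marshalling, transport and the
// location of the target object.
public class CorbaNamingClient {
    public static void main(String[] args) throws Exception {
        ORB orb = ORB.init(args, null);  // initialise the ORB from command-line arguments

        // Resolve the standard naming service and look an object up by name.
        NamingContextExt naming = NamingContextExtHelper.narrow(
                orb.resolve_initial_references("NameService"));
        org.omg.CORBA.Object obj = naming.resolve_str("ClockService");

        // In a real application the reference would be narrowed with an
        // IDL-generated helper (e.g. a hypothetical ClockHelper.narrow(obj)).
        System.out.println("Resolved remote object: " + obj);
    }
}
```
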
Trends in Ubiquitous Computing
The vision of Ubiquitous Computing is grounded in the firm belief among the scientific
community that Moore’s Law will hold for another 15 years. This means that in the next few
years, microprocessors will become so small that they can be embedded in almost everything.
They can also be equipped with network capabilities and thus have access to any information or
provide access to any service on the Internet. When they are all connected together and
exchanging appropriate information, they form powerful systems.

Applications for Ubiquitous Computing will be found in areas where the Internet already plays
an important role, such as mobile commerce, telematics, and entertainment. But without a doubt,
more traditional areas such as healthcare and education will also benefit from Ubiquitous
Computing technologies.

2. Describe client Server communication (1 page)

The client-server model is the most common networking relationship. The model contains three
components: a client, a server, and a service. A service is a task that a machine can perform
(such as offering files over a network or the ability to execute a command). A server is a
machine that performs the task (the machine that offers the service). A client is a machine that is
requesting the service. These titles are generally used in the context of a particular service rather
than labeling a machine.

Client-server computing is a distributed computing model in which client applications request
services from server processes. Distributed systems refer to a set of independent computers
connected by a communication network in order to execute different functions. Clients and
servers run on different computers interconnected by a computer network.

To offer a service, a server must get a transport address for a particular service. This is a well-
defined location (similar to a telephone number) that will serve to identify the service. The server
associates the service with this address before clients can communicate with it. The client,
wishing to obtain a service from the server, must obtain the transport address. There are several
ways to do this: it may be hard-coded in an application or it may be found by consulting a
database (similar to finding a number in a phone book). The database may be as simple as a
single file on a machine or as complex as accessing a distributed directory server.
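
As a minimal sketch of such a lookup, the client below consults a simple properties file acting as the
"phone book" that maps service names to host:port transport addresses; the file name, its keys and the
"printing" service are hypothetical, and a real system might instead query DNS or a directory server.

```java
import java.io.FileInputStream;
import java.net.InetSocketAddress;
import java.util.Properties;

// Sketch: obtain the transport address (host + port) of a named service from
// a local "phone book" file. File name, keys and service name are hypothetical.
public class ServiceLookup {
    public static InetSocketAddress lookup(String serviceName) throws Exception {
        Properties directory = new Properties();
        try (FileInputStream in = new FileInputStream("services.properties")) {
            directory.load(in);  // e.g. printing=printhost.example.com:5151
        }
        String[] hostPort = directory.getProperty(serviceName).split(":");
        return new InetSocketAddress(hostPort[0], Integer.parseInt(hostPort[1]));
    }

    public static void main(String[] args) throws Exception {
        InetSocketAddress address = lookup("printing");
        System.out.println("printing service is at " + address);
    }
}
```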

The client is a process or program that sends messages to a server via the network. Those
messages ask the server to perform a specific task such as checking for a record in a database.
The server process/program listens for client requests that are transmitted via the network.
Servers receive those requests and perform actions such as database queries and reading files.
We depend on transport providers to transmit data between machines. A transport provider is a
piece of software that accepts a network message and sends it to a remote machine. There are
two categories of transport protocols:
• Connection-oriented Protocols – these are analogous to placing a phone call:
i. First, you establish a connection (dial a phone number)
ii. Possibly negotiate a protocol (decide which language to use)
iii. Communicate
iv. Terminate the connection (hang up)
This form of transport is known as virtual-circuit service. Messages are guaranteed to arrive in order.
• Connectionless Protocols – these are analogous to sending mail:
i. There is no connection setup.
ii. Data is transmitted when ready (drop a letter in the mailbox).
iii. There is no termination because there was no call setup.
This transport is known as datagram service. With this service, the client cannot be certain that the
message arrived at the destination. It is a cheaper but less reliable service than virtual-circuit service.
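
The following sketch contrasts the two transport styles in Java: a connection-oriented exchange over a
TCP Socket (virtual-circuit service) and a connectionless send with a DatagramSocket (datagram
service); the host and ports are placeholders.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class TransportStyles {
    public static void main(String[] args) throws Exception {
        // Connection-oriented (virtual circuit): establish a connection first,
        // then exchange a reliable, ordered byte stream, then hang up.
        try (Socket tcp = new Socket("server.example.com", 5000)) {
            tcp.getOutputStream().write("hello\n".getBytes(StandardCharsets.UTF_8));
        }   // closing the socket terminates the connection

        // Connectionless (datagram): no setup and no teardown; each message is
        // dropped into the network like a letter, with no delivery guarantee.
        try (DatagramSocket udp = new DatagramSocket()) {
            byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
            DatagramPacket packet = new DatagramPacket(
                    payload, payload.length,
                    InetAddress.getByName("server.example.com"), 5001);
            udp.send(packet);
        }
    }
}
```
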
Assignment
• Describe the different ways kernel protection can be implemented
Kernel Patch Protection is a feature of 64-bit editions of Microsoft Windows that prevents patching the
kernel. Patching the kernel refers to the unsupported modification of the central component, or kernel, of
the Windows operating system. Such modifications have never been supported by Microsoft because they
can greatly reduce system security, reliability, and performance.
The various ways of implementing kernel protection are:
Strict kernel memory permissions
When all of the kernel memory is writable, it becomes trivial for attackers to redirect execution flow. To
reduce the availability of these targets, the kernel needs to protect its memory with a tight set of
permissions.
Executable code and read-only data must not be writable
Any areas of the kernel with executable memory must not be writable. While this obviously includes the
kernel text itself, we must also consider all additional places: kernel modules, JIT memory, etc. In support
of these are CONFIG_STRICT_KERNEL_RWX and CONFIG_STRICT_MODULE_RWX, which seek
to make sure that code is not writable, data is not executable, and read-only data is neither writable nor
executable.
Function pointers and sensitive variables must not be writable
Vast areas of kernel memory contain function pointers that are looked up by the kernel and used to
continue execution (e.g. operation structures, vector/descriptor tables, etc.). The number of these variables
must be reduced to an absolute minimum.
Segregation of kernel memory from user space memory
The kernel must never execute user-space memory. The kernel must also never access user-space
memory without explicitly expecting to do so. By blocking user-space memory in this way, execution and
data parsing cannot be passed to trivially controlled user-space memory, forcing attacks to operate
entirely in kernel memory.
