Fundamentals of Computer Unit 4

Operating Systems: Introduction - History of Operating Systems - Functions of Operating Systems - Process Management - Memory Management - File Management - Device Management - Security Management - Types of Operating Systems - Providing User Interface - Popular Operating Systems.
Programming Languages: Introduction - History of Programming Languages - Generations of Programming Languages - Characteristics of a Good Programming Language - Categorisation of High-level Languages - Popular High-level Languages - Factors Affecting the Choice of a Language - Developing a Program - Running a Program

INTRODUCTION

An operating system (OS) is the software that makes the computer hardware work. While the hardware provides 'raw computer power', the OS is responsible for making that power useful for the users. As discussed in the previous chapter, the OS is the main component of system software and must therefore be loaded and activated before we can accomplish any other task.
The operating system provides an interface for users to communicate with the computer. It also manages the use of hardware resources and enables the proper execution of application programs. In short, the operating system is the master control program of a computer. Figure 4.1 shows the different roles performed by an operating system. The main functions include:
• Operates CPU of the computer.
• Controls input/output devices that provide the interface between the user and the computer.
• Handles the working of application programs with the hardware and other software systems.
• Manages the storage and retrieval of information using storage devices such as disks.
Every computer, irrespective of its size and application, needs an operating system to make it functional and useful. Operating systems are usually prewritten by the manufacturers and supplied with the hardware; they are rarely developed in-house owing to their technical complexity. Many operating systems have been developed over the last few decades, but the popular ones among them are MS-DOS, Windows 2000, Windows XP, Windows Server 2003, UNIX and Linux.
In this chapter we shall discuss in detail the various functions of operating systems, different types of operating systems and their services, and the types of user interfaces available.

Fig. 4.1 The roles of an operating system

HISTORY OF OPERATING SYSTEMS

A series of developments in computer architecture led to the evolution of the operating system during the latter half of the 20th century. During the 1940s, there was no operating system; assembly language was used to develop programs that interacted directly with the hardware. The computer systems of this period were mainly used by researchers, who were both the programmers and the end users of the computer system.
During the 1950s, more people started using computer systems. This led to a repetition of tasks, as everyone started developing their own programs and device drivers; different people created device drivers for the same input and output devices. To avoid this repetition, various batch processing operating systems such as BKY, CAL and Chios were developed during this period. FORTRAN Monitor System, General Motors Operating System and Input Output System are other operating systems developed in the 1950s. The operating systems of this period were capable of performing only a single task at a time.
During the 1960s, multi-tasking operating systems were developed. These operating systems ensured better utilisation of resources by allowing multiple tasks to be performed simultaneously. They also allowed multiple programs to remain in memory at the same time. The Central Processing Unit (CPU) executed multiple processes at a time and also handled the hardware devices attached to the computer system. These operating systems used the concepts of spooling and the time-sharing model for achieving multi-tasking functionality. The operating systems developed during the 1960s include Admiral, Basic Executive System, Input Output Control System and SABRE (Semi-Automatic Business Research Environment).
During the 1970s, a major breakthrough was achieved in the development of operating systems with the introduction of UNIX by AT&T Bell Labs. UNIX supported a multi-user environment where multiple users could work on a computer system. The core functionality of UNIX resided in a kernel that was responsible for performing file, memory and process management. UNIX also came bundled with utility programs for performing specific tasks. Other operating systems introduced in the 1970s include DOS/VS, OS/VS1 and OpenVMS.
During the 1980s, some key operating systems were developed, including MS-DOS, HP-UX and Macintosh. MS-DOS was developed by Microsoft and could be installed on desktop Personal Computers (PCs), such as Intel 80x86 PCs. HP-UX was similar to UNIX and was developed by Hewlett Packard. This operating system could be installed on HP PA-RISC computer systems. Macintosh was developed by Apple Computer and ran on desktop computers based on Motorola 680x0 processors. MS-DOS and Macintosh became quite popular in the 1980s and are still in use.
A number of operating systems were developed during the 1990s, including Windows 95, Windows 98, Windows NT, FreeBSD and OS/2. Windows 95, Windows 98 and Windows NT were GUI-based operating systems developed by Microsoft. FreeBSD was similar to UNIX and was available free of cost. OS/2 was introduced by IBM and could be installed on Intel/AMD Pentium and Intel 80x86 based computer systems. The 1990s revolutionised the way of computing through robust GUI-based operating systems and fast processing devices.
The first decade of the 21st century saw the development of operating systems such as Mac OS X, Windows 2000, Windows Server 2003, Windows ME and Windows XP. With the advent of the Internet, security has been the prime focus of the operating systems of this era.

FUNCTIONS OF OPERATING SYSTEMS


The main function of an operating system is to manage the resources, such as memory and files, of a computer system. The operating system also resolves the conflicts that arise when two users or programs request the same resource at the same time. Therefore, the operating system is also called the resource manager of a computer system. Currently used operating systems such as Windows 2000, Windows Server 2003 and Linux also support networking, which allows the sharing of files and resources such as a printer and scanner. The following are some of the important functions of an operating system:
• Process management It manages the processes running in a computer system. A process is basically a
program that is being currently run by a user on a computer system. For example, a word processor
application program such as Microsoft Word runs as a process in a computer system.
• Memory management It manages the memory resources of a computer system. There are various
memory resources of a computer system including primary memory or Random Access Memory
(RAM) and secondary memory like hard disk and Compact Disk (CD). All the programs
are loaded in the main memory before their execution. It is the function of the operating system to
determine how much memory should be provided to each process.
• File management It manages the files and directories of a computer system. A file can be defined as a collection of information or data that is stored in the memory of a computer system. Every file has a unique name associated with it. The organisation of files and directories in a computer system is referred to as the file system. An operating system allows us to create, modify, save or delete files in a computer system.
• Device management This function of the operating system deals with the management of peripheral devices, such as the printer, mouse and keyboard, attached to a computer system. An operating system interacts with hardware devices through specific device drivers. A primary task of this function is to manage the input/output operations performed by the end users.
• Security management It ensures the security of a computer system against various threats such as virus attacks and unauthorised access. An operating system uses various techniques, such as authentication, authorisation and cryptography, for ensuring the security of a computer system.
Figure 4.2 depicts the various functions of an operating system.

Fig. 4.2 The functions of an operating system

PROCESS MANAGEMENT

Process management involves the execution of various tasks such as creation of processes, scheduling of
processes, management of deadlocks and termination of processes. When a process runs in a computer
system, a number of resources such as memory and CPU of the computer system are utilised. It is the
responsibility of an operating system to manage the running processes by performing tasks such as resource
allocation and process scheduling. The operating system also has to synchronise the different processes
effectively in order to ensure consistency of shared data.
Generally, only a single process is allowed to access the CPU for its execution at a particular instant
of time. When one process is being processed by the CPU, the other processes have to wait until the execution
of that particular process is complete. After the CPU completes the execution of a process, the resources
being utilised by that process are made free and the execution of the next process is initiated. All the processes
that are waiting to be executed are said to be in a queue. In some cases, a computer system supports parallel
processing allowing a number of processes to be executed simultaneously.
A process consists of a set of instructions to be executed, called the process code. A process is also associated with some data that is to be processed. The resources that a process requires for its execution are called process components. There is also a state associated with a process at a particular instant of time, called the process state. Along with these, there are a number of other concepts associated with the process management function of an operating system. Some of these key concepts are:
• Process state
• Process Control Block (PCB)
• Process operations
• Process scheduling
• Process synchronisation
• Interprocess communication
• Deadlock
Process State
A process state can be defined as the condition of a process at a particular instant of time. There are
basically seven states of a process:
• New It specifies the time when a process is created.
• Ready It specifies the time when a process is loaded into the memory and is ready for execution.
• Waiting It specifies the time when a process waits for the allocation of CPU time and other
resources for its execution.
• Executing It is the time when a process is being executed by the CPU.
• Blocked It specifies the time when a process is waiting for an event, like an I/O operation, to complete.
• Suspended It specifies the time when a process
is ready for execution but has not been placed in
the ready queue by the operating system.
• Terminated It specifies the time when a process
is terminated and the resources being utilised by
the process are made free.
Figure 4.3 illustrates the various process states.
A process is initially in the new state when it is created. After the process has been created, its state changes from new to ready, where the process is loaded into the memory. The state of the process changes from ready to waiting when the process waits for the allocation of CPU time and other resources. The process state changes from waiting to executing after the CPU time and other resources are allocated to it and the process starts running. After the process has executed successfully, it is terminated and its state changes to terminated.

Fig. 4.3 The different states of a process
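The states and transitions above can be sketched as a small state machine in Python. The transition table below is one illustrative reading of the description in the text, not an authoritative model:

```python
from enum import Enum

class ProcessState(Enum):
    NEW = "new"
    READY = "ready"
    WAITING = "waiting"
    EXECUTING = "executing"
    BLOCKED = "blocked"
    SUSPENDED = "suspended"
    TERMINATED = "terminated"

# Allowed direct transitions, following the narrative above (illustrative)
TRANSITIONS = {
    ProcessState.NEW: {ProcessState.READY},
    ProcessState.READY: {ProcessState.WAITING, ProcessState.SUSPENDED},
    ProcessState.WAITING: {ProcessState.EXECUTING},
    ProcessState.EXECUTING: {ProcessState.BLOCKED, ProcessState.TERMINATED},
    ProcessState.BLOCKED: {ProcessState.READY},
    ProcessState.SUSPENDED: {ProcessState.READY},
    ProcessState.TERMINATED: set(),
}

def can_transition(src, dst):
    """Return True if a process may move directly from state src to state dst."""
    return dst in TRANSITIONS[src]

print(can_transition(ProcessState.NEW, ProcessState.READY))        # True
print(can_transition(ProcessState.TERMINATED, ProcessState.READY)) # False
```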
Process Control Block (PCB)
The PCB is a data structure associated with a process that provides complete information about the process. The PCB is important in a multiprogramming environment, as it captures information pertaining to a number of processes running simultaneously. A PCB comprises the following:
• Process id It is an identification number that uniquely identifies a process.
• Process state It refers to the state of a process, such as ready or executing.
• Program counter It points to the address of the next instruction to be executed in a process.
• Register information It comprises the various registers, such as the index and stack registers, that are associated with a process.
• Scheduling information It specifies the priority information pertaining to a process that is required for process scheduling.
• Memory-related information This section of the PCB comprises the page and segment tables. It also stores the data contained in the base and limit registers.
• Accounting information This section of the PCB stores the details related to CPU utilisation and the execution time of a process.
• Status information related to I/O This section of the PCB stores the details pertaining to resource utilisation and the files opened during process execution.
The operating system maintains a table called the process table, which stores the PCBs of all the processes. Figure 4.4 shows the structure of a PCB.
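As a sketch, the PCB fields listed above map naturally onto a record type. The field names and types below are illustrative and do not correspond to any real operating system's layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A simplified Process Control Block; fields follow the list above."""
    process_id: int
    process_state: str = "new"
    program_counter: int = 0
    registers: dict = field(default_factory=dict)    # register information
    priority: int = 0                                # scheduling information
    memory_info: dict = field(default_factory=dict)  # page/segment tables, base/limit
    cpu_time_used: float = 0.0                       # accounting information
    open_files: list = field(default_factory=list)   # I/O status information

# The OS keeps the PCBs in a process table, keyed here by process id
process_table = {pcb.process_id: pcb
                 for pcb in [PCB(1), PCB(2, process_state="ready")]}
print(process_table[2].process_state)  # ready
```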

Process Operations
The process operations carried out by an operating system are primarily of two types: process creation and process termination.
Process creation is the task of creating a new process. There are different situations in which a new process is created. A new process can be created at the time of initialisation of the operating system or when system calls such as create-process and fork( ) are initiated by other processes. The process that creates a new process using system calls is called the parent process, while the new process that is created is called the child process. Child processes can further create new processes using system calls. A new process can also be created by an operating system based on a request received from the user. Figure 4.5 shows the hierarchical structure of multiple processes running in a computer system.
The process creation operation is very common in a running computer system because corresponding to every task that is performed, there is a process associated with it. For instance, a new process is created every time a user logs on to a computer system, an application program such as MS Word is initiated, or a document is printed.
Process termination is an operation in which a process is terminated after it has executed its last instruction.
When a process is terminated, the resources that were being utilised by the process are released by the
operating system. When a child process terminates, it sends the status information back to the parent
process before terminating. The child process can also be terminated by the parent process if the task
performed by the child process is no longer needed. In addition, when a parent process terminates, it has
to terminate the child process as well because a child process cannot run when its parent process has been
terminated.

Fig. 4.5 The hierarchical structure of processes

The termination of a process when all its instructions have been executed successfully is called normal
termination. However, there are instances when a process terminates due to some error. This is called
abnormal termination of a process.
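The parent/child pattern can be demonstrated with Python's multiprocessing module, used here as a stand-in for the create-process and fork( ) calls named above; the child's task is arbitrary:

```python
import multiprocessing

def child_task(n):
    """Work performed by the child process; the computation is illustrative."""
    return n * n

if __name__ == "__main__":
    # The parent creates a child process and waits for it to terminate
    child = multiprocessing.Process(target=child_task, args=(5,))
    child.start()
    child.join()           # the parent waits for the child to finish
    print(child.exitcode)  # exit code 0 indicates normal termination
```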

Process Scheduling
Process scheduling is the task performed by an operating system of deciding the priority in which processes in the ready and waiting states are allocated CPU time for their execution. Process scheduling is very important in multiprogramming and multitasking operating systems, where multiple processes are executed simultaneously. Process scheduling ensures maximum utilisation of the CPU, because a process is always running at any specific instant of time. At first, the processes that are to be executed are placed in a queue called the job queue. The processes that are present in the main memory and are ready for CPU allocation are placed in a queue called the ready queue. If a process is waiting for an I/O device, it is placed in a queue called the device queue.
An operating system uses a program called the scheduler for deciding the priority in which a process is allocated CPU time. Schedulers are of three types:
• Long-term scheduler It selects the processes that are to be placed in the ready queue. The long-term scheduler basically decides the priority in which processes must be placed in the main memory.
• Mid-term scheduler It places blocked or suspended processes in the secondary memory of a computer system. The task of moving a process from the main memory to the secondary memory is referred to as swapping out. The task of moving a swapped-out process back from the secondary memory to the main memory is referred to as swapping in. The swapping of processes is performed to ensure the best utilisation of main memory.
• Short-term scheduler It decides the priority in which processes in the ready queue are allocated CPU time for their execution. The short-term scheduler is also referred to as the CPU scheduler.
An operating system uses two types of scheduling policies for process execution: preemptive and non-preemptive. In the preemptive scheduling policy, a low-priority process has to suspend its execution if a high-priority process is waiting in the queue for execution. In the non-preemptive scheduling policy, processes are executed on a first-come-first-served basis, which means the next process is executed only when the currently running process finishes its execution. The selection of the next process, however, may be done considering the associated priorities. Operating systems perform the task of assigning priorities to processes based on certain scheduling algorithms. Some of the key scheduling algorithms are:
• First Come First Served (FCFS) scheduling In this scheduling algorithm, the first process in the queue is processed first.
• Shortest Job First (SJF) scheduling In this scheduling algorithm, the process that requires the shortest CPU time is executed first.
• Priority scheduling In this scheduling algorithm, a priority is assigned to all the processes and the process with the highest priority is executed first. Priority assignment is done on the basis of internal factors, such as CPU and memory requirements, or external factors, such as the user's choice. The priority scheduling algorithm can support either a preemptive or a non-preemptive scheduling policy.
• Round Robin (RR) scheduling In this scheduling algorithm, a process is allocated the CPU for a specific time period called a time slice or time quantum, which is normally 10 to 100 milliseconds. If a process completes its execution within this time slice, it is removed from the queue; otherwise, it has to wait for the next time slice.
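The difference FCFS and SJF make to waiting time can be worked through in a few lines of Python. The burst times are made-up example values; all processes are assumed to arrive at the same time:

```python
def waiting_times(burst_times):
    """FCFS waiting time of each process, given CPU bursts in execution order."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)  # this process waits for all earlier bursts
        elapsed += burst
    return waits

bursts = [24, 3, 3]                  # example CPU bursts in milliseconds
fcfs = waiting_times(bursts)         # FCFS: run in arrival order
sjf = waiting_times(sorted(bursts))  # SJF: shortest burst first

print(sum(fcfs) / len(fcfs))  # average wait under FCFS: 17.0 ms
print(sum(sjf) / len(sjf))    # average wait under SJF: 3.0 ms
```

Running the short jobs first cuts the average waiting time sharply, which is exactly why SJF is attractive when burst times are known.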

Process Synchronisation
Process synchronisation is the task of synchronising the execution of processes in such a manner that no two processes access the same shared data or resource at the same time. When multiple processes run concurrently, they may attempt to gain access to the same shared data or resource. This can lead to inconsistency in the shared data, as the changes made by one process may not be reflected when another process accesses the same data. To avoid such inconsistency, it is important that the processes are synchronised with each other.
One of the important concepts related to process synchronisation is the critical section problem. Each process contains a section of code, called the critical section, through which a specific task, such as changing the value of a global variable or writing data to a file, is performed. To ensure that only a single process enters its critical section at a specific instant of time, the processes need to coordinate with each other by sending requests to enter the critical section. When a process is in its critical section, no other process is allowed to enter the critical section.
Peterson's solution is one solution to the critical section problem involving two processes. It states that when one process is executing its critical section, the other process executes the rest of its code, and vice versa. This ensures that only one process is in the critical section at a particular instant of time.
Locking is another solution to the critical section problem, in which a process acquires a lock before entering its critical section. When the process finishes executing its critical section, it releases the lock. The lock is then available to any other process that wants to enter its critical section. The locking mechanism also ensures that only one process is in the critical section at a particular time.
Another solution to the critical section problem is the semaphore. It is basically a synchronisation tool in which the value of an integer variable called a semaphore is retrieved and set using wait and signal operations. Based on the value of the semaphore variable, a process is allowed to enter its critical section.
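The locking solution can be demonstrated with Python threads (threads rather than full processes, but the critical section idea is the same); the shared counter and iteration counts are illustrative:

```python
import threading

counter = 0              # shared data
lock = threading.Lock()  # guards the critical section

def increment(times):
    global counter
    for _ in range(times):
        with lock:        # acquire the lock before entering the critical section
            counter += 1  # critical section: update the shared data
        # the lock is released here, so another thread may now enter

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: with the lock, no updates are lost
```

Without the lock, the read-modify-write of `counter += 1` from different threads could interleave and some increments would be lost.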
Interprocess Communication
Interprocess communication is the method of communication between processes through which processes
interact with each other for gaining access to shared data and resources. There are two methods of
interprocess communication, shared memory and message passing.
In the shared memory method, a part of memory is shared between the processes. A process can write the data that it wants to share with other processes into the shared memory. Similarly, another process can read the data that has been written there. Figure 4.6 shows the shared memory method of interprocess communication.
In Fig. 4.6, P1 and P2 represent two processes. P1 writes the data that it needs to share with P2 into the shared memory. P2 then reads the data written by P1 from the shared memory.
In the message passing method, a process sends a message to another process for communication.
This method allows the sharing of data between processes in the form of messages. Figure 4.7 shows the
message passing method of interprocess communication.
In Fig. 4.7, P1 sends the shared data in the form of a message to the kernel and then the kernel sends
the message sent by P1 to P2.

Fig. 4.6 The shared memory method of interprocess communication
Fig. 4.7 The message passing method of interprocess communication
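The message passing method can be sketched with Python's multiprocessing.Queue, which plays the role of the kernel-managed channel of Fig. 4.7; the message text is arbitrary:

```python
import multiprocessing

def producer(queue):
    """P1: sends shared data as a message through the kernel-managed queue."""
    queue.put("hello from P1")

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p1 = multiprocessing.Process(target=producer, args=(q,))
    p1.start()
    message = q.get()  # P2 (played here by the parent) receives the message
    p1.join()
    print(message)     # hello from P1
```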

Deadlock
Deadlock is a condition that occurs when multiple processes wait for each other to free up resources and, as a result, all the processes remain halted. Let us understand the concept of deadlock with the help of an example. Suppose there are two processes, P1 and P2, running in a computer system. P1 requests a resource, such as a printer, that is being utilised by P2. As a result, P1 has to wait until P2 completes its processing and frees the resource. At the same time, P2 requests a resource, such as shared data, that has been locked by P1. Thus, both processes end up waiting for each other to free up the required resources. This situation is called a deadlock.
The following are some of the conditions under which a deadlock situation may arise.
• Mutual exclusion In mutual exclusion, processes are not allowed to share resources with each other. This means that if one process has control over a resource, then that resource cannot be used by another process until the first process releases it.
• Hold and wait In this condition, a process takes control of a resource and waits for some other resource or activity to complete.
• No preemption In this condition, a process is not allowed to force some other process to release a resource.
There are a number of methods through which the deadlock condition can be handled. Some of these methods are:
• Ignore deadlock In this method, it is assumed that a deadlock will never occur. There is a good chance that a deadlock may not occur in a computer system for a long period of time; as a result, the ignore-deadlock method can be useful in some cases.
• Detect and recover from deadlock In this method, the deadlock is first detected using an allocation/request graph. This graph represents the allocation of resources to different processes. After a deadlock has been detected, a number of methods can be used to recover from it. One way is preemption, in which a resource held by one process is given to another process. The second way is rollback, in which the operating system keeps a record of process states and makes a process roll back to a previous state, thus eliminating the deadlock situation. The third way is to kill one or more processes to overcome the deadlock situation.
• Avoid deadlock In this method, a process requesting a resource is allocated the resource only if there
is no possibility of deadlock occurrence.
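One common avoidance discipline, not named in the text but widely used in practice, is to make every process acquire its resources in the same global order, which prevents the circular hold-and-wait of the P1/P2 example. A Python sketch with two threads and two locks:

```python
import threading

lock_printer = threading.Lock()  # stands in for the printer resource
lock_data = threading.Lock()     # stands in for the shared data resource

def worker(name, results):
    # Both threads acquire the locks in the SAME global order
    # (printer before data), so neither can hold one lock while
    # waiting for the other in the opposite order.
    with lock_printer:
        with lock_data:
            results.append(name)  # both resources held; do the work

results = []
t1 = threading.Thread(target=worker, args=("P1", results))
t2 = threading.Thread(target=worker, args=("P2", results))
t1.start(); t2.start()
t1.join(); t2.join()

print(sorted(results))  # ['P1', 'P2']: both threads complete, no deadlock
```

If P2 instead acquired the data lock first and the printer lock second, the two threads could each grab one lock and wait forever for the other, reproducing the deadlock described above.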

MEMORY MANAGEMENT

The memory management function of an operating system helps in allocating main memory space to processes and their data at the time of their execution. Along with the allocation of memory space, memory management also performs the following activities:
• Improving the performance of the computer system
• Enabling the execution of multiple processes at the same time
• Sharing the same memory space among different processes
Memory management is one of the most important functions of an operating system because it directly affects the execution time of a process. The execution time of a process depends on the availability of data in the main memory. Therefore, an operating system must perform memory management in such a manner that the essential data is always present in the main memory. An effective memory management system ensures accuracy, availability and consistency of the data imported from the secondary memory to the main memory.
• Correct relocation of data Data should be relocated to and from the main memory in such a manner that the currently running processes are not affected. For example, if two processes are sharing a piece of data, then the memory management system must relocate this data only after ensuring that the two processes are no longer referencing it.
• Protection of data from illegal change The data present in the main memory should be protected against unauthorised access or modification. The memory management system should ensure that a process is able to access only the data for which it has the requisite access rights, and it should be prohibited from accessing the data of other processes.
• Provision to share information An ideal memory management system must facilitate the sharing of data among multiple processes.
• Utilisation of small free spaces A memory management system should be able to apply appropriate defragmentation techniques in order to utilise small chunks of scattered vacant space in the main memory.
Segmentation, paging and swapping are the three key memory management techniques used by
an operating system.

Segmentation
Segmentation refers to the technique of dividing the physical memory space into multiple blocks. Each block has a specific length and is known as a segment. Each segment has a starting address called the base address. The length of a segment determines the available memory space in the segment. Figure 4.8 shows the organisation of segments in a memory unit.

Fig. 4.8 Memory unit having segments

The location of a data value stored in a segment can be determined by the distance of the actual position of the data value from the base address of the segment. This distance is known as the displacement or offset value. In other words, whenever it is required to obtain data from the segmented memory, the actual address of the data is calculated by adding the base address of the segment and the offset value. The base address of the segment and the offset value are specified in a program instruction itself. Figure 4.9 shows how the actual position of an operand in a segment is obtained by adding the base address and the offset value.
Fig. 4.9 Obtaining the actual address of data
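The base-plus-offset calculation of Fig. 4.9 can be expressed directly in code. The segment table below is hypothetical, with each segment described by its base address and length:

```python
# Hypothetical segment table: segment number -> (base address, length)
segment_table = {0: (1000, 400), 1: (2000, 300)}

def physical_address(segment, offset):
    """Actual address = base address of the segment + offset, with a bounds check."""
    base, length = segment_table[segment]
    if offset >= length:
        raise ValueError("offset lies outside the segment")
    return base + offset

print(physical_address(1, 50))  # 2000 + 50 = 2050
```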

Paging

Paging is a technique in which the main memory of the computer system is organised in the form of equal-sized blocks called pages. In this technique, the addresses of the occupied pages of the physical memory are stored in a table known as the page table.
Paging enables the operating system to obtain data from a physical memory location without specifying its lengthy memory address in the instruction. In this technique, a virtual address is used to map to the physical address of the data. The virtual address is specified in the instruction and is shorter than the physical address of the data. It consists of two numbers: the first is the number of a page, called the virtual page, in the page table, and the second is the offset value of the actual data within the page. Figure 4.10 shows how the virtual address is used to obtain the physical address of an occupied page of physical memory using a page table.

Fig. 4.10 Obtaining data from a page using the paging technique
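The virtual-to-physical translation of Fig. 4.10 can be sketched as follows. The page size and the page table contents are assumed example values:

```python
PAGE_SIZE = 4096  # bytes per page; a typical but assumed size

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_address):
    """Split the virtual address into (page, offset) and map it via the page table."""
    page, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[page]             # look up the occupied physical page
    return frame * PAGE_SIZE + offset    # physical address

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```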
Swapping
Swapping is a technique used by an operating system for efficient management of the memory space of a computer system. Swapping involves performing two tasks called swapping in and swapping out. The task of placing pages or blocks of data from the hard disk into the main memory is called swapping in. On the other hand, the task of removing pages or blocks of data from the main memory to the hard disk is called swapping out. The swapping technique is useful when a large program has to be executed or some operations have to be performed on a large file.
The main memory in a computer system is limited. Therefore, to run a large program or to perform some operation on a large file, the operating system swaps in certain pages or blocks of data from the hard disk. To make space for these pages or blocks of data in the main memory, the operating system swaps out the pages or blocks of data that are no longer required in the main memory. The operating system places the swapped-out pages or blocks of data in a swap file. A swap file is space on the hard disk that is used as an extension of the main memory by the operating system. Figure 4.11 shows the technique of swapping used by the operating system for memory management.

Fig. 4.11 Swapping of pages
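Swapping in and swapping out can be simulated with two dictionaries standing in for the main memory and the swap file. The tiny capacity and the oldest-page-out replacement policy are illustrative choices, not how any particular OS works:

```python
MAIN_MEMORY_CAPACITY = 2  # this toy main memory holds only two pages

main_memory = {}  # page id -> data currently in RAM
swap_file = {}    # pages that have been swapped out to disk

def swap_in(page, data=None):
    """Bring a page into main memory, swapping out an old page if RAM is full."""
    if len(main_memory) >= MAIN_MEMORY_CAPACITY:
        victim = next(iter(main_memory))             # oldest page in RAM
        swap_file[victim] = main_memory.pop(victim)  # swap out to the swap file
    # load new data, or retrieve a previously swapped-out page
    main_memory[page] = data if data is not None else swap_file.pop(page)

swap_in("A", "data-A")
swap_in("B", "data-B")
swap_in("C", "data-C")  # RAM is full: page A is swapped out to make room
print(sorted(main_memory))  # ['B', 'C']
print(sorted(swap_file))    # ['A']
```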

FILE MANAGEMENT

File management is defined as the process of manipulating files in a computer system. A file is a collection
of specific information stored in the memory of the computer system. File management includes the process
of creating, modifying and deleting the files. The
following are some of the tasks performed by the file
management function of operating system:
• It helps in creating new files and placing them at
a specific location.
• It helps in easily and quickly locating the files in
the computer system.
• It makes the process of sharing the files among
different users easy.
• It helps store the files in separate folders known as
directories that ensure better organisation of data.
• It helps modify the content as well as the name
of the file as per the user’s requirement.
Figure 4.12 shows the general hierarchy of file storage in an operating system.

Fig. 4.12 The general hierarchy of file storage in an operating system

In Fig. 4.12, the root directory is present at the highest level in the hierarchical structure. It includes all the subdirectories in which the files are stored. A subdirectory is a directory present inside another directory in the file storage system. The directory-based storage system ensures better organisation of files in the memory of the computer system.
The file management function of OS is based on the following concepts:
• File attributes It specifies the characteristics, such as type and location, that completely describe a file.
• File operations It specifies the tasks that can be performed on a file, such as opening and closing a file.
• File access permissions It specifies the access permissions related to a file such as read
and write.
• File systems It specifies the logical method of file storage in a computer system. Some of the
commonly used file systems include FAT and NTFS.

File Attributes
File attributes are the properties associated with a file that specify different information related to a file. The
following are some of the key file attributes:
• Name It specifies the name of a file given by the user at the time of saving it.
• File type It specifies the type of a file such as a Word document or an Excel worksheet.
• Location It specifies the location of a file where it is stored in the memory.
• Size It specifies the size of the file in bytes.
• Date and time It specifies the date and time when the file was created, last modified and last
accessed.
• Read-only It specifies that the file can be opened only for reading purpose.
• Hidden If this attribute of a file is selected, then the file is hidden from the user.
• Archive If this attribute of a file is selected, then the back up of a file is created.
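Many of these attributes can be read programmatically. The following sketch uses Python's standard os.stat call on a temporary file; the file name is hypothetical:

```python
# Reading file attributes (name, size, modification time) with the
# standard library; the file name "demo_attrs.txt" is invented.
import os
import time
import tempfile

path = os.path.join(tempfile.gettempdir(), "demo_attrs.txt")
with open(path, "w") as f:
    f.write("hello")

info = os.stat(path)
name = os.path.basename(path)         # Name
size = info.st_size                   # Size in bytes
modified = time.ctime(info.st_mtime)  # Date and time last modified
os.remove(path)
```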
File Operations
File operations are the various tasks that are performed on files. A user can perform these operations
by using the commands provided by the operating system. The following are some of the typical file
operations:
• Creating It helps in creating a new file at the specified location in a computer system. The new file
could be a Word document, an image file or an Excel worksheet.
• Saving It helps in saving the content written in a file at some specified location. The file can be
saved by giving it a name of our choice.
• Opening It helps in viewing the contents of an existing file.
• Modifying It helps in changing the existing content or adding new content to an existing file.
• Closing It helps in closing an already open file.
• Renaming It helps in changing the name of an existing file.
• Deleting It helps in removing a file from the memory of the computer system.
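The operations listed above can be sketched with Python's built-in file handling; the file and folder names are hypothetical:

```python
# Creating, saving, modifying, opening, renaming and deleting a file
# using standard library calls only.
import os
import tempfile

folder = tempfile.mkdtemp()
path = os.path.join(folder, "notes.txt")

with open(path, "w") as f:   # creating and saving
    f.write("first line\n")
with open(path, "a") as f:   # modifying (adding new content)
    f.write("second line\n")
with open(path) as f:        # opening and reading; closed on exit
    contents = f.read()

new_path = os.path.join(folder, "notes_final.txt")
os.rename(path, new_path)    # renaming
os.remove(new_path)          # deleting
```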
File Access Permissions
File access permissions help specify the manner in which a user can access a file. These are the access rights
that allow us to read, write or execute a file. The following are some of the typical file access permissions:
• Read It allows a user to only read the content of an existing file.
• Write It allows a user to only modify the content of an existing file.
• Execute It allows a user to run an existing file stored in the computer system.
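As an illustrative sketch, the read and write permissions of a file can be inspected and changed with Python's standard library. The permission bits used below are Unix-style assumptions and behave differently on other systems:

```python
# Checking and changing access permissions; "report.txt" and the
# chosen permission bits are assumptions for illustration.
import os
import stat
import tempfile

path = os.path.join(tempfile.mkdtemp(), "report.txt")
open(path, "w").close()

os.chmod(path, stat.S_IRUSR)         # owner: read only
readable = os.access(path, os.R_OK)  # True
writable = os.access(path, os.W_OK)  # False on Unix-like systems

os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # restore write access
os.remove(path)
```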
File Systems
File systems are used by an operating system to store and organise the various files and their information
on a hard disk. The following are the two different file systems that are used to organise files in a computer
system:
• File Allocation Table (FAT) It is a method used for organising the files and folders in the form of a table, which is known as FAT. This type of system is used for disks that are smaller in size and contain simple folders. The different types of FAT systems are FAT12, FAT16 and FAT32.
• New Technology File System (NTFS) This file system is specifically designed for large hard disks for
performing basic file operations, such as reading, writing, modifying, saving, etc., quickly and
efficiently. NTFS overcomes the drawbacks of the FAT system.

DEVICE MANAGEMENT

Device management is another important function of the operating system. Device management is
responsible for managing all the hardware devices of the computer system. It may include the management
of the storage devices as well as the management of all the input/output devices of the computer system. It
is the responsibility of the operating system to keep a track of the status of all the devices in the computer
system. The status of any computer device, internal or external, may be either free or busy. If a device
requested by a process is free at a specific instant of time, the operating system allocates it to the process.
An operating system manages the devices in a computer system with the help of device controllers and
device drivers. Each device in the computer system is equipped with a corresponding device controller.
For example, the various device controllers in a computer system may be disk controller, printer controller,
tape-drive controller and memory controller. All these device controllers are connected with each other
through a system bus. The device controllers are actually the hardware components that contain some
buffer registers to store the data temporarily. The transfer of data between a running process and the various
devices of the computer system is accomplished only through these device controllers.
Apart from device controllers, device drivers also help the operating system to manage and allocate the
devices to different processes in an efficient manner. Device drivers are the software programs that are used
by the operating system to control the functioning of various devices in a uniform manner. An operating
system communicates with the device controllers with the help of device drivers while allocating the devices
to the various processes running on the computer system. The device drivers may also be regarded as the
software programs acting as an intermediary between the processes and the device controllers.
Figure 4.13 shows the mechanism of allocating a device to a process by the operating system.

Fig. 4.13 Device management

Figure 4.13 shows how a device or resource can be allocated to an application program by the operating
system. If an application program or process, during its execution, requires a device to perform a specific
task, it generates a request. This request is handled by the operating system through the corresponding device
driver and device controller.
The device controller used in the device management operation usually includes three different registers:
command, status and data. The command register is used to contain the information regarding the request
issued by an application program. It is the only register of device controller that specifies the type of service
requested by the application program. The status register is used to keep track of the status of the device.
It contains information about whether or not the device is free to service the request of an application
program. The data register holds the necessary data required to be transferred between the application
program and the corresponding device.
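The interplay of the three registers can be sketched as a toy Python class. The register names follow the text; the request-handling logic and the example job are invented for illustration:

```python
# A toy model of a device controller's command, status and data
# registers; everything beyond the register names is hypothetical.

class DeviceController:
    def __init__(self):
        self.command = None   # request issued by the program
        self.status = "free"  # free or busy
        self.data = None      # data being transferred

    def issue(self, command, data):
        """Accept a request only when the device is free."""
        if self.status != "free":
            return False
        self.command, self.data, self.status = command, data, "busy"
        return True

    def complete(self):
        """Finish the request and mark the device free again."""
        result, self.status, self.command = self.data, "free", None
        return result

printer = DeviceController()
accepted = printer.issue("print", "quarterly report")
rejected = printer.issue("print", "another job")  # device is busy
output = printer.complete()
```

A second request issued while the status register reads "busy" is refused, mirroring how the operating system checks device status before allocation.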
Apart from handling the commands issued by the application program, the other major responsibility of
the device management function is to implement the Application Programming Interface (API). API is the
only interface through which the application programs running on the computer system can communicate
with the kernel of the operating system to issue the various device-related requests. It can also be regarded
as the set of functions or routines that can be called by the application programs to access various services
required for their proper execution. This API provides a uniform interface for accessing all the device
controllers in the computer system.
SECURITY MANAGEMENT

The security management function of an operating system helps in implementing mechanisms that
secure and protect the computer system internally as well as externally. Therefore, an operating system is
responsible for securing the system at two different levels, which are internal security and external security.
Internal security refers to the protection of activities of one process from the activities of another
process. The term internal security may also be regarded as system protection. The internal security of the
computer system also ensures the reliability of the computer system. There may be a number of processes
running in the computer system concurrently. It is the responsibility of the operating system to make sure
that only one process at a time has access to a particular resource of the computer system. Most of the
operating systems use the concept of least privilege to implement internal security. Under this concept, each process or program running in the computer system is assigned only the minimum privileges it needs to access the resources required for its task. If two processes running on the computer system send a request to the
operating system for the allocation of same device or resource, the kernel of the operating system checks
the privileges of both the processes. The process with more privileges will be serviced by the operating
system. The process with fewer privileges will be blocked by the operating system from gaining access to
the particular computer device.
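The privilege comparison described here can be sketched as follows; the process names and numeric privilege levels are invented for illustration:

```python
# A simplified sketch of granting one device to the process with
# more privileges; names and levels are hypothetical.

def allocate_device(requests):
    """Given (process_name, privilege) pairs competing for one
    device, grant it to the process with more privileges."""
    winner = max(requests, key=lambda r: r[1])
    return winner[0]

granted = allocate_device([("backup_daemon", 2), ("kernel_logger", 5)])
```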
External security refers to the implementation of mechanisms for securing the data and programs stored
in the computer system as well as the various resources of the computer system against unauthorised access.
The term external security may also be regarded as system security. External security is particularly required
when a computer system is either on a network or connected to the Internet. The system security can be breached either intentionally or accidentally. It is easier for the operating system to implement security mechanisms for accidental security breaches. Most of the external security mechanisms implemented by the operating system serve only to protect the computer system against accidental misuse. The various external
threats, accidental or intentional, to the computer system may include reading, writing or deletion of computer
data by an unauthorised user and accessing of computer resources or devices by an unauthorised user. It is
not possible to prevent the computer system from external threats only at the operating system level. Apart
from the operating system, the three other major levels at which external security should be implemented are
physical, human and network. The most common external security mechanism employed by most operating systems is a software firewall. A software firewall is software included in the operating system that is specially designed to prevent unauthorised users or programs from gaining access to the data and programs stored in the computer system.

TYPES OF OPERATING SYSTEMS

Many different types of operating systems have evolved to date. As computers have improved in terms of speed, reliability and cost, so have operating systems in terms of their capabilities. The operating systems supported by first-generation computers were not very powerful. They were designed and developed to cater to the needs of a single user at a time. Also, the users of these operating systems could perform only one task at a time. However, there has been a tremendous amount of improvement in operating systems in recent years. The modern-day operating systems allow multiple users to carry out
multiple tasks simultaneously. Based on their capabilities and the types of applications supported, the
operating systems can be divided into the following six major categories:
• Batch processing operating systems
• Multi-user operating systems
• Multitasking operating systems
• Real-time operating systems
• Multiprocessor operating systems
• Embedded operating systems
Batch Processing Operating Systems
The batch processing operating systems are capable of executing only one job at a time. The jobs or the
programs submitted by different users are grouped into batches and one batch of jobs is provided as input to
the computer system at a time. The jobs in the batch are processed on a first-come, first-served basis. After
getting an appropriate command from the operator, the batch processing operating system starts executing
the jobs one-by-one. The execution of a particular job generally involves three major activities, which are
reading the job from the input device, executing the job by the system and printing the calculated result on
to the output device. After the execution of one job is complete, the operating system automatically fetches
the next job from the batch without any human intervention.
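The first-come-first-served execution of a batch can be sketched with a simple queue in Python; the job names and the "execution" step are placeholders:

```python
# A sketch of batch processing: jobs run one by one, in submission
# order, with no human intervention between them.
from collections import deque

batch = deque(["job1", "job2", "job3"])  # jobs in submission order
completed = []

while batch:
    job = batch.popleft()     # read the next job
    result = job + " done"    # execute the job (placeholder work)
    completed.append(result)  # record/print the result
```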
The following are some of the advantages of batch processing operating systems:
• The computer systems employing the batch processing operating systems were very efficient
computer systems of their times because the idle time for these systems was very small.
• These operating systems facilitated the execution of jobs in an organised manner.
The following are some of the disadvantages of batch processing operating systems:
• The jobs are processed only in the order in which they are placed in a batch and not as per their
priority.
• The debugging of a program at execution time is not possible in these operating systems.
• The executing jobs may enter an infinite loop, as each job is not associated with a proper timer.
Multi-user Operating Systems
The multi-user operating systems enable multiple users to use the resources of a computer system at the same
time. In other words, a multi-user operating system allows a number of users to work simultaneously on the
same computer system. These types of operating systems are specially designed for the multi-user systems.
A multi-user system is usually implemented by following the multi-terminal configuration. In this type of configuration, a single powerful computer system is connected to multiple terminals through serial ports. This computer system is responsible for processing the different requests generated by the various terminals at the same time. The devices connected to the various terminals include a keyboard, mouse and monitor. The central computer system is equipped with a fast processor and a memory of large capacity for catering to the multiple requests of the end users. Examples of multi-user operating systems include Unix, Linux, Windows 2000 and VM-386.
The following are some of the advantages of the multi-user operating systems:
• It allows the resources of the computer system to be utilised in an efficient manner.
• It enhances the overall productivity of the various users by providing simultaneous access to the various
computer resources.
The following are the disadvantages of the multi-user operating systems:
• The configuration of the computer system employing multi-user operating system is complex and
hence, is difficult to handle and maintain.
• This type of system may result in an inconsistent data if the activities of one user are not protected
from another user.
• This type of operating system is required to have robust security mechanisms.
Multitasking Operating Systems
The multitasking operating systems allow a user to carry out multiple tasks at the same time on a single computer system. The multitasking operating systems are also known by several other names, such as multiprocessing, multiprogramming, concurrent or process scheduling operating systems. The first multitasking operating systems evolved during the 1960s. The number of tasks or processes that can be processed simultaneously in this type of operating system depends upon various factors, such as the speed of the CPU, the capacity of the memory, and the size of the programs.
In this type of operating system, the different processes are executed simultaneously by implementing
the concept of time slicing. According to this concept, a regular slice of CPU time is provided to each of the
processes running in the computer system. Multitasking operating systems can be of two different types,
which are preemptive multitasking operating systems and cooperative multitasking operating systems. In
preemptive multitasking operating system, slices of CPU time are allocated to the various processes on some
priority basis. These priorities are assigned to the various processes in such a manner that the overall
efficiency of the system is maintained. In a cooperative multitasking operating system, it is up to each process to decide whether or not to relinquish CPU control to the other running processes. Examples of
multitasking operating system include Unix, Linux, Windows 2000, and Windows XP.
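The time-slicing concept can be sketched as a small round-robin simulation in Python; the slice length and the process burst times are invented for illustration:

```python
# A sketch of time slicing: each process receives a fixed slice of
# CPU time in turn until its work is finished.
from collections import deque

TIME_SLICE = 2
ready = deque([("P1", 5), ("P2", 3)])  # (process, remaining time)
schedule = []                          # order in which slices run

while ready:
    name, remaining = ready.popleft()
    schedule.append(name)              # run one slice of this process
    remaining -= TIME_SLICE
    if remaining > 0:                  # not finished: back of the queue
        ready.append((name, remaining))
```

With a slice of 2 units, P1 (needing 5 units) and P2 (needing 3 units) alternate until both complete, giving the schedule P1, P2, P1, P2, P1.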
The following are some of the advantages of multitasking operating systems:
• It helps in increasing the overall performance of the computer system.
• It helps in increasing the overall productivity of the user by performing a number of tasks at the same
time.
The following are some of the disadvantages of multitasking operating systems:
• A large amount of memory is required to execute several programs at the same time.
• Some mechanism needs to be implemented to ensure that the activities of one process do not interfere
with the activities of another process.

Real-time Operating Systems


The real-time operating systems are similar to multitasking operating systems in their functioning. However,
these operating systems are specially designed and developed for handling real-time applications or
embedded applications. Real-time applications are those critical applications that are required to be executed within a specific period of time. Therefore, time is the major constraint for these applications. The
different examples of real-time applications are industrial robots, spacecraft, industrial control applications and scientific research equipment.
The real-time operating systems can be of two different types, hard real-time operating system, and soft
real-time operating system. In the hard real-time operating system, it is necessary to perform a task in the
specified amount of time, i.e. within the given deadline. On the other hand, in the soft real-time operating
system, a task can be performed even after its allocated time has elapsed.
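The hard/soft distinction can be sketched in a few lines of Python; the finish times and deadlines are invented for illustration:

```python
# A hard real-time task that misses its deadline counts as a
# failure; a soft real-time task merely degrades.

def evaluate(finish_time, deadline, hard):
    """Classify the outcome of a real-time task."""
    if finish_time <= deadline:
        return "met"
    return "failed" if hard else "degraded"

hard_result = evaluate(finish_time=12, deadline=10, hard=True)
soft_result = evaluate(finish_time=12, deadline=10, hard=False)
```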
The following are some of the examples of real-time operating system:
• MTOS
• Lynx
• RTX
The following are some of the advantages of the real-time operating systems:
• It is easier to design, develop and execute real-time applications under a real-time operating system than under other types of operating systems.
• The real-time operating systems are usually more compact as compared to other operating systems.
Thus, these systems require less memory space.
The following are some of the disadvantages of real-time operating systems:
• It is primarily focused on optimising the execution time of an application and thus, it sometimes
overlooks some of the other critical factors related to the overall efficiency of the computer system.
• It is only used for providing some dedicated functionality, and thus, cannot be used as a general-
purpose operating system.

Multiprocessor Operating Systems


The multiprocessor operating system allows the use of multiple CPUs in a computer system for executing
multiple processes at the same time. By using more than one CPU, the processes are executed faster than on a computer system that must share a single CPU among all of them.
The following are some of the examples of the multiprocessor operating system:
• Linux
• Unix
• Windows 2000
The following are some of the advantages of multiprocessor operating systems:
• It helps in improving the overall performance and throughput of the computer system.
• It helps in increasing the reliability of the computer system. If one CPU of the computer system fails,
the other CPU takes control and executes the currently running process.
The following are some of the disadvantages of the multiprocessor operating systems:
• The cost of the computer systems employing multiprocessor operating systems is very high.
• A large amount of memory is required for running and executing several user programs.
Embedded Operating Systems
The embedded operating systems are somewhat similar to real-time operating systems. The embedded
operating system is installed on an embedded computer system, which is primarily used for performing
computational tasks in electronic devices. These operating systems provide limited functionality that
is required for the corresponding embedded computer system. The other common functions that a usual
operating system supports are not found in these operating systems.
The following are some of the examples of embedded operating systems:
• Palm OS
• Windows CE
The following are some of the advantages of embedded operating systems:
• These operating systems allow the implementation of embedded systems in an efficient manner.
• The computer system with embedded operating system is easy to use and maintain.
The following are some of the disadvantages of embedded operating systems:
• It is only possible to perform some specific operations with these operating systems.
• These operating systems cannot be used in frequently changing environments.

PROVIDING USER INTERFACE

User Interface (UI) facilitates communication between an application and its users by acting as an
intermediary between them. Each application including the operating system is provided with a specific
UI for effective communication. The two basic functions of the UI of any application are to take the inputs
from the user and to provide the outputs to the user. However, the types of inputs taken by the UI and the
types of outputs provided by the UI may vary from one application to another.
The UI of any operating system can be classified into one of the following types:
• Graphical User Interface (GUI)
• Command Line Interface (CLI)
Graphical User Interface
GUI is a type of UI that enables the users to interact with the operating system by means of point-and-click operations. A GUI contains several icons providing pictorial representations of various objects, such as files, directories and devices. These graphical icons can be manipulated to perform different types of tasks. The icons are usually manipulated by the users using a suitable pointing device, such as a mouse, trackball, touch screen or light pen. Other input devices, like the keyboard, can also be used to manipulate these graphical icons. GUIs are considered to be very user-friendly interfaces because each object is represented with a corresponding icon. Unlike with other UIs, the users need not provide text commands for executing tasks.
Figure 4.14 shows the GUI of Windows XP operating system.

Fig. 4.14 The user interface of Windows XP operating system

The following are some of the advantages of GUI-based operating systems:


• The GUI interface is easy to understand and even the new users can operate on them on their own.
• The GUI interface visually acknowledges and confirms each type of action performed by the users. For example, when the user deletes a file in the Windows operating system, the operating system asks for confirmation before deleting it.
• The GUI interface enables the users to perform a number of tasks at the same time. This feature of
operating system is also known as multitasking.
Command Line Interface
CLI is a type of UI that enables the users to interact with the operating system by issuing some specific
commands. In order to perform a task in this interface, the user needs to type a command at the command
line. As soon as the user presses the Enter key, the command is received by the command line interpreter.
The command line interpreter is a software program that is responsible for receiving and processing the
commands issued by the user. After processing the command, the command line interpreter displays the command prompt again along with the output of the previous command issued by the user. The disadvantage of the CLI is that the user needs to remember a lot of commands to interact with the operating system. Therefore, these types of interfaces are not considered very friendly from the user's perspective.

Fig. 4.15 The command line interface of MS-DOS

Figure 4.15 shows the command line user interface of MS-DOS. In order to perform a task, we need to type a command at the command prompt denoted by C:\>. For example, to copy a text file, say, a1.txt, from the C drive of our computer system to the D drive, we need to type the copy command at the command prompt, as shown in Fig. 4.16.
Figure 4.16 shows that a text file from the C drive of the computer system has been copied to the D drive of the computer system. However, before typing the copy command at the command prompt, we need to make sure that the file, a1.txt, exists in the C drive of the computer system.
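For comparison, the same kind of copy operation can be scripted with Python's standard shutil module; the temporary folders below merely stand in for the C and D drives:

```python
# Copying a file between two folders, analogous to the DOS
# "copy" command; the folders stand in for the C and D drives.
import os
import shutil
import tempfile

source_dir = tempfile.mkdtemp()  # stands in for the C drive
target_dir = tempfile.mkdtemp()  # stands in for the D drive
source = os.path.join(source_dir, "a1.txt")
with open(source, "w") as f:
    f.write("sample text")

shutil.copy(source, target_dir)  # like: copy C:\a1.txt D:\
copied = os.path.exists(os.path.join(target_dir, "a1.txt"))
```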

POPULAR OPERATING SYSTEMS

To date, many operating systems have been developed that suit different requirements of the users. Some of
these operating systems became quite popular while others did not do well. The following are some of the
popular operating systems:
• MS-DOS
• UNIX
• Windows

Fig. 4.16 Copying a file in MS-DOS

MS-DOS
MS-DOS was developed and introduced by Microsoft in 1981. It is a single-user and single-tasking
operating system developed for personal computers. MS-DOS was specifically designed for the family of
Intel 8086 microprocessors. This operating system provides a command line user interface, which means that
a user needs to type a command at the command line for performing a specific task. The CLI of MS-DOS is
more commonly known as the DOS prompt. The user interface of MS-DOS is simple but not very user-friendly because of its non-graphical nature. The user has to issue a command to carry out even a simple task.
The command prompt of MS-DOS only allows the execution of files with the extensions .COM (Command files), .BAT (Batch files) and .EXE (Executable files). The structure of MS-DOS comprises the
following programs:
• IO.SYS It is an important hidden and read-only system file of MS-DOS that is used to start the computer system. It is also responsible for the efficient management and allocation of the hardware resources through the use of appropriate device drivers.
• MSDOS.SYS It is another hidden and read-only system file that is executed immediately after the execution of the IO.SYS file is finished. MSDOS.SYS acts as the kernel of MS-DOS. It is responsible for managing the memory, processor and input/output devices of the computer system.
• CONFIG.SYS It is a system file that is used to configure various hardware components of the computer
system so that they can be used by the various applications.
• COMMAND.COM It is the command interpreter that is used to read and interpret the various
commands issued by the users.
• AUTOEXEC.BAT It is a batch file consisting of a list of commands that is executed automatically as
the computer system starts up.
The use of various features of MS-DOS is discussed in Chapter 12.

UNIX
UNIX is an operating system that allows several users to perform a number of tasks simultaneously. The first version of UNIX was introduced during the 1970s. However, since then, it has been under constant development to further improve its functionality. The UNIX operating system provides a GUI that enables its users to work in a more convenient environment. UNIX is most suitable for computers that are connected to a Local Area Network (LAN) for performing scientific and business-related operations. It can also be implemented on personal computers. The following are the core components of the UNIX operating system:

• Kernel It is the central part of the UNIX operating system that manages and controls the communication between the various hardware and software components of the computer system. The other major functions performed by the kernel are process management, memory management and device management.

• Shell It is the user interface of the UNIX operating system that acts as an intermediary between the user and the kernel of the operating system. The shell is the program in the UNIX operating system that takes the commands issued by the users and interprets them to produce the desired result.

• Files and processes The UNIX operating system arranges everything in terms of files and processes. A directory in this operating system is also considered a file that is used to house other files within it. A process is usually a program executed under the UNIX operating system. Several processes can be executed simultaneously in this operating system and are identified by a unique Process Identifier (PID) assigned to them.
Figure 4.17 shows the directory structure of UNIX operating system.
The UNIX operating system supports a hierarchical directory structure in the form of a tree for arranging different files in the computer system. The root of the tree is always denoted by a slash (/). The current working directory of the user is denoted by home. There can be several home directories corresponding to the different users of the UNIX operating system. All the files and directories under the home directory belong to a particular user. The path of any file or directory in the UNIX operating system always starts with the root (/). For example, the full path of the file word.doc can be represented as /home/its/ag2/mmdata/word.doc.
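As a small sketch, this path from the text can be decomposed with Python's standard posixpath module, which always follows the UNIX path conventions described above:

```python
# Decomposing an absolute UNIX path into its components.
import posixpath

path = "/home/its/ag2/mmdata/word.doc"
parts = path.strip("/").split("/")   # components below the root
directory = posixpath.dirname(path)  # enclosing directory
filename = posixpath.basename(path)  # the file itself
```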
The following are some of the significant features of UNIX operating system:
• It allows multiple users to work simultaneously.
• It allows the execution of several programs and processes at the same time to ensure efficient
utilisation of the processor.
• It implements the concept of virtual memory in an efficient manner. This feature enables the UNIX
operating system to execute a program whose size is larger than the main memory of the computer
system.

Fig. 4.17 UNIX directory structure

Windows
Microsoft has provided many operating systems to cater to the needs of different users. Microsoft is a well-known name in the development of operating systems as well as various software applications. Initially,
Microsoft introduced Windows 1.x, Windows 2.x and Windows 386 operating systems. However, these
operating systems lacked certain desirable features, such as networking and interactive user interface.
Microsoft continued to work towards developing an operating system that met the desirable features of
users and came up with a new operating system in the year 1993, which was known as Windows NT 3.1.
This operating system was specially designed for the advanced users performing various business and
scientific operations. After the release of Windows NT 3.1, several other operating systems were introduced
by Microsoft in the successive years, each with its own unique features. Table 4.1 lists some of the other important Windows operating systems introduced by Microsoft with their release dates and significant features.
Table 4.1 Microsoft Windows operating systems

Name of operating system Date of release Significant features

Windows 95 August, 1995 • 32-bit file system
• Multitasking
• Object Linking and Embedding (OLE)
• Plug and play
• Optimised memory management
Windows 98 June, 1998 • 32-bit Data Link Control (DLC) protocol
• Improved GUI
• Improved online communication through various tools, such as Outlook Express, Personal Web Server and Web Publishing Wizard
• Multiple display support
• Windows update
Windows 2000 February, 2000 • More reliable against application failures
• Improved Windows Explorer
• Secure file system using encryption
• Microsoft Management Console (MMC)
• Improved maintenance operations
Windows ME September, 2000 • System restoration against failure
• Universal plug and play
• Automatic updates
• Image preview
Windows XP October, 2001 • Attractive desktop and user interface
• System restore
• Windows firewall
• Files and settings transfer wizard
Windows Server 2003 April, 2003 • Enhanced Internet Information Services (IIS)
• Enhanced Microsoft Message Queuing (MSMQ)
• Enhanced active directory support
• Watchdog timer
Windows Vista November, 2006 • Multilingual user interface
• Enhanced search engine
• Enhanced Internet Explorer
• Enhanced Windows Media Player
• Enhanced Windows update
• Windows system assessment tool

13.1 INTRODUCTION
Computers can perform a variety of tasks. However,
they cannot perform any of them on their own. As
we know, computers have no common sense and they
cannot think. They need clear-cut instructions to tell
them what to do, how to do it and when to do it. A set of
instructions to carry out these functions is called a
computer program.
The communication between two parties,
whether they are machines or human beings, always
needs a common language or terminology. The
language used in the communication of instructions
to a computer is known as a computer language or
programming language. There are many different
types of languages available today. A computer
program can be written using any of the
programming languages, depending upon the
task to be performed and the knowledge of the person developing the program. The process of writing
instructions using a computer language is known as programming or coding. The person who writes such
instructions is referred to as a programmer.
We know that natural languages such as English, Hindi or Tamil have a set of characters and use rules
known as grammar in framing sentences and statements. Similarly, every programming language has a set of
characters and rules, known as its syntax, that must be adhered to by programmers while developing computer programs.
Although during the initial years of computer programming all instructions were written in
machine language, a large number of different types of programming languages have been developed during
the last six decades. Each one of them has its own unique features and specific applications. In this chapter,
we shall briefly discuss the various types of programming languages, their evolution and characteristics, and
how they are used to solve a problem using a computer.

13.2 HISTORY OF PROGRAMMING LANGUAGES

The history of programming languages is interlinked with the evolution of computer systems. As the
computer systems became smaller, faster and cheaper with time, the programming languages also became
more and more user-friendly. Ada Augusta Lovelace, an associate of Charles Babbage, is considered
the first computer programmer in the history of programming languages. In the year 1843, Ada Augusta
Lovelace wrote a set of instructions to program the analytical engine designed by Charles Babbage.
This computer program was used to transform the data entered by the users into binary form before
being processed by the computer system. This program increased the efficiency and the productivity of
the analytical engine by automating various mathematical tasks. Later, in the year 1946, Konrad Zuse, a
German engineer, developed a programming language known as Plankalkul. It was developed to target
the various scientific, business and engineering needs of the users. It is considered the first complete
programming language, supporting various programming constructs as well as the concept of data
structures. The various programming constructs were implemented in this language with the help
of Boolean algebra.
During the 1940s, machine languages were developed to program the computer system. The machine
languages which used binary codes 0s and 1s to represent instructions were regarded as low-level
programming languages. The instructions written in the machine language could be executed directly
by the CPU of the computer system. These languages were hardware-dependent languages. Therefore, it was
not possible to run a program developed for one computer system on another computer system. This is
because the internal architecture of one computer system may be different from that of another.
The development of programs in machine languages was not an easy task for programmers. One was
required to have thorough knowledge of the internal architecture of the computer system before developing
a program in machine language.
During the 1950s, assembly language, which is another low-level programming language, was developed
to program the computer systems. The assembly language used the concept of mnemonics to write the
instructions of a computer program. Mnemonics refer to the symbolic names that are used to replace the
machine language code. The programmers enjoyed working with this programming language because it was
easy to develop a program in the assembly language as compared to the machine language. However, unlike
machine language programs, assembly language programs could not be directly executed by the CPU of the
computer system and required a software program to convert them into machine-understandable form.
During the period between 1950 and 1960, many high-level programming languages were developed
to cater to the needs of the users of various disciplines, such as business, science and engineering. In
1951, Grace Hopper, an American computer scientist, started working towards designing a compiler called
A-0, and in the year 1957, developed a high-level programming language known as MATH-MATIC. In 1952,
another programming system known as AUTOCODE was developed by Alick E. Glennie. Grace Hopper
is considered the first person to have put serious effort towards the development of a high-level
programming language.
In the year 1957, another popular high-level programming language known as FORTRAN (FORmula
TRANslation) was developed. During its era, it was the only high-level programming language that became
hugely popular among its users. FORTRAN was developed by John Backus and his team at International
Business Machines (IBM). FORTRAN was best suited for solving problems in the field of scientific and
numerical analysis. Another high-level programming language known as ALGOL (ALGOrithmic Language)
was developed in the year 1958. Some other high-level languages that evolved during this era were LISt
Processing (LISP) in 1958, Common Business Oriented Language (COBOL) in 1959 and ALGOL 60 in
1960.
In the next decade, from 1960 to 1970, more high-level programming languages evolved. In the year 1964,
the Beginner's All-purpose Symbolic Instruction Code (BASIC) was designed by John G. Kemeny and
Thomas E. Kurtz at Dartmouth College. It was a general-purpose programming language that was very simple
to use. In the same year, another powerful high-level programming language, PL/I, with many rich
programming features such as complex data types and methods, was designed for developing engineering and
business applications. PL/I was considered to have the best features of its ancestor programming languages:
COBOL, FORTRAN, and ALGOL 60. The other programming languages that evolved during this era were
Simula I, Simula 67, ALGOL 68 and APL.
The period between 1970 and 1980 was actually the golden era for the development of high-level
programming languages. This period saw the birth of many general-purpose and powerful high-level
programming languages. In the early 1970s, a procedural programming language, Pascal, was developed
by Niklaus Wirth. This programming language was provided with strong data structures and pointers,
which helped in utilizing the memory of the computer system in an efficient manner. In the year 1972, Dennis
Ritchie developed a powerful procedural and block structured programming language known as
C. C is still very popular among the users for developing system as well as application software. In 1974,
IBM developed Structured Query Language (SQL) that was used for performing various operations on the
databases, such as creating, retrieving, deleting and updating. Apart from these programming languages,
some other high-level programming languages that evolved during this era were Forth, Smalltalk, and
Prolog.
During the next decade, from 1980 to 1990, the focus of development of high-level programming
languages shifted towards enhancing the performance and design methodology. The languages of this period
used modular approach for designing large-scale applications. The modular approach of program design can
be regarded as a design methodology, which divides the whole system into smaller parts that could be
developed independently. This approach of designing software applications is still employed by modern
programming languages. Some of the high-level programming languages that evolved during this era include
Ada, C++, Perl and Eiffel.
The high-level programming languages developed and designed in the 1990s are considered as the
fifth generation programming languages. During this period, Internet technology evolved tremendously.
Therefore, the basic purpose of the programming languages of this period was to develop web-based
applications. However, these languages could also be used for the development of desktop applications. The
important high-level programming languages of this era are Java, VB and C#. Most of the programming
languages of this era employed object-oriented programming paradigm for designing and developing robust
and reliable software applications.
Table 4.1 summarises the history of development of programming languages:
Table 4.1 The evolution of programming languages

Period of employment | Programming language | Characteristics

1940s: Machine language
• Machine dependent
• Faster execution
• Difficult to use and understand
• More prone to errors

1950s: Assembly language
• Machine dependent
• Faster execution
• More prone to errors
• Relatively simple to use

1950–1970: FORTRAN, LISP, COBOL, ALGOL 60, BASIC, APL
• High-level languages
• Easy to develop and understand programs
• Less prone to errors

1970–1990: C, C++, Forth, Prolog, Smalltalk, Ada, Perl, SQL
• Very high-level languages
• Easier to learn
• Highly portable

1990s: Java, HTML, VB, PHP, XML, C#
• Internet-based languages
• Object-oriented languages
• More efficient
• Reliable and robust

GENERATIONS OF PROGRAMMING LANGUAGES

Programming languages have been developed over the years in a phased manner. Each phase of development
has made programming languages more user-friendly, easier to use and more powerful. Each phase of
improvement made in the development of programming languages can be referred to as a generation.
Programming languages, in terms of their performance, reliability and robustness, can be grouped into five
different generations:
• First generation languages (1GL)
• Second generation languages (2GL)
• Third generation languages (3GL)
• Fourth generation languages (4GL)
• Fifth generation languages (5GL)
First Generation: Machine Languages
The first generation programming languages are also called low-level programming languages because they
were used to program computer systems at a very low level of abstraction, i.e., at the machine level.
The machine language, also referred to as the native language of the computer system, is the first generation
programming language. In machine language, a programmer can issue instructions to the computer
system in binary form only. Therefore, machine language programming deals with only two digits,
0 and 1. Machine language programs were entered into the computer system by setting the appropriate
switches available on the front panel of the system. These switches are actually devices used to alter the course
of the flow of electric current. The enabled state of a switch represents the binary value 1 and the disabled
state represents the binary value 0. The programs written in machine language are directly
executed by the CPU of the computer system and therefore, unlike with modern programming languages, there
is no need for a translator. Figure 4.1 shows the typical format of a machine
language instruction.

Fig. 4.1 Machine instruction format

As seen in the figure, the instruction in the machine language is made up of two parts only, opcode
and operand. The opcode part of the machine language instruction specifies the operation to be performed by
the computer system and the operand part of the machine language instruction specifies the data on which
the operation is to be performed. However, the instruction format of any instruction in the machine language
strongly depends upon the CPU architecture.
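As a toy illustration of this opcode/operand split, the following Python sketch decodes an 8-bit instruction whose high four bits hold the opcode and whose low four bits hold the operand. The field widths and the opcode table here are invented for illustration only; real instruction formats are defined by the CPU architecture.

```python
# Toy machine instruction: high 4 bits = opcode, low 4 bits = operand.
# The opcode numbers below are hypothetical, not a real instruction set.
OPCODES = {0b0001: "LOAD", 0b0010: "ADD", 0b0011: "STORE"}

def decode(instruction):
    opcode = (instruction >> 4) & 0xF   # which operation to perform
    operand = instruction & 0xF         # the data/address it acts on
    return OPCODES[opcode], operand

print(decode(0b00100101))  # ('ADD', 5)
```

The same idea scales up to real machines, where the opcode width, the number of operands and their meaning all vary with the CPU design.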
The advantages of the first generation programming languages are:
• They are translation free and can be directly executed by the computers.
• The programs written in these languages are executed very speedily and efficiently by the CPU of the
computer system.
• The programs written in these languages utilise the memory in an efficient manner because it is
possible to keep track of each bit of data.
There are many disadvantages of using the first generation programming languages. They include:
• It is very difficult to develop a program in the machine language.
• The programs developed in these languages cannot be understood very easily by a person, who has
not actually developed these programs.
• The programs written in these languages are so prone to frequent errors that they are very difficult to
maintain.
• The errors in the programs developed in these languages cannot be detected and corrected easily.
• A programmer has to write a large number of instructions for executing even a simple task in these
languages. Therefore, we can say that these languages result in poor productivity while developing
programs.
• The programs developed in these languages are hardware dependent and thus they are non-portable.
Due to these limitations, machine languages are very rarely used for developing application programs.
Second Generation: Assembly Languages
Like the first generation programming languages, the second generation programming languages also
belong to the category of low-level programming languages. The second generation programming languages
comprise the assembly languages, which use the concept of mnemonics for writing programs. Similar to the
machine language, the programmer of assembly language needs internal knowledge of the CPU
registers and the instruction set before developing a program. In the assembly language, symbolic names
are used to represent the opcode and the operand parts of an instruction. For example, to move the contents
of the CPU register a1 to another CPU register b1, the following assembly language instruction can be used:

mov b1, a1
The above code shows the use of the symbolic name mov in an assembly language instruction. The symbolic
name mov instructs the processor to transfer data from one register to another. Using this symbolic name,
a value can also be moved into a particular CPU register.
The use of symbolic names made these languages a little more user-friendly as compared to the first
generation programming languages. However, the second generation languages were still machine-
dependent. Therefore, one was required to have adequate knowledge of the internal architecture of the
computer system while developing programs in these languages.
Unlike the machine language programs, the programs written in the assembly language cannot be directly
executed by the CPU of the computer system because they are not written in the binary form. As a result,
some mechanism is needed to convert the assembly language programs into the machine understandable
form. A software program called assembler is used to accomplish this purpose. An assembler is a translator
program that converts the assembly language program into the machine language instructions. Figure 4.2
shows the role of an assembler in executing an assembly language program.

Fig. 4.2 Functioning of an assembler

An assembler acts as an intermediary between the assembly language program and the machine language
program. It takes a program written in the assembly language as input and generates the corresponding
machine language instructions as output.
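The translation step an assembler performs can be sketched in a few lines of Python. The opcode and register numbers in the tables below are purely hypothetical; a real assembler uses the binary encoding defined by the target CPU.

```python
# Minimal sketch of an assembler: translate a mnemonic instruction such as
# "mov b1, a1" into numeric machine codes via lookup tables.
# Both tables are invented for illustration, not a real instruction set.
OPCODE_TABLE = {"mov": 0x01, "add": 0x02, "sub": 0x03}
REGISTERS = {"a1": 0x0A, "b1": 0x0B}

def assemble(line):
    mnemonic, operands = line.split(None, 1)            # e.g. "mov", "b1, a1"
    dest, src = [op.strip() for op in operands.split(",")]
    return [OPCODE_TABLE[mnemonic], REGISTERS[dest], REGISTERS[src]]

print(assemble("mov b1, a1"))  # [1, 11, 10]
```

A production assembler also resolves labels, handles addressing modes and emits a binary object file, but the core idea is this same table-driven substitution.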
The following are some of the advantages of second generation programming languages:
• It is easy to develop, understand and modify the programs developed in these languages as compared
to those developed in the first generation programming languages.
• The programs written in these languages are less prone to errors, and therefore can be maintained
with great ease.
• The detection and correction of errors is relatively easy in these languages in comparison to the first
generation programming languages.
The following are some of the disadvantages of the second generation programming languages:
• The programs developed in these languages are not executed as quickly as the programs developed
in the machine language. This is because of the fact that the computer system needs to convert these
programs into machine language before executing them.
• The programs developed in these languages are not portable as these languages are machine
dependent.
• The programmer of these languages needs to have thorough knowledge of the internal architecture of
the CPU for developing a program.
• The assembly language programs, like the machine language programs, still result in poor
productivity.

Third Generation: High-level Languages


The third generation programming languages were designed to overcome the various limitations of the
first and second generation programming languages. The languages of the third and later generations are
considered as high-level programming languages because they enable the programmer to concentrate only
on the logic of the program without concerning about the internal architecture of the computer system. In
other words, we can also say that these languages are machine independent languages.
The third generation programming languages are also quite user-friendly because they relieve the
programmer from the burden of remembering operation codes and instruction sets while writing a program.
The instructions used in the third and later generations of languages can be specified in English-like
sentences, which are easy for a programmer to comprehend. The programming paradigm employed by
most of the third generation programming languages was procedural programming, which is also
known as imperative programming. In the procedural programming paradigm, a program is divided into
a number of procedures, also known as subroutines. Each procedure contains a set of instructions
for performing a specific task. A procedure can be called by other procedures while a program is being
executed by the computer system.
The third generation programming languages were considered domain-specific programming languages
because they were designed to develop software applications for a specific field. For example, the third
generation programming language COBOL was designed to solve problems related to the
business field only.
Unlike the assembly language, the programs developed in the third and the later generation of
programming languages were not directly executed by the CPU of the computer system. These programs
require translator programs for converting them into machine language. There are two types of translator
programs, namely, compiler and interpreter. Figure 4.3 shows the translation of a program developed in the
high-level programming language into the machine language program.

Fig. 4.3 Functioning of a compiler and an interpreter

A program written in any high-level language can be converted by the compiler or the interpreter into
the machine-level instructions. Both the translator programs, compiler and interpreter, are used for the same
purpose except for one point of difference. The compiler translates the whole program into the machine
language program before executing any of the instructions. If there are any errors, the compiler generates
error messages which are displayed on the screen. All errors must be rectified before compiling again.
On the other hand, the interpreter executes each statement immediately after translating it into the machine
language instruction. Therefore, the interpreter performs the translation as well as the execution of the
instructions simultaneously. If any error is encountered, the execution is halted after displaying the error
message.
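The difference between the two translation styles can be sketched using Python's built-in compile() and exec() functions as stand-ins. Python here merely illustrates the idea of "translate everything first" versus "translate and run one statement at a time"; it is not how a C compiler or a BASIC interpreter is actually implemented.

```python
# Three statements standing in for a small source program.
source = ["x = 2 + 3", "print(x)", "y = x * 10"]

# Compiler style: translate the WHOLE program first, then run it.
code = compile("\n".join(source), "<program>", "exec")  # all-at-once translation
exec(code)                                              # execution only afterwards

# Interpreter style: translate and execute ONE statement at a time.
env = {}
for stmt in source:
    # If a statement had an error, execution would halt right here,
    # after the earlier statements had already run.
    exec(compile(stmt, "<stmt>", "exec"), env)
```

In the compiler style, an error anywhere in the source is reported before any statement runs; in the interpreter style, the statements before the faulty one have already been executed.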
The following are some of the popular third generation programming languages:
• FORTRAN
• ALGOL
• BASIC
• COBOL
• C/C++
The following are some of the advantages of the third generation programming languages:
• It is easy to develop, learn and understand the programs.
• The programs developed in these languages are highly portable as compared to the programs developed
in the first and second generation programming languages. Hence, we can also say that the third
generation programming languages are machine independent programming languages.
• The programs written in these languages can be developed in very less time as compared to the first
and second generation programming languages. This is because of the fact that the third generation
programming languages are quite user-friendly and provide necessary inbuilt tools required for
developing an application.
• As the programs written in these languages are less prone to errors, they are easy to maintain.
• The third generation programming languages provide more enhanced documentation and debugging
techniques as compared to the first and the second generation programming languages.
The following are some of the disadvantages of the third generation programming languages:
• As compared to the assembly and the machine language programs, the programs written in the third
generation programming languages are executed more slowly by the computer system.
• The memory requirement of the programs written in these programming languages is more as compared
to the programs developed using the assembly and machine languages.

Fourth Generation: Very High-level Languages


The languages of this generation were considered very high-level programming languages. The process of
developing software using the third generation programming languages required a lot of time and effort,
which affected the productivity of a programmer. Moreover, most of the third generation programming
languages were domain-specific. The fourth generation programming languages were designed and
developed to reduce the time, cost and effort needed to develop different types of software applications.
Most of the fourth generation programming languages were general-purpose programming languages. This
means that most of the fourth generation programming languages could be used to develop software
applications related to any domain. During this generation, the concept of the Database Management System
(DBMS) also evolved tremendously. Therefore, most of the fourth generation programming languages had
database-related features for working with databases. These languages have simple, English-like syntax
rules. Since 4GLs are non-procedural languages, they are easier to use and therefore more user-friendly.
We need to specify WHAT is required rather than specifying HOW to do it.
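A flavour of this WHAT-versus-HOW style can be seen with SQL. In the sketch below, Python's built-in sqlite3 module runs a query against an invented employees table (the table name and data are made up for illustration); notice that the SELECT statement states only the desired result, with no loops or search procedure spelled out.

```python
import sqlite3

# Build a throwaway in-memory database with some sample rows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (name TEXT, salary INTEGER)")
con.executemany("INSERT INTO employees VALUES (?, ?)",
                [("Asha", 52000), ("Ravi", 38000), ("Meena", 61000)])

# Declarative 4GL-style query: we say WHAT we want; the database
# engine decides HOW to scan, filter and sort the rows.
rows = con.execute(
    "SELECT name FROM employees WHERE salary > 50000 ORDER BY name").fetchall()
print(rows)  # [('Asha',), ('Meena',)]
```

An equivalent third generation program would have to open the data file, loop over every record, compare salaries and sort the matches explicitly.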
The following are some of the fourth generation programming languages:
• PowerBuilder
• SQL
• XBase++
• CSS
• ColdFusion
Apart from being machine independent, the following are some of the other important advantages of the
fourth generation programming languages:
• The fourth generation programming languages are easier to learn and use as compared to the third
generation programming languages.
• These programming languages require less time, cost and effort to develop different types of software
applications.
• These programming languages allow the efficient use of data by implementing various database
concepts.
• As compared to the third generation programming languages, these languages require fewer
instructions for performing a specific task.
• The programs developed in these languages are highly portable as compared to the programs
developed in the languages of other generations.
The following are some of the disadvantages of the fourth generation programming languages:
• As compared to the programs developed in the programming languages of previous generations, the
programs developed in the 4GLs are executed at a slower speed by the CPU.
• As compared to the third generation programming languages, the programs developed in these
programming languages require more space in the memory of the computer system.

Fifth Generation: Artificial Intelligence Languages


The programming languages of this generation mainly focus on constraint programming. The constraint
programming, which is somewhat similar to declarative programming, is a programming paradigm in which
the programmer only needs to specify the solution to be found within the constraints rather than specifying
the method or algorithm of finding the desired solution. The major fields in which the fifth generation
programming languages are employed are Artificial Intelligence (AI) and Artificial Neural Network (ANN).
AI is the branch of computer science in which the computer system is programmed to have human
intelligence characteristics. It helps make the computer system so intelligent that it can take decisions on its
own while solving various complicated problems. On the other hand, ANN refers to a network that is used
to imitate the working of the human brain. ANN is widely used in voice recognition systems, image
recognition systems and industrial robotics.
The following are some of the fifth generation programming languages:
• Mercury
• Prolog
• OPS5
The following are two important advantages of fifth generation programming languages:
• The fifth generation languages allow the users to communicate with the computer system in a simple
and an easy manner. Programmers can use normal English words while interacting with the computer
system.
• These languages can be used to query the databases in a fast and efficient manner.

CHARACTERISTICS OF A GOOD PROGRAMMING LANGUAGE

The popularity of any programming language depends upon the useful features that it provides to its
users. A large number of programming languages are in existence around the world but not all of them are
popular. The following are some of the important characteristics of a good programming language:
• The language must allow the programmer to write simple, clear and concise programs.
• The language must be simple to use so that a programmer can learn it without any explicit training.
• The glossary used in the language should be very close to the one used in human languages.
• The function library used in the language should be well documented so that the necessary information
about a function can be easily obtained while developing an application.
• The various programming constructs supported by the language must match well with the application
area it is being designed for.
• The language must allow the programmer to focus only on the design and the implementation of the
different programming concepts without requiring the programmer to be well acquainted with the
background details of the concepts being used.
• The programs developed in the language must make efficient use of memory as well as other
computer resources.
• The language must provide necessary tools for development, testing, debugging and maintenance
of a program. All these tools must be incorporated into a single environment known as Integrated
Development Environment (IDE), which enables the programmer to use them easily.
• The language must be platform independent, i.e., the programs developed using the programming
language can run on any computer system.
• The Graphical User Interface (GUI) of the language must be attractive, user-friendly and self
explanatory.
• The language must be object-oriented in nature so as to provide various features such as inheritance,
information hiding, and dynamic binding to its programmers.
• The language must be consistent in terms of both syntax and semantics.

CATEGORISATION OF HIGH-LEVEL LANGUAGES

The high-level languages can be categorised into different types on the basis of the application areas in which
they are employed, as well as the different design paradigms supported by them. Figure 4.4 shows the
different types of high-level languages categorised on the basis of application areas and design paradigms.

Fig. 4.4 Types of high-level languages

The figure clearly shows that the high-level programming languages are designed for use in a number
of areas. Each high-level language is designed by keeping its target application area in mind. Some of the
high-level languages are best suited for business domain, while others are apt in scientific domain only.
The high-level programming languages can also be categorised on the basis of the various programming
paradigms supported by them. The programming paradigm refers to the approach employed by the
programming languages for solving the different types of problems.

Categorisation Based on Application


On the basis of application area, the high-level programming languages can be divided into the following
types:
• Commercial languages. These programming languages are dedicated to the commercial domain and
are specially designed for solving business-related problems. These languages can be used in
organisations for processing and handling data related to payroll, accounts payable and tax handling
applications. COBOL is the best example of a commercial high-level programming language
employed in the business domain. This language was developed with strong file handling capabilities
and support for business arithmetic operations. Another example of a business-oriented programming
language is Programming Language for Business (PL/B), which was developed by Datapoint during
the 1970s.
• Scientific languages. These programming languages are dedicated to the scientific domain and are
specially designed for solving different scientific and mathematical problems. These languages can be
used to develop programs for performing complex calculations during scientific research. FORTRAN
is the best example of the scientific-based high-level programming language. This language is capable
of performing various numerical and scientific calculations.
• Special-purpose languages. These programming languages are specially designed for performing some
dedicated functions. For example, SQL is a high-level language specially designed to interact with
the database programs only. Therefore, we can say that the special-purpose high-level programming
languages are designed to support a particular domain area only.
• General-purpose languages. These programming languages are used for developing different types
of software applications regardless of their application area. The various examples of general-purpose
high-level programming languages are BASIC, C, C++ and Java.

Categorisation Based on Design Paradigm


On the basis of design paradigm, the high-level programming languages can be categorised into the
following types:
• Procedure-oriented languages. These programming languages are also called imperative programming
languages. In these languages, a program is written as a sequence of procedures. Each procedure
contains a series of instructions for performing a specific task. Each procedure can be called by
other procedures during program execution. In this type of programming paradigm, code once
written in the form of a procedure can be reused any number of times in the program by specifying
only the corresponding procedure name. This approach also makes the program structure relatively
simple to follow as compared to the other programming paradigms. We can also say that the major
emphasis of these languages is on the procedures and not on the data. Therefore, the procedure-oriented
languages allow the data to move freely around the system. The various examples of procedure-oriented
languages are FORTRAN, ALGOL, C, BASIC and Ada.
• Logic-oriented languages. These languages use logic programming paradigm as the design approach
for solving various computational problems. In this programming paradigm, predicate logic is used to
describe the nature of a problem by defining relationships between rules and facts. Prolog is the best
example of the logic-oriented programming language.
• Object-oriented languages. These languages use object-oriented programming paradigm as the design
approach for solving a given problem. In this programming paradigm, a problem is divided into a
number of objects, which can interact by passing messages to each other. The other features included
in the object-oriented languages are encapsulation, polymorphism, inheritance and modularity. C++,
JAVA and C# are examples of object-oriented programming languages.

POPULAR HIGH-LEVEL LANGUAGES

Today, a large number of high-level programming languages are available for developing different types
of software applications. However, only a few of these programming languages are popular among
programmers. The following are some of the popular high-level programming languages used around the
world:
• FORTRAN
• LISP
• COBOL
• BASIC
• PASCAL
• C
• C++
• Java
• Python
• C#
FORTRAN
FORTRAN is the most dominant high-level programming language employed in the science and engineering
domain. As mentioned earlier, FORTRAN was initially developed by a team led by John Backus at
IBM in the 1950s. Since then several new versions of FORTRAN have evolved. They include FORTRAN II,
FORTRAN IV, FORTRAN 77 and FORTRAN 90. FORTRAN 90, which is approvedby the International
Organisation for Standardisation, is more portable, reliable and efficient as compared toits earlier versions.
The following are some of the important applications areas where FORTRAN can be employed:
• Finding solutions to partial differential equations
• Predicting weather
• Solving problems related to fluid mechanics
• Solving problems related to physics and chemistry
Some of the most significant characteristics of FORTRAN are enumerated as under:
• It is easier to learn as compared to the other scientific high-level languages.
• It has a powerful built-in library containing some useful functions, which are helpful in performing
complex mathematical computations.
• It enables the programmers to create well-structured and well-documented programs.
• The internal computations in this language are performed rapidly and efficiently.
• The programs written in this language can be easily understood by other programmers, who have not
actually developed the programs.
LISP
LISP (List Processing) was developed by John McCarthy in the year 1958 as a functional programming
language for handling data structures known as lists. LISP is now extensively used for research in the field
of artificial intelligence (AI). Some of the versions of LISP are Standard LISP, MACLISP, Inter LISP, Zeta
LISP and Common LISP.
Some good features of LISP are:
• Easy to write and use.
• Recursion, that is, a program calling itself, is possible.
• Supports garbage collection.
• Supports interactive computing.
• Most suitable for AI applications.
Some negative aspects of LISP are:
• Poor reliability.
• Poor readability of programs.
• Not a general-purpose language.
COBOL
COBOL is a high-level programming language developed in the year 1959 by COnference on DAta SYstems
Languages (CODASYL) committee. This language was specially designed and developed for the business
domain. Apart from the business domain, COBOL can also be used to develop the programs for the various
other applications. However, this language cannot be employed for developing various system software such
as operating systems, device drivers etc. COBOL has gone through a number of improvement phases since its
inception and, as a result, several new versions of COBOL have evolved. The most significant versions of
COBOL, which are standardised by American National Standards Institute (ANSI), are COBOL-68,
COBOL-74 and COBOL-85.
Some of the most significant characteristics of COBOL are enumerated as follows:
• The applications developed in this language are simple, portable and easily maintainable.
• It has several built-in functions to automate the various tasks in business domain.
• It can handle and process a large amount of data at a time and in a very efficient manner.
• As compared to the other business-oriented high-level programming languages, the applications can
be developed rapidly.
• It does not implement the concept of pointers, user-defined data types, and user-defined functions and
hence is simple to use.

BASIC
BASIC (Beginner’s All-purpose Symbolic Instruction Code) was developed by John Kemeny and Thomas
Kurtz at Dartmouth College, USA in the year 1964. The language was extensively used for microcomputers
and home computers during the 1970s and 1980s. BASIC was standardized by ANSI in 1978 and became popular
among business and scientific users alike. BASIC continues to be widely used because it can be learned
quickly.
During the last four decades, different versions of BASIC have appeared. These include Altair BASIC,
MBASIC, GWBASIC, Quick BASIC, Turbo BASIC and Visual BASIC. Microsoft’s Visual BASIC adds
object-oriented features and a graphical user interface to the standard BASIC.
Main features of BASIC are:
• It was one of the earliest widely used interpreted languages.
• It is a general-purpose language.
• It is easy to learn as it uses common English words.
PASCAL
PASCAL is one of the older high-level programming languages; it was developed by Niklaus Wirth in the year
1970. It was one of the most efficient and productive languages of its time. The programming paradigm employed
by PASCAL is procedural programming. A number of different versions of PASCAL have evolved since
1970 that help in developing the programs for various applications such as research projects, computer games
and embedded systems. Some of the versions of PASCAL include UCSD PASCAL, Turbo PASCAL, Vector
PASCAL and Morfik PASCAL. This programming language was also used for the development of various
operating systems such as Apple Lisa and Mac.
Some of the most significant characteristics of PASCAL are enumerated as under:
• It is simple and easy to learn as compared to the other high-level programming languages of its time.
• It enables the programmers to develop well-structured and modular programs that are easy to maintain
and modify.
• The data in this language is stored and processed efficiently with the help of strong data structures.
• It enables the programmers to create data types according to their requirements, which are also referred
to as user-defined data types.
• The PASCAL compiler has strong type checking capability that prevents the occurrence of data type
mismatch errors in a program.

C
C is a general-purpose high-level programming language developed by Dennis Ritchie and Brian Kernighan
at Bell Telephone Laboratories, USA, in the year 1972. C is a well-known high-level programming
language that is used for developing application as well as system programs. It is also block-structured
and procedural, which means that the code developed in C can be easily understood and maintained. C is
a favourite language of system programmers because of several key characteristics that are hardly
found in other high-level programming languages. The first major system program developed in C was the
UNIX operating system. C is also regarded as a middle-level language because it contains the low-level as
well as the high-level language features.
Some of the most significant characteristics of C are:
• C is a machine- and operating-system-independent language. Therefore, the programs developed in
C are highly portable as compared to the programs developed in the other high-level programming
languages.
• It is a highly efficient programming language because the programs developed in this language are
executed very rapidly by the CPU of the computer system. Also, the memory requirement for the
storage and the processing of C programs is comparatively less. Therefore, C is considered to be
equivalent to assembly language in terms of efficiency.
• It can be used to develop a variety of applications; hence, it is considered to be quite flexible.
• It allows the programmer to define and use their own data types.
• C allows the use of pointers, which let the programmers work with the memory of the computer
system in an efficient manner.
C++
C++ is a general-purpose, object-oriented programming language developed by Bjarne Stroustrup at Bell
Labs in the year 1979. Initially, Bjarne Stroustrup named his new language ‘C with Classes’ because this
new language was an extended version of the existing programming language, C. Later, this new language
was renamed C++. It is also regarded as a superset of the C language because it retains many of its salient
features. In addition to having the significant features of C, C++ was also expanded to include several object-
oriented programming features, such as classes, virtual functions, operator overloading, inheritance and
templates.
Some of the most significant characteristics of C++ are as follows:
• It uses the concept of objects and classes for developing programs.
• The code developed in this language can be reused in a very efficient and productive manner.
• Like C, C++ is also a machine- and operating-system-independent language. Therefore, the programs
developed in this language are highly portable.
• It is a highly efficient language in terms of the CPU cycles and memory required for executing
different programs.
• The number of instructions required to accomplish a particular task in C++ is relatively smaller as
compared to some of the other high-level programming languages.
• It follows the modular approach of developing the programs for the different types of applications.
Therefore, the programs developed in C++ can be understood and maintained easily.
• C++ is highly compatible with its ancestor language, i.e., C, because a program developed in C can be
executed under a C++ compiler with almost no change in the code.

JAVA
JAVA is an object-oriented programming language introduced by Sun Microsystems in the year 1995. It was
originally developed in the year 1991 by James Gosling and his team. The syntax and the semantics of
JAVA are somewhat similar to those of C++. However, it is regarded as more powerful than C++ and the other high-
level programming languages. In the current scenario, JAVA is the most dominant object-oriented
programming language for developing web-based applications. Apart from the web-based applications,
JAVA can also be employed to develop other types of applications, such as desktop applications and
embedded systems applications.
JAVA is a highly platform independent language because its programs are compiled to an intermediate
machine code called bytecode rather than directly to the native machine code. The bytecode generated by
the JAVA compiler can then be interpreted (or compiled just-in-time into native code) on any platform with
the help of a program known as the JAVA interpreter, which is a part of the JAVA Virtual Machine.
Some of the most significant characteristics of JAVA are enumerated as under:
• It is a highly object-oriented and platform independent language.
• The programs written in this language are compiled and interpreted in two different phases.
• The programs written in this language are more robust and reliable.
• It is more secure as compared to the other high-level programming languages because it does not
allow the programmer to access the memory directly.
• It assists the programmers in managing the memory automatically with a feature called garbage
collection.
• It also implements the concept of dynamic binding and threading in a better and efficient manner as
compared to other object-oriented languages.
Python
Python is a high-level and object-oriented programming language developed by Guido Van Rossum in the
year 1991. It is a general-purpose programming language that can be used to develop software for a variety
of applications. Python is also regarded as the successor language of the ABC programming language. ABC was
a general-purpose programming language developed by a team of three scientists, Leo Geurts, Lambert
Meertens, and Steven Pemberton. Several versions of Python have evolved since 1991. Some of the
versions of Python are Python 0.9, Python 1.0, Python 1.2, Python 1.4, Python 1.6 and Python 2.0.
Python has a strong built-in library for performing various types of computations. This built-in library also
makes Python simple and easy to learn. Python is an interpreted language and its interpreter as well as other
standard libraries are freely available on the Internet. The programs developed in this language can be run
on different platforms and under different operating systems. Hence, Python is regarded as a platform
independent language.
Some of the salient features of Python are:
• It is an interpreted and object-oriented programming language.
• It implements the concept of exception handling and dynamic binding better than the other languages
of its time.
• The syntax and the semantics of this language are quite clear and concise.
• It is a platform independent language.
C#
C#, pronounced “C sharp”, is an object-oriented programming language developed by Microsoft in the
late 1990s. It combines the power of C++ with the programming ease of Visual BASIC. C# is directly
descended from C++ and contains features similar to those of JAVA.
C# was specially designed to work with Microsoft’s .NET platform launched in 2000. This platform offers
a new software-development model that allows applications developed in different languages to
communicate with each other. C# includes several modern programming features that include:
• concise, lean and modern language
• object-oriented visual programming
• component-oriented language
• multimedia (audio, animation and video) support
• very good exception handling
• suitable for Web-based applications
• language interoperability
• more type safe than C++.
As C# has been built upon widely used languages such as C and C++, it is easy to learn. Using the
Integrated Development Environment (IDE), it is very easy and much faster to develop and test C# programs.

FACTORS AFFECTING THE CHOICE OF A LANGUAGE

A large number of programming languages are available for developing programs for different types of
applications. To develop software for a specific application, one needs to carefully choose a programming
language so as to ensure that the programs can be developed easily and efficiently in a specific period of time.
There are certain factors that must be considered by a programmer while choosing a programming language
for software development. These factors are described as follows:
• Purpose It specifies the objective for which a program is being developed. If a commercial application
is to be developed, some business-oriented programming language such as COBOL is preferred.
Similarly, if some scientific application is to be developed, then it is best to use some scientific-oriented
language such as FORTRAN. The programs related to the AI field can be developed efficiently in the
LISP or Prolog programming languages. Some object-oriented language should be preferred for
developing web-based applications. A middle-level language such as C should be chosen for developing
system programs.
• Programmer’s experience If more than one programming language is available for developing the
same application, then a programmer should choose a language as per his comfort level. Generally, the
programmer should go for the language in which he has more experience, even if this means
compromising on the power of the programming language.
• Ease of development and maintenance The programmer should always prefer the language in which
programs can be easily developed and maintained. Generally, the object-oriented languages are
preferred over the procedure-oriented programming languages because the code developed in these
languages can be reused and maintained with great ease.
• Performance and efficiency These are the two important factors, which need to be considered while
selecting a programming language for software development. The language in which programs can be
developed and executed rapidly should always be preferred. In addition, the languages, which require
less amount of memory for the storage of programs, should be chosen.
• Availability of IDE A language with an IDE (Integrated Development Environment) offering well-
supported development, debugging and compilation tools should be preferred. A powerful IDE helps
in increasing the productivity of a programmer.
• Error checking and diagnosis These two factors involve finding the errors and their causes in a
program. A programmer must choose a programming language, which contains efficient error handling
features. For example, JAVA provides an efficient error handling mechanism of the try/catch block. The
try/catch block in JAVA programs can be used to handle the unexpected errors that may occur
during the execution of a program. Error checking and diagnosis is very important for developing
quality and error-free programs. A programming language with efficient and robust error detection and
correction mechanism eases the task of code development and testing.

DEVELOPING A PROGRAM
Developing a program in any programming language refers to the process of writing the source code for
the required application by following the syntax and the semantics of that language. The syntax and the
semantics refer to a set of rules that a programmer needs to adhere while developing a program.
Before actually developing a program, the aim and the logic of the program should be very clear to
the programmer. Therefore, the first stage in the development of a program is the detailed study of the
objectives of the program. The objectives make the programmer aware of the purpose of the program for
which it is being developed. After determining the objectives of the program, the programmer prepares a
detailed theoretical structure of the steps to be followed for the development of the program. This detailed
structure is known as algorithm.
The programmer may also use a graphical model known as flowchart to represent the steps needed to
perform a specific task. It can also be regarded as the pictorial representation of the steps defined in the
algorithm of the program.
After the logic of the program has been developed either by an algorithm or a flowchart, the next step
is to choose a programming language for the actual development of the program. The factors, which we have
discussed in the previous section, should be taken into consideration while selecting the programming
language.
Each programming language is provided with an IDE, which contains the necessary tools for developing,
editing, running and debugging a computer program. The tool to develop and edit a computer program in the
IDE is usually known as source code editor. The source code editor is a text editor containing various features
such as search and replace, cut, copy and paste, undo, redo, etc. For example, C is provided with a strong and
powerful IDE to develop, compile, debug and run the programs. Figure 4.5 shows the IDE ofC language.
Most programming languages uncover the syntactical errors in a program during its compilation.
However, some programming languages also provide a feature known as background compilation, in which
we can check the errors related to the syntax of the programming language while developing a program in
the source code editor.

Fig. 4.5 The IDE of C Language

Suppose we need to develop a program for calculating the percentage of marks of two subjects of
a student and display the output. The first step in the development of a program for this problem is the
preparation of an algorithm. The following code shows the algorithm for calculating the percentage of marks
in two different subjects:

//Algorithm for calculating the percentage and displaying the result


Step 1 – Input the marks for first subject. (mark1)
Step 2 – Input the marks for second subject. (mark2)
Step 3 – Calculate the percentage.
percentage = (mark1 + mark2)/2
Step 4 – If percentage > 40,
Display Pass
Step 5 – Else,
Display Fail
Figure 4.6 shows the flowchart for the algorithm.

Fig. 4.6 Flowchart for calculating the percentage of marks and displaying the result

After developing the algorithm and flowchart, the actual development of the program can be started in the
source code editor of C language. The following code shows the C language program for calculating the
percentage of marks in two different subjects of a student.

#include <stdio.h>

void main()
{
    float mark1, mark2;
    float percentage;

    printf("\n Enter marks of first subject:");
    scanf("%f", &mark1);

    printf("\n Enter marks of second subject:");
    scanf("%f", &mark2);

    percentage = (mark1 + mark2)/2;
    if(percentage > 40)
        printf("\n The student is passed");
    else
        printf("\n The student is failed");
}
Figure 4.7 shows the actual development of the program in the source code editor of C language.

Fig. 4.7 Developing a program in the source code editor of C language

RUNNING A PROGRAM

After developing a program, the next step to be carried out in the program development process is to compile
the program. The program is compiled in order to find the syntactical errors in the program code. If there are
no syntax errors in the source code, then the compiler generates the object code. It is the machine language
code that the processor of the computer system can understand and execute.
Once the corresponding object code or the executable file is built by the compiler of the language, the
program should be run in order to check the logical correctness of our program and generate the output.
Logical errors, also called semantic errors, might cause the program to generate undesired outputs. The
programming languages provide various mechanisms such as exception handling for handling these logical
errors. If the output generated by the program corresponding to the given inputs matches the desired
result, then the purpose of developing the program is served. Otherwise, the logic of the program should be
checked again to obtain the correct solution for the given problem.
Figure 4.8 shows the output of the program developed in the Fig. 4.7.

Fig. 4.8 Running a program
