
ITIM (Information Technology & Infrastructure Management)

Subject Code - BCS 302

Module - 1 (INTRODUCTION)
ITIM Definition- ITIM is the process of overseeing & controlling the technology components of an organization. This includes H/W and S/W systems, as well as the policies, processes & people needed to keep IT operations running smoothly.

EVOLUTION OF COMPUTER
At the start of civilisation, computation was done with the help of fingers, counting boards, pebbles and knots tied in ropes. As people became more and more civilised, computing demand also grew and led to the development of modern high-speed computers.

The evolution of computers started in the late 1930s. Its history goes back to the invention of the first mechanical computing device for addition, called the Abacus. John Napier's invention of logarithms and William Oughtred's invention of the slide rule are also considered significant developments in the evolution of computers from early computing devices such as the Abacus. Binary arithmetic has always remained at the core of computers. The journey of the modern computer began in 1937 when Alan Turing introduced the Turing Machine, which paved the way to the idea of machines that could complete a defined task by being supplied with programs. In subsequent years, great developments happened in the field of computing. Konrad Zuse in 1938 designed the first binary digital relay computer. Work on the first general-purpose digital computer, ENIAC, began in 1943. The world's first stored-program electronic digital computer, the Manchester Baby, was successfully built in 1948. This type of digital computer included the stored-program concept, utilising Random Access Memory not only to hold the data involved in calculations but also to hold the program instructions. This enabled the instructions to be read successively at electronic speed. Also, the execution of a different program was possible in this computer simply by resetting part of the memory using a keyboard, rather than by reconfiguring the electronic circuitry, which typically took days on the ENIAC machine.

The first real-time processing computer was built by MIT in 1955, while in the next year IBM developed the disk memory system. The integrated circuit chip was introduced in the following years. In 1959, IBM introduced the first desktop computer. This led to the development of the first real minicomputer by Digital Equipment Corporation (DEC) in 1965.

In the computer evolution, first generation computers were characterised by the use of vacuum tubes. These computers were very bulky and expensive. They were based on machine language and were capable of solving only one problem at a given point of time; there was no concept of multitasking. The second generation of computers was based on transistors. In the 1960s, vacuum tube-based machines were replaced by transistor-based computers. This brought significant advancement in computers and made them cheaper, smaller and lighter. It also made computers efficient in terms of power consumption. However, there was a problem with these computers: they emitted large amounts of heat from their circuitry, which made them prone to damage. Second generation computers used punched cards for taking input and assembly language for programming. Integrated circuits led to the third generation of computers, which packed many small transistors onto semiconductor silicon chips. The use of integrated circuits improved the speed as well as the efficiency of computers. In third generation computers, operating systems became the human interface for computing, while keyboards and monitors were used as the input and output devices respectively. The introduction of microprocessors was the characteristic of fourth generation computers. In these computers, thousands of integrated circuits were put on a single silicon chip, building a microprocessor. The fifth generation computers are intended to use the concepts of artificial intelligence and natural language processing. These computers are in the development stage and developers are aiming to make them capable of organising themselves.

COMPUTER BASICS

A computer is an information processing machine. It manipulates data based on the set of instructions supplied to it. A set of instructions to solve a problem is commonly referred to as a program, and is provided by the programmer. Although mechanical computers have existed for a long time in human history, the first electronic computers could be developed only in the 1940s. These computers were very large in size and consumed huge amounts of power. Modern computers are based on integrated circuits, take very little space and are much faster than the earlier machines.

Computer Hardware- A computer can be divided into two components, namely hardware and software. Hardware does all the physical work of the computer. The second component, software, dictates to the hardware what is to be done and how it is to be done. Modern computers are based on the Von-Neumann architecture, which follows the stored program concept and defines a computing model that uses a processing unit and a storage unit. The processing unit carries out the execution of instructions, while the storage unit keeps the instructions and data. The Von-Neumann architecture implements a Universal Turing Machine and follows a sequential architecture.

The figure below shows the design of the Von-Neumann architecture with its various components. These components are explained below-

Fig.- Von-Neumann architecture design of computer (processing unit and storage unit)

Central Processing Unit (CPU)- It is an electronic circuit that can execute computer programs and is considered the brain of the computer. It is responsible for reading the instructions from the memory and executing them. The CPU performs most of the computation enabling a computer to function. It has three main components:

Control Unit (CU)- It is responsible for fetching the instructions from the primary memory and for decoding them to find out the operations to be performed. After decoding, it instructs the arithmetic and logic unit to perform the desired operation. It also controls input and output devices and monitors the overall functioning of the other units of the computer.

Arithmetic Logic Unit (ALU)- It is responsible for the actual execution of instructions. It performs the arithmetic and logic operations on data. It gets signals from the CU regarding the type of operation to be done, then takes the data from the memory, executes that operation, and stores the results either in internal storage or transfers them to the primary memory. The ALU also has internal storage in the form of registers to hold intermediate results during the course of calculation.

Memory- This unit holds the running program and the data. It is sometimes referred to as the primary memory or main memory. The task of memory is to take the data from an input device and store it until the computer is ready to process it. It is also used to hold the intermediate and final computing results. Once the processing is over, data from memory can be transferred to an output device.

Input-Output Unit- It is used to input instructions and data from the user and to communicate the results back to the user. Input-output devices are the medium of communication between the user and the computer. Some of the common input devices are the keyboard, mouse and light pen, while common output devices are the printer, monitor, plotter, etc.

Permanent Storage- Along with the above-mentioned components, computers also have permanent storage, usually called secondary storage, for storing information and data permanently. Permanent storage provides the stored information to other units as and when required. Some common permanent storage devices are the hard disk, floppy disk, CD, DVD, pen drive, etc.
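To make the stored-program concept concrete, the following minimal sketch (in Python, using a four-instruction machine invented purely for this example, which does not correspond to any real CPU) simulates the fetch-decode-execute cycle described above:

    # Minimal, illustrative stored-program machine; the instruction set is invented.
    memory = [
        ("LOAD", 10),   # load memory[10] into the accumulator
        ("ADD", 11),    # add memory[11] to the accumulator
        ("STORE", 12),  # store the accumulator into memory[12]
        ("HALT", 0),
    ] + [0] * 6 + [2, 3, 0]  # cells 10 and 11 hold data; cell 12 receives the result

    acc = 0   # ALU register holding intermediate results
    pc = 0    # program counter maintained by the control unit

    while True:
        op, addr = memory[pc]    # fetch the next instruction from primary memory
        pc += 1
        if op == "LOAD":         # decode and execute
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]  # the arithmetic is the ALU's job
        elif op == "STORE":
            memory[addr] = acc
        elif op == "HALT":
            break

    print(memory[12])  # prints 5

Note how the same memory holds both the program and the data, so running a different program only requires writing different contents into memory, exactly as the stored-program concept demands.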

Computer Software- Computer software is a program which instructs the computer hardware regarding the steps to be carried out. It is usually developed by computer programmers with the help of a programming language. Software is often used in a broader context, referring to anything which is not hardware but which is used with hardware, such as tapes, films and records. Computer software can be divided into three primary categories: system software, programming software and application software.

System Software These are the programs which allow the hardware to run properly. A typical example of system software is an operating system, which interfaces with the hardware to provide the necessary services for application software. Other examples of system software are device drivers, utilities, windowing systems, etc.

Programming Software It includes tools that help a programmer in writing a computer program or software using different programming languages in a convenient way. These tools commonly include compilers, debuggers, linkers, editors, etc.

Application Software It represents any program which allows users to create some application, in addition to simply running the hardware. Some examples of application software are word processors, graphics tools, medical software, database packages, computer games, business software, etc.

NETWORK AND INTERNET

A computer network is a group of interconnected computers. The internet refers to a worldwide system of interconnected computer networks. As technological developments take place, they cause a revolution in our society, economy and technological systems. The internet came into existence in the late 1970s as a development of the ARPANET, a Department of Defence project. This network employed a new packet-switching concept for interconnecting computers and was initially deployed to link computers at universities and other institutions in the United States of America and in some other countries. At that point of time, the ARPANET was the only wide-area computer network, with a base of several organisations. Most of the growth of the internet took place in the 1990s, and it rapidly spread to most of the countries in the world during this time.

The internet, which is a network of networks, consists of millions of private and public networks of local as well as global scope from the academic, business and government arenas. Usually, communication between two computers on the internet uses a standardised protocol suite called TCP/IP. Connections between computers on the internet are commonly provided using transmission media such as copper wires, fibre-optic cables and wireless links.

Internet Applications-The internet provides a huge amount of information resources and services. Some of them are
discussed here -

Hypertext Documents Among the various services provided by the internet, hosting of inter-linked hypertext documents over the World Wide Web (WWW) is the most noted one. Hypertext is text which appears on a computer with references (called hyperlinks) to other documents that a reader can access with a mouse click or by pressing keys. Hypertext documents, often called web pages, provide a convenient way to gather related pages together and to consider them as a single multi-page hypertext document. Hypertext Transfer Protocol (HTTP) is typically used to transfer displayable web pages and related files.
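As a small illustration, the following sketch fetches one hypertext document over HTTP using Python's standard library; the URL is just a placeholder chosen for the example.

    from urllib.request import urlopen

    # Request a web page over HTTP and read the returned hypertext.
    with urlopen("http://example.com/") as response:
        html = response.read().decode("utf-8")  # body of the document
        print(response.status)                  # e.g. 200 on success
        print(html[:80])                        # first few characters of the page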

Electronic Mail Often abbreviated as e-mail, it refers to the transmission of messages over networks. The messages can be notes entered through a keyboard or electronic files already available in the computer. Generally, it takes very little time for an e-mail to reach its destination. E-mail serves as a very effective means to communicate within a group, since it can broadcast a message or document to everyone in the group at once. Almost all online services and Internet Service Providers (ISPs) offer e-mail service, and most of them also support gateways which allow users to communicate with the users of other systems. Simple Mail Transfer Protocol (SMTP) is generally used to transfer e-mail.
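A hedged sketch of handing a message to an SMTP server using Python's smtplib is shown below; the server name, port, addresses and credentials are all placeholders invented for the example.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.com"
    msg["To"] = "bob@example.com"
    msg["Subject"] = "Meeting notes"
    msg.set_content("The notes have been uploaded to the shared drive.")

    # Connect to the (placeholder) mail server and transfer the message via SMTP.
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()                                  # switch to an encrypted channel
        server.login("alice@example.com", "app-password")  # placeholder credentials
        server.send_message(msg)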

Telnet It is a communication program which enables a computer to function as a terminal in a TCP/IP network. The telnet program runs on a user's computer and connects that personal computer to a server on the network. Once a connection between a personal computer and a server is established, commands can be entered through the telnet program and are executed as if they were entered directly on the server console. This utility enables the remote control of servers. Usually, at the beginning of a telnet session, a valid username and password are used to connect to the remote server.
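Telnet is essentially a plain-text conversation over TCP (conventionally port 23). The hedged sketch below opens a raw socket to show the idea; the host name is a placeholder, and a real session would continue with the username/password negotiation mentioned above.

    import socket

    # Open a TCP connection to the (placeholder) telnet server and read its greeting.
    with socket.create_connection(("telnet.example.com", 23), timeout=5) as sock:
        banner = sock.recv(1024)                # server greeting / login prompt
        print(banner.decode(errors="replace"))
        sock.sendall(b"guest\r\n")              # replies are sent as ordinary text lines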

File Transfer Over Internet Usually, file transfer over the internet is done using a standard internet protocol called File Transfer Protocol (FTP). It is the simplest way to exchange files between computers on the internet. Similar to other protocols like HTTP and SMTP, FTP is an application protocol which runs on top of the TCP/IP protocols. It is usually used to download programs and other files from servers. It is also used to transfer web pages and other related files from the user's computer to a server, which makes the pages visible to everyone on the internet.
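For example, downloading a file with Python's standard ftplib might look like the sketch below; the host, login and file name are placeholders.

    from ftplib import FTP

    with FTP("ftp.example.com") as ftp:              # connect to the (placeholder) server
        ftp.login("anonymous", "guest@example.com")  # many servers permit anonymous access
        with open("readme.txt", "wb") as local_file:
            # RETR streams the remote file's bytes to the callback, block by block.
            ftp.retrbinary("RETR readme.txt", local_file.write)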

Voice Over Internet Protocol (VoIP) It is a technology which allows voice calls to be made using an internet connection instead of a regular (or analog) phone line. A VoIP service converts the speaker's voice into a digital signal and transmits it over the internet. If the call is made to a regular analog phone, the signal is converted back to an analog signal before it reaches the destination telephone. VoIP allows a call to be made directly from a computer, a special VoIP phone, or a traditional phone connected to a special adapter.

Miscellaneous Other services provided by internet include online text chat, voice/ video chat, electronic trading, news groups,
mailing lists, etc.

COMPUTING RESOURCES

Great developments have been observed in computing since 1960. Recent developments in microprocessor performance provide an appealing vehicle for improved computing, and the availability of high-speed networks and fast microprocessors makes clustered workstations a cost-effective choice for computing. Some of the current computing technologies are client-server based computing, supercomputing, cluster computing, grid computing and cloud computing.

Client-Server Based Computing- Several years ago, most organisations had mainframe-based computing, in which mainframe computers were connected to dumb terminals. These terminals were unable to do any independent computing and had to rely totally on the mainframe for data processing. As the years passed, the development of personal computers led to the replacement of dumb terminals. However, the main processing work continued to be performed on the mainframe computer, and the improved computing capability of personal computers was ignored to a great extent. In due course of time, many organisations started to realise the potential of personal computers and began to think about the possibility of using them, either by sharing, or by splitting some of the processing demands between the mainframe and the personal computers. Client-server technology is the outcome of this development. Client/server refers to a computing architecture in which software components interact with each other to provide a system that can be designed for and used by multiple users. It provides a way of separating the functions of an application into two or more different parts. Typically in client-server architecture, the client is a personal computer while the server is a high-performance machine such as a mainframe; however, the same machine can work both as client and server. In practice, it is very common to install a server at one end of a Local Area Network (LAN) and the clients at the other ends. The client is the requesting machine and the server works as the supplying machine. Client-server technology allows distributed computation, analysis and presentation between personal computers and one or more high-performance computing machines on a network. In this architecture, each function of an application stays on the computer (client or server) which is more capable of managing that particular function. Many services used nowadays over the internet are based on client-server architecture. For example, File Transfer Protocol (FTP) uses client-server technology to exchange files between systems. In this, an FTP client requests a file (that exists on another computer) and the FTP server (the system where the file exists) handles the request by supplying the file to the client.
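The requesting/supplying split can be sketched in a few lines of Python; this hedged demo runs both halves in one process on one machine, and the port number and message format are invented for the example.

    import socket
    import threading

    srv = socket.create_server(("127.0.0.1", 5050))  # server: the supplying machine

    def handle_one_client():
        conn, _ = srv.accept()                       # wait for a client request
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(f"processed: {request}".encode())  # supply the response

    threading.Thread(target=handle_one_client, daemon=True).start()

    # Client: the requesting machine (here the same host, purely for the demo).
    with socket.create_connection(("127.0.0.1", 5050)) as client:
        client.sendall(b"grades for roll no. 42")
        print(client.recv(1024).decode())            # -> processed: grades for roll no. 42
    srv.close()

In a real deployment the two halves would run on different machines, with the client knowing only the server's network address.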

Client-server architecture is generally of two types: 2-tier architecture and 3-tier architecture.

2-Tier Architecture Client-server applications initially started with a 2-tiered architecture, which is composed of a client and an application server. The majority of first-generation client-server systems were built on 2-tier architectures. As an example, suppose a client application needs to display one student's grade: the client sends its query, and the database server responds with the grades of all students. The client application then uses all this data to calculate the concerned student's grade, and displays it as per the user's request.

3-Tier Architecture The 2-tier architecture works very well when the database size is small, but it performs very poorly in the case of a large database. This is due to the fact that, for every query generated by a client, the data server has to work out large result sets for the client application to manipulate. This is a very large drain on network resources. The 3-tier architecture attempts to overcome some of the limitations (such as poor network utilisation) of the 2-tier scheme by making presentation, processing and data separate and distinct entities.

The components of the 3-tiered architecture can be thought of in the form of three tiers, namely a presentation tier, a functionality tier and a data tier, which can be logically separated as shown in Fig. 1.4. The client sits at the presentation tier and generates all requests. The functionality tier is the middle tier and implements the protocols used for processing the requests. The data tier implements the back-end database and sends responses to the client. Typically, middle-tier functionality servers are multithreaded and can be accessed by multiple clients simultaneously, even though the clients are running separate applications. Web applications are good examples of 3-tier architecture.

Supercomputing- It usually refers to computation-intensive software applications performed on the world's fastest computers, called supercomputers. Supercomputers are very expensive and are typically employed for specialised applications that require large amounts of mathematical computation. Some common applications where supercomputers are used are weather forecasting, animated graphics, fluid dynamic calculations, nuclear energy research, petroleum exploration, molecular dynamics for drug design, and aerodynamics for car and aircraft design. Computing speed of supercomputers is usually measured in FLOPs (floating-point operations per second). As of May 2008, the IBM Roadrunner was the fastest supercomputer in the world, with a 1.71 petaflops peak speed.
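As a back-of-the-envelope check on what such a figure means, a few lines of arithmetic (using the peak-speed value quoted above):

    peak = 1.71e15           # 1.71 petaflops = 1.71 * 10**15 floating-point ops/second
    per_day = peak * 86400   # seconds in a day
    print(f"{per_day:.2e}")  # about 1.48e20 floating-point operations per day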

Sometimes supercomputers are confused with mainframes. The main difference between a supercomputer and a mainframe is
that a supercomputer utilises all its computational power for executing a few programs as fast as possible, whereas a mainframe
uses its power to execute many programs concurrently.

PARAM Padma Supercomputer- It is India's next generation high performance scalable computing cluster, with a peak computing power of one teraflop, developed by the Centre for Development of Advanced Computing (C-DAC). Figure 1.5 shows a picture of the PARAM Padma supercomputer. The hardware environment of PARAM Padma is powered by compute nodes based on Power4 RISC processors. These nodes are connected using PARAMNet-II, a high performance System Area Network designed and developed by C-DAC, with a Gigabit Ethernet as a backup network. The PARAM Padma uses C-DAC's scalable HPCC software environment. The storage system of PARAM Padma has been designed to provide a primary storage of 5 terabytes, which is scalable up to 22 terabytes.

Cluster Computing

It refers to the use of a group of tightly coupled computer systems which work together closely to accomplish computational tasks. A cluster can be viewed as a single system with a centralised management and scheduling system. Usually, computers participating in a cluster are connected to each other through a fast local area network. Cluster computing helps to improve the performance and availability of computing over that provided by a single computer, and is more cost-effective compared to a single computer of similar speed.

Grid Computing
It is a form of distributed computing and refers to the use of several computers to solve a single problem at the same
time. It uses networked, loosely coupled computers acting simultaneously to perform very large tasks. In grid computing, a
program is generally divided into many parts and these parts are allocated to several computers, often up to many thousands.
Grid computing is used to solve scientific and technical problems which require a large amount of computing or access to large
amounts of data. Grid computing can be considered as distributed and large-scale cluster computing. It can also be thought of as
a distributed parallel processing system.

Difference between Grid Computing and Cluster Computing

Grid computing is often confused with cluster computing. Grid computing focuses on supporting computation across administrative domains, which makes it different from traditional computer clusters or traditional distributed computing. The major differences are listed below:

Heterogeneous vs. Homogeneous The important difference between grid computing and cluster computing is that, in grid computing, the grid is formed using heterogeneous computers, and the grid is frequently built using general-purpose grid software libraries and middleware. In cluster computing, computer clusters are homogeneous. The computers that are part of a grid can use different operating systems and have different hardware, whereas the computers forming a cluster have the same hardware and operating system.

Loosely Coupled vs. Tightly Coupled Grid is formed using loosely coupled machines, and it can make use of spare computing
power on a desktop computer while the machines in a cluster are tightly coupled and dedicated to work only as a single unit.

Global Positioning of the Machines Grids are inherently distributed in nature over a network and use geographically scattered machines. On the other hand, the computers in a cluster are normally present at a single location.

Cloud Computing
Cloud computing is a term used for delivering hosted services over the internet. It is a style of computing where dynamically scalable and often virtualised computing resources are provided as a service over the internet. In many cases, cloud computing services provide common business applications online that can be accessed using a web browser, while the software and data are stored on the provider's servers. Cloud computing services are broadly divided into three categories:

Fig.- Some of the companies providing cloud computing services

Infrastructure-as-a-Service (IaaS) A good example of this kind of service is a web service. For example, Amazon Web Services provides virtual server instances with unique IP addresses and blocks of storage on demand. In this kind of service, customers use the service provider's application program interface to control and configure their virtual servers and storage.
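A hedged sketch of what "controlling virtual servers through the provider's API" looks like with the AWS SDK for Python (boto3) is given below; the machine image id and region are placeholders, and actually running this would create billable resources.

    import boto3

    # Ask the IaaS provider's API for one small virtual server on demand.
    ec2 = boto3.resource("ec2", region_name="us-east-1")
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder machine image id
        InstanceType="t2.micro",          # small general-purpose instance size
        MinCount=1,
        MaxCount=1,
    )
    print(instances[0].id)                # id of the newly provisioned server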

Platform-as-a-Service (PaaS) It is defined as a set of software and product development tools hosted on the service provider's infrastructure. Developers can create applications on the service provider's platform over the internet. Google App Engine is a good example of PaaS.

Software-as-a-Service (SaaS) In this model, the service provider supplies the hardware infrastructure and the software product, and interacts with the user through a front-end. This type of service can range from web-based e-mail services to inventory control and database processing. Since this service hosts both the application and the data on the server, the service user can use the service from anywhere.

NOTE- The name 'cloud computing' has been inspired by the typical cloud symbol which is frequently used to depict the
internet in computer network diagrams and is an abstraction for the complex infrastructure that it conceals. In cloud
computing, users need not have any knowledge of, expertise in, or control over the technology resources used in the cloud
that supports them.
INFORMATION TECHNOLOGY

Information Technology (IT) is the branch of technology which is concerned with the dissemination, processing and storage of information, particularly with the help of computers. It deals with the design, development, deployment, support and management of computer-based information systems, especially computer hardware and software programs. IT supports the use of computers to store, process, convert, transmit, protect and later retrieve information, if necessary. Due to the basic shift in computing technology and paperless workplaces, IT has become a popular phrase. The term Information Technology has currently become very recognisable, covering many fields. IT professionals are engaged in various kinds of tasks, which include development of software applications, installation of software, designing large computer networks, development of reliable database systems, designing of security solutions, automation of machines, management of computers, etc. Some of the other tasks performed by IT professionals include data management, designing computer hardware, web designing, etc.

IT INFRASTRUCTURE
IT INFRASTRUCTURE MANAGEMENT - In any information system, infrastructure refers to the basic support system that is shared among all the users. In particular, the Information Technology (IT) infrastructure of an organization is the basic set of components which are shared by all IT business applications. It refers not only to a set of hardware devices but also includes aspects such as software applications, information and its processing, and working business practices. IT infrastructure management deals with the management of these essential components, which are necessary to run an organization and to provide services to its customers. It includes components such as equipment, data, human resources, processes, organizational policies and external contacts.

The IT infrastructure management process consists of two important parts, viz. service delivery and service support. Service delivery includes processes such as IT service level management, financial management, IT service continuity management, service management, capacity management and availability management. Common management processes involved in service support are configuration management, incident management, problem management, change management and release management. The IT infrastructure management process also includes storage management and security management.

DESIGN ISSUES OF IT ORGANISATIONS AND IT INFRASTRUCTURE

The success of any IT organisation depends on the suitability of its design to the business needs and on the availability of effective and efficient IT infrastructure support. To support an operating environment smoothly, it is necessary to have a good organisational design which matches the business requirements, the necessary infrastructure, a good strategy for deployment and technology, and a clearly defined accountability plan for the use and application of technology.

Design of IT Organisation

Organisational design refers to the way in which an IT organisation divides its work force into different tasks and operates by coordinating these tasks. While carrying out the design of an organisation, the major factors influencing organisational design must be looked into carefully. Also, when the design is complete, there should be some mechanism to estimate how effective the organisational design is, and some way to identify the strengths and weaknesses of the organisation. Designing an effective organisational structure is a real challenge. For IT organisational design, there is no single proven optimal design strategy which can be used; rather, there is a set of practices that have been confirmed through learning and benchmarking processes. Keeping this challenge in mind, IT leaders always try to find a perfect IT organisational model that addresses all the problems in their current structure.

IT SYSTEMS MANAGEMENT PROCESS

IT systems management helps in designing, implementing and managing IT infrastructures. It commonly refers to enterprise-wide administration of distributed computer systems. It assists in managing any IT infrastructure to achieve optimum efficiency, stability, reliability, availability and support. It also helps in leveraging any IT organisation in a great way by understanding and utilising proven systems management techniques. IT systems management includes complete details of how to implement each key discipline in places such as mainframe data centers, mid-range shops, client-server environments and Web-enabled systems.
IT SERVICE MANAGEMENT PROCESS

IT service refers to the delivery of information processing capabilities at an agreed quality level using a combination of software, hardware, people, networks, etc. The way IT organisations serve their customers, and the quality and value of the services they offer, continues to be a focus for companies worldwide. Quality can be defined in terms of capacity, security, availability of services, performance, etc. To deliver IT services to the end user at the agreed quality level, it is required that all processes engaged in providing the services are managed properly.

IT service management is the overall methodology for linking the various management processes necessary to ensure a consistent supply of quality IT services. It emphasises a customer-centric approach of IT management and business interaction, in contrast to technology-centric approaches. IT service management focuses on the quality of services that an organisation offers and concentrates on the relationship of the organisation with its customers, rather than only focusing on technology and organisational issues. In the current business scenario, it has become an integral part of an organisation and is seen as an innovative way to prove the business value of IT services, to cut costs and to improve service quality. IT service management needs an effective mechanism which allows effective interaction of IT personnel with the users of their services. The main goals of IT service management are to align IT services with the critical needs of the business, to manage services to ensure appropriate IT support for critical business priorities, to minimise Total Cost of Ownership (TCO) and to improve Return On Investment (ROI).

Service Delivery

Service delivery refers to the management of the IT services. It involves a number of management practices to ensure
that IT services are actually provided as agreed between the service provider and the customer. These management practices
are discussed briefly here -

Service Level Management It offers service-delivery management across business units and helps in successfully delivering, maintaining and improving IT services up to the expected level through a constant cycle of agreeing, monitoring and reporting to meet the customers' requirements and objectives. The major steps followed in the implementation of service level management are preparing a service catalogue, defining service and operational level agreements, and formulating a service quality plan.

Financial Management Its main emphasis is on managing the monetary resources of an IT organisation to achieve organisational goals. It offers cost-effective management of the IT assets and resources used in providing IT services. A good financial management process greatly helps IT managers in making decisions for planning and investment. Usually, financial management activities include IT cost accounting, budgeting for IT services and activities, project investment appraisal, cost recovery, and IT charging and billing activities.

IT Service Continuity Management Business organisations are expected to continue to operate and provide services in an uninterrupted fashion. The IT service continuity management process helps them in this regard and ensures that all IT services are capable of providing value to the customer in the event that normal availability solutions fail. It manages risks and ensures that the IT infrastructure of an organisation can continue to provide services in an unlikely or unexpected event. Major processes involved in IT service continuity management are collection of service level requirements, proposing a contingency solution, formalising operational level agreements and formalising a contingency plan.

Capacity Management The prime objective of capacity management is to ensure that IT capacity meets the current and future business requirements of an organisation in a cost-effective manner, and that the IT infrastructure of the organisation is used efficiently. Capacity management involves planning, analysing, sizing and optimising capacity to fulfil the demand in a timely manner and at a reasonable cost.

Availability Management The goal of availability management is to ensure that all IT services deliver the level of availability that the customer requires, consistently and cost-effectively. It optimises the capability of the IT infrastructure, services and supporting organisation to deliver a cost-effective and sustainable service availability that meets stringent business objectives. Processes involved in availability management are defining service level requirements, proposing availability solutions and formalising operational level agreements.
Service Support

It describes a framework that enables effective IT services. Various management practices involved in service support are discussed briefly here-

Configuration Management It deals with identifying and defining configuration items in a system and further monitoring the status of these items, processing requests for change, and verifying the completeness and correctness of configuration items. Configuration management offers a logical model of the IT infrastructure by identifying, maintaining and verifying the version of all configuration items. Configuration management is mainly responsible for identifying configuration items, finding relationships among configuration items, and planning, designing and managing a Configuration Management Database (CMDB).

Incident Management The goal of incident management is to ensure that restoration of normal service operations is done as quickly as possible, with the least possible impact on either the business or the user and minimum interruption in services, in a cost-effective way. It helps in maintaining continuity of the service levels and the underlying service desk function.

Problem Management It ensures that all possible problems and known errors affecting the IT infrastructure are identified and recorded properly. It investigates and resolves the underlying root causes of incidents and prevents similar incidents from happening again. Problem management also provides valuable inputs, such as recorded problems and known errors, to other service management processes like incident management, change management and the service desk. The major activities of problem management include problem control, error control and report generation.

Change Management The goal of change management is to ensure standardisation of methods and procedures so as to minimise the impact of any change on service quality. It offers a way to introduce required changes to the IT environment with minimal disruption to ongoing operations. Change management offers reduced impact of changes, better cost estimation of changes, better information management of changes and improved personnel productivity.

Release Management The objective of release management is to formulate efficient mechanisms for building and releasing new software versions. Release management ensures the quality of the production environment by using formal procedures and checks while implementing new versions. Release management is responsible for activities such as planning, coordination and implementation; designing and implementing efficient procedures for the distribution and installation of changes to IT systems; managing the release of software into the live environment and its distribution; gathering users' feedback; and maintaining the Definitive Software Library (DSL) and the Configuration Management Database (CMDB).

IT INFRASTRUCTURE LIBRARY

IT organisations are continuously pressed to deliver better IT services at lower cost. To provide guidelines to achieve this goal and cope with the stringent challenges, several management frameworks have been developed; one of the best known frameworks is the Information Technology Infrastructure Library (ITIL). It is the most widely accepted approach to IT service management worldwide. It is a customisable framework of best practices developed to promote quality services in the IT sector. Developed in the late 1980s by the CCTA (now part of the Office of Government Commerce, OGC), it became popular worldwide and the de facto standard in service management in the mid-1990s. It was originally designed to serve as a set of standards to be followed by service providers to deliver IT services to the British government. After its inception, public companies realised the benefits and implemented parts of ITIL in their internal IT departments. ITIL has now become acceptable to almost everyone, as it is a public domain framework with scalable properties.

As an IT service management framework, ITIL provides a systematic approach to managing IT services, from their inception through design, implementation, operation and continual improvement. The processes identified and described within ITIL are supplier- and platform-independent and apply to all aspects of IT infrastructure. ITIL consists of a set of concepts, policies and practices used for managing IT infrastructure, development and operations. It provides a comprehensive description of a number of important IT practices, with detailed catalogues, procedures and tasks that an IT organisation can adapt to its needs. It provides business with a customisable framework of best practices to achieve quality service and overcome problems associated with the growth of IT systems. ITIL is published as a series of books, each of which gives details of an IT management topic. ITIL has grown through three versions so far-

ITIL version 1 It is the initial version of the IT infrastructure library, which expanded to over 30 volumes. At the beginning, ITIL version 1 was projected as a set of formal methods, which was later changed and published as a set of guidelines.

ITIL version 2 Originally, ITIL was published as a series of books, each of which covered a particular practice of IT service management. The number of books in the initial publication (ITIL version 1) had grown to 31 volumes. To make ITIL more approachable and financially manageable, ITIL version 2 consolidated the volumes of ITIL version 1 into logical sets by grouping the related process guidelines of IT management, applications and services. The eight book volumes of ITIL version 2 are grouped into three parts as follows:

1. The IT service management set

-Service delivery

-Service support

2. Operational guidance set

-ICT infrastructure management

-Security management

-The business perspective

-Application management

-Software asset management

3. Implementation guidelines set

-Planning to implement service management

4. Supplementary set

-ITIL small-scale implementation (published later; not part of the original eight publications)

ITIL version 3 It updates ITIL version 2 by expanding the scope of ITIL in the domain of service management. ITIL version 3 comprises five key volumes: Service Strategy, Service Design, Service Transition, Service Operation and Continual Service Improvement. The first three are described below:

Service Strategy This volume is the main strength of the new ITIL library, focusing on helping IT organisations improve and develop over the long term. It introduces the service lifecycle and encourages the development of a business perspective. This volume guides both the service provider and the business customer through the choices they need to make to achieve service excellence. The key topics present in this volume include business case development, service assets, service value definition, market analysis and service provider types. The processes included are IT financial management, service portfolio management and demand management.

Service Design This volume provides good practice guidance on design of IT services and processes to create valuable IT service
assets for an organisation within business constraints such as time and money. It gives a framework for service design which
considers the customer's present and future requirements, while firmly maintaining the business view. Processes which are
included in this volume are service level management, capacity management, availability management, IT service continuity
management, supplier management, information security management and service catalogue management.

Service Transition This ITIL volume provides guidance on managing the many aspects of service changes, preventing undesired
consequences while allowing for innovation. It is essential reading for anyone seeking to deliver IT change with the best possible
benefit to the business. Topics covered in the volume are transition planning and support, service asset and configuration
management and change management, release and deployment management and knowledge management. It also states the
key roles and responsibilities of staff involved in service transition.
Service Delivery Process

The service delivery process is concerned with the management of IT services and involves several management practices to ensure that IT services are provided as agreed between the service provider and the customer. It is one of the two important components of the IT service management process. This chapter deals with the service delivery process, which involves five sub-processes. These sub-processes concentrate on long-term planning and improvement of IT services.

Service Level Management

Service level management deals with various issues related to service delivery across business units and helps in the management of services that an organisation provides to its customers in a cost-effective manner. It assists in delivering, maintaining and improving IT services up to the desired level through a constant cycle of agreeing, monitoring and reporting to achieve the customers' requirements, expectations and objectives.

The main objectives of service level management are to align and manage IT services through a process of definition, agreement, operation, measurement and review. An organisation which wishes to implement service level management must first analyse the types of services that it provides to its customers and find out the types of existing service contracts that are currently in place for these services. This exercise gives full insight into the services that an organisation can provide to its customers and can help in developing a good service level management process. Before going into the implementation details of service level management, let us first define two terms: service level agreement and operational level agreement.

Service Level Agreement The scope of service level management includes defining the IT services for the organisation and establishing service level agreements (SLAs) for them. An SLA is a written, negotiated agreement documenting the required levels of service to be provided by the organisation to the customer. It is part of the service contract. It clearly states a common understanding about services, priorities, unambiguous and measurable service targets, responsibilities, guarantees and warranties, IT service quality requirements (for example, performance and security) and communication structures.
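A common kind of measurable SLA target is availability. The worked example below (values invented for illustration) converts an availability percentage into the downtime it actually permits:

    target = 0.999                    # "three nines": 99.9% availability promised
    minutes_per_month = 30 * 24 * 60  # 43,200 minutes in a 30-day month
    allowed_downtime = (1 - target) * minutes_per_month
    print(f"{allowed_downtime:.1f} minutes of downtime allowed per month")  # 43.2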

FINANCIAL MANAGEMENT

The objective of financial management is to manage the monetary resources of the organisation to achieve
organisational goals. It provides cost-effective management of the IT assets and resources used to provide IT services. A properly
functioning financial management process helps IT managers in making decisions for planning and investment. In common
practice, financial management for IT includes the following:

-IT cost accounting

-Budgeting for IT services and activities

-Project investment appraisal

-Cost recovery

-IT charging and billing activities.

IT SERVICE CONTINUITY MANAGEMENT

In the current scenario, companies are expected to continue to operate and provide services at all times. The objective of IT service continuity management is to help in ensuring that all IT services are capable of providing value to the customer at the time of failure of normal availability solutions. It is concerned with managing risks to ensure that the IT infrastructure of an organisation can continue providing services in the event of an unlikely or unexpected event. It supports overall business continuity management by ensuring that the required IT infrastructure and IT services, including support and the service desk, can be restored within a desired and agreed amount of time if such a disaster happens. Availability of the IT services, or their absence, has much influence on customer satisfaction and can quickly affect the overall reputation and success of the organisation. The task of IT service continuity management is achieved by a process that analyses business processes, their impact on the organisation, and the IT infrastructure susceptibility that these processes face from many possible risks. There are many factors that influence the availability of an IT service; among them, hardware failure, environmental issues and human error are important. A hardware failure, such as a broken power supply or disk drive, is one of the most critical factors to consider. For example, the mere failure of the power supply to a server might cause whole IT services to be discontinued. Dual redundant power supplies attached to the server can be employed to minimise some of these kinds of risks. If power to the whole computer room or data center is interrupted, battery backup systems can be used to cover the time it might take to start up a standby generator. These kinds of problems are referred to as availability risks, and the actions taken to minimise these problems are called countermeasures. If any of the countermeasures fails, more drastic actions must be taken. These actions are outlined in a document called the contingency plan. The need for a contingency plan can be understood by the following example. Consider an organisation having an arrangement that, when power fails, a generator should be used as a backup system; if the generator itself also fails, the contingency plan spells out the more drastic recovery actions to be taken.

CAPACITY MANAGEMENT

The primary goal of capacity management is to ensure that IT capacity meets the current and future business requirements of the organisation in a cost-effective way. It is the process of planning, analysing, sizing and optimising capacity to satisfy demand in a timely manner and at a reasonable cost. It provides the required capacity for data processing and storage at the right time and in the appropriate volume, and it ensures the efficient use of the IT infrastructure. Improper planning for capacity may lead to wastage of resources, resulting in unnecessary cost, or to a shortage of resources, which may be responsible for poor performance or the unavailability of an IT service.
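The sizing activity can be illustrated with a small, hedged sketch: given monthly storage usage (figures invented for the example), fit a linear trend and project when the installed capacity will run out.

    usage_tb = [5.0, 5.6, 6.1, 6.9, 7.4, 8.2]  # observed usage in TB, month by month
    capacity_tb = 12.0                         # currently installed capacity

    # Least-squares slope of usage against month number: growth in TB/month.
    n = len(usage_tb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(usage_tb) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, usage_tb))
             / sum((x - mean_x) ** 2 for x in xs))

    months_left = (capacity_tb - usage_tb[-1]) / slope  # simple linear projection
    print(f"growth {slope:.2f} TB/month; capacity reached in ~{months_left:.0f} months")

Real capacity planning would of course use richer models and workload data, but the balance it seeks between demand and supply is exactly the one computed here.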

Capacity management is an important quantitative aspect of IT service management, because it includes both a business and an end-user focus on capacity requirements. It continuously tries to optimise existing and future IT resource demand and supply, and ensures the balance between them. It proactively adds components, space or people in a cost-effective manner, while assuring that performance stays at an acceptable level for new additions and older components alike. Capacity management is very much aligned with the business case and planning process. There are three main elements of capacity management:

* Inputs Usual inputs include business plans, processes, technology and related information used by each of the sub-processes.

* Sub-processes These are the levels of analysis where capacity is considered.

* Outputs Capacity plans, databases, reports, changes and recommendations are the main outputs of the sub-processes.

Advantages of Capacity Management -

The implementation of capacity management in an organisation provides following advantages:

Reduced Risk As capacity management effectively manages the resources, it reduces the risks associated with existing services and continuously monitors the performance of the resources.

Reduced Cost It helps in making the decision on any investment at the appropriate time, neither too early nor too late, which results in reduced service cost. Hence, the purchasing process does not have to make last-minute purchases or over-purchase capacity too far in advance of need.

Greater Efficiency Since demand and supply are balanced at an early stage, this results in a greater degree of efficiency.

Reduced Business Disruption Capacity management has close connection with change management. So it reduces business
disruptions by preventing urgent changes resulting from inadequate or incorrect capacity estimates.

Improved Customer Satisfaction Capacity management ensures better customer satisfaction. It consults the customers at an early stage and anticipates their requirements efficiently.

Better Validation of IT Spending Use of capacity management ensures avoidance of incorrect capacity sizing which results in
appropriate use of resources and sufficient capacity availability in time to meet production workload needs.

Improvement of Relationship with Suppliers Further, purchasing, delivery, installation and maintenance agreements can also
be planned more productively and effectively.

Cost of Capacity Management

The costs of setting up capacity management in an IT organisation must be estimated during preparation. The following is the list of components where investment is needed:
Cost of Software and Hardware This includes the cost of purchasing hardware and software tools. Common tools needed are monitoring tools, trending and modelling tools for simulations and statistical analysis, reporting tools, a capacity management database, etc.

Project Management Cost It is the cost associated with the implementation of the project management process.

Cost of Training This includes the cost incurred in providing personnel training and in developing a support base.

Facilities and Services This includes the cost of facilities used and cost of services provided.

AVAILABILITY MANAGEMENT

Availability refers to the ability of a service or component to perform its function as desired at a stated instant or over a stated period of time. The goal of availability management is to make sure that any given IT service delivers its functions consistently and cost-effectively, at the level of service availability that the customer requires. It optimises the capability of the IT infrastructure, services and supporting organisation to deliver a cost-effective and sustained level of service availability that meets stringent business objectives.
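Availability is commonly quantified (a standard formula, not specific to this text) as the fraction of time a service is up, computed from the mean time between failures (MTBF) and the mean time to repair (MTTR); the figures below are invented.

    mtbf_hours = 500.0  # average running time between failures
    mttr_hours = 2.0    # average time needed to restore service after a failure

    availability = mtbf_hours / (mtbf_hours + mttr_hours)
    print(f"{availability:.4%}")  # -> 99.6016%

Raising MTBF (better components, redundancy) or cutting MTTR (faster incident handling) both raise availability, and both cost money, which is the trade-off availability management optimises.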

Availability management needs to ensure that the processes used for the support of critical IT services are mature enough and have the necessary personnel, skills and tools to take on their responsibilities effectively. It also makes sure that, if there is a difference between supply and demand, availability management provides a solution. Furthermore, availability management ensures that the achieved availability levels are measured and, wherever necessary, improved continuously. This means the process needs to include both proactive and reactive activities.

Advantages of Availability Management-

The IT services which implement availability management fulfil the agreed availability requirements. It aims to
maximise the availability of IT services and customer satisfaction within the defined constraints. Other benefits of availability
management include:

* Availability management ensures that new products and services fulfil the availability requirements and availability standards
that are agreed with the customer.

* The associated cost is acceptable.

* It makes sure that the availability standards are monitored continuously and improved.

* In case of unavailability of the service, it ensures that corrective action is taken and tries to minimise the duration of the unavailability.

* It is easy to prove its added value for an organisation which follows availability management in providing its services.

Cost of Availability Management

In general, cost and availability are proportional to each other; that is, the cost increases as the availability increases. Therefore, finding the optimum solution is an important job of availability management. Experience shows that, in most situations, it is possible to reach an optimum with limited resources, rather than with significant investment. The availability management process can contribute to the objectives of the IT organisation in these areas by providing the required services at an acceptable and justifiable cost. The costs of availability management include the following major components:

Cost of Implementation It includes cost of implementation of availability management in the organisation.

Personnel Costs This is the cost incurred in providing training to personnel.

Facilities Costs This is the cost which is required to provide various facilities to customers.

Measuring and Reporting Tools It relates to the cost incurred in measurement and reporting tools.
