CC Notes Final

Cloud Computing enables remote access to applications and resources over the Internet, allowing for mobile and collaborative business applications. It encompasses various deployment models (public, private, hybrid, community) and service models (IaaS, PaaS, SaaS), providing flexibility and efficiency. However, it also poses risks such as security concerns, vendor lock-in, and data management issues.

Introduction to Cloud Computing:

Cloud computing provides us with the means of accessing applications as utilities over the Internet. It allows us to create, configure, and customize applications online.

What is Cloud?
The term Cloud refers to a network or the Internet. In other words, the cloud is something that is present at a remote location. The cloud can provide services over public and private networks, i.e., WAN, LAN, or VPN.
Applications such as e-mail, web conferencing, and customer relationship management (CRM) execute on the cloud.

What is Cloud Computing?

Cloud computing refers to manipulating, configuring, and accessing hardware and software resources remotely. It offers online data storage, infrastructure, and applications.

Cloud computing offers platform independence, as the software is not required to be installed locally on the PC. Hence, cloud computing makes our business applications mobile and collaborative.
Cloud Computing
● Computing model
● Evolution of cloud computing
● Cloud business

Computing model

Desktop computing
● Personal
● Professional: engineers, artists, authors, doctors, programmers, office workers, desktop publishing
● Software: MS Office, AutoCAD, Photoshop, 3D Studio Max, NetBeans, Visual Studio

Client-server computing
● Banks
● Retail stores
● Marketing & sales
● Distribution
● Automobile companies
● Oil companies
● Software: Accounting, Sales ERP, CRM, Distribution ERP, Manufacturing ERP, SAP, Oracle Apps, Microsoft Dynamics

Cluster computing and grid computing
● Heterogeneous servers
● Different operating systems
● Different application servers
● Database servers, mail servers, web servers, file servers

Cloud computing
● Combination of grid and cluster computing
● Organizations that run their own cloud: Toyota, Walmart, TATA Group, Citibank

Basic Concepts
There are certain services and models working behind the scenes that make cloud computing feasible and accessible to end users. The following are the working models for cloud computing:

● Deployment Models
● Service Models

Deployment Models
Deployment models define the type of access to the cloud, i.e., how the cloud is
located? Cloud can have any of the four types of access: Public, Private, Hybrid, and
Community.
Public Cloud

The public cloud allows systems and services to be easily accessible to the general
public. Public cloud may be less secure because of its openness.

Private Cloud

The private cloud allows systems and services to be accessible within an organization. It is more secure because of its private nature.

Community Cloud

The community cloud allows systems and services to be accessible by a group of organizations.

Hybrid Cloud

The hybrid cloud is a mixture of public and private cloud, in which the critical activities
are performed using private cloud while the non-critical activities are performed using
public cloud.

Service Models
Cloud computing is based on service models. These are categorized into three basic
service models which are -

● Infrastructure-as-a-Service (IaaS)
● Platform-as-a-Service (PaaS)
● Software-as-a-Service (SaaS)
Anything-as-a-Service (AaaS) is yet another service model, which includes Network-as-a-Service, Business-as-a-Service, Identity-as-a-Service, Database-as-a-Service, and Strategy-as-a-Service.
Infrastructure-as-a-Service (IaaS) is the most basic level of service. Each of the service models inherits the security and management mechanisms of the underlying model.

Infrastructure-as-a-Service (IaaS)

IaaS provides access to fundamental resources such as physical machines, virtual machines, virtual storage, etc.

Platform-as-a-Service (PaaS)

PaaS provides the runtime environment for applications, along with development and deployment tools.

Software-as-a-Service (SaaS)

The SaaS model makes software applications available to end users as a service.


History of Cloud Computing

Cloud computing is one of today's most significant breakthrough technologies. Here is a brief history of cloud computing.

The concept of cloud computing came into existence in the 1950s with the implementation of mainframe computers, accessible via thin/static clients. Since then, cloud computing has evolved from static clients to dynamic ones and from software to services.

Benefits
Cloud Computing has numerous advantages. Some of them are listed below -
● One can access applications as utilities over the Internet.
● One can manipulate and configure applications online at any time.
● It does not require software to be installed locally to access or manipulate cloud applications.
● Cloud computing offers online development and deployment tools and a programming runtime environment through the PaaS model.
● Cloud resources are available over the network in a manner that provides platform-independent access to any type of client.
● Cloud computing offers on-demand self-service. Resources can be used without interaction with the cloud service provider.
● Cloud computing is highly cost effective because it operates at high efficiency with optimum utilization. It just requires an Internet connection.
● Cloud computing offers load balancing, which makes it more reliable.

Risks related to Cloud Computing

Although cloud Computing is a promising innovation with various benefits in the world
of computing, it comes with risks. Some of them are discussed below:

Security and Privacy

It is the biggest concern about cloud computing. Since data management and infrastructure management in the cloud are provided by a third party, it is always a risk to hand over sensitive information to cloud service providers.
Although cloud computing vendors ensure highly secure, password-protected accounts, any sign of a security breach may result in loss of customers and business.
Lock In
It is very difficult for the customers to switch from one Cloud Service Provider
(CSP) to another. It results in dependency on a particular CSP for service.

Isolation Failure
This risk involves the failure of isolation mechanism that separates storage, memory,
and routing between the different tenants.

Management Interface Compromise


In case of public cloud provider, the customer management interfaces are accessible
through the Internet.

Insecure or Incomplete Data Deletion

It is possible that data requested for deletion may not actually get deleted. This happens for either of the following reasons:
● Extra copies of the data are stored but are not available at the time of deletion.
● The disk that stores the data also holds data of multiple tenants, so it cannot simply be destroyed.

Characteristics of Cloud Computing

There are five key characteristics of cloud computing:

On-Demand Self-Service
Cloud computing allows users to use web services and resources on demand. One can log on to a website at any time and use them.

Broad Network Access


Since cloud computing is completely web based, it can be accessed from anywhere
and at any time.

Resource Pooling
Cloud computing allows multiple tenants to share a pool of resources. One can share a single physical instance of hardware, database, and basic infrastructure.

Rapid Elasticity
It is very easy to scale the resources vertically or horizontally at any time. Scaling of
resources means the ability of resources to deal with increasing or decreasing
demand.
The resources being used by customers at any given point of time are automatically
monitored.

Measured Service
In this model, the cloud provider controls and monitors all aspects of the cloud service. Resource optimization, billing, and capacity planning depend on it.

Cluster Computing

A cluster is a group of inter-connected computers or hosts that work together to support applications and middleware (e.g., databases). In a cluster, each computer is referred to as a "node". Unlike grid computing, where each node performs a different task, computer clusters assign the same task to each node.

Cluster computing means several computers linked on a network and operating as a single entity. Each computer that is linked to the network is known as a node.

Cluster computing provides solutions to difficult problems by offering faster computational speed and enhanced data integrity. The connected computers execute operations together, creating the impression of a single system (virtual device). This property is known as the transparency of the system.
Advantages of Cluster Computing
The advantages of cluster computing are as follows −

 Cost-Effectiveness − Cluster computing is considered to be much more cost-effective. For the price, these systems provide better performance than mainframe computer devices.
 Processing Speed − The processing speed of cluster computing is comparable to that of mainframe systems and other supercomputers around the globe.
 Increased Resource Availability − Availability plays an important role in cluster computing systems. If some connected active nodes fail, their work can simply be transferred to other active nodes on the server, providing high availability.
 Improved Flexibility − In cluster computing, the configuration can be updated and improved by adding new nodes to the existing cluster.
Types of Cluster Computing
The types of cluster computing are as follows −

High Availability (HA) and Failover Clusters

These cluster models provide availability of services and resources in an uninterrupted manner using the system's implicit redundancy. The basic idea is that if a node fails, applications and services can be made available on other nodes. These kinds of clusters serve as the basis for mission-critical applications, mail, file, and application servers.

Load Balancing Clusters

This cluster distributes incoming traffic and resource requests among nodes that run the same programs. In this cluster model, some nodes are responsible for tracking requests, and if a node fails, the requests are redistributed among the nodes that remain available. Such a solution is generally used on web server farms.
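The round-robin idea behind such load-balancing clusters can be sketched in a few lines of Python. This is only an illustration of the scheduling policy; the node names are hypothetical and no real servers are contacted.

```python
# Round-robin dispatch: a toy sketch of how a load-balancing cluster might
# rotate incoming requests across its nodes. Node names are made up.
from itertools import cycle

nodes = ["node-1", "node-2", "node-3"]   # hypothetical cluster nodes
rotation = cycle(nodes)                  # endless round-robin iterator

def dispatch(request_id):
    """Assign an incoming request to the next node in the rotation."""
    node = next(rotation)
    print(f"request {request_id} -> {node}")
    return node

# simulate a burst of incoming requests
for request_id in range(7):
    dispatch(request_id)
```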

HA & Load Balancing Clusters

This cluster model combines both sets of cluster features, resulting in increased availability and scalability of services and resources. This kind of cluster is generally used for email, web, news, and FTP servers.

Distributed & Parallel Processing Clusters

This cluster model improves availability and performance for applications that have large computational tasks. A large computational task is divided into smaller tasks that are distributed across the nodes. Such clusters are generally used for numerical computing or financial analysis that needs high processing power.
Cloud Computing architecture

Cloud computing architecture comprises many cloud components, which are loosely coupled. We can broadly divide the cloud architecture into two parts:

● Front End
● Back End
The two ends are connected through a network, usually the Internet.

Front End
The front end refers to the client part of cloud computing system. It consists of
interfaces and applications that are required to access the cloud computing platforms,
Example - Web Browser.

Back End
The back end refers to the cloud itself. It consists of all the resources required to provide cloud computing services. It comprises huge data storage, virtual machines, security mechanisms, services, deployment models, servers, etc.

Note
● It is the responsibility of the back end to provide built-in security mechanism,
traffic control and protocols.
● The server employs certain protocols known as middleware, which help the
connected devices to communicate with each other.

Components of Cloud Computing Architecture


There are the following components of cloud computing architecture -
1. Client Infrastructure
Client Infrastructure is a Front end component. It provides GUI (Graphical
User Interface) to interact with the cloud.
2. Application
The application may be any software or platform that a client wants to
access.
3. Service
A cloud service manages which type of service you access, according to the client's requirements.
Cloud computing offers the following three types of services:
i. Software as a Service (SaaS) – It is also known as cloud application services. Mostly, SaaS applications run directly through the web browser, which means we do not need to download and install these applications. Some important examples of SaaS are given below:
Example: Google Apps, Salesforce, Dropbox, Slack, HubSpot, Cisco WebEx.
ii. Platform as a Service (PaaS) – It is also known as cloud platform services. It is quite similar to SaaS, but the difference is that PaaS provides a platform for software creation, whereas with SaaS we can access software over the Internet without the need for any platform.
Example: Windows Azure, Force.com, Magento Commerce Cloud, OpenShift.
iii. Infrastructure as a Service (IaaS) – It is also known as cloud infrastructure services. It is responsible for managing application data, middleware, and runtime environments.
Example: Amazon Web Services (AWS) EC2, Google Compute Engine (GCE), Cisco Metapod.
4. Runtime Cloud
Runtime Cloud provides the execution and runtime environment to the
virtual machines.
5. Storage
Storage is one of the most important components of cloud computing. It
provides a huge amount of storage capacity in the cloud to store and
manage data.
6. Infrastructure
It provides services on the host level, application level, and network
level. Cloud infrastructure includes hardware and software components such
as servers, storage, network devices, virtualization software, and other
storage resources that are needed to support the cloud computing model.
7. Management
Management is used to manage components such as application, service,
runtime cloud, storage, infrastructure, and other security issues in the
backend and establish coordination between them.
8. Security
Security is an in-built back end component of cloud computing. It implements
a security mechanism in the back end.
9. Internet
The Internet is the medium through which the front end and back end interact and communicate with each other.

Grid computing
Grid computing is a computing infrastructure that combines computer resources
spread over different geographical locations to achieve a common goal. All unused
resources on multiple computers are pooled together and made available for a
single task. Organizations use grid computing to perform large tasks or solve
complex problems that are difficult to do on a single computer.
Grid computing is defined as a distributed architecture of multiple computers connected by networks that work together to accomplish a joint task. The system operates on a data grid where computers interact to coordinate the jobs at hand.

Why is grid computing important?


Organizations use grid computing for several reasons.
Efficiency
With grid computing, you can break down an enormous, complex task into
multiple subtasks. Multiple computers can work on the subtasks concurrently,
making grid computing an efficient computational solution.
Cost
Grid computing works with existing hardware, which means you can reuse existing
computers. You can save costs while accessing your excess computational
resources. You can also cost-effectively access resources from the cloud.
Flexibility
Grid computing is not constrained to a specific building or location. You can set up
a grid computing network that spans several regions. This allows researchers in
different countries to work collaboratively with the same supercomputing power.
What are the use cases of grid computing?
The following are some common applications of grid computing.
Financial services
Financial institutions use grid computing primarily to solve problems involving
risk management. By harnessing the combined computing powers in the grid, they
can shorten the duration of forecasting portfolio changes in volatile markets.
Gaming
The gaming industry uses grid computing to provide additional computational
resources for game developers. The grid computing system splits large tasks, such
as creating in-game designs, and allocates them to multiple machines. This results
in a faster turnaround for the game developers.
Entertainment
Some movies have complex special effects that require a powerful computer to
create. The special effects designers use grid computing to speed up the production
timeline. They have grid-supported software that shares computational resources to
render the special-effect graphics.
Engineering
Engineers use grid computing to perform simulations, create models, and analyze
designs. They run specialized applications concurrently on multiple machines to
process massive amounts of data. For example, engineers use grid computing to
reduce the duration of a Monte Carlo simulation, a software process that uses past
data to make future predictions.
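As a rough illustration of how such a large task can be split into independent subtasks, the sketch below estimates pi with a Monte Carlo method, dividing the samples among several worker processes on one machine; a real grid would farm the same kind of subtasks out to provider nodes. The worker count and sample size are arbitrary.

```python
# Split a Monte Carlo estimate of pi into independent subtasks and combine
# the partial results.
import random
from multiprocessing import Pool

def count_hits(samples):
    """Count random points that fall inside the unit quarter-circle."""
    hits = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    total_samples = 400_000
    workers = 4
    with Pool(workers) as pool:
        # each worker handles an equal share of the samples
        hits = pool.map(count_hits, [total_samples // workers] * workers)
    pi_estimate = 4 * sum(hits) / total_samples
    print(f"estimated pi = {pi_estimate:.4f}")
```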
What are the components in grid computing?
In grid computing, a network of computers works together to perform the same
task. The following are the components of a grid computing network.

Nodes
The computers or servers on a grid computing network are called nodes. Each node
offers unused computing resources such as CPU, memory, and storage to the grid
network. At the same time, you can also use the nodes to perform other unrelated
tasks. There is no limit to the number of nodes in grid computing. There are three
main types of nodes: control, provider, and user nodes.
Grid middleware
Grid middleware is a specialized software application that connects computing
resources in grid operations with high-level applications. For example, it handles
your request for additional processing power from the grid computing system.
It controls the user sharing of available resources to prevent overwhelming the grid
computers. The grid middleware also provides security to prevent misuse of
resources in grid computing.
Grid computing architecture
Grid architecture represents the internal structure of grid computers. The following
layers are broadly present in a grid node:

1. The top layer consists of high-level applications, such as an application to perform predictive modeling.
2. The second layer, also known as middleware, manages and allocates resources requested by applications.
3. The third layer consists of available computer resources such as CPU, memory, and storage.
4. The bottom layer allows the computer to connect to a grid computing network.
How does grid computing work?
Grid nodes and middleware work together to perform the grid computing task. In
grid operations, the three main types of grid nodes perform three different roles.
User node
A user node is a computer that requests resources shared by other computers in
grid computing. When the user node requires additional resources, the request goes
through the middleware and is delivered to other nodes on the grid computing
system.
Provider node
In grid computing, nodes can often switch between the role of user and provider.
A provider node is a computer that shares its resources for grid computing. When
provider machines receive resource requests, they perform subtasks for the user
nodes, such as forecasting stock prices for different markets. At the end of the
process, the middleware collects and compiles all the results to obtain a global
forecast.
Control node
A control node administers the network and manages the allocation of the grid
computing resources. The middleware runs on the control node. When the user
node requests a resource, the middleware checks for available resources and
assigns the task to a specific provider node.
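The division of labour between a user request, the control logic, and provider workers can be mimicked on a single machine with worker processes and queues, as in the hedged sketch below. The "subtask" here is just summing a slice of numbers; real grid middleware would distribute this work over a network.

```python
# Toy sketch of the three node roles: the control logic splits a user request
# into subtasks, provider workers compute partial results, and the control
# logic compiles them into the final answer.
from multiprocessing import Process, Queue

def provider(task_queue, result_queue):
    """Provider node: take subtasks and return partial results."""
    while True:
        task = task_queue.get()
        if task is None:                 # sentinel: no more work
            break
        result_queue.put(sum(task))      # the subtask is just summing numbers

if __name__ == "__main__":
    tasks, results = Queue(), Queue()
    providers = [Process(target=provider, args=(tasks, results)) for _ in range(3)]
    for p in providers:
        p.start()

    # the user node's request, split by the control logic into three subtasks
    data = list(range(100))
    chunks = [data[i::3] for i in range(3)]
    for chunk in chunks:
        tasks.put(chunk)
    for _ in providers:
        tasks.put(None)                  # tell each provider to stop

    total = sum(results.get() for _ in chunks)   # compile the partial results
    for p in providers:
        p.join()
    print("combined result:", total)     # equals sum(range(100)) = 4950
```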
What are the types of grid computing?
Grid computing is generally classified as follows.
Computational grid
A computational grid consists of high-performance computers. It allows
researchers to use the combined computing power of the computers. Researchers
use computational grid computing to perform resource-intensive tasks, such as
mathematical simulations.
Scavenging grid
While similar to computational grids, CPU scavenging grids have many regular
computers. The term scavenging describes the process of searching for available
computing resources in a network of regular computers. While other network users
access the computers for non-grid–related tasks, the grid software uses these nodes
when they are free. The scavenging grid is also known as CPU scavenging or cycle
scavenging.
Data grid
A data grid is a grid computing network that connects to multiple computers to
provide large data storage capacity. You can access the stored data as if on your
local machine without having to worry about the physical location of your data on
the grid.

Building cloud computing environments

Steps for Building a Cloud Computing Infrastructure

#1: First, decide which technology will be the basis for your on-demand application infrastructure.
#2: Determine what delivery infrastructure you will use to abstract the application infrastructure.
#3: Prepare the network infrastructure.
#4: Provide visibility and automation of management tasks.
#5: Integrate all the moving parts so that the infrastructure realizes the benefits of automation, abstraction, and resource sharing.
In a cloud computing environment, application development occurs through platforms and frameworks that provide various types of services, from the bare-metal infrastructure to custom applications that serve specific purposes.
1. Application development:
Cloud computing provides a powerful computing model that enables users to use applications on demand. Web applications are one of the classes of applications that benefit most from this feature. A broad range of applications using various cloud services can generate workloads for specific user demands.
2. Infrastructure and system development:
The key technologies for providing cloud services around the world include distributed computing, virtualization, service orientation, and Web 2.0. The development of cloud applications and systems requires knowledge of all of these technologies.

Distributed computing
Distributed computing is a key model for cloud computing, since cloud systems are distributed systems. Aside from administrative functions primarily connected with making resources accessible in the cloud, engineers and developers face great challenges with highly dynamic cloud systems, where new nodes and services are provisioned on demand. This feature is somewhat unique to cloud-based solutions and is most often addressed at the middleware level of the computer system. Infrastructure-as-a-Service solutions offer the capability of adding and removing resources, although it is up to those who use the systems to exploit these possibilities with knowledge and efficiency. Platform-as-a-Service solutions incorporate algorithms and rules in their frameworks that control the provisioning and leasing of resources. These can either be completely transparent or controlled by developers.
Computing platforms and technologies:
Cloud application development involves leveraging platforms and frameworks
which offer different services, from the bare metal infrastructure to personalized
applications that serve specific purposes.
1. Amazon web services (AWS):
Amazon Web Services (AWS) is a cloud computing platform with functionalities
such as database storage, delivery of content, and secure IT infrastructure for
companies, among others. It is known for its on-demand services namely Elastic
Compute Cloud (EC2) and Simple Storage Service (S3). Amazon EC2 and
Amazon S3 are essential tools to understand if you want to make the most of AWS
cloud.
Amazon EC2 (short for Elastic Compute Cloud) is a service for running cloud servers. Amazon launched EC2 in 2006; it allowed companies to rapidly and easily spin up servers in the cloud instead of having to buy, set up, and manage their own servers on premises.
Amazon S3 (Simple Storage Service) is a storage service operating on the AWS cloud. It enables users to store virtually every form of data in the cloud and access the storage over a web interface, the AWS Command Line Interface, or the AWS API. To use S3, you need to create what Amazon calls a 'bucket', which is a container that you use to store and retrieve data. If you like, you can set up many buckets. Amazon S3 is an object storage system that works especially well for massive, uneven, or highly dynamic data storage.
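As a small, hedged example of the bucket/object workflow described above, the following sketch uses the AWS SDK for Python (boto3). It assumes credentials and a default region are already configured; the bucket name is made up (S3 bucket names must be globally unique, and regions other than us-east-1 need an explicit location constraint).

```python
# A minimal S3 sketch with boto3: create a bucket, store an object, read it back.
import boto3

s3 = boto3.client("s3")

# hypothetical, globally unique bucket name
s3.create_bucket(Bucket="cc-notes-example-bucket")

# store an object under a key inside the bucket
s3.put_object(
    Bucket="cc-notes-example-bucket",
    Key="notes/cc-notes.txt",
    Body=b"Cloud computing notes",
)

# retrieve the object back over the S3 API
obj = s3.get_object(Bucket="cc-notes-example-bucket", Key="notes/cc-notes.txt")
print(obj["Body"].read().decode())
```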
2 Google AppEngine:
Google App Engine (GAE) is a cloud computing service (belonging to the platform-as-a-service (PaaS) category) for creating and hosting web-based applications within Google's data centers.
GAE web applications are sandboxed and run across multiple redundant servers so that resources can be scaled up according to current traffic requirements. App Engine assigns additional resources to servers to handle increased load.
Google App Engine is a Google platform for developers and businesses to create and run apps using advanced Google infrastructure. These apps must be written in one of the few supported languages, namely Java, Python, PHP, and Go. GAE also requires the use of the Google query language, and Google Bigtable is the database used. Applications must comply with these standards, so they must either be developed with GAE in mind or modified to comply.
GAE is a platform for running and hosting web apps, whether accessed from mobile devices or from the Web. Without this all-in-one service, developers would be responsible for building their own servers, database software, and the APIs needed to make everything work together correctly. GAE takes this pressure off developers so that they can concentrate on the app's front end and on features that enhance the user experience.
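A minimal sketch of what an App Engine application can look like in the standard Python environment is shown below, assuming Flask is listed in requirements.txt and an accompanying app.yaml (for example, just "runtime: python39") tells App Engine how to run it; deployment would typically be done with the gcloud tool.

```python
# main.py - a minimal web handler for Google App Engine's standard Python
# runtime. App Engine routes incoming HTTP requests to the WSGI object `app`
# and scales instances up or down with traffic.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Google App Engine!"

if __name__ == "__main__":
    # local development only; in production App Engine runs the app itself
    app.run(host="127.0.0.1", port=8080)
```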
3 Microsoft Azure:
Microsoft Azure is a platform as a service (PaaS) for developing and managing applications using Microsoft products and within Microsoft data centers. It is a complete suite of cloud products that allows users to build business-class applications without building their own infrastructure.
Three cloud-centric products are available on the Azure cloud platform: Windows Azure, SQL Azure, and the Azure AppFabric controller. These provide the infrastructure hosting facility for applications.
In Azure, a cloud service role is a set of managed, load-balanced, Platform-as-a-Service virtual machines that work together to accomplish tasks. Cloud service roles are controlled by the Azure fabric controller and provide a combination of scalability, control, and customization.

Web Role is an Azure cloud service role that is configured and adapted to run web applications developed with programming languages and technologies supported by Internet Information Services (IIS), such as ASP.NET, PHP, Windows Communication Foundation, and FastCGI.

Worker Role is any Azure role that runs applications and services that do not generally require IIS. IIS is not enabled by default in worker roles. Worker roles are mainly used to support background processes for web applications and to perform tasks such as automatically compressing uploaded images, running scripts when something changes in the database, getting new messages out of a queue and processing them, and more.

VM Role: The VM role is a type of Azure platform role that supports the automated management of already installed service packages, fixes, updates, and applications for Windows Azure.

The principal difference is that a Web Role automatically deploys and hosts the application via IIS, while a Worker Role does not use IIS and runs the application on its own. The two can be managed similarly and can run on the same Azure instances if they are deployed and delivered via the Azure Service Platform.

In certain cases, Web Role and Worker Role instances work together and are used concurrently by an application. For example, a web role instance can accept requests from users and then pass them to a worker role instance for processing.
4 Hadoop:
Apache Hadoop is an open-source software framework for storage and large-scale processing of data sets on clusters of commodity hardware. Hadoop is a top-level Apache project created and operated by a global community of contributors and users. It is released under the Apache License 2.0.
MapReduce works in two phases, Map and Reduce. Map tasks deal with the splitting and mapping of the data, while Reduce tasks shuffle and reduce the data.
Hadoop can run MapReduce programs written in a variety of languages, such as Java, Ruby, Python, and C++. MapReduce programs are parallel in nature and thus very useful for large-scale analysis of data across multiple cluster machines.

The input to each phase is key-value pairs. In addition, the programmer needs to specify two functions: the map function and the reduce function.
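To make the map and reduce functions concrete, here is a hedged word-count sketch in the Hadoop Streaming style, where each phase reads key-value pairs from stdin and writes them to stdout; the sort-by-key step between the two phases is normally performed by the framework. The file layout and job wiring are assumptions for illustration.

```python
# wordcount.py - run as "python wordcount.py map" or "python wordcount.py reduce".
# Map phase: emit (word, 1) pairs. Reduce phase: sum counts per word, assuming
# its input has already been sorted by key (as Hadoop's shuffle would do).
import sys
from itertools import groupby

def run_map(lines):
    for line in lines:
        for word in line.strip().split():
            print(f"{word}\t1")

def run_reduce(lines):
    pairs = (line.strip().split("\t") for line in lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    phase = sys.argv[1]
    (run_map if phase == "map" else run_reduce)(sys.stdin)
```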

Principles of Parallel and Distributed Computing:

Underlying Principles of Parallel and Distributed Computing Systems
● The terms parallel computing and distributed computing are often used interchangeably.
● Parallel computing implies a tightly coupled system.
● It is characterised by homogeneity of components (uniform structure).
● Multiple processors share the same physical memory.

Parallel Processing
● Processing multiple tasks simultaneously in multiple processors is called parallel processing.
● Parallel program consists of multiple processes (tasks) simultaneously solving a given problem.
● Divide-and-Conquer technique is used.
Applications for Parallel Processing
● Science and Engineering
● Atmospheric Analysis
● Earth Sciences
● Electrical Circuit Design
● Industrial and Commercial
● Data Mining
● Web Search Engine
● Graphics Processing

Why use parallel processing


● Save time and money: More resources at a task will shorten its time for completion, with
potential cost savings.
● Provide concurrency: Single computing resources can only do one task at a time.
● Serial computing limits: Transmission speeds depend directly upon hardware.
Parallel computing memory architecture types
1. Shared memory architecture
2. Distributed memory architecture
3. Hybrid distributed shared memory architecture
1. Shared Memory Architecture
● All processors access all memory as a global address space.
● Multiple processors can operate independently, but they share the same memory resources.
● A change in a memory location made by one processor is visible to all the other processors.
● Shared memory is classified into two types:
● Uniform Memory Access (UMA)
● Non-Uniform Memory Access (NUMA)
Uniform Memory Access (UMA)
● Represented by symmetric multiprocessor (SMP) machines.
● Identical processors have equal access and equal access times to memory.
Non-Uniform Memory Access (NUMA)
● Often built by linking SMPs; one SMP can directly access the memory of another SMP.
● Not all processors have equal access time to all memories.
● Memory access across the link is slower.
Advantages
● The global address space provides a user-friendly programming perspective to memory.
● Data sharing between tasks is both fast and uniform due to the proximity of memory to the CPUs.
Disadvantages
● Lack of scalability between memory and CPUs.
● Adding more CPUs increases traffic on the shared memory-CPU path and for cache/memory management.
● The programmer is responsible for synchronisation constructs that ensure correct access to global memory.
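A minimal shared-memory sketch using Python's multiprocessing module is shown below: several processes update one counter that lives in shared memory, and a lock provides the synchronization construct mentioned above. The process and step counts are arbitrary.

```python
# Shared-memory parallelism: all workers see (and update) the same counter.
from multiprocessing import Process, Value, Lock

def worker(counter, lock, steps):
    for _ in range(steps):
        with lock:                  # synchronize access to the shared location
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)         # an integer stored in shared memory
    lock = Lock()
    procs = [Process(target=worker, args=(counter, lock, 10_000)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)            # 40000: every process updated the same memory
```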

2. Distributed Memory Architecture
● A distributed system is a collection of a large number of independent computers that appears to its users as a single coherent system.
● In a distributed memory system, each processor has its own local memory.
Advantages
● Memory is scalable with the number of processors.
● Each processor can rapidly access its own memory without any interference.
● Cost-effective use of commodity processors and networking.
Disadvantages
● The programmer is responsible for many of the details associated with data communication between processors.
● Non-uniform memory access times: data residing on a remote node takes longer to access than local data.
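In the distributed-memory model each processor owns its local memory, and data moves only through explicit messages. The hedged sketch below uses mpi4py (assuming an MPI runtime is installed; launch with something like "mpiexec -n 4 python sum_mpi.py") to combine per-process partial sums on one rank.

```python
# sum_mpi.py - message passing over distributed memory with mpi4py.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()     # id of this process
size = comm.Get_size()     # total number of processes

# each process holds its own slice of the data in its own local memory
local_data = range(rank * 10, rank * 10 + 10)
local_sum = sum(local_data)

# partial results travel as messages and are combined on rank 0
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("global sum:", total)
```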

3. Hybrid Distributed-Shared Memory Architecture
● The shared memory component can be a shared memory machine or a Graphics Processing Unit (GPU).
● The distributed memory component is the networking of multiple shared-memory/GPU machines.
Advantages
● Increased scalability.
● Whatever is common to both shared and distributed memory architectures applies.
Disadvantages
● Increased programmer complexity is an important disadvantage.

Parallel versus distributed computing


Difference between Parallel Computing and Distributed Computing:

1. Parallel computing: many operations are performed simultaneously. Distributed computing: system components are located at different locations.
2. Parallel computing: a single computer is required. Distributed computing: uses multiple computers.
3. Parallel computing: multiple processors perform multiple operations. Distributed computing: multiple computers perform multiple operations.
4. Parallel computing: may have shared or distributed memory. Distributed computing: has only distributed memory.
5. Parallel computing: processors communicate with each other through a bus. Distributed computing: computer systems communicate with each other through message passing.
6. Parallel computing: improves system performance. Distributed computing: improves scalability, fault tolerance, and resource-sharing capabilities.

Distributed Computing vs. Parallel


Computing:
Having covered the concepts, let’s dive into the differences between them:

Number of Computer Systems Involved


Parallel computing generally requires one computer with multiple processors.
Multiple processors within the same computer system execute instructions
simultaneously.

All the processors work towards completing the same task. Thus they have
to share resources and data.

In distributed computing, several computer systems are involved. Here


multiple autonomous computer systems work on the divided tasks.
These computer systems can be located at different geographical locations
as well.

Dependency Between Processes


In parallel computing, the tasks to be solved are divided into multiple smaller
parts. These smaller tasks are assigned to multiple processors.

Here the outcome of one task might be the input of another. This increases
dependency between the processors. We can also say, parallel computing
environments are tightly coupled.

Some distributed systems might be loosely coupled, while others might be


tightly coupled.


Which is More Scalable?


In parallel computing environments, the number of processors you can add is restricted, because the bus connecting the processors and the memory can handle only a limited number of connections. This limitation makes parallel systems less scalable.

Distributed computing environments are more scalable. This is because the


computers are connected over the network and communicate by passing
messages.

Resource Sharing
In systems implementing parallel computing, all the processors share the
same memory.

They also share the same communication medium and network. The
processors communicate with each other with the help of shared memory.

Distributed systems, on the other hand, have their own memory and
processors.

Synchronization
In parallel systems, all the processes share the same master clock for
synchronization. Since all the processors are hosted on the same physical
system, they do not need any synchronization algorithms.

In distributed systems, the individual processing systems do not have access


to any central clock. Hence, they need to implement synchronization
algorithms.

Where Are They Used?


Parallel computing is often used in places requiring higher and faster
processing power. For example, supercomputers.

Since there are no lags in the passing of messages, these systems have high
speed and efficiency.

Distributed computing is used when computers are located at different


geographical locations.

In these scenarios, speed is generally not a crucial matter. They are the
preferred choice when scalability is required.

All in all, we can say that both computing methodologies are needed. Both serve different purposes and are handy based on different circumstances. It is up to the user or the enterprise to make a judgment call as to which methodology to opt for.
Eras of computing
The two fundamental and dominant models of computing are:
● Sequential
● Parallel
The four key elements of computing developed during these eras are architectures, compilers, applications, and problem-solving environments.

Elements of parallel computing
It is now clear that silicon-based processor chips are reaching their physical limits. Processing speed is constrained by the speed of light, and the density of transistors packaged in a processor is constrained by thermodynamic limitations.
The development of parallel processing is being influenced by many factors. The prominent among them include the following:
1. Computational requirements are ever increasing in the areas of both scientific and business computing.
2. Sequential architectures are reaching physical limitations, as they are constrained by the speed of light and the laws of thermodynamics.
3. Hardware improvements in pipelining, superscalar execution, and the like are not scalable and require sophisticated compiler technology.
4. Vector processing works well only for certain kinds of problems.
5. The technology of parallel processing is mature and can be exploited commercially; there is already significant R&D work on development tools and environments.
6. Significant developments in networking technology are paving the way for heterogeneous computing.

Hardware architectures for parallel processing
The core elements of parallel processing are CPUs. Based on the number of instruction and data streams that can be processed simultaneously, computing systems are classified into the following four categories:
1. Single-instruction, single-data (SISD) systems
2. Single-instruction, multiple-data (SIMD) systems
3. Multiple-instruction, single-data (MISD) systems
4. Multiple-instruction, multiple-data (MIMD) systems

Single-instruction, single-data (SISD) systems

An SISD computing system is a uniprocessor machine capable of executing a single instruction, which operates on a single data stream. Eg: C = A + B

Single-instruction, multiple-data (SIMD) systems

An SIMD computing system is a multiprocessor machine capable of executing the same instruction on all the CPUs but operating on different data streams. Eg: Ci = Ai * Bi, where i = 1 to n
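The SIMD example Ci = Ai * Bi can be illustrated with NumPy, whose vectorized operations apply one operation across whole arrays and typically map onto SIMD hardware instructions (the exact instructions used depend on the CPU and libraries, so this is illustrative only).

```python
# One multiply applied element-wise to every pair (Ai, Bi) at once.
import numpy as np

a = np.arange(1.0, 9.0)        # A1..A8
b = np.arange(8.0, 0.0, -1.0)  # B1..B8

c = a * b                      # Ci = Ai * Bi for all i in a single expression
print(c)
```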
Multiple-instruction, single-data (MISD) systems

An MISD computing system is a multiprocessor machine capable of executing different instructions on different processing elements (PEs), but all of them operate on the same data set. Eg: y = sin(x) + cos(x) + tan(x)

Multiple-instruction, multiple-data (MIMD) systems

An MIMD computing system is a multiprocessor machine capable of executing multiple instructions on multiple data sets.

Approaches to parallel programming
● Data parallelism
● Process parallelism
● Farmer-and-worker model

Levels of parallelism
● Large grain (or task level)
● Medium grain (or control level)
● Fine grain (data level)
● Very fine grain (multiple-instruction issue)
Taxonomy of virtualization techniques
Virtualization is mainly used to emulate execution environments, storage, and networks.

Process-level techniques are implemented on top of an existing operating system, which has full
control of the hardware.

System-level techniques are implemented directly on hardware and require either no support or only minimal support from an existing operating system.
The world of IT is actively venturing into virtualization more than ever.
Organizations are opting for virtualization owing to the various advantages
that it provides.

Moreover, with the advent of cloud, new businesses prefer working in a


virtual environment as it helps remote working environments.

But, like all things in this world, not everything is as rosy as it seems. There
are pros and cons to everything, and that includes virtualization.

Let’s take a look at virtualization through its various advantages and


disadvantages.

● Pros of Virtualization
1. Uses Hardware Efficiently
2. Available at all Times
3. Recovery is Easy
4. Quick and Easy Setup
5. Cloud Migration is Easier
● Cons of Virtualization
1. High Initial Investment
2. Data Can be at Risk
3. Quick Scalability is a Challenge
4. Performance Witnesses a Dip
5. Unintended Server Sprawl

Pros of Virtualization
Uses Hardware Efficiently
Most businesses spend a lot of capital setting up their systems and servers
but eventually use only a fraction of it effectively.

Instead, if they opt for virtualization, they can create multiple instances on
the same hardware and extract the maximum value out of it.

This way, they can save hardware costs and attain a high-efficiency level.

Available at all Times


One significant advantage of virtualization is the advanced features that it
provides; allowing virtual instances to be available at all times.

The biggest advantage here is the capability to move the virtual instance
from one server location to another. It can be done without having to close
and restart the processes that are already running.

It also ensures your data is not lost during the migration process. Hence, it
won’t matter if there are unplanned outages, your instance will always be up
and running at all times.

Virtualization service providers are thus providing 99.999% uptime today


owing to the same reason.

Recovery is Easy
With virtual instances on remote servers, duplication, backup, and recovery
are also easier.

With new tools available that provide near real-time data backup and
mirroring, one can be sure of zero data loss at any point in time.

In case of downtime or a crash, they can simply pick up from the last saved
position mirrored on another virtual instance and run with it.

This ensures business continuity at all times. Organizations can attain the
highest efficiency with this.
Quick and Easy Setup
Setting up physical systems and servers is a time-consuming affair. You need
to raise a purchase order and wait for it to be processed.

Once done, then await the products to be shipped and set up, which can take
hours.

After getting all the connections right, you still have to install the required
OS and software which consumes more hours.

Overall, it is a long wait worth days or even weeks for the entire setting-up
process.

On the flip side, with virtualization you can simply get started within minutes and have a productive setup.

Cloud Migration is Easier


Many organizations are using old school methodologies even today.

They have been doing so because they had made a substantial investment
back in the day to ensure their IT systems were always up and running.

With the current digital transformation wave, organizations are looking to


move to the cloud for various advantages.

The challenge here is the migration of such a large amount of data available
on-premise.

Virtualization would have made the task much easier because most of the
data would already be available on a server.

Hence, migrating all of it to the cloud would be easier.


Cons of Virtualization
High Initial Investment
As helpful as virtualization is, it does have some flaws, and the high initial investment is one of the major ones.

Virtualization indeed helps the business reduce operational costs. But the
initial setup cost of servers and storage is higher than a regular setup.
Hence, companies need years before they break even and then realize
savings and higher profitability with virtualization.

It is a bad bet for companies opting for a large set up at the beginning.

They could instead opt for a regular desktop setup and then gradually make
a move to desktop virtualization.

Data Can be at Risk


Working on virtual instances on shared hardware resources entails your data
is hosted on a third-party resource.

It can leave your data vulnerable to attacks or unauthorized access. This is a


challenge if your service provider does not have proper security solutions to
safeguard your virtual instance and data.

It is true, specifically in the case of storage virtualization.

Quick Scalability is a Challenge


Scaling on virtualization is a breeze, but not so much if it has to be done in a
short period of time.

In case of physical setup, one can quickly set up new hardware and scale,
even if it entails some initial setting up complications.

With virtualization, having to ensure all the requisite software, security,


enough storage, and resource availability can be a tedious task.

It consumes more time than one might expect since a third-party provider is
involved.

Moreover, the additional cost involved in increased resource use is another


challenge to manage.

Performance Witnesses a Dip


It is true that virtualization allows the optimum use of all resources.
However, it is also a challenge when you need that additional boost
sometimes, but it is not available.

Resources in virtualization are shared. The same resources that a single user
might have consumed are now shared among three or four users.

The overall available resources might not be shared equally or may be


shared in some ratio depending upon the tasks being run.
As the complexity of tasks increases, so does the need for performance from
the system. It results in a substantially higher time required to complete the
task.

Unintended Server Sprawl


Unintended server sprawl is a major cause of concern for many server admins and users alike. Many of the issues raised with service desks are about server sprawl.

Setting up a physical server consumes time and resources, whereas a virtual


server can be created in a matter of minutes.

Every time, instead of reusing the same virtual server, users tend to create
new servers since it allows them the chance to make a fresh start.

The server administrator who should be handling five or six servers has to
handle over 20 virtual servers. This can cause a major complication in the
smooth operations, and forced termination of certain servers can also cause
loss of data.

Conclusion
It is true that more organizations are using virtualization.

They need to understand whether virtualization is really working for them. They should not simply follow the market trend and opt for it; otherwise it could prove counter-productive for them.

In case they wish to opt for virtualization, they might also have to tweak
their operation processes accordingly.

But most importantly, they must consider all the pros and cons properly and
make an informed decision on how to make the most of virtualization for
their business.

Logical Network Perimeter

The logical network perimeter establishes a virtual network boundary that can encompass and isolate a group of related cloud-based IT resources that may be physically distributed. It is defined as the isolation of a network environment from the rest of a communications network.
The logical network perimeter can be implemented to:
● isolate IT resources in a cloud from non-authorized users
● isolate IT resources in a cloud from non-users
● isolate IT resources in a cloud from cloud consumers
● control the bandwidth that is available to isolated IT resources
Logical network perimeters are typically established via network devices that
supply and control the connectivity of a data center and are commonly
deployed as virtualized IT environments that include:
● Virtual Firewall – An IT resource that actively filters network traffic to and
from the isolated network while controlling its interactions with the
Internet.
● Virtual Network – Usually acquired through VLANs, this IT resource isolates
the network environment within the data center infrastructure.

Figure 1 – The symbols used to represent a virtual firewall (left) and a virtual
network (right).
Figure 1 introduces the notation used to denote these two IT resources.
Figure 2 depicts a scenario in which one logical network perimeter contains a
cloud consumer’s on-premise environment, while another contains a cloud
provider’s cloud-based environment. These perimeters are connected
through a VPN that protects communications, since the VPN is typically
implemented by point-to-point encryption of the data packets sent between
the communicating endpoints.
Figure 2 – Two logical network perimeters surround the cloud consumer and
cloud provider environments
Virtual servers

Virtual servers have the same capabilities as a physical server, but not the
underlying physical machinery. A physical server can create multiple,
separate virtual servers with a hypervisor or container engine using
virtualization technology, and the instances share physical server resources
like CPU and memory.

Not long ago, some feared a future of bustling data centers covering the
globe. While that sounds hyperbolic, spatial considerations have always been
a critical part of any data center or server room. Thanks to virtualization, the
expansion of physical infrastructure slowed in the last decade.

As more organizations reap the benefits of virtualization, virtual servers are


already becoming a critical component of the modern hybrid ecosystem.

This article looks at the pros and cons of virtual servers, how to implement
virtualization in your network, and the different types of virtualization, as
well as guidance for managing virtual servers.

Virtual Servers: Pros and Cons

Advantages: reduced costs, spatial optimization, enhanced scalability, backup and recovery, technical support, increased workload efficiency, ease of deploying new updates.

Disadvantages: technical management, lagging performance, upfront costs, less scalable, legacy applications, finite space, reduced control.

Virtual Server Advantages

● Reduced costs due to decreasing power, cooling, and overhead expenses

● Spatial optimization by condensing legacy physical servers into virtual servers

● Enhanced scalability as administrators can create new virtual servers as


needed
● Backup and recovery features for fast, reliable restoration

● Technical support from a virtualization provider for setup, configuration, and


maintenance
● Increased workload efficiency and load balancing of network demands

● Ease of deploying new updates and software to a fleet of virtual servers

Virtual Server Disadvantages

● Technical management to create, configure, monitor, and secure virtual


instances
● Lagging performance when a host’s virtual servers are at higher activity levels

● Upfront costs for purchasing the physical hosts and virtualization software
licensing
● Less scalable than cloud platforms

● Legacy applications may not be compatible with virtualization

● Finite space is usually limited to a single virtual machine or multiple containers

● Reduced control relative to managing an in-house server fleet; bound to


vendor SLA

How to Deploy Virtual Servers


Virtualization requires an abstraction layer between the server hardware and
software to create multiple virtual instances on a single physical server. This
capability is a few clicks away on modern computers and servers with a
hypervisor or container engine.

For medium to large enterprises, administrators can strategically implement


virtualization to optimize space and performance needs. At large, virtual
servers are available to all at remote locations through managed
colocation and private data centers. Without physically accessing the host
server, a network administrator can remotely control the virtual server’s
functionality.
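As a hedged illustration of managing virtual servers without touching the physical host, the sketch below uses the libvirt Python bindings against a KVM/QEMU host; the connection URI and domain name are placeholders, and the libvirt-python package plus access to the host are assumed.

```python
# List the virtual servers on a remote host and start one of them by name.
import libvirt

# connect to the hypervisor over SSH rather than walking up to the machine
conn = libvirt.open("qemu+ssh://admin@virt-host.example.com/system")

for dom in conn.listAllDomains():
    state, _ = dom.state()
    running = state == libvirt.VIR_DOMAIN_RUNNING
    print(dom.name(), "running" if running else "stopped")

dom = conn.lookupByName("web-vm-01")   # hypothetical virtual server name
if not dom.isActive():
    dom.create()                       # boots the stopped virtual machine

conn.close()
```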


Types of Server Virtualization


All virtualization methods help organizations optimize physical server
availability and agility. What differentiates these methods are the resources
and objectives of the network undergoing virtualization.
Full Virtualization

Full virtualization employs a hypervisor that traps and emulates privileged operations on behalf of the virtual servers. The software-assisted approach uses binary translation (BT) with direct execution to implement the hypervisor. Hardware-assisted virtualization is achievable with current x86 processors, with the hypervisor running either on bare metal (hypervisor type 1) or hosted on an operating system (hypervisor type 2).

OS-Level Virtualization

OS-level virtualization is the newest methodology in this space, thanks to virtualization capabilities embedded in modern operating systems. Like para-virtualization, OS-level virtual servers do not emulate the host's hardware. With the pertinent software, the OS kernel creates separate and lightweight instances called containers. Virtualization and containerization are slightly different processes.
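A hedged sketch of OS-level virtualization in practice: the Docker SDK for Python (the docker package) asks a local container engine to run a lightweight container from an existing image. A running Docker engine is assumed, and the image and command are illustrative.

```python
# Run a throwaway container and list the containers known to the engine.
import docker

client = docker.from_env()   # connect to the local container engine

# each container is an isolated user-space instance sharing the host kernel
output = client.containers.run("alpine:latest", "echo hello from a container")
print(output.decode())

for c in client.containers.list(all=True):
    print(c.name, c.status)
```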

Para-Virtualization

Para-virtualization also uses a hypervisor, but the virtual servers do not fully
emulate the physical host’s hardware. Instead, an API – typically integrated
into modern servers – directly exchanges calls to host and virtual server
operating systems. The resulting virtual servers recognize their environment
as an extension of the host’s resources and neighboring virtual servers.


Cloud Storage
An introduction to the important aspects of cloud storage, including how it works, its benefits,
and the different types of cloud storage that are available.

What is cloud storage?


Cloud storage allows you to save data and files in an off-site location that you access
either through the public internet or a dedicated private network connection. Data that
you transfer off-site for storage becomes the responsibility of a third-party cloud
provider. The provider hosts, secures, manages, and maintains the servers and
associated infrastructure and ensures you have access to the data whenever you need
it.
Cloud storage delivers a cost-effective, scalable alternative to storing files on on-
premise hard drives or storage networks. Computer hard drives can only store a finite
amount of data. When users run out of storage, they need to transfer files to an external
storage device. Traditionally, organizations built and maintained storage area networks
(SANs) to archive data and files. SANs are expensive to maintain, however, because as
stored data grows, companies have to invest in adding servers and infrastructure to
accommodate increased demand.
Cloud storage services provide elasticity, which means you can scale capacity as your
data volumes increase or dial down capacity if necessary. By storing data in a cloud,
your organization saves by paying for storage technology and capacity as a service, rather than investing in the capital costs of building and maintaining in-house storage networks. You pay only for the capacity you use. While your costs might
increase over time to account for higher data volumes, you don’t have to overprovision
storage networks in anticipation of increased data volume.
How does it work?
Like on-premise storage networks, cloud storage uses servers to save data; however,
the data is sent to servers at an off-site location. Most of the servers you use are virtual
machines hosted on a physical server. As your storage needs increase, the provider
creates new virtual servers to meet demand.
Typically, you connect to the storage cloud either through the internet or a dedicated
private connection, using a web portal, website, or a mobile app. The server with which
you connect forwards your data to a pool of servers located in one or more data
centers, depending on the size of the cloud provider’s operation.
As part of the service, providers typically store the same data on multiple machines for
redundancy. This way, if a server is taken down for maintenance or suffers an outage,
you can still access your data.
Cloud storage is available in private, public and hybrid clouds.
● Public storage clouds: In this model, you connect over the internet to a storage cloud that’s
maintained by a cloud provider and used by other companies. Providers typically make services
accessible from just about any device, including smartphones and desktops and let you scale
up and down as needed.
● Private cloud storage: Private cloud storage setups typically replicate the cloud model, but
they reside within your network, leveraging a physical server to create instances of virtual
servers to increase capacity. You can choose to take full control of an on-premise private cloud
or engage a cloud storage provider to build a dedicated private cloud that you can access with a
private connection. Organizations that might prefer private cloud storage include banks or retail
companies due to the private nature of the data they process and store.
● Hybrid cloud storage: This model combines elements of private and public clouds, giving
organizations a choice of which data to store in which cloud. For instance, highly regulated data
subject to strict archiving and replication requirements is usually more suited to a private cloud
environment, whereas less sensitive data (such as email that doesn’t contain business secrets)
can be stored in the public cloud. Some organizations use hybrid clouds to supplement their
internal storage networks with public cloud storage.
Pros and cons
As with any other cloud-based technology, cloud storage offers some
distinct advantages. But it also raises some concerns for companies, primarily over
security and administrative control.
Pros
The pros of cloud storage include the following:
● Off-site management: Your cloud provider assumes responsibility for maintaining and
protecting the stored data. This frees your staff from tasks associated with storage, such as
procurement, installation, administration, and maintenance. As such, your staff can focus on
other priorities.
● Quick implementation: Using a cloud service accelerates the process of setting up and adding
to your storage capabilities. With cloud storage, you can provision the service and start using it
within hours or days, depending on how much capacity is involved.
● Cost-effective: As mentioned, you pay for the capacity you use. This allows your organization
to treat cloud storage costs as an ongoing operating expense instead of a capital expense with
the associated upfront investments and tax implications.
● Scalability: Growth constraints are one of the most severe limitations of on-premise storage.
With cloud storage, you can scale up as much as you need. Capacity is virtually unlimited.
● Business continuity: Storing data offsite supports business continuity in the event that a
natural disaster or terrorist attack cuts access to your premises.
Cons
Cloud storage cons include the following:
● Security: Security concerns are common with cloud-based services. Cloud storage providers
try to secure their infrastructure with up-to-date technologies and practices, but occasional
breaches have occurred, creating discomfort with users.
● Administrative control: Being able to view your data, access it, and move it at will is another
common concern with cloud resources. Offloading maintenance and management to a third
party offers advantages but also can limit your control over your data.
● Latency: Delays in data transmission to and from the cloud can occur as a result of traffic
congestion, especially when you use shared public internet connections. However, companies
can minimize latency by increasing connection bandwidth.
● Regulatory compliance: Certain industries, such as healthcare and finance, have to comply
with strict data privacy and archival regulations, which may prevent companies from using cloud
storage for certain types of files, such as medical and investment records. If you can, choose a
cloud storage provider that supports compliance with any industry regulations impacting your
business.
Examples
There are three main types of cloud storage: block, file, and object. Each comes with
its set of advantages:
Block storage
Traditionally employed in SANs, block storage is also common in cloud storage
environments. In this storage model, data is organized into large volumes called
"blocks." Each block represents a separate hard drive. Cloud storage providers use
blocks to split large amounts of data among multiple storage nodes. Block storage
resources provide better performance over a network thanks to low IO latency (the time
it takes to complete a connection between the system and client) and are especially
suited to large databases and applications.
Used in the cloud, block storage scales easily to support the growth of your
organization’s databases and applications. Block storage would be useful if your
website captures large amounts of visitor data that needs to be stored.
File storage
The file storage method saves data in the hierarchical file and folder structure with
which most of us are familiar. The data retains its format, whether residing in the
storage system or in the client where it originates, and the hierarchy makes it easier and
more intuitive to find and retrieve files when needed. File storage is commonly used for
development platforms, home directories, and repositories for video, audio, and other
files.
Object storage
Object storage differs from file and block storage in that it manages data as objects.
Each object includes the data in a file, its associated metadata, and an identifier.
Objects store data in the format it arrives in and makes it possible to customize
metadata in ways that make the data easier to access and analyze. Instead of being
organized in files or folder hierarchies, objects are kept in repositories that deliver
virtually unlimited scalability. Since there is no filing hierarchy and the metadata is
customizable, object storage allows you to optimize storage resources in a cost-
effective way.
Cloud storage for business
A variety of cloud storage services is available for just about every kind of business—
anything from sole proprietorships to large enterprises.
If you run a small business, cloud storage could make sense, particularly if you don’t
have the resources or skills to manage storage yourself. Cloud storage can also help
with budget planning by making storage costs predictable, and it gives you the ability to
scale as the business grows.
If you work at a larger enterprise (e.g., a manufacturing company, financial services, or
a retail chain with dozens of locations), you might need to transfer hundreds of
gigabytes of data for storage on a regular basis. In these cases, you should work with
an established cloud storage provider that can handle your volumes. In some cases,
you may be able to negotiate custom deals with providers to get the best value.
Security
Cloud storage security is a serious concern, especially if your organization handles
sensitive data like credit card information and medical records. You want assurances
your data is protected from cyber threats with the most up-to-date methods available.
You will want layered security solutions that include endpoint protection, content and
email filtering and threat analysis, as well as best practices that comprise regular
updates and patches. And you need well-defined access and authentication policies.
Most cloud storage providers offer baseline security measures that include access
control, user authentication, and data encryption. Ensuring these measures are in place
is especially important when the data in question involves confidential business files,
personnel records, and intellectual property. Data subject to regulatory compliance may
require added protection, so you need to check that your provider of choice complies
with all applicable regulations.
Whenever data travels, it is vulnerable to security risks. You share the responsibility for
securing data headed for a storage cloud. Companies can minimize risks by encrypting
data in motion and using dedicated private connections (instead of the public internet) to
connect with the cloud storage provider.
Backup
Data backup is as important as security. Businesses need to back up their data so they
can access copies of files and applications— and prevent interruptions to business—if
data is lost due to cyberattack, natural disaster, or human error.
Cloud-based data backup and recovery services have been popular from the early days
of cloud-based solutions. Much like cloud storage itself, you access the service through
the public internet or a private connection. Cloud backup and recovery services free
organizations from the tasks involved in regularly replicating critical business data to
make it readily available should you ever need it in the wake of data loss caused by a
natural disaster, cyber attack or unintentional user error.
Cloud backup offers the same advantages to businesses as storage—cost-
effectiveness, scalability, and easy access. One of the most attractive features of cloud
backup is automation. Asking users to continually back up their own data produces
mixed results since some users always put it off or forget to do it. This creates a
situation where data loss is inevitable. With automated backups, you can decide how
often to back up your data, be it daily, hourly or whenever new data is introduced to
your network.
Backing up data off-premise in a cloud offers an added advantage: distance. A building
struck by a natural disaster, terror attack, or some other calamity could lose its on-
premise backup systems, making it impossible to recover lost data. Off-premise backup
provides insurance against such an event.
Servers
Cloud storage servers are virtual servers—software-defined servers that emulate
physical servers. A physical server can host multiple virtual servers, making it easier to
provide cloud-based storage solutions to multiple customers. The use of virtual servers
boosts efficiency because physical servers otherwise typically operate below capacity,
which means some of their processing power is wasted.
This approach is what enables cloud storage providers to offer pay-as-you-go cloud
storage, and to charge only for the storage capacity you consume. When your cloud
storage servers are about to reach capacity, the cloud provider spins up another server
to add capacity—or makes it possible for you to spin up an additional virtual machine on
your own.
“Virtualization: A Complete Guide” offers a complete overview of virtualization and
virtual servers.
Open source
If you have the expertise to build your own virtual cloud servers, one of the options
available to you is open source cloud storage. Open source means the software used in
the service is available to users and developers to study, inspect, change and distribute.
Open source cloud storage is typically associated with Linux and other open source
platforms that provide the option to build your own storage server. Advantages of this
approach include control over administrative tasks and security.
Cost-effectiveness is another plus. While cloud-based storage providers give you
virtually unlimited capacity, it comes at a price. The more storage capacity you use, the
higher the price gets. With open source, you can continue to scale capacity as long as
you have the coding and engineering expertise to develop and maintain a storage
cloud.
Different open source cloud storage providers offer varying levels of functionality, so
you should compare features before deciding which service to use. Some of the
functions available from open source cloud storage services include the following:
● Syncing files between devices in multiple locations
● Two-factor authentication
● Auditing tools
● Data transfer encryption
● Password-protected sharing
Pricing
As mentioned, cloud storage helps companies cut costs by eliminating in-house storage
infrastructure. But cloud storage pricing models vary. Some cloud storage providers charge a monthly price per gigabyte, while others charge a flat fee for a fixed amount of stored capacity. Fees vary widely; you may pay USD 1.99 or USD 10 for 100 GB of storage monthly, based on the provider you choose. Additional fees for transferring data from your network to the storage cloud are usually included in the overall service price.
Providers may charge additional fees on top of the basic cost of storage and data
transfer. For instance, you may incur an extra fee every time you access data in the
cloud to make changes or deletions, or to move data from one place to another. The
more of these actions you perform on a monthly basis, the higher your costs will be.
Even if the provider includes some base level of activity in the overall price, you will
incur extra charges if you exceed the allowable limit.
Providers may also factor the number of users accessing the data, how often users
access data, and how far the data has to travel into their charges. They may charge
differently based on the types of data stored and whether the data requires added levels
of security for privacy purposes and regulatory compliance.
Examples
Cloud storage services are available from dozens of providers to suit all needs, from
those of individual users to multinational organizations with thousands of locations. For
instance, you can store emails and passwords in the cloud, as well as files like
spreadsheets and Word documents for sharing and collaborating with other users. This capability makes it easier for users to work together on a project, which explains why file transfer and sharing are among the most common uses of cloud storage services.
Some services provide file management and syncing, ensuring that versions of the
same files in multiple locations are updated whenever someone changes them. You can
also get file management capability through cloud storage services. With it, you can
organize documents, spreadsheets, and other files as you see fit and make them
accessible to other users. Cloud storage services also can handle media files, such as
video and audio, as well as large volumes of database records that would otherwise
take up too much room inside your network.
Whatever your storage needs, you should have no trouble finding a cloud storage service to deliver the capacity and functionality you need.

Cloud Usage Monitor

Cloud monitoring, or the cloud usage monitor, is the practice of managing and reviewing operational processes in the cloud infrastructure. It is implemented via automated monitoring software that provides central access to and control over the cloud infrastructure.

Organizations can check the health and operational status of cloud components and devices.

The concerns that arise depend on the type of cloud structure the client uses. With a public cloud, visibility into and control over the underlying infrastructure are limited, so monitoring and management rely largely on the tools the provider exposes. A private cloud gives the IT department more flexibility and control, along with the consumption advantages of the cloud, which is why many organizations use one.

Irrespective of the cloud structure type the company is using, monitoring security and performance is critical.

How Cloud Monitoring Works

The cloud has plenty of moving parts, so it is important to make sure that everything works together in order to maintain performance. Cloud monitoring primarily includes functions such as:

• Website Monitoring: Tracks the traffic, processes, resource utilization, and availability of hosted websites

• Virtual Machine Monitoring: Monitors individual virtual machines and the virtualization infrastructure

• Database Monitoring: Monitors the queries, consumption, availability, and processes of database resources

• Virtual Network Monitoring: Monitors the devices, connections, and performance of the virtual network

• Cloud Storage Monitoring: Monitors storage resources and the processes related to services, databases, virtual machines, and applications

With cloud monitoring, it is easy to identify the patterns and explore the security risks within the
infrastructure.

Key Features –

• Capability to manage huge data volumes across multiple distributed locations

• Visibility into user, application, and file activity to identify potential attacks or compromises

• Continuous monitoring to ensure that new and modified files are scanned in real time

• Reporting and auditing capabilities to meet security compliance requirements

• Ability to integrate monitoring tools with numerous cloud service providers

The cloud usage monitor mechanism is a lightweight and autonomous


software program responsible for collecting and processing IT resource
usage data. Depending on the type of usage metrics they are designed to
collect and the manner in which usage data needs to be collected, cloud
usage monitors can exist in different formats. The upcoming sections
describe three common agent-based implementation formats. Each can be
designated to forward collected usage data to a log database for post-
processing and reporting purposes.

Monitoring Agent
A monitoring agent is an intermediary, event-driven program that exists as a
service agent and resides along existing communication paths to
transparently monitor and analyze dataflows (Figure 1). This type of cloud
usage monitor is commonly used to measure network traffic and message
metrics.

Resource Agent
A resource agent is a processing module that collects usage data by having
event-driven interactions with specialized resource software (Figure 1). This
module is used to monitor usage metrics based on pre-defined, observable
events at the resource software level, such as initiating, suspending,
resuming, and vertical scaling.

Figure 1 – A cloud service consumer sends a request message to a cloud


service (1). The monitoring agent intercepts the message to collect relevant
usage data (2) before allowing it to continue to the cloud service (3a). The
monitoring agent stores the collected usage data in a log database (3b). The
cloud service replies with a response message (4) that is sent back to the
cloud service consumer without being intercepted by the monitoring agent
(5).
Figure 2 – The resource agent is actively monitoring a virtual server and
detects an increase in usage (1). The resource agent receives a notification
from the underlying resource management program that the virtual server is
being scaled up and stores the collected usage data in a log database, as per
its monitoring metrics (2).

Polling Agent
A polling agent is a processing module that collects cloud service usage data
by polling IT resources. This type of cloud service monitor is commonly used
to periodically monitor IT resource status, such as uptime and downtime
(Figure 3).
Figure 3 – A polling agent monitors the status of a cloud service hosted by a virtual server by sending periodic polling request messages and receiving polling response messages that report usage status “A” after a number of polling cycles, until it receives a usage status of “B” (1), upon which the polling agent records the new usage status in the log database (2).


Resource Replication

Resource replication is defined as the creation of multiple instances of the


same IT resource, and is typically performed when an IT resource’s
availability and performance need to be enhanced. Virtualization technology
is used to implement the resource replication mechanism to replicate cloud-
based IT resources (Figure 1).
The resource replication mechanism is commonly implemented as a
hypervisor. For example, the virtualization platform’s hypervisor can access
a virtual server image to create several instances, or to deploy and replicate
ready-made environments and entire applications.

Figure 1 – The hypervisor replicates several instances of a virtual server,


using a stored virtual server image.
Other common types of replicated IT resources include cloud service implementations and various copies of data and cloud storage devices.

Ready-Made Environment
The ready-made environment mechanism (Figure 1) is a defining component
of the PaaS cloud delivery model that represents a pre-defined, cloud-based
platform comprised of a set of already installed IT resources, ready to be
used and customized by a cloud consumer. These environments are utilized
by cloud consumers to remotely develop and deploy their own services and
applications within a cloud. Typical ready-made environments include pre-
installed IT resources, such as databases, middleware, development tools,
and governance tools.

Figure 1 – A cloud consumer accesses a ready-made environment hosted on


a virtual server.
A ready-made environment is generally equipped with a complete software
development kit that provides cloud consumers with programmatic access to
the development technologies that comprise their preferred programming
stacks.
Middleware is available for multitenant platforms to support the
development and deployment of Web applications. Some cloud providers
offer runtime performance and billing parameters. For example, a frontend
instance of a cloud service can be configured to respond to time-sensitive
requests more effectively than a backend instance. The former variation will
be billed at a different rate than the latter.
A solution can be partitioned into groups of logic that can be designated for
both frontend and backend instance invocation so as to optimize runtime
execution and billing.
(Java)Remote method invocation (RMI):

RMI (Remote Method Invocation)



RMI (Remote Method Invocation) is an API that provides a mechanism to create distributed applications in Java. RMI allows an object to invoke methods on an object running in another JVM.

RMI provides remote communication between applications using two objects: the stub and the skeleton.

Understanding stub and skeleton


RMI uses stub and skeleton objects for communication with the remote object.

A remote object is an object whose methods can be invoked from another JVM. Let's understand the stub and skeleton objects:

stub
The stub is an object that acts as a gateway on the client side. All outgoing requests are routed through it. It resides on the client side and represents the remote object. When the caller invokes a method on the stub object, it does the following tasks:

1. It initiates a connection with the remote JVM,
2. It writes and transmits (marshals) the parameters to the remote JVM,
3. It waits for the result,
4. It reads (unmarshals) the return value or exception, and
5. Finally, it returns the value to the caller.
skeleton
The skeleton is an object that acts as a gateway for the server-side object. All incoming requests are routed through it. When the skeleton receives an incoming request, it does the following tasks:

1. It reads the parameters for the remote method,
2. It invokes the method on the actual remote object, and
3. It writes and transmits (marshals) the result back to the caller.

In the Java 2 SDK, a stub protocol was introduced that eliminates the need for skeletons.

Understanding requirements for the distributed


applications
If an application performs these tasks, it can be a distributed application.

1. The application needs to locate the remote method,
2. It needs to provide communication with the remote objects, and
3. The application needs to load the class definitions for the objects.

An RMI application has all these features, so it is called a distributed application.

Java RMI Example


Following are the six steps to write an RMI program; a compact code sketch follows the list.

1. Create the remote interface


2. Provide the implementation of the remote interface
3. Compile the implementation class and create the stub and skeleton objects
using the rmic tool
4. Start the registry service using the rmiregistry tool
5. Create and start the remote application
6. Create and start the client application

What is a Cloud Hypervisor?
A Cloud Hypervisor is software that enables the sharing of a cloud provider's
physical compute and memory resources across multiple virtual machines
(VMs). Originally created for mainframe computers in the 1960s, hypervisors
gained wide popularity with the introduction of VMware for industry standard
servers in the 1990s, enabling a single physical server to independently run
multiple guest VMs each with their own operating systems (OSs) that are
logically separate from each other. In this manner, problems or crashes in
one guest VM have no effect on the other guest VMs, OSs, or the applications
running on them.

How does a Cloud Hypervisor work?


Cloud Hypervisors abstract the underlying servers from ‘guest’ VMs and OSs. OS calls for server resources (CPU, memory, disk, print, etc.) are intercepted by the Cloud Hypervisor, which allocates resources and prevents conflicts. As a rule, guest VMs and OSs run in a less-privileged mode than the hypervisor, so they cannot impact the operation of the hypervisor or other guest VMs.
There are two major classifications of Hypervisor: Bare metal or native (Type
1) and Hosted (Type 2). Type 1 Hypervisors run directly on host machine
hardware with no OS beneath. These hypervisors communicate directly with
the host machine resources. VMware ESXi and Microsoft Hyper-V are Type 1.

Type 2 Hypervisors usually run above the host machine OS and rely on the host OS for access to machine resources. They are easier to set up and manage since the OS is already in place, and thus Type 2 hypervisors are often used for home use and for testing VM functionality. VMware Player and VMware Workstation are Type 2 hypervisors.

KVM (Kernel-based Virtual Machine) is a popular hybrid hypervisor with some Type 1 and Type 2 characteristics. This open-source hypervisor is built into Linux and lets Linux act as a Type 1 hypervisor and an OS at the same time.

Benefits of a Cloud Hypervisor?


There are several benefits to using a hypervisor that hosts multiple virtual
machines:

Time to Use: Cloud Hypervisors enable VMs to be instantly spun up or


down, as opposed to days or weeks required to deploy a bare metal server.
This enables projects to be created and have teams working the same day.
Once a project is complete, VMs can be terminated to save organizations
from paying for unnecessary infrastructure.
Utilization: Cloud Hypervisors enable several VMs to run on a single
physical server and for all the VMs to share its resources. This improves the
server utilization and saves on power, cooling, and real estate that is no
longer needed for each individual VM.
Flexibility: Most Cloud Hypervisors are Type 1 (Bare-metal) enabling guest
VMs and OSs to execute on a broad variety of hardware, since the hypervisor
abstracts the VMs from the underlying machine’s drivers and devices.
Portability: Cloud Hypervisors enable portability of workloads between VMs, or between a VM and an organization's on-premises hardware. Applications that see spikes in demand can simply access additional machines to scale as needed.
Reliability: Hardware failures can be remediated by moving VMs to other
machines, either at the cloud provider or in a private cloud or on-premises
hardware. Once the failure is repaired workloads can fail back to ensure
availability of application resources on the VM.

Cloud management
Cloud management is a discipline but one that is facilitated by tools and
software. To realize the control and visibility required for efficient cloud
management, enterprises should see their hybrid IT infrastructure through a
consolidated platform that pulls relevant data from all the organization’s cloud-
based and traditional on-premises systems.

Cloud management platforms help IT teams secure and optimize cloud


infrastructure, including the applications and data residing on it. Administrators
can manage compliance, set up real-time monitoring, and preempt cyberattacks.
