
Unit 4

Cloud Architecture

What is Web Service?

A Web Service can be defined in the following ways:


o It is a client-server application or application component for communication.
o It is a method of communication between two devices over a network.
o It is a software system for interoperable machine-to-machine communication.
o It is a collection of standards or protocols for exchanging information between two
devices or applications.

Let's understand it by the figure given below:

As you can see in the figure, Java, .NET, and PHP applications can communicate with other
applications through web services over the network. For example, a Java application can
interact with Java, .NET, and PHP applications. So a web service is a language-independent
way of communication.

Types of Web Services


There are mainly two types of web services.
1. SOAP web services.
2. RESTful web services.

Web Service Features


1. XML-Based
Web services use XML at the data description and data transportation layers. Using XML
eliminates any networking, operating system, or platform binding. Web services-based
applications are highly interoperable at their core level.
2. Loosely Coupled
A client of a web service is not tied to the web service directly. The web service interface
can evolve over time without compromising the client's ability to communicate with
the service. A tightly coupled system means that the client and server logic are closely tied to
one another, so if one interface changes, the other must be updated. Adopting
a loosely coupled architecture tends to make software systems more manageable and allows
more straightforward integration between various systems.
3. Coarse-Grained
Object-oriented technologies such as Java expose their functionality through individual
methods. An individual method is too fine-grained an operation to provide useful capability
at a corporate level. Building a Java program from scratch requires the creation of various
fine-grained functions that are then composed into a coarse-grained service that is consumed
by either a client or another service.
Businesses and the interfaces that they expose should be coarse-grained. Web services
technology provides a natural way of defining coarse-grained services that expose the
right amount of business logic.
4. Ability to be Synchronous or Asynchronous
Synchronicity specifies the binding of the client to the execution of the function. In
synchronous invocations, the client blocks and waits until the service completes before
continuing. Asynchronous operations allow a client to invoke a task and then execute other
functions while it runs.
Asynchronous clients fetch their result at a later point in time, while synchronous clients
receive their result when the service has completed. Asynchronous capability is an essential
method of enabling loosely coupled systems; a short sketch follows this feature list.
5. Supports Remote Procedure Calls (RPCs)
Web services allow consumers to invoke procedures, functions, and methods on remote
objects using an XML-based protocol. Remote systems expose the input and output contracts
that a web service must support.
Component development through Enterprise JavaBeans (EJBs) and .NET components has
increasingly become part of enterprise architectures and deployments over the past several
years. Both technologies are distributed and accessible through a variety of RPC mechanisms.
A web service supports RPC by providing services of its own, equivalent to those of a
traditional component, or by translating incoming invocations into an invocation of an EJB
or a .NET component.
6. Supports Document Exchange
One of the essential benefits of XML is its generic way of representing not only data but also
complex documents. These documents can be as simple as describing a current address, or
they can be as involved as defining an entire book or Request for Quotation (RFQ). Web
services support the transparent transfer of documents to facilitate business integration.
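
To make feature 4 concrete, here is a minimal Python sketch contrasting a synchronous
(blocking) invocation with an asynchronous one. The endpoint URL is a hypothetical
placeholder, and the requests library plus a thread pool stand in for whatever invocation
mechanism a real web service stack would provide.

    import requests
    from concurrent.futures import ThreadPoolExecutor

    SERVICE_URL = "https://api.example.com/report"  # hypothetical endpoint

    # Synchronous invocation: the client blocks until the service completes.
    sync_response = requests.get(SERVICE_URL)
    print(sync_response.status_code)

    # Asynchronous invocation: submit the call, keep working, fetch the result later.
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(requests.get, SERVICE_URL)
        # ... the client is free to perform other work here ...
        async_response = future.result()  # fetch the result at a later point in time
        print(async_response.status_code)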

Web Service Components


There are three major web service components.
1. SOAP
2. WSDL
3. UDDI

SOAP
• SOAP is an acronym for Simple Object Access Protocol.
• SOAP is an XML-based protocol for accessing web services.
• SOAP is a W3C recommendation for communication between applications.
• SOAP is XML based, so it is platform independent and language independent. In other
words, it can be used with Java, .NET, or PHP on any platform.
WSDL
• WSDL is an acronym for Web Services Description Language.
• WSDL is an XML document containing information about a web service, such as the
method names, method parameters, and how to access it.
• WSDL descriptions can be published in a UDDI registry. WSDL acts as an interface
between web service applications.
• WSDL is pronounced as wiz-dull.
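
As an illustration of consuming a WSDL, here is a minimal sketch using the Python zeep
SOAP library. The WSDL URL and the GetQuote operation are hypothetical placeholders for
whatever a real service description defines.

    from zeep import Client  # pip install zeep

    # Load the (hypothetical) WSDL; zeep reads the method names, parameters
    # and bindings from the service description automatically.
    client = Client("https://example.com/stockservice?wsdl")

    # Invoke an operation declared in the WSDL as if it were a local function.
    result = client.service.GetQuote(symbol="IBM")
    print(result)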

UDDI
• UDDI is an acronym for Universal Description, Discovery and Integration.
• UDDI is an XML-based framework for describing, discovering and integrating web
services.
• UDDI is a directory of web service interfaces described by WSDL, containing
information about web services.

SOAP Web Services


• SOAP stands for Simple Object Access Protocol. It is an XML-based protocol for
accessing web services.
• SOAP is a W3C recommendation for communication between two applications.
• SOAP is an XML-based protocol. It is platform independent and language independent.
By using SOAP, you will be able to interact with applications written in other
programming languages.

SOAP API
SOAP, or Simple Object Access Protocol, is a messaging protocol. It allows applications to
exchange structured information regardless of platform. SOAP uses the XML data format,
which can represent complex data. It is mostly used for complex systems with strict standards
ensuring security and reliability.

Key Concepts
• SOAP is a protocol: it has strict rules for data format and communication.
• It can maintain records and state between requests.
• SOAP relies on SSL/TLS and WS-Security for secure communication.
• SOAP works with the XML data format to handle complex data.
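
The sketch below shows what a raw SOAP call looks like in practice: an XML envelope
posted over HTTP using Python's requests library. The service URL, SOAPAction value, and
GetPrice operation are invented for illustration only.

    import requests

    # A minimal SOAP 1.1 envelope; the body carries the operation to invoke.
    envelope = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetPrice xmlns="http://example.com/stock">
          <StockName>IBM</StockName>
        </GetPrice>
      </soap:Body>
    </soap:Envelope>"""

    response = requests.post(
        "https://example.com/stockservice",  # hypothetical endpoint
        data=envelope.encode("utf-8"),
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction": "http://example.com/stock/GetPrice",
        },
    )
    print(response.text)  # a SOAP service always responds with an XML document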

Advantages of SOAP Web Services


1. WS Security: SOAP defines its own security known as WS Security.
2. Language and platform independent: SOAP web services can be written in any
programming language and executed on any platform.

Disadvantages of SOAP Web Services


1. Slow: SOAP uses the XML format, which must be parsed to be read. It also defines many
standards that must be followed while developing SOAP applications. So, it is slow and
consumes more bandwidth and resources.
2. WSDL dependent: SOAP uses WSDL and doesn't have any other mechanism to
discover the service.

RESTful Web Services


• REST stands for Representational State Transfer.
• REST is an architectural style, not a protocol.

REST API
REST, or Representational State Transfer, is an architectural style for building web
services. It is mostly used for lightweight and stateless communication. It uses simple HTTP
methods like GET, POST, PUT, and DELETE to perform operations on data resources.

Key Concepts
• REST uses URIs (Uniform Resource Identifiers) and treats everything as a resource.
• It is stateless: it does not store past requests, and each operation is independent.
• It relies on HTTP methods to request operations on resources.
• REST usually works with the JSON and XML data formats.
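
As a minimal sketch of these concepts, the Python snippet below uses the requests library
against a hypothetical /employees resource; the base URL and field names are assumptions
for illustration only.

    import requests

    BASE = "https://api.example.com"  # hypothetical REST service

    # Each HTTP method requests a different operation on the same resource.
    r = requests.get(f"{BASE}/employees/42")            # read a resource
    r = requests.post(f"{BASE}/employees",              # create a resource
                      json={"name": "Asha", "role": "developer"})
    r = requests.put(f"{BASE}/employees/42",            # replace a resource
                     json={"name": "Asha", "role": "architect"})
    r = requests.delete(f"{BASE}/employees/42")         # remove a resource

    # Every request is self-contained (stateless): the server needs no memory
    # of previous requests to process it.
    print(r.status_code)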

Advantages of RESTful Web Services


1. Fast: RESTful web services are fast because there is no strict specification like SOAP. They
consume less bandwidth and fewer resources.
2. Language and platform independent: RESTful web services can be written in any
programming language and executed on any platform.
3. Can use SOAP: RESTful web services can use SOAP web services as the
implementation.
4. Permits different data formats: RESTful web services permit different data formats such
as plain text, HTML, XML and JSON (see the content-negotiation sketch below).
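
One common way a RESTful service offers multiple formats is HTTP content negotiation
through the Accept header. The sketch below assumes a hypothetical endpoint that honours
both media types; whether a real service does depends on its implementation.

    import requests

    URL = "https://api.example.com/employees/42"  # hypothetical resource

    # Ask for the resource as JSON ...
    as_json = requests.get(URL, headers={"Accept": "application/json"})

    # ... or ask for the same resource represented as XML.
    as_xml = requests.get(URL, headers={"Accept": "application/xml"})

    print(as_json.headers.get("Content-Type"))
    print(as_xml.headers.get("Content-Type"))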

What is the difference between SOAP and REST?


SOAP and REST are two internet data exchange mechanisms. For example, imagine
that your internal accounts system shares data with your customer's accounting system to
automate invoicing tasks. The two applications share data by using an API that defines
communication rules. SOAP and REST are two different approaches to API design. The SOAP
approach is highly structured and uses XML data format. REST is more flexible and allows
applications to exchange data in multiple formats.

What are the similarities between SOAP and REST?


To build applications, you can use many different programming languages,
architectures, and platforms. It’s challenging to share data between such varied technologies
because they have different data formats. Both SOAP and REST emerged in an attempt to solve
this problem.
You can use SOAP and REST to build APIs or communication points between diverse
applications. The terms web service and API are used interchangeably. However, APIs are the
broader category. Web services are a special type of API.
Here are other similarities between SOAP and REST:
• They both describe rules and standards on how applications make, process, and respond
to data requests from other applications
• They both use HTTP, the standardized internet protocol, to exchange information
• They both support SSL/TLS for secure, encrypted communication
You can use either SOAP or REST to build secure, scalable, and fault-tolerant distributed
systems.

When to use SOAP vs REST?


Before choosing between SOAP and REST, consider your scenarios and your API users'
requirements. The following criteria are worth considering.

Overall application design: Modern applications like mobile apps and hybrid applications
work better with REST APIs. REST gives you the scalability and flexibility to design
applications using modern architecture patterns like microservices and containers. However,
if you need to integrate or extend legacy systems that already have SOAP APIs, you may be
better off continuing with SOAP.

Security: Public APIs have lower security requirements and demand greater flexibility so
anyone can interact with them. So, REST is a better choice when you build public APIs.
Conversely, some private APIs for internal enterprise requirements (like data reporting for
compliance) may benefit from the tighter security measures in WS-Security of SOAP.

ACID compliance: Do your API users require stringent consistency and data integrity across
a chain of transactions? For instance, finance transactions require an entire batch of data
updates to fail if even one update fails.
SOAP has built-in support for atomicity, consistency, isolation, and durability (ACID),
so it may be better suited for high data integrity requirements. In this case, REST APIs
may require additional software modules to enforce state at the server or database level.

How do SOAP APIs and REST APIs work?


SOAP is an older technology that requires a strict communication contract between systems.
New web service standards have been added over time to accommodate technology changes,
but they create additional overheads. REST was developed after SOAP and inherently solves
many of its shortcomings. REST web services are also called RESTful web services.

SOAP APIs
SOAP is a protocol that defines rigid communication rules. It has several associated standards
that control every aspect of the data exchange. For example, here are some standards SOAP
uses:
• Web Services Security (WS-Security) specifies security measures like using unique
identifiers called tokens
• Web Services Addressing (WS-Addressing) requires including routing information as
metadata
• WS-ReliableMessaging standardizes error handling in SOAP messaging
• Web Services Description Language (WSDL) describes the scope and function of SOAP
web services
When you send a request to a SOAP API, you must wrap your HTTP request in a SOAP
envelope. This is a data structure that modifies the underlying HTTP content with SOAP
request requirements. Due to the envelope, you can also send requests to SOAP web services
with other transport protocols, like TCP or Internet Control Message Protocol (ICMP).
However, SOAP APIs and SOAP web services always return XML documents in their
responses.

REST APIs
REST is a software architectural style that imposes six conditions on how an API should work.
These are the six principles REST APIs follow:
1. Client-server architecture. The sender and receiver are independent of each other
regarding technology, platform, programming language, and so on.
2. Layered. The server can have several intermediaries that work together to complete
client requests, but they are invisible to the client.
3. Uniform interface. The API returns data in a standard format that is complete and fully
usable.
4. Stateless. The API completes every new request independently of previous requests.
5. Cacheable. All API responses are cacheable.
6. Code on demand. The API response can include a code snippet if required.
You send REST requests using HTTP verbs like GET and POST. REST API responses are
typically in JSON but can also be in a different data format.

Difference between SOAP and REST


• Stands for: SOAP stands for Simple Object Access Protocol; REST stands for
Representational State Transfer.
• What is it?: SOAP is a protocol for communication between applications. REST is an
architectural style for designing communication interfaces.
• Design: The SOAP API exposes operations. The REST API exposes data.
• Transport protocol: SOAP is independent and can work with any transport protocol.
REST works only with HTTP(S).
• Data format: SOAP supports only XML data exchange. REST supports XML, JSON,
plain text, and HTML.
• Performance: SOAP messages are larger, which makes communication slower. REST
has faster performance due to smaller messages and caching support.
• Scalability: SOAP is difficult to scale; the server maintains state by storing all previous
messages exchanged with a client. REST is easy to scale; it is stateless, so every message
is processed independently of previous messages.
• Security: SOAP supports encryption with additional overheads. REST supports
encryption without affecting performance.
• Use case: SOAP is useful in legacy applications and private APIs. REST is useful in
modern applications and public APIs.

Key differences: SOAP vs REST

SOAP is a protocol, while REST is an architectural style. This creates significant differences
in how SOAP APIs and REST APIs behave.

1. Design
The SOAP API exposes functions or operations, while REST APIs are data-driven. For
example, consider an application with employee data that other applications can manipulate.
The application's SOAP API could expose a function called Create Employee. To create an
employee, you would specify the function name in your SOAP message when sending a
request.
However, the application's REST API could expose a URL called /employees, and a POST
request to that URL would create a new employee record.
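
To illustrate this data-driven design, here is a minimal server-side sketch using Python's
Flask framework. The /employees route, the field names, and the in-memory list are
assumptions made up for this example, not any particular application's API.

    from flask import Flask, request, jsonify  # pip install flask

    app = Flask(__name__)
    employees = []  # toy in-memory store standing in for a real database

    @app.route("/employees", methods=["POST"])
    def create_employee():
        # A POST to the resource URL creates a new employee record.
        record = request.get_json()
        record["id"] = len(employees) + 1
        employees.append(record)
        return jsonify(record), 201  # 201 Created

    @app.route("/employees", methods=["GET"])
    def list_employees():
        # A GET on the same URL simply exposes the data.
        return jsonify(employees)

    if __name__ == "__main__":
        app.run()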

2. Flexibility
SOAP APIs are rigid and only allow XML messaging between applications. The application
server also has to maintain the state of each client. This means it has to remember all previous
requests when processing a new request.
REST is more flexible and allows applications to transfer data as plain text, HTML, XML,
and JSON. REST is also stateless, so the REST API treats every new request independently
of previous requests.

3. Performance
SOAP messages are larger and more complex, which makes them slower to transmit and
process. This can increase page load times.
REST is faster and more efficient than SOAP due to the smaller message sizes of REST.
REST responses are also cacheable, so the server can store frequently accessed data in a cache
for even shorter page load times.

4. Scalability
The SOAP protocol requires applications to store the state between requests, which increases
bandwidth and memory requirements. As a result, it makes applications expensive and
challenging to scale.
Unlike SOAP, REST permits stateless and layered architecture, which makes it more scalable.
For example, the application server can pass the request to other servers or allow an
intermediary (like a content delivery network) to handle it.
5. Security
SOAP requires an additional layer of WS-Security to work with HTTPS. WS-Security uses
additional header content to ensure only the designated process in the specified server reads
the SOAP message content. This adds communication overheads and negatively impacts
performance.
REST supports HTTPS without additional overheads.

6. Reliability
SOAP has error handling logic built into it, and it provides more reliability. On the other hand,
REST requires you to try again in case of communication failures, and it’s less reliable.

4.2. Relating SOA and Cloud Computing.


Service-Oriented Architecture (SOA) is a stage in the evolution of application
development and/or integration. It defines a way to make software components reusable using
the interfaces. Formally, SOA is an architectural approach in which applications make use of
services available in the network. In this architecture, services are provided to form
applications, through a network call over the internet. It uses common communication
standards to speed up and streamline the service integrations in applications. Each service in
SOA is a complete business function in itself. The services are published in such a way that
it is easy for developers to assemble their apps using those services. Note that SOA is
different from microservice architecture.
• SOA allows users to combine a large number of facilities from existing services to form
applications.
• SOA encompasses a set of design principles that structure system development and
provide means for integrating components into a coherent and decentralized system.
• SOA-based computing packages functionalities into a set of interoperable services,
which can be integrated into different software systems belonging to separate business
domains.

The different characteristics of SOA are as follows :


o Provides interoperability between the services.
o Provides methods for service encapsulation, service discovery, service composition,
service reusability and service integration.
o Facilitates QoS (Quality of Service) through service contracts based on Service Level
Agreements (SLAs).
o Provides loosely coupled services.
o Provides location transparency with better scalability and availability.
o Ease of maintenance with reduced cost of application development and
deployment.

There are two major roles within Service-oriented Architecture:


1. Service provider: The service provider is the maintainer of the service and the
organization that makes available one or more services for others to use. To advertise
services, the provider can publish them in a registry, together with a service contract that
specifies the nature of the service, how to use it, the requirements for the service, and
the fees charged.
2. Service consumer: The service consumer can locate the service metadata in the registry
and develop the required client components to bind and use the service.

Services might aggregate information and data retrieved from other services or create
workflows of services to satisfy the request of a given service consumer. This practice is known
as service orchestration. Another important interaction pattern is service choreography, which
is the coordinated interaction of services without a single point of control.

Guiding Principles of SOA:


1. Standardized service contract: Specified through one or more service description
documents.
2. Loose coupling: Services are designed as self-contained components that maintain
relationships minimizing dependencies on other services.
3. Abstraction: A service is completely defined by service contracts and description
documents. They hide their logic, which is encapsulated within their implementation.
4. Reusability: Designed as components, services can be reused more effectively, thus
reducing development time and the associated costs.
5. Autonomy: Services have control over the logic they encapsulate and, from a service
consumer point of view, there is no need to know about their implementation.
6. Discoverability: Services are defined by description documents that constitute
supplemental metadata through which they can be effectively discovered. Service
discovery provides an effective means for utilizing third-party resources.
7. Composability: Using services as building blocks, sophisticated and complex
operations can be implemented. Service orchestration and choreography provide a solid
support for composing services and achieving business goals.

Advantages of SOA:
• Service reusability: In SOA, applications are made from existing services. Thus,
services can be reused to make many applications.
• Easy maintenance: As services are independent of each other they can be updated and
modified easily without affecting other services.
• Platform independent: SOA allows making a complex application by combining
services picked from different sources, independent of the platform.
• Availability: SOA facilities are easily available to anyone on request.
• Reliability: SOA applications are more reliable because it is easier to debug small
services than huge code bases.
• Scalability: Services can run on different servers within an environment; this increases
scalability.

Disadvantages of SOA:
• High overhead: A validation of input parameters is done whenever services
interact; this decreases performance as it increases load and response time.
• High investment: A huge initial investment is required for SOA.
• Complex service management: When services interact, they exchange messages to
complete tasks; the number of messages may run into millions. It becomes a cumbersome
task to handle such a large number of messages.

Practical applications of SOA: SOA is used in many ways around us whether it is mentioned
or not.
1. SOA infrastructure is used by many armies and air forces to deploy situational awareness
systems.
2. SOA is used to improve healthcare delivery.
3. Nowadays many apps, including games, use built-in device functions to run. For example, an
app might need GPS, so it uses the built-in GPS functions of the device. This is SOA in
mobile solutions.
4. SOA helps museums maintain a virtualized storage pool for their information and
content.

4.3. Service Level Agreement (SLA), Billing, Pricing, and Support

A Service Level Agreement (SLA) is the bond for performance negotiated between the
cloud services provider and the client. Earlier in cloud computing, all Service Level
Agreements were negotiated between a client and the service provider. Nowadays, with the
rise of large utility-like cloud computing providers, most Service Level Agreements are
standardized until a client becomes a large consumer of cloud services.
Service level agreements are also defined at different levels which are mentioned below:
• Customer-based SLA
• Service-based SLA
• Multilevel SLA

Few Service Level Agreements are enforceable as contracts; most are agreements or
contracts more along the lines of an Operating Level Agreement (OLA) that may
not have the force of law. It is wise to have an attorney review the documents before
making a major agreement with a cloud service provider. Service Level Agreements usually
specify some parameters, which are mentioned below:
1. Availability of the service (uptime)
2. Latency or the response time
3. Service components' reliability
4. Each party's accountability
5. Warranties

In any case, if a cloud service provider fails to meet the stated minimum targets, then
the provider has to pay a penalty to the cloud service consumer as per the agreement. So,
Service Level Agreements are like insurance policies in which the corporation has to pay as
per the agreement if any casualty occurs. Microsoft publishes the Service Level Agreements
linked with the Windows Azure Platform components, which is demonstrative of industry
practice for cloud service vendors. Each individual component has its own Service Level
Agreement.

Below are two major Service Level Agreements (SLA) described:


1. Windows Azure SLA – Windows Azure has different SLAs for compute and storage.
For compute, there is a guarantee that when a client deploys two or more role instances
in separate fault and upgrade domains, the client's internet-facing roles will have external
connectivity at least 99.95% of the time. Moreover, all of the client's role instances are
monitored, and there is a guarantee that, 99.9% of the time, it will be detected when a role
instance's process is not running properly and corrective action will be initiated.
2. SQL Azure SLA – SQL Azure clients will have connectivity between the database and
the internet gateway of SQL Azure. SQL Azure guarantees a "Monthly Availability" of
99.9% within a month. The Monthly Availability Proportion for a particular tenant database
is the ratio of the time the database was available to customers to the total time in a
month. Time is measured in minute-long intervals in a 30-day monthly cycle.
Availability is always calculated over a complete month. A portion of time is marked
as unavailable if the customer's attempts to connect to a database are denied by the SQL
Azure gateway.
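
As a worked illustration of the "Monthly Availability" ratio defined above, the short Python
sketch below computes availability over a 30-day cycle measured in minute-long intervals.
The downtime figure is an invented example, not real Azure data.

    # Monthly Availability = available time / total time in the month
    TOTAL_MINUTES = 30 * 24 * 60      # 30-day cycle measured in minutes: 43,200

    unavailable_minutes = 40          # hypothetical recorded downtime
    available_minutes = TOTAL_MINUTES - unavailable_minutes

    availability = available_minutes / TOTAL_MINUTES * 100
    print(f"Monthly availability: {availability:.3f}%")  # -> 99.907%

    # Compare against the 99.9% target; a breach would trigger the agreed penalty.
    print("SLA met" if availability >= 99.9 else "SLA breached")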

Service Level Agreements are based on the usage model. Frequently, cloud providers
charge for their pay-per-use resources at a premium and deploy standard Service Level
Agreements only for that purpose. Clients can also subscribe at different levels that guarantee
access to a particular amount of purchased resources. The Service Level Agreements (SLAs)
attached to a subscription often offer different terms and conditions. If a client requires
access to a particular level of resources, then the client needs to subscribe to a service; a usage
model may not deliver that level of access under peak load conditions.
SLA Lifecycle

Steps in SLA Lifecycle


1. Discover service provider: This step involves identifying a service provider that can
meet the needs of the organization and has the capability to provide the required service.
This can be done through research, requesting proposals, or reaching out to vendors.
2. Define SLA: In this step, the service level requirements are defined and agreed upon
between the service provider and the organization. This includes defining the service
level objectives, metrics, and targets that will be used to measure the performance of the
service provider.
3. Establish Agreement: After the service level requirements have been defined, an
agreement is established between the organization and the service provider outlining the
terms and conditions of the service. This agreement should include the SLA, any
penalties for non-compliance, and the process for monitoring and reporting on the
service level objectives.
4. Monitor SLA violation: This step involves regularly monitoring the service level
objectives to ensure that the service provider is meeting their commitments. If any
violations are identified, they should be reported and addressed in a timely manner.
5. Terminate SLA: If the service provider is unable to meet the service level objectives,
or if the organization is not satisfied with the service provided, the SLA can be
terminated. This can be done through mutual agreement or through the enforcement of
penalties for non-compliance.
6. Enforce penalties for SLA Violation: If the service provider is found to be in violation
of the SLA, penalties can be imposed as outlined in the agreement. These penalties can
include financial penalties, reduced service level objectives, or termination of the
agreement.

Advantages of SLA
1. Improved communication: A better framework for communication between the
service provider and the client is established through SLAs, which explicitly outline the
degree of service that a customer may anticipate. This can make sure that everyone is
talking about the same things when it comes to service expectations.
2. Increased accountability: SLAs give customers a way to hold service providers
accountable if their services fall short of the agreed-upon standard. They also hold
service providers responsible for delivering a specific level of service.
3. Better alignment with business goals: SLAs make sure that the service being given is
in line with the goals of the client by laying down the performance goals and service
level requirements that the service provider must satisfy.
4. Reduced downtime: SLAs can help to limit the effects of service disruptions by
creating explicit protocols for issue management and resolution.
5. Better cost management: By specifying the level of service that the customer can
anticipate and providing a way to track and evaluate performance, SLAs can help to
limit costs. This helps ensure the customer is getting the best value for their money.

Disadvantages of SLA
1. Complexity: SLAs can be complex to create and maintain, and may require significant
resources to implement and enforce.
2. Rigidity: SLAs can be rigid and may not be flexible enough to accommodate changing
business needs or service requirements.
3. Limited-service options: SLAs can limit the service options available to the customer,
as the service provider may only be able to offer the specific services outlined in the
agreement.
4. Misaligned incentives: SLAs may misalign incentives between the service provider
and the customer, as the provider may focus on meeting the agreed-upon service levels
rather than on providing the best service possible.
5. Limited liability: SLAs are not always legally binding contracts and often limit the liability
of the service provider in case of service failure.

4.4. Cloud Computing Architecture

OR
How does cloud architecture work?
Cloud computing architecture integrates four essential components to create an IT environment
that abstracts, pools and shares scalable resources across one or more cloud environments.
1. A front-end
2. A back-end
3. A network
4. A cloud-based delivery platform

Cloud architectures vary based on an organization’s unique business drivers and technology
requirements. Still, they all share the same goal of creating a roadmap that considers
application workloads, cloud deployment models, service management and design needs.

1. The front-end
Front-end cloud architecture refers to the user- or client-side of the cloud computing
system. It consists of graphic user interfaces (GUIs), dashboards and navigation tools that
provide on-demand access to cloud services and resources. Key components include software
apps and programs installed on devices (such as a mobile phone, laptop or desktop) to access
the cloud platform or service. Accessing a web-based video communications application (for
example, Zoom, Webex) via a laptop computer or ordering food through a mobile delivery
platform (Uber Eats, DoorDash) are both examples of front-end cloud architecture capabilities.

2. The back-end
While the front-end includes all elements related to the client (for example, a visitor to
an e-commerce site), the back-end (or ‘server-side’) refers to the structuring of the site and the
programming of its main functionalities. It provides all of the behind-the-scenes technology
(cloud servers, cloud databases, application programming interfaces (APIs) to access files)
used by the CSP to support the front-end, including all the code that helps a database or web
server communicate with a web browser or a mobile operating system.
Back-end cloud architecture components include the following:
• Applications: Back-end apps are the software or platforms that deliver the client service
requests on the front-end.
• Cloud computing service: The back-end service provides utility in cloud architecture
and manages the accessibility of cloud-based resources (such as cloud-based storage
services, application development services, web services, security services, and more).
• Cloud runtime: Runtime provides the environment (operating system, hardware,
memory) for executing or running services. Virtualization plays a crucial role in
enabling multiple runtimes on the same server. (Read more about virtualization below.)
• Cloud storage: Cloud storage in the back-end refers to the flexible and scalable storage
and management of the data needed to run applications.
• Infrastructure: Infrastructure consists of all the back-end resources or hardware (such
as servers, databases, CPUs (central processing units), network devices like routers and
switches, graphics processing units (GPUs), and so on) and all the software used to run
and manage cloud-based services. In cloud-computing speak, the term infrastructure is
sometimes confused with cloud architecture, but there’s a distinct difference. Like a
blueprint for constructing a building, cloud architecture serves as the design plan for
building cloud infrastructure.
• Management software: Middleware coordinates communication between the front-
end and back-end in a cloud computing system. This component allows for the delivery
of services in real-time to ensure smooth front-end user experiences.
• Security tools: Security tools provide the back-end security (also referred to as server-
side security) against potential cyberattacks or system failures. Virtual firewalls protect web
applications, prevent data loss and ensure backup and disaster recovery. Back-end
components include encryption, access restriction and authentication protocols to
protect data from breaches.

3. A network
An internet connection typically connects the front-end with the back-end functions. An
intranet—a privately maintained computer network accessed only by authorized persons and
limited to one institution—or an intercloud connection may also connect the back-end and
front-end. A cloud network should provide high bandwidth and low latency, allowing users to
continuously access their data and applications. The network must also provide agility so that
access to resources can occur quickly and efficiently between servers and cloud-based
environments.
Other significant cloud architecture networking gear includes load balancers, content delivery
networks (CDNs) and software-defined networking (SDN) to ensure data flows smoothly and
securely between front-end users and back-end resources.
4. Cloud-based delivery models
There are three main types of cloud delivery models (also known as cloud service models):
IaaS, PaaS and SaaS. These models are not mutually exclusive. Most large enterprises use all
three as part of their cloud delivery stack:
• IaaS, or Infrastructure-as-a-Service, is the on-demand access to cloud-hosted
physical and virtual servers, storage and networking—the back-end IT infrastructure for
running applications and workloads in the cloud. IaaS allows organizations to scale and
shrink infrastructure resources as needed. This cloud-based service helps them avoid the
high costs associated with building and managing an on-premises data center, providing
the capacity to accommodate highly variable or ‘spiky’ workloads.
• PaaS, or Platform-as-a-Service, is the on-demand access to a complete, ready-to-use
cloud computing platform for developing, running and managing applications. PaaS can
simplify the migration of existing applications to the cloud through re-platforming
(moving an application to the cloud with modifications that take better advantage of
cloud scalability, load balancing and other capabilities) or refactoring (re-architecting
some or all of an application using microservices, containers and other cloud-
native technologies).
• SaaS, or Software-as-a-Service, is the on-demand access to ready-to-use, cloud-hosted
application software (such as Salesforce or Mailchimp). SaaS offloads all software
development and infrastructure management to the cloud service provider. Because the
software (application) is already installed and configured, users can provision the cloud-
based server instantly and have the application ready for use in hours. This capability
reduces the time spent on installation and configuration and speeds up software
deployment.
Key cloud architecture technologies
The following are a few of the most critical technologies for developing cloud
architecture.
• Virtualization
Crucial to cloud architecture, virtualization acts as an abstraction layer that
enables the hardware resources of a single computer—processors, memory, storage and
more—to be divided into multiple virtual computers known as virtual machines (VMs).
Virtualization connects physical servers maintained by a cloud service provider (CSP)
at numerous locations, then divides and abstracts resources to make them accessible to
end users wherever there is an internet connection. Besides virtualizing servers, cloud
technology uses many other forms of virtualization, including network virtualization
and storage virtualization.
• Automation
Cloud automation involves implementing tools and processes that reduce or
eliminate the manual work associated with provisioning, configuring and managing
cloud environments. Cloud automation tools run on top of virtualized environments and
play an essential role in enabling organizations to take more significant advantage of the
benefits of cloud computing, like the ability to leverage cloud resources on demand and
scale them up and down on an as-needed basis. Automation plays a vital role
in DevOps workflows, speeding up tasks related to building, testing, deploying and
monitoring applications, resulting in cost savings and faster time to market.
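
As a concrete, purely illustrative example of cloud automation, the sketch below uses
Python's boto3 SDK to provision a virtual server on AWS with code rather than manual
console steps. The AMI ID, instance type, and tag values are hypothetical placeholders, and
working AWS credentials are assumed.

    import boto3  # pip install boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Provision a virtual machine programmatically instead of clicking
    # through the console; the same call can be scripted, scheduled, or
    # wired into a DevOps pipeline.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical machine image
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "automated-web-server"}],
        }],
    )
    print(response["Instances"][0]["InstanceId"])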

Cloud architecture best practices


A well-defined cloud architecture framework should include best practices and guidelines to
help architects create cloud solutions that are resilient, performant, and secure. Best practices
should include the following:
• Automate operations to reduce costs and support the solution’s reliability, availability
and security.
• Respect data gravity—the concept that data has its own mass and force. The larger the
mass of data, the greater the effort required to move it, which usually translates into
more time, cost and processing power. Implement solutions that shift computing to the
data where it resides to reduce operating costs and complexity.
• Choose the best platform for each workload to take advantage of platform capabilities
to optimize service levels and workload operating characteristics.

The benefits of cloud architecture


With a customized cloud architecture in place, you can develop a high-performance,
cost-saving strategy with wide-ranging benefits.
1. Customize cloud migration: Develop the best cloud migration strategy to meet your
workload needs (for example, migrate specific databases or servers to the cloud to capitalize
on lower costs, more reliable performance and improved efficiency).
2. Accelerate modernization: Gain the flexibility, scalability and cost control needed to
support cloud-native technologies like self-service orchestration and automation tools (such
as Kubernetes).
3. Speed time to market: Expand Agile and DevOps methodologies so development teams
can develop applications once and deploy to all clouds, reducing time to market.
4. Innovate faster: Stay ahead of today's on-demand trends and gain a competitive advantage
with evolving cloud capabilities that support artificial intelligence (AI), machine learning
(ML), generative AI, quantum computing, blockchain and IoT.
5. Boost resiliency and minimize risk: Reduce downtime and enable a faster disaster
recovery plan by spreading workloads and data across multiple resilient cloud environments.
6. Enhance compliance and security: Access the latest cloud security and regulatory
compliance technologies and consistently implement security and compliance across all
environments.

4.5. Multi Cloud Environment


What is multi-cloud?
Multi-cloud is the use of two or more cloud providers together; they might be public,
private, or a mix of both, chosen to achieve the organization's goals. Basically, it is the use of
various types of cloud operators to gain more features and increase the organization's
flexibility, which also covers the disadvantages of the individual platforms. Companies prefer
multi-cloud environments because they can distribute computing resources and minimize the
risk of downtime and data loss.
Multi-cloud allows an organization to use the most appropriate and most beneficial cloud
option, which may be a public or private cloud, for each separate application, based on its
business requirements.

Why Use a Multi Cloud Strategy?


A multi-cloud strategy empowers organizations with flexibility, resilience, and performance
optimization. It provides access to diverse cloud services, fosters innovation, and mitigates
risk. The following are the reasons to use a multi-cloud strategy:
• Flexibility and Redundancy: It offers the flexibility to choose the best services from
different cloud providers ensuring redundancy and reducing risk of downtime.
• Vendor Lock-in Mitigation: It helps organizations avoid dependency on a single cloud
provider, mitigating the risks associated with vendor lock-in and improving negotiating
leverage.
• Optimized Performance: Using multiple cloud providers makes it possible to optimize
workloads based on specific requirements such as geographic location, scalability,
and cost.
• Compliance and Data Sovereignty: It facilitates compliance with data residency
regulations by enabling organizations to store data in multiple geographic regions as
required.

Multi Cloud Architecture


Multi-cloud architecture involves distributing workloads to improve performance
and reduce risks and security threats. It combines the advantages of several cloud platforms,
such as AWS, Azure, and Google Cloud, and provides flexibility, scalability, and resilience.
Depending on the needs of the organization, apps can be distributed across several clouds,
guaranteeing redundancy, preventing vendor lock-in, and maximizing savings. Moreover,
multi-cloud architecture makes it easier to comply with data residency laws and allows for the
smooth integration of various services for increased agility and creativity.

What are Multi Cloud Services?


The following are common multi-cloud services, discussed briefly:
• Cloud Management Platforms (CMPs): Tools like VMware CloudHealth
or Microsoft Azure Arc help manage resources across multiple cloud providers
from a single interface, streamlining operations.
• Multi-Cloud Networking: Solutions like Cisco CloudCenter or Aviatrix provide
seamless network connectivity and management between various cloud environments
to facilitate consistent performance and security.
• Container Orchestration: Container orchestration platforms like Kubernetes enable
the deployment and management of containerized applications across different
cloud platforms, facilitating portability and scalability.
• Cloud Security: Services like Palo Alto Networks Prisma or AWS Security Hub offer
centralized security management and threat detection across multiple cloud
environments, enhancing overall security posture.

Advantages of Using Multi-Cloud Strategy


The following are advantages/benefits of using multi-cloud strategy:
• Vendor Flexibility: Using multiple cloud providers gives organizations the flexibility to
use the best services and pricing models from each vendor.
• Risk Mitigation: It reduces the risk of data loss by distributing and maintaining
workloads across multiple cloud platforms, enhancing resilience and disaster recovery.
• Geographic Reach: It enables organizations to deploy resources in multiple regions
across the world, improving performance for global users while ensuring compliance.

Disadvantages of using Multi-Cloud Strategy
The following are disadvantages/drawbacks of using multi-cloud strategy:
• Talent Management: Finding the right people to manage and develop on a multi-cloud
infrastructure can be a hassle.
• Cost Estimation, Optimization and Reporting: Getting a multi-cloud provider might
sound easy and cheap, but estimating and consolidating the costs is tough.
• Security Risks: Handling a multi-cloud environment is difficult because all individual
providers and their respective security protocols must be managed, which increases the
complexity of maintaining consistent security standards.

Multi Cloud Examples and Use cases


Multi-cloud strategies find application across various industries and scenarios, offering
flexibility, resilience, and performance optimization. The following are some of the key
use cases and examples:
• Disaster Recovery and Business Continuity: Using multiple cloud providers helps
ensure redundancy and continuity of operations in the event of a disaster.
Example: A financial institution may use one cloud provider for its primary operations
and another as a backup to quickly restore services in case of downtime.
• Hybrid Cloud Deployments: Organizations use a combination of public and private
clouds to meet their specific requirements.
Example: A healthcare provider may use a private cloud for storing sensitive information
and a public cloud for less critical applications like email or collaboration tools.
• Optimized Workload Placement: Distributing workloads across multiple
clouds helps organizations optimize performance and cost.
Example: A retail company may use one cloud provider for its e-commerce platform
to handle peak traffic loads efficiently, while using another for data analytics that benefits
from specialized services.

Challenges Of Multi-Cloud
The following are the challenges of multi-cloud strategy:
• Provider Selection: Choosing the right mix of cloud providers based on the services
they offer, their reliability, and their compatibility can be challenging.
• Integration Complexity: Managing the complexity of integrating multiple cloud
environments for seamless data flow requires careful and detailed planning and
execution.
• Location and Pricing Optimization: Optimizing cloud locations and pricing
models to meet performance needs while staying within budget limits is
essential.
• Security and Compliance: Ensuring security measures and regulatory compliance
across multiple clouds requires regular monitoring and management.

Service Providers Used in Multi Cloud


As multi-cloud is a combination of multiple cloud vendors, there is no specific multi-cloud
infrastructure vendor. Instead, it involves a mix of multiple cloud service providers, i.e.:
• Amazon Web Services (AWS): It offers a wide range of cloud services. It is known for
its scalability, reliability and extensive global infrastructure.
• Microsoft Azure: It comes with a comprehensive suite of cloud services and strong
integration support with Microsoft products. It is favoured by businesses for its hybrid
capabilities and enterprise-grade security features.
• Google Cloud: It is known for its data analytics and machine learning services and
capabilities. It provides innovative services and a robust global network infrastructure
with cutting-edge technologies.
• IBM: IBM offers an enterprise-oriented range of cloud services including AI-powered
solutions, blockchain, and hybrid cloud offerings.
• Oracle: It provides cloud services focused on database management, enterprise
applications and infrastructure, with strong security, compliance and performance.

Security Strategies in Multi Cloud


The following security strategies in multi-cloud environments involve the
implementation of security policies, IAM practices, encryption mechanisms and compliance
measures:
• Unified Security Policies: Implementing consistent security policies across
multiple cloud environments provides protection against threats and vulnerabilities.
• Identity and Access Management (IAM): Using strong IAM solutions like AWS
IAM or Azure Active Directory helps manage user access and permissions
across all platforms, reducing the risk of unauthorized access.
• Compliance and Governance: Ensure compliance with regulatory requirements and
industry standards by implementing governance frameworks and conducting regular
audits across all cloud environments.

Multi Cloud Management


Managing multiple cloud environments efficiently is crucial for organizations
following a multi-cloud strategy. With multiple cloud platforms in use, such as AWS, Azure,
and Google Cloud, centralized management is essential. Solutions like CloudHealth
or VMware Cloud Manager provide a unified interface for monitoring, optimizing costs, and
ensuring security across all cloud providers. By streamlining operations and simplifying
resource allocation, multi-cloud management platforms empower businesses to maximize the
benefits of their multi-cloud architecture while minimizing complexity and risk.

The Cost Vs Value of Multi Cloud


Adopting a multi-cloud strategy incurs additional costs because of the need to manage
multiple providers, but the value it offers often outweighs the expenses. By distributing
workloads across various platforms, organizations can optimize performance, enhance
resilience, and mitigate vendor lock-in risks. Moreover, using different cloud providers
provides flexibility in
choosing the best services for specific needs and budgets. Despite the complexities involved,
the strategic advantages of multi-cloud, including improved agility, innovation, and
scalability, ultimately deliver long-term value that justifies the investment in a multi-cloud
approach.

Strategy of Multi-Cloud
Cloud computing is the delivery of computing services like servers, storage,
networks, databases, and application software for big data processing or analytics via the
Internet.
The most significant difference between cloud services and traditional web-hosted services is
that cloud-hosted services are available on demand. We can avail ourselves of as much or as
little as we'd like from a cloud service. Cloud-based providers have revolutionized the game
with the pay-as-you-go model. This means that we pay only for the services we use, in
proportion to how much our customers or we utilize them.

We can save money on expenditures for buying and maintaining servers in-house as well
as data warehouses and the infrastructure that supports them. The cloud service provider
handles everything else.
There are generally three kinds of clouds:
o Public Cloud
o Private Cloud
o Hybrid Cloud

A public cloud is cloud-based computing provided by third-party vendors, like
Amazon Web Services, over the Internet and made accessible to users on a subscription
model.
One of the major advantages of the public cloud is that it permits customers to pay only for
the amount they've used in terms of bandwidth, storage, processing, or analytics capability.
Cloud customers can also eliminate the cost of buying and maintaining their own
infrastructure (servers, software, and much more).

A private cloud is a cloud that provides computing services via the Internet or a
private internal network to a select group of users. The services are not open to all users. A
private cloud is also known as an internal cloud or a corporate cloud.
A private cloud enjoys certain benefits of a public cloud, like:
o Self-service
o Scalability
o Elasticity
Additional benefits of a private cloud:
o Low latency because of proximity to the cloud setup (hosted near offices)
o Greater security and privacy thanks to firewalls within the company
o Shielding of sensitive information from third-party suppliers and users

One of the major disadvantages of using a private cloud is that we can't reduce the cost of
equipment, staffing, and other infrastructure in establishing and managing our cloud. The
most effective use of a private cloud is often achieved through a well-designed multi-cloud
or hybrid cloud setup.

In general, Cloud Computing offers a few business-facing benefits:


o Cost
o Speed
o Security
o Productivity
o Performance
o Scalability

Hybrid Cloud vs. Multi-Cloud

Hybrid Cloud is a combination of private and public cloud computing services. The
primary difference is that both the public and private cloud services that are part of the Hybrid
Cloud setup communicate with each other.
Contrary to this, in a multi-cloud setup, the different clouds do not communicate
with one another. In general, the public and private cloud configurations are utilized for
completely different purposes and are separated from one another within the
business.

Hybrid cloud solutions have advantages that could entice users to choose the hybrid approach.
With a private and a public cloud that communicate with one another, we can reap the
advantages of both by hosting less crucial elements in the public cloud and reserving the
private cloud for important and sensitive information.

From a holistic perspective, hybrid cloud is more of an execution decision, taken
to exploit the benefits of both private and public cloud services and their interconnection.
Contrarily, multi-cloud is more a strategic choice than an execution decision.
A multi-cloud is usually a multi-vendor cloud configuration: it can utilize
services from multiple vendors, such as a mix of AWS, Azure, and GCP.

The primary distinguishing factors that differentiate hybrid and multi-cloud could be:
o A multi-cloud is utilized to perform a range of tasks. It typically consists of multiple
cloud providers.
o A hybrid cloud is typically the result of combining private and public cloud services,
which communicate with one another.
Multi-Cloud Strategy
Multi-Cloud Strategy involves the implementation of several cloud computing
solutions simultaneously.
Multi-cloud refers to the distribution of our web, software, mobile apps, and other client-facing
or internal assets across several cloud services or environments. There are numerous reasons to
opt for a multi-cloud environment for our company, including reducing dependence on
a single cloud service provider and improving fault tolerance. Furthermore, businesses choose
cloud service providers based on the services they offer, which has a major impact on
why companies opt for a multi-cloud system. We'll talk about this shortly.

A Multi-Cloud may be constructed in many ways:


o It can be a mix of private cloud services. Setting up our servers in various regions of
the globe and creating a cloud network to manage and distribute the services is an
illustration of an all-private multi-cloud configuration.
o It can be a mixture of public cloud services. A combination of several public cloud
providers, like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud
Platform, is an example of an all-public multi-cloud setup.
o It may comprise a combination of private and public cloud services in a single multi-
cloud architecture. A private cloud used in conjunction with AWS or Azure falls into
this category; if optimized for the business, we can enjoy the benefits of both.
A typical multi-cloud setup mixes two or more public cloud providers with one private cloud
to remove the dependence on any single cloud services provider, as the sketch below illustrates.
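
To make this concrete, here is a minimal Python sketch of such a setup; the workload names
and the routing logic are hypothetical illustrations, not a prescribed architecture:

    # Hypothetical workload-to-provider map for a multi-cloud setup.
    WORKLOAD_MAP = {
        "compute": "aws",              # general-purpose VMs on AWS
        "email": "azure",              # communication tools on Azure
        "analytics": "gcp",            # data analytics on Google Cloud
        "patient-records": "private",  # sensitive data stays on the private cloud
    }

    def provider_for(workload: str) -> str:
        # Route each workload to its cloud; unknown workloads default to the
        # private cloud so sensitive data is never sent out by accident.
        return WORKLOAD_MAP.get(workload, "private")

    for w in ("compute", "patient-records", "new-service"):
        print(w, "->", provider_for(w))

Each workload lands on the provider best suited to it, and losing one public provider affects
only the workloads mapped to it.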

Why has Multi-cloud strategy become the norm?


When cloud computing was adopted in a big way, businesses began to recognize a
few issues.
1. Security: Relying on security services that one cloud service provider provides makes us
more susceptible to DDoS as well as other cyber-attacks. If there is an attack on the cloud,
the whole cloud would be compromised, and the company could be crippled.
2. Reliability: If we're relying on just one cloud-based service, reliability is at risk. A cyber-
attack, natural catastrophe, or security breach could compromise our private information
or result in a loss of data.
3. Loss of Business: Software-driven businesses work on regular UI improvements, bug
fixes, and patches that have to be rolled out monthly or weekly to their cloud
infrastructure. With a single-cloud strategy, the business suffers downtime during each
rollout because its cloud services are not accessible to customers. This can result in the
loss of business as well as the loss of money.
4. Vendor lock-in: Vendor lock-in refers to the situation in which the customer of one
particular service or product is unable to easily switch to a competitor's service or
product. This is usually the case when proprietary software is used that is incompatible
with the new vendor's offering, or when the contract legally binds the customer. It is
why businesses are forced to stay committed to a certain cloud provider even if they are
dissatisfied with the service. The reasons for wanting to switch providers can be
numerous, from better capabilities and features offered by competitors to lower pricing,
and so on.

Additionally, moving data from one cloud provider to the next is a hassle, since it has to
be transferred to local datacentres before being uploaded to the new cloud provider.

Benefits of a Multi-Cloud Strategy


Let's discuss the benefits of a Multi-Cloud Strategy, which inherently answer the
challenges posed by a single cloud service. Many of the problems of a single-cloud
environment are solved when we adopt a multi-cloud perspective.

1. Flexibility
One of the most important benefits of multi-cloud computing systems is flexibility. With no
vendor lock-in, customers are able to test different cloud providers and experiment with their
capabilities and features. A lot of companies that are tied to a single provider cannot implement
new technologies or innovate because the cloud service provider binds them to certain
compatibility constraints. This is not a problem with a multi-cloud system: we can create a
cloud setup that syncs with our company's goals.
Multi-cloud lets us select our cloud services. Each cloud service has its distinct features;
choose the ones that meet our business's requirements best, picking services from a variety of
providers to assemble the best solution for our business.

2. Security
The most important aspect of multi-cloud is risk reduction. If multiple cloud providers host us,
we reduce the chance of being hacked and losing data through a vulnerability in any one
provider. We also reduce the damage caused by natural disasters or human error. In short, we
should not put all our eggs in one basket.

3. Fault Tolerance
One of the biggest issues with using a single cloud service provider is that it offers zero fault
tolerance. With a multi-cloud system, backups and data redundancies can be put in place, and
we can strategically schedule downtime for deployment or maintenance of our software and
applications without making our clients suffer, as the sketch below illustrates.
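
A minimal failover sketch of this idea follows; the endpoint URLs are hypothetical mirrors of
one service hosted on two clouds, not real APIs:

    import urllib.request

    # Hypothetical mirrors of the same service on two different clouds.
    ENDPOINTS = [
        "https://api.primary-cloud.example.com/health",
        "https://api.secondary-cloud.example.com/health",
    ]

    def first_healthy_endpoint(timeout: float = 2.0) -> str:
        # Try each cloud in order; the first one that answers serves the traffic.
        for url in ENDPOINTS:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    if resp.status == 200:
                        return url
            except OSError:
                continue  # this provider is down or unreachable, try the next
        raise RuntimeError("all cloud providers are unreachable")

If the primary cloud suffers an outage, requests transparently shift to the secondary, which
is exactly the fault tolerance a single-cloud setup cannot offer.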

4. Performance
Each major cloud service provider, such as AWS (64+ countries), Azure (140+ countries), or
GCP (200+ countries), has a presence throughout the world. Based on our location and our
workload, we can choose the cloud service provider that lowers latency and speeds up our
operations.

IoT and ML/AI are Emerging Opportunities.


With Machine Learning and Artificial Intelligence growing exponentially, there is a lot of
potential in analysing our data on the cloud and using these capabilities for better decision-
making and customer service. The top cloud service providers each offer distinct features:
Google Cloud Platform (GCP) for AI, AWS for serverless computing, and IBM for AI/ML are
just a few options worth considering.

5. Cost
Cost will always be an important factor in any purchase decision. Cloud computing keeps
evolving, and competition is so fierce that cloud service providers keep coming up with viable
pricing from which we can gain. In a multi-cloud setting, depending on the service or feature
we will use from each provider, we can select the most appropriate option. AWS, Azure, and
Google all offer pricing calculators that help estimate costs and aid us in making the right
choice, as the sketch below illustrates.
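
The comparison such calculators perform boils down to simple arithmetic; here is a minimal
sketch, with hourly rates invented for illustration rather than taken from any provider's real
price list:

    HOURLY_RATE = {"aws": 0.096, "azure": 0.104, "gcp": 0.089}  # USD/VM-hour (hypothetical)
    HOURS_PER_MONTH = 730

    def monthly_cost(provider: str, vm_count: int) -> float:
        # Flat on-demand pricing; real calculators add storage, egress, discounts, etc.
        return HOURLY_RATE[provider] * HOURS_PER_MONTH * vm_count

    for p in HOURLY_RATE:
        print(f"{p}: ${monthly_cost(p, 10):,.2f} per month for 10 VMs")
    print("cheapest:", min(HOURLY_RATE, key=lambda p: monthly_cost(p, 10)))

Running the same workload profile through each provider's real calculator is how a multi-cloud
team decides where each service should live.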

6. Governance and Compliance Regulations


Big clients typically require you to comply with specific local as well as cybersecurity
regulations, for example GDPR compliance or ISO cybersecurity certification. Our business
could be affected if a certain cloud service violates our security certifications, or if the
cloud provider itself has not been certified. With a multi-cloud setup, we can move to an
alternative provider without losing our significant clientele if this happens.

Disadvantages of Multi-Cloud
1. Discount on High Volume Purchases
Public cloud service providers offer massive discounts when we buy their services in bulk.
But with multi-cloud, it is unlikely that we will get these discounts, because the volume we
purchase is split between various service providers.
2. Training of Existing Employees or New Hiring
We must train our existing staff or recruit new employees who can work with multiple clouds
in our company, which costs both money and time spent in training.
3. Effective Multi-Cloud Management
Multi-cloud requires efficient cloud management, which requires knowing the workload and
business requirements and then dispersing the work among cloud service providers most
suitable for the task. For instance, a company might make use of AWS for computing service,
Google or Azure for communication and email tools, and Salesforce to manage customer
relationships. It requires expertise in the cloud and business domain to comprehend these
subtleties.

4.6. Edge Computing Concepts


What is Edge Computing?
Edge computing refers to the practice of processing information close to where it is
produced, at the edge of a network, instead of relying on a centralized cloud or data
center. In edge computing, data is processed and analysed locally on devices or on nearby
servers, lowering latency and permitting quicker response times. This approach is especially
useful in situations where real-time or near-real-time processing is crucial, such as
the Internet of Things (IoT), autonomous vehicles, and industrial automation.
Edge computing enhances performance, security, and the ability to manage and
analyse information at or close to its origin, making it a valuable addition to the broader
cloud computing ecosystem.
Edge Computing is a buzzword like cloud, IoT, and Artificial Intelligence. Simply
put, edge computing decentralizes the network; it is the upcoming enhancement and
advancement in technology. Here, 'edge' literally refers to the geographic location on the
planet at which services are delivered in a distributed manner. Edge computing is a
distributed computing system that brings the computation and storage of data close to the
source (where the data is required). It brings computing as close to the source as possible
so as to minimize bandwidth use, improve response time, and reduce latency. Instead of
locating the data at a centralized place, edge computing distributes the computing process
for the data. Cloud computing and IoT are fast and efficient, but edge computing is a faster
computing method still. The objective of edge computing is to improve network technology by
moving the computation of data close to the edge of the network and away from the data
centres. Such a process exploits network gateways or smart objects to perform tasks and
provide services on behalf of the cloud. As is well known, the amount of data produced every
day is so huge that its computation becomes difficult and complicated for the data centres
to handle; network bandwidth limits are almost exhausted, and response times increase
sharply. By moving computation and data services to the edge, it is possible to provide
efficient service delivery, better data storage, and IoT management that minimize the
response time and transfer rate of data. With 5G, converging the data network and edge
technologies has come within reach. Edge computing thus reduces the long-distance processing
and slow communication of data. The sketch below illustrates the core idea.
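
Here is a minimal Python sketch of that core idea; the sensor values and the alert threshold
are hypothetical, and the point is only that raw data stays local while a small summary
travels to the cloud:

    from statistics import mean

    def edge_process(readings, alert_threshold=80.0):
        # Runs locally on the edge device; only this compact summary is uploaded.
        return {
            "count": len(readings),
            "mean": round(mean(readings), 2),
            "max": max(readings),
            "alert": max(readings) > alert_threshold,  # decided at the edge, no round trip
        }

    # 1,000 raw sensor readings never leave the device...
    raw = [20.0 + (i % 70) for i in range(1000)]
    # ...only a four-field summary crosses the network.
    print(edge_process(raw))

Instead of shipping a thousand readings over a constrained link and waiting for the data
centre to answer, the device responds immediately and the cloud receives only what it needs.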

Challenges in Edge Computing


There are the following issues and challenges that take place in Edge Computing:

o Privacy and Security of Data: Every new change and enhancement in technology demands
a matching enhancement of its privacy and security features. Securing edge devices and
the information they generate can be complex: these devices are distributed across
diverse locations, putting them at risk of both physical and cyber threats, and their
security protocols and updates need to be continuously managed.
o Scalability: Scaling edge infrastructure to satisfy the demands of an expanding network
or consumer base can be tough. It calls for careful planning and resource allocation to
make certain that processing capabilities are allotted effectively. Edge computing is
based on a distributed network, and scalability becomes a challenge for such a network,
which faces several issues:
o Heterogeneity of Devices: The heterogeneity of the devices must be considered,
as each has its own energy and performance constraints.
o Devices operate under widely dynamic conditions, and their connections are less
reliable than the robust infrastructure of cloud data centers. In addition, growing
security requirements slow down the scaling of edge computing, as they may add
latency between the nodes that communicate with each other.
o Reliability: Reliability is a very challenging task for every technology, and edge
computing is no exception. Managing failover is crucial: as edge computing relies on a
distributed network, if a single node fails or becomes unreachable, users must still be
able to use the service without any disturbance. Edge computing must also be able to
alert the user about the failed node and provide actions to recover from the failure.
For this, each device should maintain the network topology of the whole distributed
system, making the detection of errors and their recovery easy to process (see the
sketch after this list). Beyond that, the connection technology in use may provide
different reliability levels, and the data produced at the edge could be unreliable
because of environmental conditions.
o Speed: Edge computing should provide fast services to end-users, since it brings
analytical and computational resources near the source (the end users), leading to fast
communication. Such a modern system will automatically outperform the traditional cloud
computing system, so maintaining good speed is itself a challenging task for edge
computing.
o Efficiency: The efficiency of edge computing improves because analytical tools sit
close to the end-users, allowing sophisticated AI and analytics tools to execute at the
edge of the system. Such a platform improves and increases operational efficiency and
thus provides several benefits to the system.
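
As flagged in the reliability item above, here is a minimal failure-detection sketch; the
node names, timestamps, and timeout are hypothetical, and a real system would layer consensus
and rerouting on top of this:

    import time

    HEARTBEAT_TIMEOUT = 10.0  # seconds of silence before a peer is considered failed

    # Each edge node tracks when it last heard from every peer in the topology.
    last_seen = {"edge-node-a": time.time(), "edge-node-b": time.time() - 30.0}

    def record_heartbeat(node):
        last_seen[node] = time.time()

    def failed_nodes():
        # Peers whose heartbeats have gone stale; their users get rerouted.
        now = time.time()
        return [n for n, t in last_seen.items() if now - t > HEARTBEAT_TIMEOUT]

    print(failed_nodes())  # ['edge-node-b'] -- silent for 30 s, so flag and reroute

Because every node keeps this picture of the topology, the failure of edge-node-b can be
detected and announced locally instead of waiting on the central cloud.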

In addition to these, various other factors can be discussed:

o Latency and Bandwidth: Reduced latency is a key gain of edge computing, but it can be
tough to preserve low latency as data is processed at the edge. Limited bandwidth can
also be an issue, especially in remote or bandwidth-limited places.
o Management Complexity: Managing a massive fleet of edge devices, each with its own
configuration, updates, and security requirements, can be complex. Centralized
management tools are needed to simplify this.
o Data Governance: Edge computing generates enormous quantities of data, and managing and
governing this information in compliance with regulatory requirements can be hard.
Ensuring data privacy and safety is critical.
o Cost: Deploying and maintaining the infrastructure can be expensive, particularly when
coping with a massive number of devices across multiple places. Organizations ought to
consider the cost implications of edge computing carefully.
o Interoperability: Ensuring that edge devices and systems from different manufacturers
and vendors can work seamlessly together is a challenge. Standardization efforts are
ongoing to address this.
o Data Redundancy: Processing information at the edge can result in data redundancy, with
multiple devices performing similar operations. Managing and deduplicating data
efficiently is critical to optimizing processing resources.
o Remote Maintenance: Performing maintenance, updates, and troubleshooting for edge
devices, especially in far-flung or inaccessible places, can be logistically complicated
and costly.
o Environmental Impact: Edge computing infrastructure consumes electricity and resources.
It is important to take into account the environmental impact of deploying edge devices,
particularly in remote or ecologically sensitive areas.

Addressing these challenges in edge computing requires a combination of innovative
technologies, robust security features, effective management solutions, and careful planning
to attain the benefits of localized data processing while mitigating the potential drawbacks.

Why Edge Computing


Edge computing is a new type of technology that will not only save time but also save the cost
of servicing and other charges. The following reasons answer the question:
o Edge computing allows smart applications and devices to respond to data very quickly,
as soon as it is created, thus removing lag time.
o Edge computing also enables data-stream acceleration, including real-time processing of
data without added latency. Data-stream acceleration is critical for technologies such
as self-driving cars and provides equally vital benefits for businesses.
o It processes data efficiently at large scale by allowing processing close to the source,
which also saves internet bandwidth. This reduces cost and enables effective access to
applications in remote locations.
o The ability of edge computing to provide services and process data at the furthest edge
of the network creates a secure layer for sensitive data without keeping it in a public
cloud.

Applications of Edge Computing


Today, the world relies on the Internet for everything from small things to big, and the
prices of IoT devices such as sensors, along with computing costs, keep falling. In this way,
more and more things remain connected to the Internet. As more connected devices become
available, edge computing will be in ever greater demand. The following sectors stand to
benefit from edge computing:
1. Transportation: This is one of the sectors with the greatest potential for edge
computing, particularly autonomous vehicles, because such vehicles are full of different
sensor types, from cameras to the car's radar system. These vehicles can use edge
computing to process sensor data close to the vehicle, saving a good amount of time.
Autonomous cars are not mainstream yet and are still in preparation: the Automotive
Edge Computing Consortium (AECC) announced in 2018 that it would launch operations
focused on connected-car solutions. And beyond self-driving cars, edge computing will
also be applied to trains, aeroplanes, and other forms of transportation.
2. HealthCare: People rely on fitness trackers, smartwatches, stamina-measurement
watches, and similar devices, and find these health-monitoring wearables comfortable.
However, real-time analysis is essential to capture the actual benefit of the collected
data: many health wearables are directly connected to the cloud, while others can
operate only in an offline mode. Certain health devices analyse pulse rate and sleep
patterns offline, and doctors check and evaluate patients on the spot using the analysis
results. Such smart devices are useful for collecting and processing data to treat
patients during a pandemic (such as COVID-19). Using edge computing, medical
professionals can acquire and process data more quickly, which can lead to better and
faster patient care, and adds a security layer to patient-generated health data (PGHD).
Through edge computing, hospitals and doctors will be able to access more cloud
applications more quickly, though the security and privacy of the data remain open
questions.
3. Manufacturing: In manufacturing, edge computing reduces the data that must go to the
cloud for applications like predictive maintenance, moving the operational technology
onto edge computing platforms that run processes much as the cloud does, but with more
speed and better results. However, the maintenance of an on-premise deployment still
relies on the cloud.
4. Grid Edge Control and Analytics: Smart grid controls work by creating two-way
communication channels, via WAN protocols, between the power-distribution
infrastructure, consumers, and the utility head-end. Edge grid computing can provide
advanced real-time monitoring and analytics and produce actionable insights on
distributed energy-generating resources such as renewables; such capability is available
only with edge computing technologies. Edge grid computing can reduce overall cost and
energy wastage and avoid outages and overcompensation, because electric vehicles, wind
farms, and hydroelectric dams generate massive amounts of useful data that can assist
the utilities with analytics on requirements, peak-usage predictions, availability, and
energy production.
5. Remote Monitoring of Oil and Gas: At present, IoT devices provide modern safety
monitoring: sensory devices for controlling, viewing, and sensing the temperature,
pressure, moisture, humidity, sound, and radiation of oil and gas installations. IP
cameras and other IoT devices generate massive, continuous streams of data, which are
then combined and analysed to provide the key insights for reliably evaluating the
health of a running system. With edge computing, real-time safety monitoring becomes
possible, safeguarding critical machinery infrastructure and oil and gas systems from
disasters, and several edge IoT monitoring devices are being developed with safety and
reliability as the main focus. Edge computing allows analysis, processing, and delivery
of data to end-users in real time, enabling control centres to access the data the
instant it occurs and to prevent malfunctions in the most optimized manner. This matters
because oil and gas services are critical infrastructure whose failure can be
catastrophic if not maintained with safety and precaution.
6. Traffic Management: Traffic is a huge waste of time and needs to be optimized, and the
best way to optimize it is by collecting and acting on real-time data. For traffic
management, smart transportation systems such as self-driving cars and other sensory
systems make extensive use of edge computing devices. Through edge computing, a massive
amount of sensory and other data is analysed, filtered, and compressed before being
transmitted over IoT edge gateways to other systems. As a result, edge computing reduces
the network expense and the operating, processing, and storage costs of traffic-
management solutions.
7. Edge Video Orchestration: This uses edge computing resources to deliver bandwidth-heavy
video by a highly optimized method. Instead of delivering the video through a
centralized core network to all users, it orchestrates, caches, and distributes the
video files close to the device. Through edge computing, freshly created video clips and
live streams can be served quickly to paying customers via rich media-processing
applications that run on mobile edge servers and hotspots in venues. This avoids the
quality issues mobile networks face when delivering heavy (terabyte-size) videos and
reduces service costs too. Such edge computing is still under development but will boom
in the coming years; a caching sketch follows this list.
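
The heart of video orchestration is a cache at the venue's edge server; here is a minimal
sketch of that idea, in which the origin fetch and the video IDs are hypothetical stand-ins
for a real streaming backend:

    edge_cache = {}  # per-venue cache held on a mobile edge server

    def fetch_from_origin(video_id):
        # Placeholder for the expensive trip through the centralized core network.
        return f"<segments of {video_id}>".encode()

    def serve(video_id):
        # Serve from the edge cache; go back to the origin only on a cache miss.
        if video_id not in edge_cache:
            edge_cache[video_id] = fetch_from_origin(video_id)  # one core-network trip
        return edge_cache[video_id]  # later viewers in the venue are served locally

    serve("match-highlights-001")  # miss: fetched once over the core network
    serve("match-highlights-001")  # hit: served from the edge, no backhaul traffic

Only the first viewer's request crosses the core network; every subsequent viewer in the
venue is served locally, which is exactly how heavy clips avoid mobile-network quality
problems.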

Benefits of Edge Computing

There are the following benefits of Edge Computing:


o Speed: Speed is the most attractive and essential factor in any field, and especially in
computing. Every company and industry demands high-speed technology: financial
organizations, because slow data processing can cause heavy financial losses; healthcare
industries, because a fraction of a second can save or cost a patient's life; and other
service-providing industries, because slow computing irritates customers and leaves them
with a bad impression of the business. Edge computing will definitely benefit these
sectors because of its extremely fast computing. Through edge computing, the latency of
the networks decreases, and IoT devices process data at edge data centers. Thus, data
need not travel back to the central (centralized) server.
o Security of Data: In edge computing, data is located near its source, which distributes
the work of data processing across several data centers and devices. This safeguards
confidential data from cyber-attacks such as DDoS attacks: because data is decentralized
rather than placed in a single location, the attacker's task grows with the larger area
that must be attacked to do harm. Also, when data is stored locally, it becomes easier
to monitor for security purposes, enabling industries to maintain the privacy of the
data.
o Scalability of Data: Scaling becomes easy and simple with edge computing, where one can
buy edge devices with high computation power to grow the edge network. There is no
requirement to build one's own private, centralized data centers to fulfil data needs:
just combine edge computing with colocation services in order to expand the edge
network. Otherwise, companies would need to purchase new equipment to expand their IT
infrastructure; edge computing saves them from buying new devices, since buying a few
IoT devices is enough to expand the network.
o Faster Data Processing: Many varieties of IoT applications operate together, and if they
are centralized, the server will no doubt slow down. The massive amount of data
generated also creates complexity for both the servers and all the connected IoT
devices, and if the server slows down or fails, the connected devices fail with it.
Through edge computing, data can be accessed locally, near the connected devices. The
cost of moving data to a centralized server is saved, and the time taken to process the
data also shrinks, all of which brings more efficiency. Moreover, when the entire
network is not busy exchanging data all the time, network congestion is reduced, and
data is shared between nodes only when required.
o Cost-Effectiveness: Edge computing has gained popularity because it is the most cost-
effective method compared to the existing alternative technologies: it reduces the costs
of data storage, networking, data transfer, and data processing. Edge computing also
ensures interoperability between legacy and modern smart IoT devices that are otherwise
incompatible, by converting the communication protocols used by legacy devices into a
language that modern smart devices and the cloud can understand (a protocol-translation
sketch follows this list). Thus, there is no need to invest money in purchasing new IoT
devices, because existing or older IoT devices can easily be connected via edge
computing. Edge computing also enables devices to operate without high-speed internet
connectivity, whereas operating cloud functionality requires high-speed connectivity.
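
The interoperability point above comes down to protocol translation at the edge gateway;
here is a minimal sketch, where the legacy frame format and the field names are hypothetical:

    import json

    def translate_legacy_frame(frame: str) -> str:
        # A hypothetical legacy device emits 'ID;TEMP;HUM' text frames; the edge
        # gateway converts each one to JSON that modern services and the cloud expect.
        device_id, temp, hum = frame.split(";")
        return json.dumps({
            "device": device_id,
            "temperature_c": float(temp),
            "humidity_pct": float(hum),
        })

    print(translate_legacy_frame("boiler-7;71.5;40"))
    # {"device": "boiler-7", "temperature_c": 71.5, "humidity_pct": 40.0}

The legacy device keeps speaking its old format, while everything upstream of the gateway
sees clean, modern JSON.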

Disadvantages of Edge Computing


There are the following disadvantages of edge computing:
1. Edge computing requires more storage, as data is placed and processed at many different
locations.
2. Because data in edge computing is kept in distributed locations, security becomes a
challenging task: it is often hard to detect theft and other cybersecurity issues, and
every newly added IoT device can open a gate for attackers to harm the data.
3. Although edge computing saves expense on purchasing new devices, edge computing itself
is expensive: its overall cost is high.
4. It needs advanced infrastructure to process data in an advanced way.
5. Edge computing fails to pool resources into a resource pool; that is, it is not capable
of performing resource pooling.
6. It is limited to a smaller number of peripherals only.

Edge Computing Vs. Cloud Computing


Although edge computing does not replace cloud computing technology, its emergence will
certainly reduce reliance on, and have an impact on, cloud computing. At the same time, edge
computing will enhance cloud computing technology by providing less complex solutions for
handling messy data. Both technologies have their own purpose and use; below we discuss
several points that distinguish edge from cloud computing:
o Suitability: Edge computing is good for organizations that have a limited budget to
invest in financial resources, so mid-level organizations can use it. Cloud computing is
generally recommended for processing and managing a high volume of complex, massive
data, so organizations that deal with huge data storage use cloud computing.
o Programming: Edge computing can use different programming languages on different
platforms, each having a different runtime. Cloud computing works for one target
platform using one programming language only.
o Security: Security in edge computing needs tight and robust plans, such as advanced
authentication methods and network security. Cloud computing does not need such high and
advanced security methods.
o Nature of data: Edge computing processes time-sensitive data. Cloud computing processes
data that is not time-driven.
o Location and approach: Edge computing processes data at remote locations and uses a
decentralized approach. Cloud computing processes and deals with data at centralized
locations using a centralized approach.
o Devices: Organizations can take up edge computing with their existing IoT devices,
advance them, and use them; there is no need to purchase new devices. For cloud
computing, existing IoT devices need to be exchanged for new ones to advance, which
costs more money and time.
o Maturity: Edge computing is the upcoming future; cloud computing is the currently
existing technology.
