Cloud Computing Lab Manual
Course Outcomes:
On successful completion of this course, students will be able to:
1. Adapt different types of virtualization and increase resource utilization.
2. Describe and demonstrate the underlying principles of different Cloud Service Models.
3. Build a private cloud using open-source technologies.
4. Examine and explain the core issues of cloud computing such as resource management and
security.
5. Develop applications on Cloud Platforms.
6. Develop real world web applications and deploy on commercial cloud.
Student PRN -
EXPERIMENT NO: 1
Theory:
Cloud computing enables companies to consume compute resources as a utility -- just like
electricity -- rather than having to build and maintain computing infrastructures in-house. Cloud
computing promises several attractive benefits for businesses and end users.
Three of the main benefits of cloud computing include:
• Self-service provisioning: End users can spin up computing resources for almost any type of
workload on demand.
• Elasticity: Companies can scale up as computing needs increase and then scale down again as
demand decreases.
• Pay per use: Computing resources are measured at a granular level, allowing users to pay only for
the resources and workloads they use.
Cloud computing services can be Private, Public or Hybrid.
Private cloud services are delivered from a business' data center to internal users. This model offers
versatility and convenience, while preserving management, control and security. Internal
customers may or may not be billed for services through IT chargeback.
In the Public cloud model, a third-party provider delivers the cloud service over the Internet.
Public cloud services are sold on-demand, typically by the minute or the hour. Customers
only pay for the CPU cycles, storage or bandwidth they consume. Leading public cloud
providers include Amazon Web Services (AWS), Microsoft Azure, IBM/SoftLayer and Google
Compute Engine.
Hybrid cloud is a combination of public cloud services and on-premises private cloud
– with orchestration and automation between the two.
Companies can run mission-critical workloads or sensitive applications on the private cloud while
using the public cloud for workloads that must scale on-demand. The goal of hybrid cloud is to
create a unified, automated, scalable environment which takes advantage of all that a public cloud
infrastructure can provide, while still maintaining control over mission-critical data.
Cloud computing also has its drawbacks. Instead of using "generative" systems (ones that can be added
to and extended in exciting ways the developers never envisaged), you're effectively using "dumb
terminals" whose uses are severely limited by the supplier. Good for convenience and security, perhaps,
but what will you lose in flexibility? And is such a restrained approach good for the future of the
Internet as a whole? (To see why it may not be, take a look at Jonathan Zittrain's eloquent book The
Future of the Internet—And How to Stop It.)
Conclusion:-
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
________________________
EXPERIMENT NO: 2
Theory:
Virtualization is software that separates physical infrastructures to create various dedicated
resources. It is the fundamental technology that powers cloud computing.
The technology behind virtualization is known as a virtual machine monitor (VMM) or virtual
manager, which separates compute environments from the actual physical infrastructure.
Virtualization makes servers, workstations, storage and other systems independent of the physical
hardware layer. This is done by installing a Hypervisor on top of the hardware layer, where the
systems are then installed.
There are three areas of IT where virtualization is making inroads: network virtualization, storage
virtualization and server virtualization.
Network virtualization is a method of combining the available resources in a network by splitting
up the available bandwidth into channels, each of which is independent from the others, and each
of which can be assigned (or reassigned) to a particular server or device in real time. The idea is
that virtualization disguises the true complexity of the network by separating it into manageable
parts, much like your partitioned hard drive makes it easier to manage your files.
Storage virtualization is the pooling of physical storage from multiple network storage devices into
what appears to be a single storage device that is managed from a central console. Storage
virtualization is commonly used in storage area networks (SANs).
Server virtualization is the masking of server resources (including the number and identity of
individual physical servers, processors, and operating systems) from server users. The intention is
to spare the user from having to understand and manage complicated details of server resources
while increasing resource sharing and utilization and maintaining the capacity to expand later.
Virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic
computing, a scenario in which the IT environment will be able to manage itself based on perceived
activity, and utility computing, in which computer processing power is seen as a utility that clients
can pay for only as needed. The usual goal of virtualization is to centralize administrative tasks
while improving scalability and workloads.
Procedure:
Installation Steps:
1. # sudo grep -c "svm\|vmx" /proc/cpuinfo
   (a count greater than zero means the CPU supports hardware virtualization)
2. # sudo apt-get install qemu-kvm libvirt-bin bridge-utils virt-manager
3. # sudo adduser rait
   # sudo adduser rait libvirtd
   After running these commands, log out and log back in as rait.
4. Run the following command after logging back in as rait; you should see an empty list of virtual
   machines, which indicates that everything is working correctly.
   # virsh -c qemu:///system list
5. Open the Virtual Machine Manager application and create a virtual machine:
   # virt-manager
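As an alternative to the virt-manager GUI, a guest can also be created from the command line with the
virt-install tool (from the virtinst package). This is only a minimal sketch; the VM name, memory size,
disk size and ISO path are placeholders to adapt:
# sudo virt-install --name testvm --memory 2048 --vcpus 2 --disk size=10 --cdrom /path/to/ubuntu.iso --os-variant generic
Once the guest OS installation finishes, the new VM also appears in virt-manager and in the virsh list
output.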
Result:
SNAPSHOTS
Step 1: # sudo grep -c "svm\|vmx" /proc/cpuinfo
Step 3: # sudo adduser rait
After running this command, log out and log back in as rait.
Step 4: # sudo adduser rait libvirtd
After running this command, log out and log back in as rait.
Step 5: Open the Virtual Machine Manager application and create a virtual machine:
# virt-manager, as shown below
Conclusion:-
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
________________________
EXPERIMENT NO: 3
Theory:
Infrastructure as a Service (IaaS) is a form of cloud computing that provides virtualized computing
resources over the internet. IaaS is one of the three main categories of cloud services, alongside Software
as a Service (SaaS) and Platform as a Service (PaaS).
In an IaaS model, a cloud provider hosts the infrastructure components traditionally present in an on-
premises data center, including servers, storage, and networking hardware, as well as the virtualization or
hypervisor layer. The IaaS provider also offers a range of services to accompany those infrastructure
components. These can include detailed billing, monitoring, log access, security, load balancing, and
clustering, as well as storage resiliency, such as backup, replication, and recovery.
These services are typically billed on a pay-as-you-go basis, and users can scale services up and down
according to requirements. IaaS provides users with the highest level of flexibility and management
control over their IT resources and is most like traditional on-premises data centers.
Procedure:
1. Preparation:
Understand the basics of virtualization, as it is the foundation of IaaS.
Choose a cloud provider (e.g., AWS, Azure, Google Cloud Platform).
Set up an account with the provider.
2. Set Up Virtual Networks:
Create a virtual network (VPC for AWS, VNet for Azure) within the cloud provider’s console.
Define your subnets and IP ranges.
Run DevStack:
SERVICE_PASSWORD=iheartksl
./stack.sh
A seemingly endless stream of activity ensues. When complete you will see a summary of stack.sh’s
work, including the relevant URLs, accounts and passwords to poke at your shiny new OpenStack.
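Note that stack.sh typically reads its configuration from a local.conf file placed in the devstack
directory (the directory cloned from https://opendev.org/openstack/devstack). A minimal sketch, reusing
the service password and host IP from this example; adapt the values to your own machine:
[[local|localrc]]
ADMIN_PASSWORD=iheartksl
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=192.168.43.29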
Using OpenStack
At this point you should be able to access the dashboard from other computers on the local network. In
this example that would be http://192.168.43.29/ for the dashboard (aka Horizon). Launch VMs, and if
you give them floating IPs and security group access, those VMs will be accessible from other machines
on your network.
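The same operations can also be scripted with the openstack command-line client after sourcing the
credentials file that DevStack generates. A rough sketch; the flavor and image names are assumptions
and depend on what your DevStack installation registered (openstack image list shows the actual names):
source ~/devstack/openrc admin admin
openstack flavor list
openstack image list
openstack server create --flavor m1.tiny --image cirros-0.6.2-x86_64-disk testvm
openstack server list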
Conclusion:-
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
________________________
EXPERIMENT NO: 4
Theory:
Storage as a Service (STaaS) is a cloud computing model where a service provider rents out storage space
to users over the internet. This model enables businesses and users to store data in the cloud, making it
accessible from any internet-connected device. STaaS allows for scalability, so users can expand or
reduce storage based on their needs, and it is cost-effective because it typically operates on a pay-per-use
basis. It also ensures that data management, maintenance, and backup are handled by the service provider.
Procedure:
Choose a Provider: Select a cloud service provider offering STaaS like AWS S3, Azure Blob Storage, or
Google Cloud Storage.
Create an Account: Sign up for an account with your chosen cloud provider and create a storage service
instance.
Set Permissions: Configure the access permissions and security settings to define who can access the
stored data.
Create Storage Containers: Depending on the provider, create buckets (in AWS) or containers (in Azure)
to hold your data.
Upload Data:
Use the provider’s management console or API to upload files to your storage container (sample AWS CLI
commands are sketched after this procedure).
Optionally, organize data with folders or prefixes.
Data Management:
Utilize tools provided by the service for data lifecycle management, versioning, and archiving.
Integrate with Applications:
Use APIs or SDKs provided by the service to integrate storage access into applications or services.
Monitor Usage:
Set up monitoring to keep track of storage usage, requests, and potential security events.
Clean Up:
To avoid unnecessary charges, delete any data or storage containers that are no longer needed after the
experiment.
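As a concrete illustration of the create, upload and clean-up steps above, the same workflow can be run
with the AWS CLI against S3 (assuming the CLI is installed and configured with valid credentials; the
bucket and file names are placeholders):
aws s3 mb s3://staas-demo-bucket-12345
aws s3 cp report.csv s3://staas-demo-bucket-12345/reports/report.csv
aws s3 ls s3://staas-demo-bucket-12345 --recursive
aws s3 rb s3://staas-demo-bucket-12345 --force
The final command removes the bucket together with its contents, which corresponds to the clean-up step.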
Conclusion:-
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
________________________
EXPERIMENT NO: 5
Theory:
ownCloud is open source file sync and share software for everyone, from individuals operating the free
ownCloud Server edition to large enterprises and service providers operating the ownCloud Enterprise
Subscription. ownCloud provides a safe, secure, and compliant file synchronization and sharing solution
on servers that you control. You can share one or more files and folders on your computer and
synchronize them with your ownCloud server.
Result:
SNAPSHOTS
Step 2 : By default, the ownCloud Web interface opens to your Files page. You can add, remove,
and share files, and make changes based on the access privileges set by you (if you are
administering the server) or by your server administrator. You can access your ownCloud files
with the ownCloud web interface and create, preview, edit, delete, share, and re-share files. Your
ownCloud administrator has the option to disable these features, so if any of them are missing on
your system, ask your server administrator.
Step 3: Apps Selection Menu: Located in the upper left corner, click the arrow to open a dropdown menu
to navigate to your various available apps. Apps Information field: Located in the left sidebar, this
provides filters and tasks associated with your selected app. Application View: The main central field in
the ownCloud user interface. This field displays the contents or user features of your selected app.
Step 4: Share the file or folder with a group or other users, and create public shares with hyperlinks. You
can also see who you have shared with already, and revoke shares by clicking the trash can icon. If
username auto-completion is enabled, when you start typing the user or group name ownCloud will
automatically complete it for you. If your administrator has enabled email notifications, you can send an
email notification of the new share from the sharing screen.
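Besides the web interface and the desktop sync client, ownCloud also exposes files over WebDAV, so
uploads and downloads can be scripted, for example with curl. A rough sketch; the server address, user
name, password and file names are placeholders:
curl -u alice:secret -T notes.txt "https://owncloud.example.com/remote.php/webdav/notes.txt"
curl -u alice:secret "https://owncloud.example.com/remote.php/webdav/notes.txt" -o notes-copy.txt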
Conclusion:-
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
________________________
EXPERIMENT NO: 6
Theory:
Cloud computing security is the set of control-based technologies and policies designed to adhere
to regulatory compliance rules and protect information, data applications and infrastructure
associated with cloud computing use. Because of the cloud's very nature as a shared resource,
identity management, privacy and access control are of particular concern. With more
organizations using cloud computing and associated cloud providers for data operations, proper
security in these and other potentially vulnerable areas has become a priority for organizations
contracting with a cloud computing provider.
Cloud computing security processes should address the security controls the cloud provider will
incorporate to maintain the customer's data security, privacy and compliance with necessary
regulations. The processes will also likely include a business continuity and data backup plan in
the case of a cloud security breach.
Physical security
Cloud service providers physically secure the IT hardware (servers, routers, cables etc.) against
unauthorized access, interference, theft, fires, floods etc.
and ensure that essential supplies (such as electricity) are sufficiently robust to minimize the
possibility of disruption. This is normally achieved by serving cloud applications from 'world-class'
(i.e. professionally specified, designed, constructed, managed, monitored and maintained) data centers.
Personnel security
Various information security concerns relating to the IT and other professionals associated with
cloud services are typically handled through pre-, para- and post-employment activities such as
security screening potential recruits, security awareness and training programs, proactive security
monitoring and supervision, disciplinary procedures and contractual obligations embedded in
employment contracts, service level agreements, codes of conduct, policies etc.
Application security
Cloud providers ensure that applications available as a service via the cloud (SaaS) are secure by
specifying, designing, implementing, testing and maintaining appropriate application security
measures in the production environment. Note that - as with any commercial software - the
controls they implement may not necessarily fully mitigate all the risks they have identified, and
that they may not necessarily have identified all the risks that are of concern to customers.
Consequently, customers may also need to assure themselves that cloud applications are
adequately secured for their specific purposes, including their compliance obligations.
Procedure:
(select "Generate and access key for each user" checkbox, it will create a userwith a specific
key)
click on "Create" button at right bottom
3) Once the user is created, click on it.
4) Go to the "Security credentials" tab.
5) Click on "Create Access Key"; it will create an access key for the user.
6) Click on "Manage MFA device"; a QR code will be displayed on the screen. Scan that QR code with your
mobile phone using a barcode scanner (install one on the phone). You also need to install "Google
Authenticator" on your mobile phone to generate the MFA code.
7) Google Authenticator keeps generating a new MFA code every 30 seconds, and that code has to be
entered while logging in as the user. Security is therefore maintained by the MFA device code: no one
can use your AWS account even if they have your user name and password, because the MFA code is on your
MFA device (a mobile phone in this case) and it changes every 30 seconds.
Permissions in user account:
After creating the user by following the steps above, you can give certain permissions to a specific
user:
1) Click on the created user.
2) Go to the "Permissions" tab.
3) Click on the "Attach Policy" button.
4) Select the needed policy from the given list and click Apply.
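The same user, access-key and policy steps can also be performed with the AWS CLI. A sketch under the
assumption that the CLI is configured with administrator credentials; the user name and the managed
policy ARN are examples only:
aws iam create-user --user-name rait-demo
aws iam create-access-key --user-name rait-demo
aws iam attach-user-policy --user-name rait-demo --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
aws iam list-attached-user-policies --user-name rait-demo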
Result:
Step 1: Go to aws.amazon.com
Step 2: Click on "My Account", select "AWS Management Console" and click on it. Enter your email ID in
the required field.
Conclusion:
We have studied how to secure the cloud and its data. Amazon AWS provides strong security through its
extended facilities and services, such as the MFA device. It also gives you the ability to add your own
permissions and policies to secure data further.
Conclusion:-
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
________________________
EXPERIMENT NO: 7
Theory:
A case study on a cloud platform like Amazon EC2, Microsoft Azure, or Google Cloud Platform
examines their service offerings, specifically their computing solutions. These platforms provide scalable
computing resources on-demand, allowing users to create, launch, and manage virtual servers (instances
or virtual machines) with a variety of operating systems, configurations, and connectivity options. They
support a pay-as-you-go pricing model, which provides flexibility and cost savings over traditional on-
premises servers.
Amazon EC2 (Elastic Compute Cloud): Offers resizable compute capacity in the cloud, allowing users to
run servers and scale applications.
Microsoft Azure Virtual Machines: Provides on-demand, scalable computing resources with various
configurations for computing power, memory, and storage.
Google Compute Engine (GCE): Delivers virtual machines running in Google's innovative data centers
and worldwide fiber network.
Procedure:
Select Platform: Choose one platform (EC2, Azure VMs, or Compute Engine) for the case study.
Go to the compute section (EC2 for AWS, VMs for Azure, GCE for Google).
Select or create a new VM instance with the desired specifications.
Configure the instance with necessary settings (like security groups in AWS or network security groups
in Azure).
Configure Storage:
Attach storage volumes to your instance if needed (EBS in AWS, Managed Disks in Azure, Persistent
Disks in GCE).
Set Up Networking:
Configure the instance's virtual network settings (VPC or VNet, subnets, and firewall rules).
Monitor the Instance:
Use built-in tools (CloudWatch for AWS, Azure Monitor for Azure, Stackdriver for GCE) to monitor the
VM's performance.
Test Scaling:
Explore and test the auto-scaling features based on load or predefined schedules.
Snapshot and Backup:
Create snapshots or backups of your VM to ensure data durability and recovery options.
Cost Management:
Monitor and analyze costs using budgeting and cost management tools provided by the platform.
Document the Process:
Keep a detailed record of steps, configurations, and observations throughout the experiment.
Clean Up Resources:
To avoid additional charges, terminate resources and delete any unnecessary storage or snapshots after
the case study is completed.
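For the GCP snapshots that follow, the corresponding steps can also be driven from the gcloud CLI. A
rough sketch; the VM name, zone, machine type and image family are placeholders to adapt:
gcloud compute instances create demo-vm --zone=us-central1-a --machine-type=e2-micro --image-family=debian-12 --image-project=debian-cloud
gcloud compute instances list
gcloud compute instances delete demo-vm --zone=us-central1-a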
Result:
SNAPSHOTS (GCP)
Conclusion:-
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
________________________
EXPERIMENT NO: 8
Aim: Deploy web applications on a commercial cloud. Technology: Google App Engine / Windows Azure
Theory:
A case study on Amazon EC2, Microsoft Azure, or Google Cloud Platform involves a detailed
examination of their cloud computing services, focusing on compute capabilities. Each platform offers
scalable virtual machines (VMs) with various configurations and operating systems, network
connectivity, security, and storage options, billing flexibility, and additional cloud services integration.
Amazon EC2 (Elastic Compute Cloud) provides resizable compute capacity in the cloud, designed to
make web-scale computing easier for developers.
Microsoft Azure VMs are on-demand, scalable computing resources provided by Microsoft Azure, with a
wide variety of options for computing power, memory, and storage.
Google Compute Engine (GCE) offers VMs that run on Google’s infrastructure with services like live
migration and custom machine types.
Procedure:
Select a Cloud Platform: Choose one of the platforms (Amazon EC2, Microsoft Azure, Google Compute
Engine) for the case study.
Navigate to the VM service (EC2 for AWS, Azure VMs for Azure, Compute Engine for GCP).
Select an instance type or VM size based on CPU, memory, and storage requirements.
Choose an OS image (AMI for AWS, Azure Image, or GCE Image).
Configure instance settings like security groups or network security groups and key pairs for access.
Configure Storage:
Attach additional storage if required (EBS for AWS, Managed Disks for Azure, Persistent Disks for
GCE).
Networking:
Set up virtual private cloud (VPC) settings, including subnets, IP ranges, and internet gateways.
Access the VM:
Connect to the running instance (for example over SSH or RDP) and verify that it is reachable.
Monitor the VM:
Monitor the performance of your VM using the provider’s monitoring tools (CloudWatch for AWS, Azure
Monitor for Azure, Stackdriver for GCE).
Snapshot and Backup:
Create snapshots or backups of your VM to ensure data durability and recovery options.
Cost Management:
Monitor and manage costs using budgeting tools and cost analysis provided by the platform.
Clean-Up:
Once the case study experiment is completed, ensure to clean up resources to avoid additional charges.
This means stopping or terminating VMs, deleting storage, and releasing any other resources.
Documentation:
Document the setup process, observations, and performance metrics throughout the case study.
Step 2: In the Google Cloud Console, on the project selector page, select or create a Google Cloud
project.
Step 3: Make sure that billing is enabled for your Google Cloud project.
Step 4: Enable the Cloud Build API.
Step 5: Install the Google Cloud CLI.
Step 6: Create an App Engine application for your Google Cloud project in the
Google Cloud console.
Step 7: Follow the steps shown in the images below to create the project and deploy it directly to the
GCP cloud.
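Once the prerequisites above are in place, the deployment itself reduces to a few gcloud commands run
from the application directory. A sketch, assuming the gcloud CLI is authenticated and PROJECT_ID is
replaced with your own project ID:
gcloud config set project PROJECT_ID
gcloud app create --region=us-central
gcloud app deploy
gcloud app browse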
Conclusion:-
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
________________________
EXPERIMENT NO: 9
Theory:
Deploying web applications on a commercial cloud platform involves hosting your web application on a
cloud provider's infrastructure. Google App Engine and Microsoft Azure are two such platforms that
provide managed services to deploy, manage, and scale web applications.
Google App Engine is a fully managed, serverless platform for developing and hosting web applications
at scale. It automatically scales your app up and down while balancing the load.
Microsoft Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and
mobile back ends. It supports multiple languages, integrates with Azure DevOps, and allows for auto-
scaling and high availability.
Procedure:
Create a Google Cloud Project: Set up a new project in the Google Cloud Console.
Develop Your Application: Code your application in a supported language and prepare it for deployment,
including specifying dependencies and an app configuration file (app.yaml; a minimal example is shown
after this list).
Google Cloud SDK: Install the Google Cloud SDK on your local machine, which provides you with the
command-line tools to deploy your application.
Use the gcloud app deploy command to deploy your application to App Engine.
Configure routing with dispatch.yaml if necessary.
Access Your Application: After deployment, access your application via the URL provided by App
Engine.
Monitor and Manage: Use Google Cloud’s operations suite to monitor performance, set alerts, and
manage traffic splitting.
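The app.yaml mentioned above can be very small for a simple application. A minimal sketch, assuming the
App Engine standard environment with a Python runtime:
# app.yaml
runtime: python39
Running gcloud app deploy from the directory that contains app.yaml then uploads and deploys the
application.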
Create an Azure Account: Sign up for an Azure account and set up an Azure subscription.
Develop Your Application: Build your application using a supported programming language and tools
like Visual Studio or VS Code with Azure extensions.
Azure Portal: Navigate to the Azure Portal and create an Azure App Service resource.
Deploy directly from your IDE or use Azure CLI with commands like az webapp up.
Alternatively, set up continuous deployment from a Git repository or Azure DevOps.
Configure Application Settings: Adjust your application’s settings, connection strings, and scaling
options within the Azure portal.
Access Your Application: Visit your application's URL, provided in the App Service overview in the
Azure portal.
Monitor and Manage: Utilize Azure Monitor to observe your app's health and performance.
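For the Azure path, the az webapp up command mentioned earlier can create the resource group, App
Service plan and web app in one step when run from the project folder. A sketch, assuming the Azure CLI
is installed and logged in; the app name, resource group and region are placeholders:
az login
az webapp up --name my-demo-webapp-12345 --resource-group demo-rg --location eastus --sku F1
az webapp browse --name my-demo-webapp-12345 --resource-group demo-rg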
Step 2: Insert a desired name in the Virtual Machine Name option, as shown in the image below.
Step 3: Select the maximum disk size and splitting options and click Next, as shown in the image below.
Step 5: After the virtual machine is installed, click "Power on this virtual machine".
Step 6: After the virtual machine starts, we can proceed with the selected operating system or software
we want to work on.
Conclusion:-
____________________________________________________________________________________
____________________________________________________________________________________
____________________________________________________________________________________
________________________