OpenstackDeployment GoogleDocs
3 authors:
Vidya Gopalakrishnarao
Frankfurt University of Applied Sciences
All content following this page was uploaded by Jathin Sreenivas on 17 January 2022.
Table of Contents
Introduction
OpenStack Architecture
Deployment Instructions
Prerequisites
Install Microstack
Setup Control Node
Setup Compute Node
Setup Multi-node Cluster
Login
Enable/Disable Microstack
Instance Creation
Image Creation
Instance Creation
Security Group
Enabling Internet in the instance
SSH to the instance
OpenStack Architecture
Nova
Nova provides the OpenStack compute service. It supports creating virtual machines and, through Ironic, bare-metal
servers. It runs as a set of daemons on top of existing Linux servers to provide that service.
Cinder
Cinder is the OpenStack block storage service, providing volumes to Nova virtual machines, Ironic bare-metal
hosts, and containers. Cinder is designed to be fault-tolerant and recoverable, and it follows open standards.
Neutron
Neutron provides the OpenStack network connectivity service between interface devices (such as vNICs) managed
by other OpenStack services such as Nova. It implements the Neutron API.
Keystone
Keystone is the identity service used by OpenStack. It provides API client authentication, service discovery, and
distributed multi-tenant authorization by implementing OpenStack’s Identity API.
Glance
Glance is the OpenStack image service, which enables users to discover, register, and retrieve virtual machine
data assets that are meant to be used with other services; currently this includes images.
Horizon
Horizon is the OpenStack dashboard, a web-based graphical interface to OpenStack services such as Nova, Swift,
and Keystone, through which users manage OpenStack.
Architecture
The architecture shown below can be achieved by following the deployment instructions provided in the
document. Here two physical machines are used to host three virtual machines, where one of the VMs will
act as the control node of OpenStack and the other two as compute nodes, thereby achieving multi-node
deployment of OpenStack.
Fig 2. Architecture
There are various tools available to deploy an OpenStack infrastructure like Devstack[3], Packstack[4],
Microstack[5]. This document describes the installation using Microstack.
"MicroStack provides a single or multi-node OpenStack deployment which can run directly on your workstation.
Although made for developers to prototype and test, it is also suitable for edge, IoT, and appliances. MicroStack
is an OpenStack in a snap which means that all OpenStack services and supporting libraries are packaged
together in a single package which can be easily installed, upgraded or removed. MicroStack includes all key
OpenStack components: Keystone, Nova, Neutron, Glance, and Cinder." [2]
Prerequisites
To install OpenStack, the following prerequisites must be satisfied for each node:
● A system with 16GB RAM
● Multi-core processor
● At least 50GB of free disk space
● VMware
● Ubuntu 18.04 LTS or later (https://ubuntu.com/download/desktop)
Create three virtual machines that satisfy the prerequisites mentioned above. The VMs used here are as follows:
● control-vm
● compute1-vm
● compute2-vm
Note: All the VMs must have an internet connection; for this, each VM's network must be configured as a bridged
network.
There are various snap channels available (for example, --devmode --beta), but this installation uses the edge
channel.
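The install commands themselves did not survive this export. Based on Canonical's MicroStack tutorial, a plausible control-node setup is sketched below; the channel flags and the RUN_SETUP guard (which keeps the snippet inert unless explicitly armed on the VM) are assumptions, not taken from the original document.

```shell
# Install the MicroStack snap from the edge channel and initialize this
# VM as the control node. Set RUN_SETUP=1 on control-vm before running.
if [ "${RUN_SETUP:-0}" = "1" ]; then
    sudo snap install microstack --devmode --edge
    sudo microstack init --auto --control
fi
```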
Output: Use the following connection string to add a new compute node to the cluster
(valid for 20 minutes from this moment): <connection-token-string>
Use the <connection-token-string> in the following command and execute it in the compute1-vm.
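The exact commands were lost in this export. Based on MicroStack's clustering workflow, the token exchange plausibly looks like this (command names are assumptions drawn from the MicroStack documentation; the guard keeps the snippet inert off the VMs):

```shell
# On control-vm: print a connection token for a new compute node.
# On compute1-vm: join the cluster using that token.
if [ "${RUN_SETUP:-0}" = "1" ]; then
    sudo microstack add-compute-node                                  # control-vm
    sudo microstack init --auto --compute --join "$CONNECTION_TOKEN"  # compute1-vm
fi
```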
To check that Microstack is initialized, open http://localhost in the browser of the control-vm to view the login
page of OpenStack.
To view the OpenStack dashboard from the compute nodes, note the <ip-address> of the control node using ifconfig
and open http://<ip-address> in the browsers of compute1-vm and compute2-vm.
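The command that prints the password below was lost in export; MicroStack stores the generated admin credentials in snap configuration, so it can plausibly be read as follows (guarded so it does nothing off the VMs):

```shell
# Print the generated keystone password for the admin user.
if [ "${RUN_SETUP:-0}" = "1" ]; then
    sudo snap get microstack config.credentials.keystone-password
fi
```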
Output: <password>
Open OpenStack in a browser as explained above, then log in as admin using <password> as the password.
Enable/Disable Microstack
Before shutting down the VMs, disable Microstack by executing the following command; it saves the changes made
in OpenStack before disabling.
This will bring up the microstack with the previously saved state.
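The snap commands themselves were lost in export; disabling and enabling MicroStack is plausibly done with the standard snap subcommands (guarded as above):

```shell
# Disable before shutting the VM down; enable after boot to bring
# MicroStack back up with the previously saved state.
if [ "${RUN_SETUP:-0}" = "1" ]; then
    sudo snap disable microstack
    sudo snap enable microstack
fi
```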
1. Image Creation
Firstly, the image of the OS for the virtual machine must be uploaded. To find virtual machine images that
work on OpenStack, visit https://docs.openstack.org/image-guide/obtain-images.html. The image can be
uploaded to OpenStack in two ways:
I. Download Image: Execute the following command in any of the VMs to download
bionic-server-cloudimg-amd64.img.
$ wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
II. Create Image: Execute the following command to create the image in OpenStack:
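The create command itself is missing from this export. A plausible form with the standard OpenStack client flags is shown here; note that the sample output that follows matches `image list` rather than `image create` (the guard skips the block where the MicroStack CLI is absent):

```shell
# Upload the downloaded image to Glance, then list the registered images.
if command -v microstack.openstack >/dev/null 2>&1; then
    microstack.openstack image create bionic \
        --file bionic-server-cloudimg-amd64.img \
        --disk-format qcow2 --container-format bare --public
    microstack.openstack image list
fi
```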
Sample Output:
+--------------------------------------+---------+--------+
| ID | Name | Status |
+--------------------------------------+---------+--------+
| 54627c07-61c9-4185-b2ad-f8cea7be4aa5 | bionic | active |
| cbdfad7c-a5be-4335-93bd-c7be28c87a0c | cirros | active |
+--------------------------------------+---------+--------+
● Now enter the image details as shown in the figure below and click Create Image
Troubleshoot: In case there is an error while creating an image: “Request entity too large, nginx”. This is
caused by nginx limiting the size of the file being uploaded. It can be corrected by increasing the limit in
the nginx.conf file. Follow the steps below on the control-vm:
$ sudo vi /var/snap/microstack/common/etc/nginx/snap/nginx.conf
client_max_body_size 32768M;
That increases the maximum file size to 32GB. After the file is saved, restart Microstack (or disable and
re-enable it).
III. Image List: Now, in the Images tab under the Compute tab, the bionic image should appear as shown
in the figure below:
2. Instance Creation
An instance of the image can be created in two ways:
I. Create a new Key-pair: Execute the following commands to create a new SSH key, which can then be
used to log in to the instance.
$ ssh-keygen -q -N ""
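The field table below is the output of registering the key with Nova; the registration command was lost, but with the standard OpenStack client it is plausibly (skipped where the MicroStack CLI is absent):

```shell
# Register the public half of the freshly generated key as "mykey".
if command -v microstack.openstack >/dev/null 2>&1; then
    microstack.openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
fi
```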
+------------------+-------------------------------------------------+
| Field | Value |
+------------------+-------------------------------------------------+
| fingerprint      | ab:eb:bc:55:9e:c2:4f:0b:ad:f0:62:7b:02:f0:89:e7 |
| name             | mykey                                           |
| user_id          | cd22ff23ece040bca3d12639abddd726                |
+------------------+-------------------------------------------------+
III. Create the instance: Execute the following command to create the instance
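The command was lost in export; a plausible reconstruction from the field table below (flavor m1.small, image debian-9-openstack-amd64, key mykey, server name Debianserver; the network name `test` is an assumption from the dashboard steps later in the document):

```shell
# Boot an instance on the test network using the registered key pair.
if command -v microstack.openstack >/dev/null 2>&1; then
    microstack.openstack server create Debianserver \
        --flavor m1.small --image debian-9-openstack-amd64 \
        --network test --key-name mykey
fi
```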
+-------------------------------------+-----------------------------------------------------------------+
| Field | Value |
+-------------------------------------+-----------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | t86Ncrk6GnjS |
| config_drive | |
| created | 2021-01-02T17:46:42Z |
| flavor | m1.small (2) |
| hostId | |
| id | 3017568b-8aa4-44da-8e84-9efc0bf9ee79 |
| image | debian-9-openstack-amd64 (f05a6a5d-0e97-4b5d-8880-9461eedf54bf) |
| key_name | mykey |
| name | Debianserver |
| progress | 0 |
| project_id | df2d2153582a419da31561593ca7a315 |
| properties | |
| security_groups | name='71b2f9f3-07ed-485f-88ba-d80f04c2eb5a' |
| status | BUILD |
| updated | 2021-01-02T17:46:43Z |
| user_id | cd22ff23ece040bca3d12639abddd726 |
| volumes_attached | |
+-------------------------------------+-----------------------------------------------------------------+
I. Create the instance: Follow the steps in the order of the figures below:
● Provide the name and select the zone available and click on next.
● Select “No” for creating new volume as we do not need any volume and select the image from
which an instance needs to be created.
● Select the test network as it is the one to which the instance must be connected.
● Create the key-pair as it will be needed later to login using SSH, by clicking on “Create Key Pair”.
● Copy the keypair and save it in a file for later use and click on done.
● Once a new keypair is created click on “Launch instance” to create the instance and a new
instance will be created.
Once the instance is created, a floating IP has to be allocated to it. The following steps explain how to
associate a floating IP to an instance.
● First create a floating IP by clicking on “Network”, then select “Floating IPs” and then click on
“Allocate IP To Project” on the page.
● Now move back to the Instance tab within the Compute tab. Then click on the drop-down provided for
the instance you want to associate the floating IP with, as shown in the figure. Then click on
the option “Associate Floating IP”.
3. Security Group
While creating the instance a default security group is assigned to the instance. The purpose of the security
group is to handle the traffic and provide security to the instance. The default security group provided by
OpenStack will restrict the traffic to and from the instance.
For this purpose a new security group that allows the traffic flow to and from the instance is created in the
dashboard and assigned to the instance.
3.1. Go to the Network tab within the Project tab. Then select "Security Groups". The following screen will
be displayed as shown in Figure 13.
3.2.2. A new window will open showing the rules available in the created security group. Add
new rules to enable the traffic flow. Click on "Add Rule" as shown in the figure below; a pop-up
window will appear as shown. Create a rule with the following specification:
3.2.3. Move to the "Compute" tab and then select "Instance" tab as shown in figure 21
3.2.4. Select the dropdown at the end of an instance in which the security group has to be
updated as displayed in figure( Instances Security Group). Click on "Edit Security Group".
3.2.5. A popup window will open. Add the security group that is needed from left to right. This
will update the security group for the instance.
Note: These changes do not persist if the system is restarted; the commands must be executed again if the
changes are required after a restart of the control-vm.
To perform this action, use the key-pair file created and saved in the control-vm (explained in Key-pair creation
section) and execute the following command in the control-vm terminal.
For example:
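The SSH command was lost here; using the floating IP allocated earlier (10.20.20.53 in this document) and the default user of Ubuntu cloud images, it plausibly looks like this (the key file name is an assumption; the guard keeps the snippet inert off the VMs):

```shell
# Log in to the instance over its floating IP with the saved private key.
if [ "${RUN_SETUP:-0}" = "1" ]; then
    chmod 600 mykey.pem
    ssh -i mykey.pem ubuntu@10.20.20.53
fi
```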
● Click on the drop down provided on the instance you want to rebuild.
● Select the appropriate image file and click on rebuild. This will take a few minutes; once
completed, the status turns to Active.
1. Service Catalog: OpenStack keystone service catalog allows API clients to dynamically discover and
navigate to cloud services. The service catalog may differ from deployment-to-deployment, user-to-user, and
project-to-project[16]. The service catalog itself is composed of a list of services and each service is
associated with one or more related endpoints. For additional commands -
https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/catalog.html
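The command producing the listing below was lost in export; with the client bundled in the MicroStack snap it is plausibly (skipped where the CLI is absent):

```shell
if command -v microstack.openstack >/dev/null 2>&1; then
    microstack.openstack catalog list
fi
```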
+-----------+-----------+---------------------------------------------------------------------------+
| Name | Type | Endpoints |
+-----------+-----------+---------------------------------------------------------------------------+
| placement | placement | microstack |
| | | admin: http://192.168.0.110:8778 |
| | | microstack |
| | | public: http://192.168.0.110:8778 |
| | | microstack |
| | | internal: http://192.168.0.110:8778 |
| | | |
| nova | compute | microstack |
| | | internal: http://192.168.0.110:8774/v2.1 |
| | | microstack |
| | | admin: http://192.168.0.110:8774/v2.1 |
| | | microstack |
| | | public: http://192.168.0.110:8774/v2.1 |
| | | |
| neutron | network | microstack |
| | | public: http://192.168.0.110:9696 |
| | | microstack |
| | | admin: http://192.168.0.110:9696 |
| | | microstack |
| | | internal: http://192.168.0.110:9696 |
| | | |
| cinderv3 | volumev3 | microstack |
| | | internal: http://192.168.0.110:8776/v3/c2bd9d300b5340b79ef5e7798b6f77a4 |
| | | microstack |
| | | admin: http://192.168.0.110:8776/v3/c2bd9d300b5340b79ef5e7798b6f77a4 |
| | | microstack |
| | | public: http://192.168.0.110:8776/v3/c2bd9d300b5340b79ef5e7798b6f77a4 |
| | | |
| keystone | identity | microstack |
| | | public: http://192.168.0.110:5000/v3/ |
| | | microstack |
2. Compute services[18] - OpenStack Compute is used to host and manage cloud computing systems.
OpenStack Compute interacts with OpenStack Identity for authentication, OpenStack Placement for resource
inventory tracking and selection, OpenStack Image service for disk and server images, and OpenStack
Dashboard for the user and administrative interface. Image access is limited by projects, and by users. For
additional commands - https://docs.openstack.org/nova/latest/admin/services.html
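The listing below is presumably produced by the compute service subcommand (an assumption; skipped where the CLI is absent):

```shell
if command -v microstack.openstack >/dev/null 2>&1; then
    microstack.openstack compute service list
fi
```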
+----+----------------+----------------------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+----------------+----------------------------+----------+---------+-------+----------------------------+
| 3 | nova-conductor | node2 | internal | enabled | up | 2021-01-21T19:26:29.000000 |
| 4 | nova-scheduler | node2 | internal | enabled | up | 2021-01-21T19:26:27.000000 |
| 7 | nova-compute | node2 | nova | enabled | up | 2021-01-21T19:26:27.000000 |
| 9 | nova-compute | controller-virtual-machine | nova | enabled | down | 2021-01-14T03:11:29.000000 |
| 10 | nova-compute | compute-virtual-machine | nova | enabled | up | 2021-01-21T19:26:21.000000 |
| 11 | nova-compute | compute2 | nova | enabled | up | 2021-01-21T19:26:28.000000 |
+----+----------------+----------------------------+----------+---------+-------+----------------------------+
3. Flavors - Flavors[15] define the compute, memory, and storage capacity of nova computing instances. It
specifies the hardware configuration for a server. Execute the following command to list all the flavors. For
additional commands to create and manage flavors - https://docs.openstack.org/nova/latest/user/flavors.html
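The flavor-list command referenced above did not survive the export; it is plausibly:

```shell
if command -v microstack.openstack >/dev/null 2>&1; then
    microstack.openstack flavor list
fi
```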
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | 1 | True |
| 2 | m1.small | 2048 | 20 | 0 | 1 | True |
| 3 | m1.medium | 4096 | 20 | 0 | 2 | True |
| 4 | m1.large | 8192 | 20 | 0 | 4 | True |
| 5 | m1.xlarge | 16384 | 20 | 0 | 8 | True |
+----+-----------+-------+------+-----------+-------+-----------+
4. Floating IPs - The floating IPs allocated to the project can be listed as shown below:
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
| ID                                   | Floating IP Address | Fixed IP Address | Port                                 | Floating Network                     | Project                          |
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
| a3845b2b-5b84-4979-bed6-74e213fd0915 | 10.20.20.53         | 192.168.222.66   | 51b1969b-1de6-4225-8592-bdbc05d51092 | 2d039649-b494-40ef-b02c-028dcc7f2417 | c2bd9d300b5340b79ef5e7798b6f77a4 |
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
5. Hypervisor - OpenStack Compute supports many hypervisors such as KVM, LXC, QEMU etc.[19]
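The command for the hypervisor listing below was lost; with the bundled client it is plausibly:

```shell
if command -v microstack.openstack >/dev/null 2>&1; then
    microstack.openstack hypervisor list
fi
```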
+----+----------------------------+-----------------+---------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+----+----------------------------+-----------------+---------------+-------+
| 1 | node2 | QEMU | 192.168.0.110 | up |
| 2 | controller-virtual-machine | QEMU | 192.168.0.104 | down |
| 3 | compute-virtual-machine | QEMU | 192.168.0.105 | up |
| 4 | compute2 | QEMU | 192.168.0.106 | up |
+----+----------------------------+-----------------+---------------+-------+
6. Image - A virtual machine image is a single file which contains a virtual disk with a bootable operating
system installed on it. The following command retrieves the list of images; to get further details about a
single image, use the openstack image show <image-name> command[20].
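The image-list command referenced above is missing from the export; it is plausibly:

```shell
if command -v microstack.openstack >/dev/null 2>&1; then
    microstack.openstack image list
fi
```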
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 54627c07-61c9-4185-b2ad-f8cea7be4aa5 | bionic | active |
| cbdfad7c-a5be-4335-93bd-c7be28c87a0c | cirros | active |
+--------------------------------------+--------+--------+
7. Keypair - After launching a virtual machine, a key pair has to be injected, which allows SSH access to the
instance. A single key pair can be used for multiple instances that belong to that project. Execute the
following command to list the key pair.
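The key-pair listing command referenced above did not survive the export; it is plausibly:

```shell
if command -v microstack.openstack >/dev/null 2>&1; then
    microstack.openstack keypair list
fi
```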
+------------+-------------------------------------------------+
| Name       | Fingerprint                                     |
+------------+-------------------------------------------------+
| mykey      | ab:eb:bc:55:9e:c2:4f:0b:ad:f0:62:7b:02:f0:89:e7 |
+------------+-------------------------------------------------+
8. Networks - OpenStack Networking handles the creation and management of a virtual networking
infrastructure, including networks, switches, subnets, and routers for devices managed by the OpenStack
Compute service (nova). A network is an isolated Layer 2 networking segment. There are two types of
networks, project and provider networks. Project networks are fully isolated and are not shared with other
projects. Only an OpenStack administrator can create provider networks. Networks can be connected via
routers. Execute the following commands to list the networks. For additional commands to manage networks -
https://docs.openstack.org/python-openstackclient/pike/cli/command-objects/network.html
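The network-list command referenced above is missing; it is plausibly:

```shell
if command -v microstack.openstack >/dev/null 2>&1; then
    microstack.openstack network list
fi
```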
+--------------------------------------+----------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+----------+--------------------------------------+
| 2d039649-b494-40ef-b02c-028dcc7f2417 | external | dfe00b34-077e-49e6-b254-227ed175e522 |
| 9a96c71e-2ea8-4b57-8fce-0ccc9016e319 | test | bfbfc303-0281-4d6d-b501-0da5572eed1a |
+--------------------------------------+----------+--------------------------------------+
9. Security Groups - Security groups are sets of IP filter rules that are applied to all project instances, which
define networking access to the instance. Group rules are project specific; project members can edit the
default rules for their group and add new rule sets.
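The command for the security-group listing below was lost; it is plausibly:

```shell
if command -v microstack.openstack >/dev/null 2>&1; then
    microstack.openstack security group list
fi
```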
+--------------------------------------+-----------------+------------------------+----------------------------------+------+
| ID | Name | Description | Project | Tags |
+--------------------------------------+-----------------+------------------------+----------------------------------+------+
| 9ee20efb-33e6-4a26-9faa-4e906146a713 | default | Default security group | c2bd9d300b5340b79ef5e7798b6f77a4 | [] |
| d8eb33c0-eaf5-4ed9-92f3-0a22e5be7b54 | mySecurityGroup | | c2bd9d300b5340b79ef5e7798b6f77a4 | [] |
| de059903-71ab-416b-970c-08f8681118d9 | default | Default security group | d6d822f4ef67469fbf64bc4b8379461c | [] |
+--------------------------------------+-----------------+------------------------+----------------------------------+------+
10. Server - A server[14] is a virtual machine (VM) instance, a physical machine or a container. Execute the
following command to view the list of servers. For additional commands to create and manage servers -
https://docs.openstack.org/python-openstackclient/pike/cli/command-objects/server.html#server-list
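The server-list command referenced above did not survive the export; it is plausibly:

```shell
if command -v microstack.openstack >/dev/null 2>&1; then
    microstack.openstack server list
fi
```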
+--------------------------------------+--------+---------+----------------------------------+--------+----------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+--------+---------+----------------------------------+--------+----------+
| 35479735-8b26-447d-a07f-d97c65ff0397 | bionic | SHUTOFF | test=192.168.222.66, 10.20.20.53 | bionic | m1.small |
+--------------------------------------+--------+---------+----------------------------------+--------+----------+
V. API
● Environment Variables:
$ export OS_PROJECT_NAME=admin
$ export OS_USERNAME=admin
$ export OS_PASSWORD=<password>
$ export OS_USER_DOMAIN_NAME=Default
$ export OS_PROJECT_DOMAIN_NAME=Default
$ export OS_AUTH_URL=http://192.168.64.2:5000/v3/
$ export OS_HORIZON_URL=http://192.168.64.2:8774/v2.1/
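The curl command producing the trace below was lost in export. The Identity v3 password-authentication request has the following shape (the payload schema comes from the Keystone API; the request only fires when OS_AUTH_URL is actually set, so the snippet is inert elsewhere):

```shell
# POST a password-auth request to Keystone; -v prints a trace like the
# one shown below. The trailing slash in OS_AUTH_URL produces the
# doubled slash visible in the trace (/v3//auth/tokens).
payload='{"auth":{"identity":{"methods":["password"],"password":{"user":{"name":"admin","domain":{"name":"Default"},"password":"'"${OS_PASSWORD:-secret}"'"}}}}}'
if [ -n "${OS_AUTH_URL:-}" ]; then
    curl -v -X POST "$OS_AUTH_URL/auth/tokens?nocatalog" \
         -H "Content-Type: application/json" -d "$payload"
fi
```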
Output:
* Trying 192.168.0.110:5000...
* TCP_NODELAY set
* Connected to 192.168.0.110 (192.168.0.110) port 5000 (#0)
> POST /v3//auth/tokens?nocatalog HTTP/1.1
> Host: 192.168.0.110:5000
> User-Agent: curl/7.68.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 252
>
} [252 bytes data]
* upload completely sent off: 252 out of 252 bytes
* Mark bundle as not supporting multiuse
< HTTP/1.1 201 CREATED
< Server: nginx/1.19.0
< Date: Thu, 21 Jan 2021 19:29:53 GMT
● Copy the X-Subject-Token from the response header and export it as an environment variable:
$ export OS_TOKEN=<X-Subject-Token>
$ export OS_PROJECT_ID=<project-id>
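The flavors listing below comes from the Nova API; with the exported variables, the request is plausibly (the host and port match the endpoints shown in the service catalog above; inert until OS_TOKEN is set):

```shell
# GET the flavors collection, authenticating with the issued token.
if [ -n "${OS_TOKEN:-}" ]; then
    curl -s -H "X-Auth-Token: $OS_TOKEN" \
         "http://192.168.0.110:8774/v2.1/$OS_PROJECT_ID/flavors"
fi
```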
Output:
{
"flavors": [
{
"id": "1",
"name": "m1.tiny",
"links": [
{
"rel": "self",
"href": "http://192.168.0.110:8774/v2.1/c2bd9d300b5340b79ef5e7798b6f77a4/flavors/1"
},
{
"rel": "bookmark",
"href": "http://192.168.0.110:8774/c2bd9d300b5340b79ef5e7798b6f77a4/flavors/1"
}
]
},
{
"id": "2",
"name": "m1.small",
"links": [
{
"rel": "self",
"href": "http://192.168.0.110:8774/v2.1/c2bd9d300b5340b79ef5e7798b6f77a4/flavors/2"
},
{
"rel": "bookmark",
"href": "http://192.168.0.110:8774/c2bd9d300b5340b79ef5e7798b6f77a4/flavors/2"
}
]
},
{
"id": "3",
"name": "m1.medium",
"links": [
{
"rel": "self",
"href": "http://192.168.0.110:8774/v2.1/c2bd9d300b5340b79ef5e7798b6f77a4/flavors/3"
},
{
"rel": "bookmark",
"href": "http://192.168.0.110:8774/c2bd9d300b5340b79ef5e7798b6f77a4/flavors/3"
}
]
},
{
"id": "4",
"name": "m1.large",
"links": [
{
Output:
{
"images": [
{
"id": "54627c07-61c9-4185-b2ad-f8cea7be4aa5",
"name": "bionic",
"links": [
{
"rel": "self",
"href": "http://192.168.0.110:8774/v2.1/c2bd9d300b5340b79ef5e7798b6f77a4/images/54627c07-61c9-4185-b2ad-f8cea7be4aa5"
},
{
"rel": "bookmark",
"href": "http://192.168.0.110:8774/c2bd9d300b5340b79ef5e7798b6f77a4/images/54627c07-61c9-4185-b2ad-f8cea7be4aa5"
},
{
"rel": "alternate",
"type": "application/vnd.openstack.image",
"href": "http://192.168.0.110:9292/images/54627c07-61c9-4185-b2ad-f8cea7be4aa5"
}
]
},
{
"id": "cbdfad7c-a5be-4335-93bd-c7be28c87a0c",
"name": "cirros",
"links": [
{
"rel": "self",
Output:
{
"servers": [
{
"id": "35479735-8b26-447d-a07f-d97c65ff0397",
"name": "bionic",
"links": [
{
"rel": "self",
"href": "http://192.168.0.110:8774/v2.1/c2bd9d300b5340b79ef5e7798b6f77a4/servers/35479735-8b26-447d-a07f-d97c65ff0397"
},
{
"rel": "bookmark",
"href": "http://192.168.0.110:8774/c2bd9d300b5340b79ef5e7798b6f77a4/servers/35479735-8b26-447d-a07f-d97c65ff0397"
}
]
}
]
}
2. Install pip3 for Python 3. Before installing pip3, update Ubuntu's package index by running the following command:
$ sudo apt-get update
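The pip3 install command itself is missing from this export; on Ubuntu it is plausibly (guarded as before so it is inert off the VMs):

```shell
# Install pip for Python 3 from the Ubuntu archive.
if [ "${RUN_SETUP:-0}" = "1" ]; then
    sudo apt-get install -y python3-pip
fi
```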
$ cd SchedulingSimulation/SchedulingSimulator/
6. To create a virtual env, the tool has to find the python3 binary, so execute this to find the python3 path:
$ which python3
7. Now execute the below command using the python3 path obtained in the previous step.
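Step 7's command is missing; with the virtualenv tool and the path printed by `which python3`, it plausibly reads as follows (the environment name `venv` is an assumption):

```shell
# Create a virtual environment bound to the system python3, then activate it.
if [ "${RUN_SETUP:-0}" = "1" ]; then
    virtualenv -p /usr/bin/python3 venv
    . venv/bin/activate
fi
```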
8. Install django
$ cd SchedulingSimulator
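The install command for step 8 is missing; inside the activated environment it is plausibly:

```shell
# Install Django into the active virtual environment.
if [ "${RUN_SETUP:-0}" = "1" ]; then
    pip3 install django
fi
```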
11. Execute manage.py to start the server. Note that port 8000 must be added to the security group
when creating a new security group.
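Step 11's command is missing; the standard Django development-server invocation, bound to all interfaces so that the security-group rule for port 8000 applies, is plausibly:

```shell
# Start the Django development server on port 8000.
if [ "${RUN_SETUP:-0}" = "1" ]; then
    python3 manage.py runserver 0.0.0.0:8000
fi
```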
VII. Conclusion
To conclude, OpenStack is a capable open-source Infrastructure as a Service (IaaS) platform that offers great
potential for scalability by allowing a large number of interconnected nodes to provide the necessary services.
It also provides flexibility through modular components that interact to form the final infrastructure; these
components can be added or removed as the need arises. This document shows the deployment of OpenStack using
Microstack, through which one can easily deploy the infrastructure. Microstack deploys OpenStack with minimal
system requirements and takes over the burden of configuring OpenStack and its network before deployment. The
main intention of Microstack is to provide an OpenStack environment on a developer's system for testing or
development purposes, and also to support IoT applications. Microstack is developed by Canonical and only works
on Ubuntu. There are other deployment tools, such as Devstack, which is provided by the OpenStack project; its
downside is that deployment takes a considerable amount of time, the configuration must be done manually, and
it requires a highly capable system.
VIII. References
[1] “OpenStack: Open Source Cloud Computing Infrastructure” - https://www.openstack.org/, Accessed On:
29/01/2021
[2] “Microstack Overview” - https://ubuntu.com/tutorials/microstack-get-started#1-overview/, Accessed On:
29/01/2021
[3] “OpenStack Docs: DevStack Overview” - https://docs.openstack.org/devstack/latest/, Accessed On:
29/01/2021
[4] “Packstack — RDO” - https://www.rdoproject.org/install/packstack/, Accessed On: 29/01/2021
[5] “Single-node OpenStack deployment” - https://ubuntu.com/openstack/install#single-node-deployment/,
Accessed On: 29/01/2021
[6] “Download Ubuntu Desktop” - https://ubuntu.com/download/desktop/, Accessed On: 29/01/2021
[7] “Get Ubuntu Server” - https://ubuntu.com/download/server/, Accessed On: 29/01/2021
[8] “Fedora” - https://getfedora.org/, Accessed On: 29/01/2021
[9] “CentOS Download” - https://www.centos.org/download/, Accessed On: 29/01/2021
[10]“openSUSE TOOLS” - https://www.opensuse.org/, Accessed On: 29/01/2021
[11] “Scheduling Simulator Codebase Github” - https://github.com/bhatvineeth/SchedulingSimulation/,
Accessed On: 29/01/2021
[12]“Scheduling Simulator Report” -
https://github.com/bhatvineeth/SchedulingSimulation/blob/master/Documentation/Paper/Scheduling_Sim
ulator.pdf, Accessed On: 29/01/2021
[13]“Snap” - https://snapcraft.io/, Accessed On: 29/01/2021