IoT Lecture Notes 7th Sem CSE Copy 3
IoT Definition:
The IoT can be defined in two ways, based on
• existing technology
• infrastructure
Definition of IoT based on existing technology: IoT is a new revolution of the internet, driven by advances in sensor networks, mobile devices, wireless communication, networking and cloud technologies.
Definition of IoT based on infrastructure: IoT is a dynamic global network infrastructure of physical and virtual objects having unique identities, which are embedded with software, sensors, actuators, electronics and network connectivity to facilitate intelligent applications by collecting and exchanging data.
Goal of IoT:
The main goal of IoT is to configure, control and network to the internet the devices or things that are traditionally not associated with the internet, e.g. thermostats, utility meters, Bluetooth-connected headsets, irrigation pumps and sensors, or the control circuits for an electric car's engine. This makes energy, logistics, industrial control, retail, agriculture and many other domains smarter.
Characteristics of IoT
• Dynamic & Self-Adapting
• Self-Configuring
• Interoperable Communication Protocols
• Unique Identity
• Integrated into Information Network
Dynamic and Self-Adapting –
IoT devices should dynamically adapt themselves to changing contexts and scenarios. Consider a camera meant for surveillance: it should be adaptable to work in different conditions and different light situations (morning, afternoon, night).
Self-Configuring:
I. IoT devices can upgrade their software with minimal user intervention whenever they are connected to the internet.
II. They can also set up the network, i.e. a new device can be easily added to an existing network. For ex: whenever free wifi access is available, a device can connect to it easily.
Interoperable Communication:
IoT allows devices that differ in architecture to communicate with each other as well as with different networks. For ex: an MI phone is able to control a smart AC and smart TV from different manufacturers.
Unique Identity:
I. The devices which are connected to the internet have unique identities, i.e. IP addresses, through which they can be identified throughout the network.
II. IoT devices have intelligent interfaces which allow them to communicate with users and adapt to the environmental context.
III. They also allow users to query the devices, monitor their status, and control them remotely, in association with the control, configuration and management infrastructure.
Integrated into Information Network:
I. IoT devices are connected to the network to share information with other connected devices, and can be discovered dynamically in the network by other devices. For ex: if a device has wifi connectivity, it will be visible to other nearby devices with wifi connectivity.
II. The device's SSID (Service Set Identifier) is visible throughout the network. Because of this, the network is also called an information network.
III. IoT devices become smarter due to the collective intelligence of the individual devices in collaboration with the information network. For ex: in a weather monitoring system, the information collected from different monitoring nodes (sensors, Arduino devices) can be aggregated and analysed to predict the weather.
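The aggregation step in the weather-monitoring example above can be sketched in Python. This is a minimal illustration only: the node names, sensor types and readings are hypothetical, not from the notes.

```python
# Minimal sketch: combine temperature/humidity readings collected from
# several hypothetical WSN monitoring nodes into one averaged summary.
from statistics import mean

def aggregate(readings):
    """Group per-node readings by sensor type and average each group."""
    summary = {}
    for node, values in readings.items():
        for sensor, value in values.items():
            summary.setdefault(sensor, []).append(value)
    return {sensor: round(mean(vals), 2) for sensor, vals in summary.items()}

readings = {
    "node-1": {"temperature": 21.5, "humidity": 60},
    "node-2": {"temperature": 22.1, "humidity": 58},
    "node-3": {"temperature": 21.9, "humidity": 61},
}
print(aggregate(readings))  # averaged values across all nodes
```

A real system would receive these readings over the network (e.g. from ZigBee nodes via a coordinator) rather than from an in-memory dictionary.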
Physical Design of IoT
Things in IoT:
I. In the Internet of Things, "things" refers to the IoT devices, which have unique identities and can perform remote sensing, actuation and monitoring (ex: combinations of sensors, actuators, Arduino boards, relays and non-IoT devices).
II. The IoT devices can share information with, as well as collect information from, other connected devices and applications (directly or indirectly).
III. They can process the data locally or in the cloud to find greater insights and put them into action, subject to temporal and space constraints (i.e. memory, processing capability, communication latency and speed, and deadlines).
IV. IoT devices can be of varied types, for ex: wearable sensors, smart watches, LED lights, automobiles and industrial machines.
Generic block diagram of an IoT Device
• An IoT device may consist of several interfaces for connections to other devices, both wired and wireless, as well as audio/video interfaces.
Logical Design of IoT
Logical design of an IoT system refers to an abstract representation of the entities and processes without going into the low-level specifics of the implementation.
An IoT system comprises a number of functional blocks that provide the system with capabilities for identification, sensing, actuation, communication, and management.
Device:
As described under "Things in IoT" above, IoT devices have unique identities and provide remote sensing, actuation and monitoring capabilities; they can share and collect information, process data locally or in the cloud, and come in varied types.
Communications: This refers to the various communication protocols that allow different devices to communicate with each other by sharing information. It also enables interoperability among different devices.
Services: IoT system provides various services such as device monitoring, device
control services, data publishing services, device discovery services.
Management: Various management functions to govern the IoT system.
Security: It secures the IoT system by providing authentication, authorization,
message and content integrity and data security.
Application:
I. IoT applications provide an interface that the users can use to control and
monitor various aspects of the IoT system.
II. It also allows viewing the system status and analysing the processed data.
IoT communication model
To provide communication among various IoT devices, there are various communication models, built on the following constraints:
✓ Client-Server ✓ Stateless ✓ Cache-able
✓ Layered System ✓ Uniform Interface ✓ Code on Demand ✓ Scalability
Client-Server: The principle behind the client-server constraint is separation of concerns. Ex: the server is concerned with storage of data, which the client need not worry about; the client is concerned with the user interface, which the server need not worry about. Due to this separation, client and server can be developed and updated independently.
Stateless: Each request from the client to the server must contain all the information necessary to understand the request.
Cache-able: This requires that data within a response to a request be implicitly or explicitly labelled as cache-able or non-cache-able. Cache-able data can be stored on the client side and reused when requested the next time, minimising response time and increasing efficiency and scalability.
Layered System: This constraint limits the behaviour of components: a component cannot see beyond the immediate layer with which it interacts. Ex: a client cannot tell whether it is connected directly to the end server or to an intermediary. This improves scalability by allowing intermediaries to respond to requests instead of the end server, without the client having to do anything differently.
Uniform interface: The method of communication between a client and a server must be uniform.
Code on demand: Servers can provide executable codes or scripts for clients to execute in their context.
Scalability: It supports both horizontal and vertical scalability. As the model is stateless, scalability is easier to implement.
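The stateless constraint above can be sketched with a toy in-memory request handler. This is an illustration only: the handler, paths and token are hypothetical, not a real web framework's API.

```python
# Sketch of the stateless constraint: every request carries everything the
# server needs (method, path, auth token) -- the server keeps no session.
def handle_request(request):
    """Process one self-contained request dict; no state survives the call."""
    if request.get("token") != "secret":      # auth travels with every request
        return {"status": 401, "body": "unauthorized"}
    if request["method"] == "GET" and request["path"] == "/status":
        return {"status": 200, "body": "ok"}
    return {"status": 404, "body": "not found"}

# The same request always yields the same answer -- no server-side state.
print(handle_request({"method": "GET", "path": "/status", "token": "secret"}))
```

Because the handler holds no session, any replica of the server can answer any request, which is exactly why statelessness makes horizontal scaling easier.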
WebSocket-based Communication APIs
WebSocket APIs help in designing web services and web APIs. In WebSocket communication, the client first sets up a connection with the server. This request is sent over HTTP, and the server interprets it as an upgrade request (the WebSocket handshake). The server responds to this handshake only if it supports the WebSocket protocol; if it does, the client and server can then send messages to each other in full-duplex mode.
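The handshake described above has a concrete, standardized core. Per RFC 6455, the server proves it understands the WebSocket protocol by hashing the client's `Sec-WebSocket-Key` header with a fixed GUID and returning the result in `Sec-WebSocket-Accept`:

```python
# Computing the Sec-WebSocket-Accept value of the WebSocket handshake
# (RFC 6455): SHA-1 of the client's key concatenated with a fixed GUID,
# then base64-encoded.
import base64
import hashlib

GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed by RFC 6455

def accept_value(client_key):
    """Derive the server's handshake response value from the client's key."""
    digest = hashlib.sha1((client_key + GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# The example key used in RFC 6455 itself:
print(accept_value("dGhlIHNhbXBsZSBub25jZQ=="))
# -> s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Only after the client verifies this value does the connection switch from HTTP to full-duplex WebSocket framing.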
Difference between REST and WebSocket
• REST is request–response: the client sends a request and the server returns a response, with each request carrying all the information needed to process it.
• WebSocket is full-duplex: after the initial handshake, a single persistent connection stays open and either side can send messages at any time.
• REST is stateless; a WebSocket connection is stateful for its lifetime.
IoT enabling technologies
The enabling technologies that cooperate with IoT are as follows.
• Wireless sensor networks
• Cloud computing
• Big Data analytics
• Embedded systems
• Communication protocols
Wireless Sensor Networks
1. A wireless sensor network (WSN) comprises distributed devices with wireless sensors. These devices are used to monitor the environment and physical conditions. Since all the nodes are wireless, they communicate with each other through wifi or Bluetooth.
2. Sensors are attached to end nodes; a router can also act as an end node.
3. Routers are responsible for routing the data packets from end nodes to the coordinator node. The coordinator node connects the WSN to the internet; it can be another Arduino, a Raspberry Pi or any other DIY IoT device.
4. ZigBee, a wireless module based on IEEE 802.15.4, operates at the 2.4 GHz frequency. It offers data rates up to 250 kbit/s and ranges from 10 to 100 metres depending upon power output and environmental conditions. In a WSN the devices can reconfigure themselves, i.e. new nodes can be added to the network and software can be updated automatically whenever the nodes are connected to the internet.
5. Examples of wireless sensor networks: weather monitoring systems, indoor air quality monitoring, soil moisture monitoring, surveillance systems, smart grids, and machine prognosis and diagnosis.
Cloud Computing
1. It is an emerging technology which enables on-demand network access to computing
resources like network servers, storage, applications and services that can be rapidly
provisioned and released.
2. On demand: we invoke cloud services only when we need them; they are not a permanent part of the IT infrastructure.
3. Pay as you go model: You pay for the cloud services when you use them, either for the
short period of time or longer duration (for cloud based storage).
4. Cloud provides various services such as
i. IAAS: Infrastructure as a service
ii. PAAS: Platform as a service
iii. SAAS: Software as a service
IAAS: Instead of creating a server room, we hire infrastructure from a cloud service provider. Here the user does not use local computing, storage and processing resources, but rather third-party virtual machines, virtual storage, servers and networking. The client can deploy an operating system (OS) and applications of their own choice, and can start, stop, configure and manage the virtual machine instances and virtual storage.
PAAS: Users can develop and deploy applications. For ex: we use various online editors to write code, like the online Arduino IDE, C IDEs, APIs and software libraries; here we don't need to install anything. The cloud service provider manages the servers, network, OS and storage, while the users develop, deploy, configure and manage applications on the cloud infrastructure.
SAAS: It provides a complete software application, or the user interface to the application itself. The user is not concerned with the underlying cloud architecture; only the service provider is responsible for it. It is platform independent and can be accessed from various client devices such as workstations, laptops, tablets and smartphones running different OSes. Ex: online software such as online image converters, doc converters etc.
Big data analytics
Big data refers to large amounts of data which cannot be stored, processed and analysed using traditional databases (like Oracle, MySQL) and traditional processing tools. In big data analytics, "big" refers to the 5 Vs.
• Volume
• Velocity
• Variety
• Veracity
• Value
Volume: volume refers to the massive amount of data generated from the IoT systems. There is no
threshold value for generated data. It is difficult to store, process and analyse using traditional database
and processing tools. Ex: The volume of data generated by modern IT, industrial and healthcare system.
Velocity: The rate at which the data is generated from the IoT system. This is the primary reason for the
exponential growth of data. Velocity refers to how fast the data is generated and how frequently it varies.
Ex: Modern IT, industrial and other systems like social networking sites are generating data at increasingly
higher speed.
Variety: Variety refers to different forms of data. Since IoT spans various domains, various types of data are generated from the different IoT domains; such heterogeneous data is sometimes called sparse data. It includes text, audio, video etc., and falls into three categories:
✓ structured
✓ semi structured
✓ unstructured
Structured data: Data which has a fixed format for storage is known as structured data. The data stored in databases like Oracle or MySQL is an example of structured data; with a simple query, data can be retrieved from the database.
Semi-structured data: Data which does not have a fixed storage format but uses elements and tags through which it can be analysed easily is known as semi-structured data. Ex: HTML, XML, JSON data.
Unstructured data: Data which has no fixed format at all. It is difficult to store and analyse, and often can be analysed only after conversion into structured data. Ex: audio, video (gif, audio with lyrics), text (containing special symbols).
Veracity: Veracity refers to data that is in doubt. Sometimes it is very difficult to trust the data stored in a database, due to typing errors or corrupted storage or data.
Value: Big data is only worth the effort if we can turn it into value, i.e. find greater insights from it so that we can perform some action to get the desired output. This is beneficial for the organisation; otherwise the data has no use.
Embedded Systems
An embedded system is a computer system with hardware and software embedded to perform a specific task.
Communication Protocols
Communication protocols allow devices to exchange data over the network. These protocols define the data exchange format, data encoding, addressing schemes for devices and the routing of packets from source to destination. They also include sequence control, flow control and retransmission of lost packets.
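The sequence control and retransmission mentioned above can be illustrated with a simplified stop-and-wait scheme. This is an in-memory sketch only: the channel function and packet contents are hypothetical, not any real protocol's API.

```python
# Sketch of sequence control + retransmission: a stop-and-wait sender
# resends each numbered packet until the channel acknowledges it.
def send_reliably(packets, channel, max_tries=3):
    """Deliver packets in order; retransmit when an ack is lost (None)."""
    delivered, seq = [], 0
    for data in packets:
        for attempt in range(max_tries):
            ack = channel(seq, data)          # returns seq on success, None if lost
            if ack == seq:
                delivered.append(data)
                break
        else:
            raise TimeoutError(f"packet {seq} lost after {max_tries} tries")
        seq += 1
    return delivered

drops = {0}                                   # first transmission of packet 0 is lost
def lossy_channel(seq, data):
    if seq in drops:
        drops.discard(seq)                    # lose it only once
        return None
    return seq

print(send_reliably(["a", "b"], lossy_channel))  # ['a', 'b'] despite the drop
```

Real protocols add timeouts, sliding windows and checksums on top of this basic retransmit loop.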
IOT Levels and deployment templates
Based upon the number of monitoring nodes used, the type of database used, and the complexity or simplicity of the analysis and computation, there are 6 levels of IoT. Different applications are implemented based on these levels.
An IoT system consists of the following components.
• Device
• Resources
• Controller Service
• Database
• Web Service
• Analysis Component
• Application
Device: The IoT device allows identification, remote sensing, actuating, and remote monitoring capabilities.
Resource: Resources are the software components on the IoT device for accessing, processing and storing sensor information, or for controlling actuators connected to the device. Resources also include the software components that enable network access for the device. For ex: the programs we have written for object detection using an IR sensor, or for finding the distance using an ultrasonic sensor.
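The ultrasonic-distance resource mentioned above boils down to one computation: sound travels at roughly 343 m/s (0.0343 cm/µs), and the echo covers the distance twice (out and back). A sketch of that conversion, with a hypothetical echo duration:

```python
# Converting an ultrasonic sensor's round-trip echo time into distance.
# Sound travels ~0.0343 cm per microsecond; the pulse travels out and back,
# so the one-way distance is half the total path.
def distance_cm(echo_us, speed_cm_per_us=0.0343):
    """Distance to the object from a round-trip echo duration in microseconds."""
    return round(echo_us * speed_cm_per_us / 2, 1)

print(distance_cm(583))   # an echo of ~583 us corresponds to about 10 cm
```

On an actual device, `echo_us` would come from timing the sensor's echo pin; here it is simply a supplied number.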
Controller Service: Controller service is a native service that runs on the device and interacts with the web services. It sends data from the device to the web service and receives commands from the application (via the web services) for controlling the device. For ex: the ESP8266 program, setting of API keys, SSID etc.
Database: Database can be either local or in the cloud and stores the data generated by the IoT device.
Web Service: This acts as an interface between the IoT device, application, database and analysis components. Web services can be implemented using HTTP and REST principles or using the WebSocket protocol.
Analysis Component: The analysis component is responsible for analysing the IoT data and generating results in a form which is easy for the user to understand. Analysis can be performed either locally or in the cloud.
Application: IoT applications provide an interface that the user can use to control and monitor various aspects of the
IoT system.
IoT Level-1
I. It has a single node/device that performs sensing, monitoring, actuation, data storage, analysis and application hosting.
Arduino boards are generally based on microcontrollers from Atmel Corporation like 8, 16 or 32 bit
AVR architecture based microcontrollers.
The important feature of the Arduino boards is the standard connectors. Using these connectors, we
can connect the Arduino board to other devices like LEDs or add-on modules called Shields.
The Arduino boards also include an on-board voltage regulator and crystal oscillator. They also include a USB-to-serial adapter, using which the Arduino board can be programmed over a USB connection.
In order to program the Arduino board, we need to use IDE provided by Arduino. The Arduino IDE is
based on Processing programming language and supports C and C++.
Types of Arduino Boards
UNO is based on the ATmega328P microcontroller. There are two variants of the Arduino UNO: one with a through-hole microcontroller connection and one with a surface-mount type. The through-hole model is beneficial because we can take the chip out in case of any problem and swap in a new one.
Arduino UNO comes with different features and capabilities. As mentioned earlier, the microcontroller used in UNO
is ATmega328P, which is an 8-bit microcontroller based on the AVR architecture.
UNO has 14 digital input – output (I/O) pins which can be used as either input or output by connecting them with
different external devices and components. Out of these 14 pins, 6 pins are capable of producing PWM signal. All
the digital pins operate at 5V and can output a current of 20mA.
Some of the digital I/O pins have special functions which are described below.
Pins 0 and 1 are used for serial communication. They are used to receive and transmit serial data which can be used
in several ways like programming the Arduino board and communicating with the user through serial monitor.
Pins 2 and 3 are used for external interrupts. An external event can be triggered using these pins by detecting low
value, change in value or falling or rising edge on a signal.
As mentioned earlier, 6 of the 14 digital I/O Pins i.e. 3, 5, 6, 9, 10, and 11 can provide 8-bit PWM output.
Pins 10, 11, 12 and 13 (SS, MOSI, MISO AND SCK respectively) are used for SPI communication.
Pin 13 has a built-in LED connected to it. When the pin is HIGH, the LED is turned on and when the pin is LOW, it is
turned off.
Arduino Uno has 6 analog input pins which can provide 10 bits of resolution i.e. 1024 different values. The analog pins
on the Arduino UNO are labelled A0 to A5. By default, all the analog pins can measure from ground to 5V. Arduino
UNO has a feature, where it is possible to change the upper end of the range by using the AREF pin but the value
should be less than 5V.
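The 10-bit resolution described above maps the 0–5 V input range onto 1024 discrete values (0–1023). A sketch of that quantisation in Python (the function name is ours, not an Arduino API):

```python
# Sketch of the UNO's 10-bit ADC mapping: an input voltage between 0 V and
# the reference (default 5 V) is quantised onto 1024 levels, 0..1023.
def adc_reading(voltage, vref=5.0, bits=10):
    """Return the integer reading a 10-bit ADC would report for a voltage."""
    levels = (1 << bits) - 1                  # 1023 for 10 bits
    return round(voltage / vref * levels)

print(adc_reading(5.0))    # 1023 (full scale)
print(adc_reading(2.5))    # 512 (mid scale)
```

Lowering `vref` (as the AREF pin allows) spreads the same 1024 levels over a smaller range, which is how measurement resolution is improved for small signals.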
Additionally, some analog pins have specialized functionality. Pins A4 and A5 are used for I2C communication. There
are different ways in which we can power the Arduino UNO board. The USB cable, which is used to program the
microcontroller, can be used as a source of power.
IoT Operating Systems
An operating system (OS) is essentially the brain and central nervous system of any computing system, including laptops,
servers, smartphones, and sensors and is available in both commercial and open source varieties.
The OS manages all of a system’s hardware and software and allocates all resources, including processing, memory, and
storage.
An IoT operating system is an OS designed for the particular demands and specifications of IoT devices and applications. It
is critical for connectivity, security, networking, storage, remote device management, and other IoT system needs.
Some IoT operating systems also have real-time processing capabilities and are referred to as a real-time operating
system, or RTOS.
Hardware and software cannot function properly without an underlying OS. A computer without an OS is akin to a car without an engine: it simply won't run. The same is true for IoT devices and applications; an IoT OS is necessary for them to work as intended.
An IoT OS enables devices and applications to connect with each other and other systems, such as cloud platforms and
services. The IoT OS also manages the processing power and other resources needed to collect, transmit, and store data.
Eg. TinyOS , Contiki , RIOT, Ubuntu Core, Fuchsia OS, Windows 10 IoT, Tizen, Android Things, OpenWrt, Mbed OS
Huawei LiteOS Overview
● Huawei LiteOS is an IoT-oriented software
platform integrating an IoT operating system
and middleware.
● It is lightweight, with a kernel size of under 10 KB, and consumes very little power: it can run on an AA battery for up to five years.
● It also allows for fast startup and connectivity
and is very secure.
● These capabilities make Huawei LiteOS a
simple yet powerful one-stop software platform
for developers, lowering barriers to entry for
development and shortening time to market.
● Huawei LiteOS provides a unified open-source
API that can be used in IoT domains as diverse
as smart homes, wearables, Internet of
Vehicles (IoV), and intelligent manufacturing.
Key Features
● Lightweight Kernels: Smaller Kernel Size, Lower Power Consumption, and Faster Response.
Partners and third-party developers can quickly develop smart hardware based on Huawei LiteOS,
creating highly competitive products that have fast startup and lower power consumption.
● Sensor Frameworks: Lower Delay, Higher Precision, and Intelligent sensing. Delay has been
reduced by 50%; precision has more than doubled; and simple collection algorithms have been
replaced with intelligent algorithms.
● Connectivity Engine: More Protocols, Wider Connectivity, Intelligent Connection. Connectivity
middleware supports multiple connectivity technologies, such as short-distance, LTE, and NB-IoT. It
provides connectivity technologies for IoT devices corresponding to different protocols and supports
multiple application scenarios, including smart homes, wearables, and industrial Internet. In addition,
Huawei LiteOS provides APIs and service profile definitions at the service layer, helping developers
to develop applications and enabling interoperation between devices.
● Operating Engine: Lighter Frameworks, Better Performance, and Intelligent Applications
○ IoT-Oriented Application Development Framework: Optimizes performance and reduces power consumption by coordinating JS frameworks, JS VMs, and the OS.
○ High Performance and Lightweight VM based on JavaScript: Small-sized ROM with low
memory usage; Provides independent user space and application separation to ensure
application security.
Contiki OS : The Open Source OS for IoT
Contiki is an open source operating system for the Internet of Things. Contiki connects tiny low-cost, low-power
microcontrollers to the Internet. Contiki is a powerful toolbox for building complex wireless systems.
Contiki provides powerful low-power Internet communication. Contiki supports fully standard IPv6 and IPv4,
along with the recent low-power wireless standards: 6lowpan, RPL, CoAP. With Contiki’s ContikiMAC and sleepy
routers, even wireless routers can be battery-operated.
Contiki provides multitasking and a built-in Internet Protocol Suite (TCP/IP stack), yet needs only about 10
kilobytes of random-access memory (RAM) and 30 kilobytes of read-only memory (ROM). A full system,
including a graphical user interface, needs about 30 kilobytes of RAM.
Contiki applications are written in standard C. With the Cooja simulator, Contiki networks can be emulated before being burned into hardware, and Instant Contiki provides an entire development environment in a single download.
Contiki is open source, which means that the source is and always will be available. Contiki may be used in both
commercial and non-commercial systems without restrictions.
Instant Contiki
Instant Contiki is an entire Contiki development environment in a single download. It is an Ubuntu Linux virtual machine that
runs in VMWare player and has Contiki and all the development tools, compilers, and simulators used in Contiki
development installed. Instant Contiki is so convenient that even hardcore Contiki developers use it.
Cooja
Cooja is the Contiki network simulator. Cooja allows large and small networks of Contiki motes to be simulated. Motes can
be emulated at the hardware level, which is slower but allows precise inspection of the system behavior, or at a less
detailed level, which is faster and allows simulation of larger networks.
Communication components in Contiki
uIP and uIPv6.
● There are several advantages of having the Internet protocol stack embedded in the OS. It
allows communication with the existing devices/computers in the Internet.
● For this reason, the IP stack has been trimmed to fit 1kB of RAM and a few kB of ROM, in
comparison with Linux based IP stack that requires 1MB RAM.
● This trimming of memory comes at the cost of reduced throughput.
● uIPv6, developed in cooperation with Cisco, is a certified IPv6 Ready stack. Support for the IP stack along with the TCP and UDP protocols makes it possible for devices running Contiki to directly interact with a web server or send messages to the Internet.
LoWPAN (IEEE 802.15.4) and 6LoWPAN.
● Low-power wireless personal area networks have the characteristics of small packet sizes,
low data rates, low-power devices and large number of devices.
● The IPv6 extension of the same is the 6LoWPAN standard, developed by an IETF working group. To save power, the transmission and receiving periods are short and spaced out, with long periods of device sleep mode in between.
● Also, for power-constraint reasons, the communication is hop-based and short-range.
Functions /Features of Contiki
1. Process and memory management. Contiki supports memory block allocation and standard C-style allocation with malloc(). The concept and implementation of 'protothreads' has been introduced keeping the low system requirements in mind: protothreads are a C-language-based implementation that minimises the overhead of multi-threaded programming.
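Protothreads themselves are C macros, but the run-to-completion, cooperative style they enable can be mimicked with Python generators, purely as an illustration of the scheduling model (the task names here are invented):

```python
# Illustrative sketch of cooperative, protothread-style scheduling:
# each task yields instead of blocking, so one call stack suffices.
def blinker(log):
    while True:
        log.append("led on")
        yield                                  # give up the CPU cooperatively
        log.append("led off")
        yield

def sampler(log):
    while True:
        log.append("read sensor")
        yield

def run(tasks, steps):
    """A tiny cooperative scheduler: resume each task in turn."""
    for _ in range(steps):
        for t in tasks:
            next(t)

log = []
run([blinker(log), sampler(log)], steps=2)
print(log)  # ['led on', 'read sensor', 'led off', 'read sensor']
```

The key property, shared with protothreads, is that no task is preempted: each runs until it voluntarily yields, which keeps per-task memory overhead minimal.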
2. Communication management. Contiki supports both IPv4 and IPv6 stack implementations, which
include TCP, UDP and HTTP protocols with the smallest footprints. The IP stack is standards-compliant.
The OS also supports low-power variants such as 6LoWPAN.
3. File system management. Not all IoT devices have the luxury of persistent storage such as flash, but for devices that do, Contiki provides support through the Coffee flash file system.
TinyOS -an embedded, component-based operating system
TinyOS is an embedded, component-based operating system and platform for low-power wireless devices, such
as those used in wireless sensor networks (WSNs), smart-dust, ubiquitous computing, personal area networks,
building automation, and smart meters. It is written in the programming language nesC, as a set of cooperating
tasks and processes. It began as a collaboration between the University of California, Berkeley, Intel Research,
and Crossbow Technology, was released as free and open-source software under a BSD license, and has since
grown into an international consortium, the TinyOS Alliance.
nesC
nesC (pronounced “NES-see”) is an extension to the C programming language designed to embody the
structuring concepts and execution model of TinyOS. TinyOS is an event-driven operating system designed for
sensor network nodes that have very limited resources (e.g., 8K bytes of program memory, 512 bytes of RAM).
● TinyOS is fully non-blocking: it has one call stack. Thus, all input/output (I/O) operations that last longer than a few
hundred microseconds are asynchronous and have a callback.
● To enable the native compiler to better optimize across call boundaries, TinyOS uses nesC's features to link these
callbacks, called events, statically.
● While being non-blocking enables TinyOS to maintain high concurrency with one stack, it forces programmers to
write complex logic by stitching together many small event handlers.
● To support larger computations, TinyOS provides tasks, which are similar to a Deferred Procedure Call and interrupt
handler bottom halves.
● A TinyOS component can post a task, which the OS will schedule to run later. Tasks are non-preemptive and run in
first in, first out order.
Advantages of TinyOS
Small size – The source code of TinyOS is very small, and the code is optimized to run on any specific device. Due to the smaller code, devices run fast and the OS does not tend to overload the device.
Event-driven – Event-driven OS that means it depends upon the events it receives from the surrounding environment. For
example, controlling the temperature, humidity and air quality of the building. An event is fired when the temperature goes
above or below a certain degree and operating system controls the air condition devices to make temperature at a normal
level.
Modularity – TinyOS has different modules in it. Each module performs its own function. The modules include tasks,
commands, events, microcontroller, hardware, and software. Each of these modules communicates with other to make the
wireless devices work properly.
Low memory needed – TinyOS is a type of embedded OS which is implemented on each device and needs very little memory to run. We don't need to buy higher-memory devices to run this operating system.
Low voltage use – Due to its low memory and space usage, TinyOS uses little battery power and can also run on smaller devices with low voltage.
Reusability – TinyOS can be reused on similar devices, meaning the code does not have to be changed if the devices are of the same nature.
Disadvantages of TinyOS
Difficult to program – It is difficult to write programs for this OS due to restrictions such as asynchronous behaviour, memory limits, and low voltage. The nesC programming language is a major disadvantage of this OS: it is difficult for programmers to write efficient code in nesC.
Asynchronous nature – As networked sensor devices have to update their data from the surroundings every second, programmers have to keep this in mind to make the code work in every case. Sometimes there are communication problems between tasks in TinyOS.
RIOT OS: The friendly Operating System for IoT
RIOT powers the Internet of Things like Linux powers the Internet. RIOT is a free, open source operating system
developed by a grassroots community gathering companies, academia, and hobbyists, distributed all around the
world.
RIOT supports most low-power IoT devices, microcontroller architectures (32-bit, 16-bit, 8-bit), and external
devices. RIOT aims to implement all relevant open standards supporting an Internet of Things that is connected,
secure, durable & privacy-friendly.
RIOT supports a lot of architectures, like AVR, ARM7, Cortex-M0, Cortex-M0+, Cortex-M3, Cortex-M4, Cortex-M7, ESP8266, MIPS32, MSP430, PIC32, and x86.
Boards: Airfy Beacon, Arduino Due,Arduino Mega 2560, Arduino Zero, Atmel samr21-Xplained Pro, f4vi, mbed NXP LPC1768,
Micro::bit, Nordic nrf51822 (DevKit), Nordic nrf52840 (DevKit), Nucleo boards (almost all of them) and many more.
Features Supported by RIOT OS
RIOT is a free and open-source (LGPLv2.1) operating system; with its help, you can write your code in native languages like C and C++.
It has a microkernel (μ-kernel) architecture (the kernel uses ~1.5 KB of RAM on a 32-bit architecture).
It has super-low-latency interrupt handling, and it supports a modular architecture for threading.
It also supports different network stacks like 6LoWPAN, IPv6, RPL, UDP, TCP, LoRaWAN, 802.15.4, MQTT, and much more. It also supports different PHY technologies (like Bluetooth, NFC, serial, CAN, etc.) and third-party packages (like the lwIP stack, uIP, and the OpenThread stack).
Other than the mentioned features, it supports various 32-bit platforms as well as 16-bit and 8-bit platforms (like Arduino Nano/Uno, MSP430, x86, ARM, MIPS, AVR, etc.).
RIOT OS provides different types of example source codes. So, we can easily develop our code
by using those example codes. But the major benefit of using this OS is “Code your application
once & run everywhere”, it means, when we write a code for Arduino/ESP, that code now can run
onto a different microcontrollers to do the same operation without changing anything into the
code or configuration.
Architecture and Reference Model
Representational State Transfer (REST) architectural style
REST is a software architectural style that relies on rules describing how to define and access resources. REST
defines a set of general constraints to follow while developing RESTful APIs.
Client-Server: -
The client uses URIs to obtain resources; it is not concerned with how the server processes the request. The
server, in turn, processes requests and returns resources; it is not concerned with the user interface in any way.
Neither client nor server needs to know about the other's responsibilities.
Separation of concerns is the principle behind the client-server constraints. By separating the user interface
concerns from the data storage concerns, we improve the portability of the user interface across multiple
platforms and improve scalability by simplifying the server components. The separation allows the
components to evolve independently, thus supporting the Internet-scale requirement of multiple organizational
domains.
Thus, they can evolve independently. It allows using a single API in many different clients, e.g., web browsers,
mobile apps.
Statelessness: -
A RESTful API should be stateless. In simple words, it means that it doesn’t store any information about the user’s session.
Therefore, every single request should provide complete data to process it. Thus, it leads to greater availability of the API. This
constraint induces the properties of visibility, reliability, and scalability.
Visibility is improved because a monitoring system does not have to look beyond a single request datum in order to determine the
full nature of the request.
Reliability is improved because it eases the task of recovering from partial failures.
Scalability is improved because not having to store state between requests allows the server component to quickly free resources,
and further simplifies implementation because the server doesn't have to manage resource usage across requests.
Like most architectural choices, the stateless constraint reflects a design trade-off. The disadvantage is that it may decrease
network performance by increasing the repetitive data (per-interaction overhead) sent in a series of requests, since that data
cannot be left on the server in a shared context.
In addition, placing the application state on the client-side reduces the server's control over consistent application behavior, since
the application becomes dependent on the correct implementation of semantics across multiple client versions.
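To make the statelessness constraint concrete, here is a minimal sketch (with hypothetical names, not part of any real framework): the handler derives everything it needs from the request itself, so no session is stored and any server replica could process any request.

```python
# Sketch of the statelessness constraint: the server keeps no per-user
# session; every request must carry complete information (here, an
# explicit auth token and the page being asked for).

VALID_TOKENS = {"token-42": "alice"}   # hypothetical token store

def handle_request(request: dict) -> dict:
    """Process a request using only the data contained inside it."""
    user = VALID_TOKENS.get(request.get("token"))
    if user is None:
        return {"status": 401, "body": "invalid or missing token"}
    # No session lookup: the page number travels with every request.
    page = request.get("page", 1)
    return {"status": 200, "body": f"items page {page} for {user}"}
```

Because no state survives between calls, repeating a request always yields the same response, which is what makes monitoring, failure recovery, and load balancing simpler.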
Cacheability. Cache constraints require that the data within a response to a request be implicitly or explicitly labeled
as cacheable or non-cacheable. If a response is cacheable, then a client cache is given the right to reuse that
response data for later, equivalent requests.
The advantage of adding cache constraints is that they have the potential to partially or completely eliminate some
interactions, improving efficiency, scalability, and user-perceived performance by reducing the average latency of a
series of interactions. The trade-off, however, is that a cache can decrease reliability if stale data within the cache
differs significantly from the data that would have been obtained had the request been sent directly to the server.
Layered system. A REST API can consist of multiple layers, e.g., business logic, presentation, and data access.
Moreover, one layer shouldn't directly impact the others. Further, the client shouldn't know whether it's connected
directly to the end server or to an intermediary. Therefore, we can easily scale the system or provide additional
layers such as gateways, proxies, and load balancers.
The primary disadvantage of layered systems is that they add overhead and latency to the processing of data,
reducing user-perceived performance. For a network-based system that supports cache constraints, this can be offset
by the benefits of shared caching at intermediaries. Placing shared caches at the boundaries of an organizational
domain can result in significant performance benefits. Such layers also allow security policies to be enforced on data
crossing the organizational boundary, as is required by firewalls.
Code on demand. This one is an optional constraint. The server can return a part of the code itself instead of
the data in, say, JSON format. The point is to provide specific operations on the data that the client can use
directly, although this is not a common practice.
REST allows client functionality to be extended by downloading and executing code in the form of applets or
scripts. This simplifies clients by reducing the number of features required to be pre-implemented. Allowing
features to be downloaded after deployment improves system extensibility. However, it also reduces visibility,
and thus is only an optional constraint within REST.
For example, if all of the client software within an organization is known to support Java applets, then services
within that organization can be constructed such that they gain the benefit of enhanced functionality via
downloadable Java classes. At the same time, however, the organization's firewall may prevent the transfer of
Java applets from external sources, and thus to the rest of the Web it will appear as if those clients do not
support code-on-demand. An optional constraint allows us to design an architecture that supports the desired
behavior in the general case, but with the understanding that it may be disabled within some contexts.
Uniform Interface
The central feature that distinguishes the REST architectural style from other network-based styles is its emphasis on a uniform
interface between components.
By applying the software engineering principle of generality to the component interface, the overall system architecture is simplified and
the visibility of interactions is improved. Implementations are decoupled from the services they provide, which encourages independent
evolvability.
The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than
one which is specific to an application's needs.
The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web,
but resulting in an interface that is not optimal for other forms of architectural interaction.
In order to obtain a uniform interface, multiple architectural constraints are needed to guide the behavior of components. REST is
defined by four interface constraints: identification of resources; manipulation of resources through representations; self-descriptive
messages; and, hypermedia as the engine of application state.
REST Architectural Elements
The Representational State Transfer (REST) style is an abstraction of the architectural elements within a distributed hypermedia
system.
REST ignores the details of component implementation and protocol syntax in order to focus on the roles of components, the
constraints upon their interaction with other components, and their interpretation of significant data elements.
It encompasses the fundamental constraints upon components, connectors, and data that define the basis of the Web architecture, and
thus the essence of its behavior as a network-based application.
1. Data Elements
Unlike the distributed object style, where all data is encapsulated within and hidden by the processing components, the nature and
state of an architecture's data elements is a key aspect of REST. The rationale for this design can be seen in the nature of distributed
hypermedia. When a link is selected, information needs to be moved from the location where it is stored to the location where it will be
used by, in most cases, a human reader. A distributed hypermedia architect has only three fundamental options:
1) render the data where it is located and send a fixed-format image to the recipient;
2) encapsulate the data with a rendering engine and send both to the recipient; or,
3) send the raw data to the recipient along with metadata that describes the data type, so that the recipient can choose their own
rendering engine.
Option 1, the traditional client-server style, allows all information about the true nature of the data to remain hidden within the sender,
preventing assumptions from being made about the data structure and making client implementation easier. However, it also severely
restricts the functionality of the recipient and places most of the processing load on the sender, leading to scalability problems.
Option 2, the mobile object style, provides information hiding while enabling specialized processing of the data via its unique rendering
engine, but limits the functionality of the recipient to what is anticipated within that engine and may vastly increase the amount of data
transferred.
Option 3 allows the sender to remain simple and scalable while minimizing the bytes transferred, but loses the advantages of
information hiding and requires that both sender and recipient understand the same data types.
REST provides a hybrid of all three options by focusing on a shared understanding of data types with metadata, but limiting the scope
of what is revealed to a standardized interface.
REST components communicate by transferring a representation of a resource in a format matching one of an evolving set of standard
data types, selected dynamically based on the capabilities or desires of the recipient and the nature of the resource.
Whether the representation is in the same format as the raw source, or is derived from the source, remains hidden behind the
interface. The benefits of the mobile object style are approximated by sending a representation that consists of instructions in the
standard data format of an encapsulated rendering engine (e.g., Java).
REST therefore gains the separation of concerns of the client-server style without the server scalability problem, allows information
hiding through a generic interface to enable encapsulation and evolution of services, and provides for a diverse set of functionality
through downloadable feature-engines.
Resources and Resource Identifiers
● The key abstraction of information in REST is a resource. Any information that can be named can be a resource: a document or image, a
temporal service (e.g. "today's weather in Los Angeles"), a collection of other resources, a non-virtual object (e.g. a person), and so on. In
other words, any concept that might be the target of an author's hypertext reference must fit within the definition of a resource. A resource
is a conceptual mapping to a set of entities, not the entity that corresponds to the mapping at any particular point in time.
● REST uses a resource identifier to identify the particular resource involved in an interaction between components. REST connectors
provide a generic interface for accessing and manipulating the value set of a resource, regardless of how the membership function is
defined or the type of software that is handling the request. The naming authority that assigned the resource identifier, making it possible
to reference the resource, is responsible for maintaining the semantic validity of the mapping over time.
Representations
● REST components perform actions on a resource by using a representation to capture the current or intended state of that resource and
transferring that representation between components. A representation is a sequence of bytes, plus representation metadata to describe
those bytes. Other commonly used but less precise names for a representation include: document, file, and HTTP message entity,
instance, or variant.
● A representation consists of data, metadata describing the data, and, on occasion, metadata to describe the metadata (usually for the
purpose of verifying message integrity). Metadata is in the form of name-value pairs, where the name corresponds to a standard that
defines the value's structure and semantics. Response messages may include both representation metadata and resource metadata:
information about the resource that is not specific to the supplied representation.
Control data defines the purpose of a message between components, such as the action being requested or the meaning of a response. It is also used to
parameterize requests and override the default behavior of some connecting elements. For example, cache behavior can be modified by control data included
in the request or response message.
Depending on the message control data, a given representation may indicate the current state of the requested resource, the desired state for the requested
resource, or the value of some other resource, such as a representation of the input data within a client's query form, or a representation of some error
condition for a response. For example, remote authoring of a resource requires that the author send a representation to the server, thus establishing a value for
that resource that can be retrieved by later requests. If the value set of a resource at a given time consists of multiple representations, content negotiation may
be used to select the best representation for inclusion in a given message.
The data format of a representation is known as a media type. A representation can be included in a message and processed by the recipient according to the
control data of the message and the nature of the media type. Some media types are intended for automated processing, some are intended to be rendered for
viewing by a user, and a few are capable of both. Composite media types can be used to enclose multiple representations in a single message.
The design of a media type can directly impact the user-perceived performance of a distributed hypermedia system. Any data that must be received before the
recipient can begin rendering the representation adds to the latency of an interaction. A data format that places the most important rendering information up
front, such that the initial information can be incrementally rendered while the rest of the information is being received, results in much better user-perceived
performance than a data format that must be entirely received before rendering can begin.
For example, a Web browser that can incrementally render a large HTML document while it is being received provides significantly better user-perceived
performance than one that waits until the entire document is completely received prior to rendering, even though the network performance is the same. Note
that the rendering ability of a representation can also be impacted by the choice of content. If the dimensions of dynamically-sized tables and embedded
objects must be determined before they can be rendered, their occurrence within the viewing area of a hypermedia page will increase its latency.
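The content-negotiation idea mentioned above can be sketched in a few lines (the media types and stored representations here are invented for illustration): the client lists acceptable media types in preference order, and the server returns the first representation it actually holds.

```python
# Sketch of content negotiation over one resource's representations.

AVAILABLE = {                       # representations the server holds
    "application/json": '{"temp": 21}',
    "text/html": "<p>21 C</p>",
}

def negotiate(accept):
    """Return (media_type, representation) for the first acceptable match."""
    for media_type in accept:       # client's preference order
        if media_type in AVAILABLE:
            return media_type, AVAILABLE[media_type]
    return None                     # would map to HTTP 406 Not Acceptable
```

The same resource thus yields different byte sequences (representations) depending on the recipient's capabilities, exactly as the text describes.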
2. Connectors
REST uses various connector types, to encapsulate the activities of accessing resources and transferring resource representations.
The connectors present an abstract interface for component communication, enhancing simplicity by providing a clean separation of
concerns and hiding the underlying implementation of resources and communication mechanisms.
The primary connector types are client and server. The essential difference between the two is that a client initiates communication by
making a request, whereas a server listens for connections and responds to requests in order to supply access to its services. A component
may include both client and server connectors.
A third connector type, the cache connector, can be located on the interface to a client or server connector in order to save cacheable
responses to current interactions so that they can be reused for later requested interactions. A cache may be used by a client to avoid
repetition of network communication, or by a server to avoid repeating the process of generating a response, with both cases serving to
reduce interaction latency. A cache is typically implemented within the address space of the connector that uses it.
A resolver translates partial or complete resource identifiers into the network address information needed to establish an
inter-component connection. For example, most URIs include a DNS hostname as the mechanism for identifying the naming authority
for the resource. In order to initiate a request, a Web browser will extract the hostname from the URI and make use of a DNS resolver
to obtain the Internet Protocol address for that authority. Use of one or more intermediate resolvers can improve the longevity of
resource references through indirection, though doing so adds to the request latency.
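The resolver's role can be sketched as a lookup step between identifier and network address (a real browser would query DNS; the static table below is a hypothetical stand-in):

```python
# Sketch of a resolver connector: translate the naming-authority part of a
# URI into network address information. A hypothetical table replaces DNS.

HOSTS = {"example.org": "93.184.216.34"}   # illustrative entry only

def resolve(uri: str):
    """Extract the hostname from an http URI and look up its address."""
    host = uri.split("//", 1)[1].split("/", 1)[0].split(":")[0]
    return HOSTS.get(host)
```

Adding further indirection layers (more tables consulted in turn) improves the longevity of references at the cost of extra lookup latency, as the text notes.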
The final form of connector type is a tunnel, which simply relays communication across a connection boundary, such as a firewall or
lower-level network gateway. The only reason it is modeled as part of REST and not abstracted away as part of the network
infrastructure is that some REST components may dynamically switch from active component behavior to that of a tunnel. The primary
example is an HTTP proxy that switches to a tunnel in response to a CONNECT method request, thus allowing its client to directly
communicate with a remote server using a different protocol, such as TLS, that doesn't allow proxies. The tunnel disappears when both
ends terminate their communication.
3. Components
A user agent uses a client connector to initiate a request and becomes the ultimate recipient of the response. The most common
example is a Web browser, which provides access to information services and renders service responses according to the application
needs.
An origin server uses a server connector to govern the namespace for a requested resource. It is the definitive source for
representations of its resources and must be the ultimate recipient of any request that intends to modify the value of its resources.
Each origin server provides a generic interface to its services as a resource hierarchy. The resource implementation details are hidden
behind the interface.
Intermediary components act as both a client and a server in order to forward, with possible translation, requests and responses. A
proxy component is an intermediary selected by a client to provide interface encapsulation of other services, data translation,
performance enhancement, or security protection.
A gateway (a.k.a., reverse proxy) component is an intermediary imposed by the network or origin server to provide an interface
encapsulation of other services, for data translation, performance enhancement, or security enforcement. Note that the difference
between a proxy and a gateway is that a client determines when it will use a proxy.
HTTP Protocol
When we consider REST we usually think of HTTP-based applications. It is possible to use REST with other
protocols, but this happens very rarely. Thus, in this section, we'll focus on the HTTP protocol and its usage with REST.
To begin with, HTTP is the protocol of the World Wide Web. It defines the rules of communication
between a client and a server. HTTP is stateless and it works in a request-response manner.
A client sends a request related to some resource. It can be an HTML website, a file, or JavaScript code. HTTP doesn't
define what a resource should be. Each resource has its own identifier, called a URI. The general structure of a URI looks
as follows:
scheme://host:port/path?query#fragment
An HTTP request begins with the request line, which contains the HTTP method. The HTTP method describes the action
that the client wants to perform on the resource. There are four basic, most used methods: GET, POST, PUT, and DELETE. Let's define
them.
GET is used to read a resource. The server returns the resource for the given URI. A GET request doesn't contain a body. It only fetches
the resource and doesn't modify it in any way.
POST is used to transfer data to the server. Thus, it’s usually associated with creating a resource. The data are sent in the body. After the
resource is created, the server should respond with its URI.
PUT is similar to POST, although it differs significantly. It's used to update an existing resource, and it overrides the whole resource with
the transferred data. The main property that distinguishes PUT from POST is that PUT is an idempotent operation: calling PUT with
the same data many times will always give the same result, with no additional side effects. Moreover, PUT points to an existing resource,
whereas POST creates a new one.
There are also additional methods that can sometimes be used: PATCH, HEAD, OPTIONS, CONNECT, and TRACE.
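The semantics of the four basic methods can be sketched against a plain in-memory store (a framework-free toy, not any real server):

```python
# Sketch of GET/POST/PUT/DELETE semantics against an in-memory store.
# POST creates a new URI each time (not idempotent);
# PUT fully replaces an existing resource and is idempotent.

store = {}
next_id = 0

def post(collection: str, data: dict) -> str:
    global next_id
    next_id += 1
    uri = f"{collection}/{next_id}"     # server assigns the new URI
    store[uri] = data
    return uri

def get(uri: str):
    return store.get(uri)               # read-only: never modifies the store

def put(uri: str, data: dict) -> None:
    store[uri] = data                   # overrides the whole resource

def delete(uri: str) -> None:
    store.pop(uri, None)
```

Calling put(uri, data) twice leaves the store exactly as one call would, while calling post twice creates two distinct resources: this is the idempotency distinction described above.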
Response Codes
HTTP response codes give us a rich dialogue between clients and servers about the status of a request. Most people are
only familiar with 200, 403, 404 and maybe 500 in a general sense, but there are many more useful codes to use. The
tables presented here are not comprehensive, but they cover many of the most important codes you should consider using
in a RESTful environment. Each set of numbers can be categorized as the following:
1XX: Informational
2XX: Success
3XX: Redirection
4XX: Client Error
5XX: Server Error
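The categorization above depends only on the first digit of the status code, which a short helper makes explicit:

```python
# The first digit of an HTTP status code gives its category.

CATEGORIES = {
    1: "Informational",
    2: "Success",
    3: "Redirection",
    4: "Client Error",
    5: "Server Error",
}

def status_category(code: int) -> str:
    return CATEGORIES.get(code // 100, "Unknown")
```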
Uniform Resource Identifier (URI)
URI provides a simple and extensible means for identifying a resource. This specification of URI syntax and semantics is derived from
concepts introduced by the World Wide Web global information initiative, whose use of these identifiers dates from 1990 and is
described in "Universal Resource Identifiers".
Uniform: - Uniformity provides several benefits. It allows different types of resource identifiers to be used in the same context, even
when the mechanisms used to access those resources may differ. It allows uniform semantic interpretation of common syntactic
conventions across different types of resource identifiers. It allows introduction of new types of resource identifiers without interfering
with the way that existing identifiers are used. It allows the identifiers to be reused in many different contexts, thus permitting new
applications or protocols to leverage a pre-existing, large, and widely used set of resource identifiers.
Resource: - This specification does not limit the scope of what might be a resource; rather, the term "resource" is used in a general
sense for whatever might be identified by a URI. Familiar examples include an electronic document, an image, a source of information
with a consistent purpose (e.g., "today's weather report for Los Angeles"), a service (e.g., an HTTP-to-SMS gateway), and a collection
of other resources.
Identifier : - An identifier embodies the information required to distinguish what is being identified from all other things within its scope
of identification. Our use of the terms "identify" and "identifying" refer to this purpose of distinguishing one resource from all other
resources, regardless of how that purpose is accomplished (e.g., by name, address, or context).
URI, URL, and URN
A URI can be further classified as a locator, a name, or both.
The term "Uniform Resource Locator" (URL) refers to the subset of URIs
that, in addition to identifying a resource, provide a means of locating the
resource by describing its primary access mechanism (e.g., its network
"location"). For Example, mailto:myname@webpage.com &
ftp://webpage.com/download.jpg
The term "Uniform Resource Name" (URN) has been used historically to
refer to both URIs under the "urn" scheme which are required to remain
globally unique and persistent even when the resource ceases to exist or
becomes unavailable, and to any other URI with the properties of a name.
Example: urn:isbn:00934563 identifies a book by its unique ISBN.
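Python's standard urllib.parse module splits an identifier into the components discussed above, which makes the URL/URN distinction concrete using the two examples from this section:

```python
# Splitting URIs into their components with the standard library.
from urllib.parse import urlparse

url = urlparse("ftp://webpage.com/download.jpg")
# A URL carries an access mechanism (the scheme) and a network location:
# url.scheme == "ftp", url.netloc == "webpage.com", url.path == "/download.jpg"

urn = urlparse("urn:isbn:00934563")
# A URN names the resource without saying how to reach it:
# urn.scheme == "urn", urn.path == "isbn:00934563", and the netloc is empty.
```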
The Internet of Things (IoT) has quickly grown to be a large part of how human beings live, communicate and do
business. All across the world, web-enabled devices are turning our globe into a more connected, switched-on place
to live in.
1. Design Challenges
2. Deployment Challenges
3. Security Challenges
Design challenges in IoT
1. Connectivity –
It is the foremost concern when connecting devices, applications and cloud platforms. Connected devices that
provide useful data and information are extremely valuable. But poor connectivity becomes a challenge where
IoT sensors are required to monitor process data and supply information. A seamless flow of information to and
from devices, infrastructure, applications, and the cloud is vital to successful IoT deployment — particularly where
mission-critical operations are involved. The complexity of wireless connectivity with its still-evolving set of
standards can complicate matters as much as managing a diverse set of devices. Flexible design and testing
solutions capable of assessing devices with many radio formats are necessary for meeting this challenge.
Solutions should be simple, low-cost, and capable of application during both R&D and manufacturing phases.
2. Cross-platform capability –
IoT applications must be developed keeping in mind the technological changes of the future. Their development
requires a balance of hardware and software functions. It is a challenge for IoT application developers to ensure
that the device and IoT platform deliver the best performance despite heavy device usage and frequent fixes.
3. Data collection and processing –
In IoT development, data plays an important role. What is more critical here is the processing or usefulness of
stored data. Along with security and privacy, development teams need to ensure that they plan well for the way
data is collected, stored or processed within an environment.
4. Lack of skill set –
All of the development challenges above can only be handled if there is a proper skilled resource working on the IoT application
development. The right talent will always get you past the major challenges and will be an important IoT application development
asset.
5. Compliance :
There are radio standards and global regulatory requirements to which IoT devices must comply. Compliance testing includes radio
standards conformance and carrier acceptance tests, and regulatory compliance tests such as RF, EMC, and SAR tests. Testing can
be complicated and time-consuming, often requiring designers and manufacturers to seek in-house pre-compliance test solutions to
meet product release schedules.
6. Coexistence
Wireless congestion or the overcrowding of radio channels is a logical consequence of billions of IoT devices competing for
bandwidth. Standards authorities have developed testing methodologies to evaluate device operations in the presence of other
signals. For IoT deployment, coexistence testing is crucial in measuring and assessing how a device will operate in a crowded,
mixed-signal environment and assess the potential risk to maintaining wireless performance in the presence of unintended signals.
Security challenges in IoT
1. Lack of encryption –
a. When a device communicates in plain text, all information being exchanged with a client device or
backend service can be obtained by a Man-in-the-Middle’ (MitM).
b. Anyone who is capable of obtaining a position on the network path between a device and its endpoint
can inspect the network traffic and potentially obtain sensitive data such as login credentials. A typical
problem in this category is using a plain-text version of a protocol (e.g. HTTP) where an encrypted
version is available (HTTPS). A Man-in-the-Middle attack is one where the attacker secretly accesses and
then relays communications, possibly altering them, without either party being aware.
Even when data is encrypted, weaknesses may be present if the encryption is not complete or is
configured incorrectly.
c. For example, a device may fail to verify the authenticity of the other party. Even though the
connection is encrypted, it can then be intercepted by a Man-in-the-Middle attacker. Sensitive data
that is stored on a device (at rest) should also be protected by encryption. Typical weaknesses are a
lack of encryption when storing API tokens or credentials in plain text on a device. Other problems
are the use of weak cryptographic algorithms or the use of cryptographic algorithms in unintended ways.
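One standard mitigation for credentials at rest is to store only a salted hash instead of the plain text. A minimal sketch using Python's standard hashlib follows; the iteration count and salt size here are illustrative, not a security recommendation:

```python
# Store only a salted hash of a credential, never the plain text.
import hashlib
import os

def hash_credential(secret, salt=None):
    """Return (salt, key); only these two values are written to storage."""
    if salt is None:
        salt = os.urandom(16)           # fresh random salt per credential
    key = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)
    return salt, key

def verify_credential(secret, salt, key):
    """Re-derive the key from the attempt and compare with the stored one."""
    return hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000) == key
```

An attacker who reads the stored (salt, key) pair still cannot recover the original secret, unlike the plain-text API tokens described above.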
2. Insufficient testing and updating –
With the increase in the number of IoT (Internet of Things) devices, IoT manufacturers are eager to produce and deliver their devices as fast as
they can without giving security too much thought. Most of these devices and IoT products do not get enough testing and updates and are prone
to hackers and other security issues.
3. Brute forcing and the risk of default passwords – Weak credentials and login details leave nearly all IoT devices vulnerable to password
hacking and brute force. Any company that uses factory-default credentials on its devices is placing both its business and its assets and the
customer and their valuable information at risk of a brute-force attack.
4. IoT malware and ransomware – This risk increases with the increase in devices. Ransomware uses encryption to effectively lock users out of
various devices and platforms while still holding a user's valuable data and info. Example – a hacker can hijack a computer camera and take
pictures. By using malware access points, hackers can demand a ransom to unlock the device and return the data.
5. IoT botnets aiming at cryptocurrency – IoT botnet workers can manipulate data privacy, which could be a massive risk for an open crypto
market. The exact value and creation of cryptocurrency code face danger from mal-intentioned hackers. Blockchain companies are trying to
boost security. Blockchain technology itself is not particularly vulnerable, but the app development process is.
IOT and M2M
Machine-to-Machine (M2M)
Machine-to-Machine (M2M) refers to networking of machines (or devices) for
the purpose of remote monitoring and control and data exchange.
An M2M area network comprises machines (or M2M nodes) which have
embedded hardware modules for sensing, actuation and communication.
Various communication protocols can be used for M2M local area networks
such as ZigBee, Bluetooth, ModBus, M-Bus, Wireless M-Bus, Power Line
Communication (PLC), 6LoWPAN, IEEE 802.15.4, etc.
The communication network can use either wired or wireless (IP-based) networks.
While the M2M area networks use either proprietary or non-IP-based communication protocols, the communication network uses IP-based networks.
M2M gateway
● Since non-IP based protocols are used within M2M area networks, the M2M nodes within one network cannot communicate
with nodes in an external network.
● To enable the communication between remote M2M area networks, M2M gateways are used.
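Conceptually, an M2M gateway translates between the area network's compact non-IP framing and an IP-friendly representation. A toy sketch follows; the 4-byte frame layout and sensor-type codes are invented purely for illustration:

```python
# Toy M2M gateway sketch: translate a (made-up) compact binary frame from
# an M2M area network into a JSON string an IP network can carry.
import json

def gateway_translate(frame: bytes) -> str:
    """frame = [node_id, sensor_type, value_hi, value_lo] (illustrative)."""
    node_id, sensor_type, hi, lo = frame
    reading = {
        "node": node_id,
        "sensor": {0: "temperature", 1: "humidity"}.get(sensor_type, "unknown"),
        "value": (hi << 8) | lo,        # reassemble the 16-bit reading
    }
    return json.dumps(reading)          # IP side can carry this over HTTP/MQTT
```

Nodes inside the area network never speak IP; only the gateway re-encodes their readings for the external network, which is exactly why remote M2M area networks need a gateway to interoperate.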
Difference between IoT and M2M
2. Communication: M2M uses point-to-point communication, usually embedded within hardware at the customer
site, e.g., tap to pay with NFC (the Samsung tap-to-pay service works without a card: you just need to bring your
mobile near the swipe machine) or using your ID card to open or close a door. IoT uses multipoint
communication, e.g., showing your ID card registers your attendance, or health parameters are monitored with
the help of an Android app or web app.
4. Internet: M2M does not necessarily require an internet connection. IoT requires an Internet connection in the
majority of cases.
7. Data: In M2M, data collection and analysis is done in on-premises storage infrastructure. In IoT, data
collection and analysis is mostly done in the cloud.
9. Protocols: M2M protocols include ZigBee, Bluetooth, ModBus, M-Bus, Wireless M-Bus, Power Line
Communication (PLC), 6LoWPAN, IEEE 802.15.4 and Z-Wave; these protocols work below the network layer.
IoT protocols include HTTP, CoAP, WebSocket, MQTT, XMPP, DDS, AMQP, etc.; these protocols work above
the network layer.
10. Devices: The devices used within an M2M area network are of a homogeneous type. The devices used in IoT
are of heterogeneous types, such as fire alarms, door alarms, lighting control devices, etc.
ZigBee:
1. It is a wireless technology based on the IEEE 802.15.4 standard.
2. It is created for remote control and sensor networks.
3. It is created by the ZigBee Alliance, whose members include Philips, Motorola, Intel and HP.
4. It can be implemented at low cost. It provides reliable data transfer, short-range operation, very low power
consumption and adequate security features.
5. It is useful for home automation.
M-Bus (Meter-Bus):
1. It is a European standard for the remote reading of gas or electricity meters.
2. M-Bus is also usable for other types of consumption meters (e.g. water, gas).
3. The M-Bus interface is made for communication over two wires, making it cost-effective.
PLC (Power Line Communication):
1. It is a communication technology that enables sending data over existing power cables.
2. The power cable can both power the device and, at the same time, control or retrieve data from it in a half-duplex manner.
ModBus:
1. It is a serial communication protocol.
2. It was developed and published by Modicon in 1979.
3. It is used with programmable logic controllers (PLCs).
4. It is a method for transmitting information over serial lines between electronic devices.
5. It is an open protocol, i.e. manufacturers are free to build it into their equipment without having to pay royalties.
6. It is typically used to transmit signals from instrumentation and control devices back
to a main controller or data-gathering system. Ex: a system measures temperature and humidity and communicates
the results to a computer. ModBus is generally used to connect a supervisory computer with remote terminal units in
supervisory control and data acquisition (SCADA) systems.
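To make the serial-line framing above concrete: every ModBus RTU frame ends with a CRC-16 checksum so the receiver can detect corruption. The sketch below implements the standard ModBus CRC (polynomial 0xA001 reflected, initial value 0xFFFF) and builds an example read-holding-registers request; the slave/register addresses are chosen arbitrarily for illustration.

```python
def modbus_crc16(frame: bytes) -> bytes:
    """Compute the ModBus RTU CRC-16 over a frame.

    Uses init value 0xFFFF and the reflected polynomial 0xA001;
    returns the two CRC bytes low-byte-first, as sent on the wire.
    """
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc.to_bytes(2, "little")

# Example: ask slave 1 (function code 0x03, "read holding registers")
# for 2 registers starting at address 0 -- addresses are illustrative.
pdu = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x02])
request = pdu + modbus_crc16(pdu)
print(request.hex())
```

A handy property of this CRC: recomputing it over the full frame (data plus appended CRC) yields zero, which is exactly the check a receiving device performs.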
SDN and NFV
Limitations and disadvantages of conventional network architecture:
Complex Network Devices:
1. Protocol implementation adds overhead to improve link speed and reliability, and most of the protocols are
designed for specific applications.
2. Conventional networks support limited interoperability due to the lack of standard and open interfaces, i.e. the
network hardware and software are proprietary.
3. They have very slow product life cycles, which limits the opportunity for innovation.
4. They are well suited to static traffic patterns, but it is difficult to implement protocols for dynamic traffic
patterns such as those in IoT and the cloud.
Management Overhead:
1. Since conventional networks do not support interoperability, it is difficult to manage multiple network devices
and interfaces from multiple vendors.
2. Upgrading the network requires configuration changes in multiple devices such as switches, routers,
firewalls, etc.
Limited scalability:
It is difficult to implement virtualisation, distributed algorithms and data analytics for
distributed applications and big data with minimal manual configuration.
Benefits of SDN over conventional network architecture:
1. It makes networks flexible with the help of software, removing the demerits of traditional (conventional)
network architecture.
2. It reduces the complexity of the growing number of distributed protocols and of the proprietary hardware
and interfaces used in conventional networks. It uses a simple packet-forwarding technique, as opposed to a conventional network.
3. It separates the control plane from the data plane and centralises the network controller, whereas in conventional
network architecture the control plane and data plane are coupled.
4. Another benefit of SDN is network management and end-to-end visibility. The network administrator deals
with only one centralised controller to distribute policies to the connected switches, instead of configuring multiple
individual devices.
5. SDN applications can be deployed through programmable open APIs, which speeds up innovation because
network administrators no longer need to wait for device vendors to embed new features in their
proprietary hardware.
SDN
Software-Defined Networking (SDN) is a networking architecture that separates the control plane from the data plane and centralizes the network controller.
Control Plane (CP): It is the part of the network that carries the signalling and routing message traffic. It
decides how packets should flow through the network, choosing the path as well as the
metrics to follow in order to reduce traffic.
The CP allows dynamic access and administration. A network administrator can shape traffic from a
centralised controller console without touching individual switches, i.e. the administrator can change any
switch's rules when necessary rather than modifying the program preloaded in switches and routers.
Prioritising, de-prioritising or even blocking specific types of packets in this way is called dynamic routing.
This is especially helpful in cloud computing.
Data plane: It is the part of the network that carries the payload data traffic from one place to another.
SDN controller: The SDN controller manages the data traffic. It is software installed on a server
in the data centre and is based on protocols. It sits between the network devices at one end and the
applications at the other; any communication between applications and devices has to go through the
controller.
Programmable open APIs: SDN applications can be deployed through programmable open APIs, which
act as the interface between the SDN application and control layers (the northbound interface). This helps to
implement various network services such as routing, access control and quality of service (QoS).
Standard communication interface (OpenFlow): It is the interface between the control and infrastructure
layers (the southbound interface). OpenFlow is defined by the Open Networking Foundation (ONF). With
OpenFlow, the forwarding plane of the network devices can be directly accessed and manipulated. It uses the
concept of flows to identify network traffic based on predefined match rules. Flows can be programmed
statically or dynamically by the SDN control software.
NFV (Network Functions Virtualization)
Network Functions Virtualization (NFV) is the decoupling of network functions from proprietary hardware appliances, running them instead as software in virtual machines (VMs). In other words, NFV replaces network appliance hardware with virtual machines, which use a hypervisor to run networking software and processes such as routing and load balancing.
Pay-as-you-go: Pay-as-you-go NFV models can reduce costs because businesses pay only for what they need.
Fewer appliances: Because NFV runs on virtual machines instead of physical machines, fewer appliances are
necessary and operational costs are lower.
Scalability: Scaling the network architecture with virtual machines is faster and easier, and it does not require
purchasing additional hardware.
How does network functions virtualization work?
Essentially, network functions virtualization replaces the functionality provided by individual
hardware networking components.
This means that virtual machines run software that accomplishes the same networking functions
as the traditional hardware.
Load balancing, routing and firewall security are all performed by software instead of hardware
components.
A hypervisor or software-defined networking controller allows network engineers to program all of
the different segments of the virtual network, and even automate the provisioning of the network.
IT managers can configure various aspects of the network functionality through one pane of
glass, in minutes.
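The idea that load balancing, routing and firewalling become software can be sketched directly. In the hypothetical model below (function names and addresses are invented for illustration), each virtualized network function is just a Python function on a packet, and the "network" is a chain of them, the way NFV chains VNFs in place of dedicated appliances.

```python
# Each VNF takes a packet (a dict of header fields) and returns the
# possibly modified packet, or None if the packet is dropped.

def firewall(packet):
    # Drop traffic to a blocked port (e.g. telnet); pass everything else.
    if packet["dst_port"] in {23}:
        return None
    return packet

def load_balancer(packet, backends=("10.0.0.1", "10.0.0.2")):
    # Pick a backend by hashing the source address (a stateless policy).
    packet["dst_ip"] = backends[hash(packet["src_ip"]) % len(backends)]
    return packet

def service_chain(packet, vnfs):
    """Run the packet through each VNF in order, stopping on a drop."""
    for vnf in vnfs:
        packet = vnf(packet)
        if packet is None:
            return None
    return packet

allowed = service_chain({"src_ip": "192.168.1.10", "dst_port": 80},
                        [firewall, load_balancer])
blocked = service_chain({"src_ip": "192.168.1.11", "dst_port": 23},
                        [firewall, load_balancer])
print(allowed)   # packet with a backend assigned as dst_ip
print(blocked)   # None: dropped by the firewall VNF
```

Changing the network's behaviour here means editing or reordering the function list, which is the software analogue of provisioning a new VM instead of installing a new appliance.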
NFV architecture
An NFV architecture consists of three parts:
Centralized virtual network infrastructure: An NFV infrastructure may be based on
either a container management platform or a hypervisor that abstracts the compute,
storage and network resources.
Software applications: Software replaces the hardware components of a traditional
network architecture to deliver the different types of network functionality (virtualized
network functions).
Framework: A framework, known as NFV MANO (Management and Orchestration), is
needed to manage the infrastructure and provision network functionality. NFV MANO
focuses on all virtualization-specific management tasks and covers the orchestration and
life-cycle management of the physical and/or software resources that support the
infrastructure virtualization, as well as the life-cycle management of VNFs.
NFV Use Case
NFV can be used to virtualize the Home Gateway. The NFV infrastructure in the cloud hosts a virtualized
Home Gateway. The virtualized gateway provides private IP addresses to the devices in the home. The
virtualized gateway also connects to network services such as VoIP and IPTV.
Benefits of network functions virtualization
Many service providers feel that the benefits of network functions virtualization outweigh
the risks. With traditional hardware-based networks, network managers have to purchase
dedicated hardware devices and manually configure and connect them to build a
network. This is time-consuming and requires specialized networking expertise.
NFV allows virtual network functions to run on standard generic servers, controlled by a
hypervisor, which is far less expensive than purchasing proprietary hardware devices.
Network configuration and management is much simpler with a virtualized network. Best
of all, network functionality can be changed or added on demand because the network
runs on virtual machines that are easily provisioned and managed.
Risks of network functions virtualization
Physical security controls are not effective: Virtualizing network components increases their
vulnerability to new kinds of attacks compared to physical equipment that is locked in a data center.
Malware is difficult to isolate and contain: It is easier for malware to travel among virtual components
that all run on the same host than between hardware components that can be isolated or
physically separated.
Network traffic is less transparent: Traditional traffic monitoring tools have a hard time spotting
potentially malicious anomalies within network traffic that is traveling east-west between virtual machines,
so NFV requires more fine-grained security solutions.
Complex layers require multiple forms of security: Network functions virtualization environments are
inherently complex, with multiple layers that are hard to secure with blanket security policies.