Leonardi Notes Gabriele DiVittorio DiCaro

A communication network allows nodes to communicate by connecting them through transmission links. It has four basic functions: transmission, signalling, switching, and management. There are two main types of switching in networks - circuit switching and packet switching. Circuit switching reserves dedicated resources for the duration of a connection, while packet switching transmits information in bursts without reserving resources. Packet switching can have higher resource utilization but variable delays, while circuit switching guarantees bandwidth but has lower utilization. Network architectures are designed using layered models to separate functions and simplify design.


Emilio Tu-tu-tu notes

1 - Communication networks: a brief introduction


A communication network is a collection of terminal nodes and transmission links that makes communication between the nodes possible. Information transmission requires resources both for transmission, i.e. to carry information on a defined medium, and for processing, i.e. to control the information transfer. Networking has four basic functions:
1. Transmission: the ability to transmit information on a channel
2. Signalling : the ability to generate, exchange and process information to set up a service
3. Switching : the ability to physically switch and route information through a ”topology”
4. Management: the ability to manage the network as a complex object
A modern network supports the transfer of huge amounts of data among many millions of endpoints. Different kinds of data can be shared in the network (mainly audio, video and data) and each type requires different characteristics and bandwidth profiles. Data network (Internet) applications are quite diverse and can be
classified in three main categories:
• Real-time interactive (person to person), characterised by limited storage capabilities and very sensitive to delay and delay variation (jitter), e.g. telephony, games and video-conferencing
• Quasi-real-time non-interactive (machine to person), for which a ”best-effort” service is possible but large storage at the endpoint is required to overcome the delay variation introduced by the network, e.g. web browsing, video and audio playback
• Non-real-time (machine to machine), for which the ”best-effort” paradigm is accepted, e.g. e-mail, batch processing, file transfer, etc.
As said before transmission needs resources. However these resources can be made available in different
ways. Dedicated resources are permanently assigned to a single information flow while Shared ones are
simultaneously used by several information flows (for example, FDM-Frequency Division Multiplexing and
TDM-Time Division Multiplexing). Shared resources are expensive, but the cost per usage is extremely low. As
it can be seen in Figure 1, with FDM the frequency spectrum is divided up among the connections established
across the link. In particular, the link dedicates a frequency band to each connection for the duration of the
connection. The width of the band is called bandwidth (e.g. radio stations). In case of TDM, instead, time is

(a) FDM-Frequency Division Multiplexing (b) TDM-Time Division Multiplexing

Figure 1: Shared transmission resources


divided into frames of fixed duration, and each frame is divided into a fixed number of time slots. When the network establishes a connection across a link, the network dedicates one time slot per frame to this connection. That slot is reserved for the sole use of the connection, and is available to transmit the connection's data. In a telephone network the dedicated resources are those paid by the subscriber, such as the plug, the line (in a bundle) and the interface card; the shared ones (paid by the caller) are the links (with their Tx and Rx interfaces) and the exchanges (switches).
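As a numeric sketch of a TDM circuit (the link rate, slot count, file size and set-up time below are invented for illustration, not taken from the notes):

```python
# Hypothetical numbers: time to send a file over one TDM circuit of a
# shared link, including the circuit set-up phase.

def tdm_transfer_time(link_rate_bps, n_slots, file_bits, setup_s):
    """Each connection gets one slot per frame, i.e. 1/n of the link rate."""
    circuit_rate = link_rate_bps / n_slots      # rate seen by one connection
    return setup_s + file_bits / circuit_rate   # seconds

# A 1.536 Mb/s link with 24 slots gives 64 kb/s per circuit; a 640 kbit
# file then needs 10 s of transfer plus 0.5 s of set-up.
print(tdm_transfer_time(1.536e6, 24, 640e3, 0.5))   # 10.5
```

Note how the guaranteed rate is constant but small: the circuit holds 1/24 of the link even if the sender has nothing to transmit.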

Switching
As previously said, switching is the ability to physically switch and route the information. In a generic network
the resources inside a node can be shared in two ways

• Circuit switching : when the information flows are continuous in time


• Packet switching : when the information flow is in bursts

Circuit switching
In circuit-switched networks, e.g. the telephone network, the resources needed along a path to provide for
communication between the end systems are reserved for the duration of the communication session between
the systems themselves. Consider what happens when one person wants to send information (voice or facsimile)
to another over a telephone network. Before the sender can send the information, the network must establish
a connection between the sender and the receiver. When the network establishes the circuit, it also reserves a
constant transmission rate in the network’s links (representing a fraction of each link’s transmission capacity)
for the duration of the connection. Since a given transmission rate has been reserved for this sender-to-receiver
connection, the sender can transfer the data to the receiver at the guaranteed constant rate. The caller pays
the cost of the call. So, in general, three phases can be distinguished

Figure 2: Circuit switching - telephone network

• Set-up: the caller lifts the receiver off-hook, dials the called number, waits for the called phone ringing
tone. The network allocates the resources required to transfer data between the two users

• Data transfer : the called party answers the call and the data transfer starts
• Release: the caller places the receiver on-hook, the network releases all the resources allocated for this call

Advantages In circuit switching we have reserved resources and, for this reason, both a guaranteed bandwidth and a constant delay. Moreover the circuit is transparent to data format, protocols and data rate (provided it is lower than the maximum data rate allowed on the circuit).

Disadvantages Since resources are reserved, utilization is low (< 5% for telephony) and the cost is proportional to the duration of the connection, not to the amount of information. No data format or data rate conversions can be done in the network.

Packet switching
In a network application, end systems exchange messages with each other. Messages can contain anything the
application designer wants. Messages may perform a control function or can contain data, such as an e-mail or
a JPEG image, or an MP3 audio file. To send a message from a source end system to a destination end system,
the source breaks long messages into smaller chunks of data known as packets. Between source and destination,
each packet travels through communication links and packet switches. In this kind of transmission no resources are reserved for data flows. When an originator has some information to send, it inserts the info in a message, puts it in an envelope, writes a destination address on the envelope and sends the message to a network access point. From then on, the network handles the message using its own rules and tries to carry it to the destination. If the network fails the message is lost; otherwise, sooner or later, it reaches its destination.
Packets are more formally called Protocol Data Units (PDU). A PDU is a structure (i.e. a sequence of bits
where the position of each bit defines its meaning). The PDU is divided in two parts:

• PCI - Protocol Control Information


• SDU - Service Data Unit

Most packet switches use store-and-forward transmission at the inputs to the links. Store-and-forward transmission means that the packet switch must receive the entire packet before it can begin to transmit the first
bit of the packet onto the outbound link. So, when a packet (PDU) arrives at a node, each incoming PDU is first stored; then the PCI (which is, in a certain sense, the ”envelope” of the message itself) is analysed. After the analysis, the output port to be used to forward the PDU towards its destination is found. As a last stage, the PDU is inserted in the transmission queue of the selected output port. As said, each packet
switch has multiple links attached to it. For each attached link, the packet switch has an output queue. If
an arriving packet needs to be transmitted onto a link but finds the link busy with the transmission of another
packet, the arriving packet must wait in the output buffer. Thus, in addition to the store-and-forward delays,
packets suffer output buffer queuing delays. These delays are variable and depend on the level of congestion in
the network. Since the amount of buffer space is finite, an arriving packet may find that the buffer is completely
full with other packets waiting for transmission. In this case, packet loss will occur.
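The store-and-forward delay alone can be sketched with a toy formula (uniform link rates assumed; the queuing and propagation delays, which the text notes are variable, are ignored here):

```python
# Sketch: with store-and-forward, each of the n links must receive the
# whole packet before forwarding it, so the end-to-end delay is n*L/R.

def store_and_forward_delay(n_links, packet_bits, rate_bps):
    # each hop adds one full packet transmission time
    return n_links * packet_bits / rate_bps

# one 12 000-bit packet over 3 links of 1.5 Mb/s:
print(store_and_forward_delay(3, 12_000, 1.5e6))   # 0.024 s
```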

(a) Output queue (b) Packet loss

Figure 3: Packet switching

Advantages Some advantages of packet switching are:


• possible high resource utilization
• PDU format can be checked in each node
• data rate can be different in each link

• PDU format and transmission protocols can be modified in specialised nodes


• cost effective solution (extremely low cost per bit)

Disadvantages Packet handling in each node requires a huge amount of processing power (CPUs increase their power according to Moore's law). Another disadvantage is that the delay is variable, so the network is congestion-prone, and usually the network utilisation is kept low to reduce the impact of possible congestion on performance.

Architecture and protocols


Communication is the transfer of information according to pre-established rules. This means that we need to cooperate if we want to communicate. An abstract definition of the communication rules governing the interaction of two or more users requires the definition of a reference model. At its maximum abstraction level this
model specifies a network architecture. A network architecture is the design of a communications network.
It is a framework for the specification of a network’s physical components and their functional organization
and configuration, its operational principles and procedures, as well as data formats used in its operation. To
achieve our purposes layered architectures are used which permit to obtain:
• separate functions

• streamlined design
• reduction of management complexity
• simplification of standardisation

The OSI - Open Systems Interconnection model is an example of a layered network structure. It is a way of subdividing a communications system into smaller parts called layers. A layer is a collection of similar functions that provides services to the layer above it and receives services from the layer below it. On each layer, an instance provides services to the instances at the layer above and requests services from the layer below. At the interface of two layers we can identify:

• A provider of a service (the lower layer)

• A user of the service (the upper layer)
• A Service Access Point (SAP)

Figure 4: OSI and other network architectures


A protocol is the formal description of the procedures adopted to ensure the correct communication among
two or more objects operating at the same hierarchical layer. Protocols are a set of rules concerning: semantics,
syntax and timing.

Communication services
They are divided into two big categories:
Connection-oriented An agreement is first established between the network and the user, then data are transferred, and finally the connection is released. PDUs are handled according to the agreement
Connectionless (or datagram) Data are transferred without any preliminary agreement and PDUs are handled independently of each other
A type of connection-oriented service is the virtual circuit. In this circuit all PDUs of a data flow cross the
network following the same path:
• arrival order identical to departure order

• routing is done during connection set-up


• low processing delay in nodes
• allows for bandwidth allocation control in network links

• minimize delay jitter

The OSI model


Let's now analyse the different layers of the OSI model. The physical, data link and network layers are the only ones present in every router.

Figure 5: OSI architecture

Physical layer
The job of the physical layer is to move the individual bits within the frame from one node to the next. It
provides mechanical and electrical means, along with functions and procedures to activate, supervise, maintain
and deactivate physical links. It transfers bits among Data Link entities.

Data Link layer


The Internet’s network layer routes a datagram through a series of routers between the source and destination.
To move a packet from one node (host or router) to the next node in the route, the network layer relies on the
services of the link layer. In particular, at each node, the network layer passes the datagram down to the link
layer, which delivers the datagram to the next node along the route. At this next node, the link layer passes
the datagram up to the network layer. The Data Link layer also has functions such as:
• Medium Access Control
• Error detection and recovery
• Flow control
• PDU identification and delimitation
The MAC sub-layer provides addressing and channel access control mechanisms that make it possible for several
terminals or network nodes to communicate within a multiple access network that incorporates a shared medium.
For Error Correction two main techniques are used: FEC - Forward Error Correction and ARQ - Automatic Repeat reQuest. Flow control is used to adapt the sending rate of the source to the receiver's maximum speed, avoiding receiver buffer overflow. Window protocols are typically used: the sender can send a limited number of packets (free credits matched to the receiver buffer size) before stopping, and as soon as it receives fresh credits from the receiver it can send new packets. Flow control protocols are classified into:
• On-Off based lossless flow control
• Credit based lossless flow control
On-off flow control transmits on the basis of the buffer filling. A low and a high watermark are defined to leave a safety margin equal to RTT · Rate. When the buffer falls below the low watermark the transmission is resumed, and it is stopped when the buffer exceeds the high watermark. Even though it is very simple, it has half the efficiency of credit-based flow control. In credit-based flow control, window-like protocols are typically used. The sender can send a limited number of packets/bytes (free credits matched to the receiver buffer size) before stopping. As soon as it
receives new credits from the receiver it can send new packets. As we can see from the scheme there is a credit

Figure 6: Credit-based flow control


count that ”counts” the buffer slots known to be available at the downstream site. The arriving credits are represented in red: for every buffer slot made available, a credit is sent upstream. Traffic can only depart if and when it acquires (decrements) the credits that correspond to the buffer slots needed. The buffer size must be at least Rate · RTT.
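A minimal sketch of the credit mechanism just described (the class and method names are my own invention, not from any standard):

```python
# Credit-based flow control sketch: the sender keeps a credit count
# matched to the receiver's buffer. Sending a packet consumes a credit;
# each buffer slot freed at the receiver returns one credit upstream.

class CreditSender:
    def __init__(self, credits):
        self.credits = credits          # free receiver-buffer slots known to us

    def try_send(self):
        if self.credits == 0:
            return False                # blocked: must wait for fresh credits
        self.credits -= 1               # one receiver buffer slot now occupied
        return True

    def receive_credit(self, n=1):
        self.credits += n               # receiver freed n slots

s = CreditSender(credits=2)
print([s.try_send() for _ in range(3)])   # [True, True, False]
s.receive_credit()
print(s.try_send())                       # True
```

Because the sender can never hold more outstanding packets than the receiver has free slots, the scheme is lossless by construction.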

Network layer
It provides the functions and procedures to set up, monitor and tear down the network connections used by Transport entities (connection-oriented service) and it also provides the functions and procedures to deliver messages among transport entities (datagram service). Basic functions are:
• Routing
• Congestion control
• Billing

Routing Routing can be classified into different categories. We talk about global routing when all routers have a complete view of the topology; conversely, we define decentralized routing when routers have only a local view of the network (their neighbours and the cost of the attached links). We can also classify routing as static (changing slowly with time) or dynamic (quick changes with traffic conditions, with possible instabilities). Hierarchical routing is a method of routing in networks that is based on hierarchical addressing. Portions of the network are administered by different actors (AS - Autonomous System). The network-of-networks concept is adopted to make routing algorithms more scalable. Internet routing works thanks to routers maintaining and periodically updating their routing tables, which associate:

| Destination | Next-hop router | Interface |

Packets are thus routed/forwarded according to the routing tables. In intra-AS routing, simple minimum-cost routing strategies are adopted. The AS network can be abstracted as a graph with n nodes and links with given costs. If information must be sent from one node to another, the lowest-cost path is chosen. Inter-

Figure 7: Example of AS intra-network


AS routing, instead, is based on mutual contracts between ASes. Here ASes of different size and nature coexist.
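The minimum-cost path selection used inside an AS can be sketched with Dijkstra's algorithm on the link-cost graph (the small graph below is an invented example, not the network of Figure 7):

```python
import heapq

def dijkstra(graph, src):
    """Minimum cost from src to every node; graph[u] maps neighbour -> link cost."""
    dist = {src: 0}
    pq = [(0, src)]                      # priority queue of (cost so far, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                     # stale entry, already improved
        for v, cost in graph[u].items():
            nd = d + cost
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

graph = {'A': {'B': 1, 'C': 4}, 'B': {'A': 1, 'C': 2}, 'C': {'A': 4, 'B': 2}}
print(dijkstra(graph, 'A'))   # {'A': 0, 'B': 1, 'C': 3}: A->C goes via B
```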

Congestion Control To avoid network congestion several strategies are possible:


• Proactive CC : minimizes the probability that congestion arises by controlling the traffic injected into the
network
• Reactive CC : reacts to congestions forcing a reduction of the speed at which sources send their data

Transport layer
It provides the functions and the procedures to set up, monitor, and tear down the Transport connections used by Session entities (connection-oriented service) and it also provides the functions and the procedures to deliver messages among Session entities (connectionless service). A transport-layer protocol provides for logical communication between application processes running on different hosts. By logical communication, we mean that
from an application’s perspective, it is as if the hosts running the processes were directly connected; in reality,
the hosts may be on opposite sides of the planet, connected via numerous routers and a wide range of link types.
Application processes use the logical communication provided by the transport layer to send messages to each
other, free from the worry of the details of the physical infrastructure used to carry these messages. Its basic
functions are:
• Error control

• Flow and congestion control


• Sequence control

TCP - Transmission Control Protocol Flow and congestion control are implemented in the Internet by TCP, which adapts the sending rate of the sources to the current network conditions. It is a window-based protocol and its operation can be divided into different phases. At the beginning there is a ”slow start”: the source increases its sending rate exponentially (proportionally to the current sending rate) as long as no congestion is detected. Then, past a threshold, the exponential increase switches to an additive increase of one segment every RTT. Packet losses are used as congestion indicators: when the same ACK is received three times (meaning that the following segments were not received) the window is halved. In case of a more critical problem a time-out expires and the window is reset to one.
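The window evolution described above can be sketched as a toy model (window in segments, one step per RTT; real TCP stacks add many refinements, so this is only the shape of the behaviour):

```python
# Toy TCP congestion window: exponential slow start below a threshold,
# additive increase above it, halving on triple-duplicate ACK, reset
# to 1 on time-out.

def tcp_window_trace(events, ssthresh=16):
    """events: one entry per RTT, each 'ok', 'dupack' (3 dup ACKs) or 'timeout'."""
    cwnd, trace = 1, []
    for ev in events:
        if ev == 'dupack':
            ssthresh = max(cwnd // 2, 1)
            cwnd = ssthresh              # multiplicative decrease
        elif ev == 'timeout':
            ssthresh = max(cwnd // 2, 1)
            cwnd = 1                     # restart from slow start
        elif cwnd < ssthresh:
            cwnd *= 2                    # slow start: exponential growth
        else:
            cwnd += 1                    # congestion avoidance: +1 per RTT
        trace.append(cwnd)
    return trace

print(tcp_window_trace(['ok'] * 5 + ['dupack', 'ok', 'timeout']))
# [2, 4, 8, 16, 17, 8, 9, 1]
```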

Figure 8: TCP working example

Session layer
It provides a stable connection to Presentation entities, recovering Transport connection failures, and synchronizes the data transfer so that it can be paused, restarted and completed even if the underlying Transport layer connection is unavailable (for a limited amount of time).

Presentation layer
It converts data from different formats so that they are presented to the application in a given known format, and provides ciphering services. It usually requires a large amount of processing power to perform these functions (it is the bottleneck of the protocol stack).

Application layer
It provides network services to user applications. Some examples are:

• FTP - File transfer


• SMTP - E-mail
• HTTP - Web browsing

2 - Medium Access Control - MAC


The data link layer is mainly divided into two sub-layers: the MAC protocol and the Logical Link Control. The MAC protocol is a local one and concerns all the issues deriving from the fact that the communication medium is shared. Almost all physical transmission media can be shared:

• Free space: radio signals


• Copper pairs: electrical signals at low frequency
• Coaxial cables: electrical signals at high frequency

• Optical stars: optical signals (optical fibre)


However it is not so easy in all systems: e.g. optical fibre has unidirectional propagation, so to communicate we need a pair of fibres. In the example in Figure 9 we can see different terminals connected to an antenna. We

Figure 9: Sharing example


can distinguish between a down-link direction (from antenna to device) and an up-link direction (from device

to antenna). This distinction does not hold for all connections: in Wi-Fi it cannot be made, while in 3G or 4G the two functions are separated. The biggest problem of sharing is in the up-link direction, i.e. one receiver with multiple transmitters. The MAC protocol addresses this problem, i.e. it manages all the transmitters to avoid conflicts. Even in the example shown in Figure 10 we can have conflicts between

Figure 10: BUS Topology example


the different terminals, since they all share the same channel. Another possibility could be to link all terminals to a switch, allowing a higher bit-rate but with bad consequences in terms of reliability: if the switch breaks down, all the terminals are disconnected and the whole system stops working. In the case of Figure 10, instead, if one terminal breaks, all the others keep working (higher reliability but lower bit-rate). In vehicles there is no reason to choose a wireless network, since all the components are stationary; moreover we avoid issues that could be encountered with a Wi-Fi network, such as interference coming from the external world. The advantages of sharing a medium are:
• every node can directly send/receive data from any other node
• every node can send data to many nodes simultaneously

The only disadvantage is that possible contentions for the common resource arise. With fixed assignment, as in FDMA, TDMA or CDMA, the resource is used by a single node at a time, so there are no conflicts in the access to the transmission medium. Each node uses a fixed fraction of the total capacity of the medium (so it is good only for traffic with a predictable, constant rate) and it is very inefficient for variable data rates; in that case we need dynamic allocation schemes, i.e. MAC protocols. FDMA divides the bandwidth B into n channels, so the bit rate is reduced by a factor n and the different channels are separated through filters. In TDMA, instead, transmission occurs at the full data rate, because each node has the whole bandwidth B available but only at a prescribed time. It is more complex than FDMA, since we need electronic devices capable of full-speed transmission and very good synchronisation; it can be made somewhat adaptive by changing the allocated time. In CDMA, transmissions overlap in time and frequency but are, in a certain sense, orthogonal and can be separated at the receiver; different rates can be allocated at different times, making the scheme more flexible.
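A toy comparison of the two simplest fixed-assignment schemes (numbers invented): both give each of n nodes an average rate R/n, but a packet of L bits is clocked out at rate R/n in FDMA versus the full rate R in TDMA (once its slot arrives), which is why TDMA needs the full-speed electronics mentioned above:

```python
def fdma_tx_time(L, R, n):
    # narrow channel of rate R/n, always available to the node
    return L / (R / n)

def tdma_tx_time(L, R, n):
    # full rate R during the node's slot; the wait for the slot is ignored
    return L / R

L, R, n = 1000, 1e6, 10         # 1000-bit packet, 1 Mb/s link, 10 nodes
print(fdma_tx_time(L, R, n))    # 0.01 s
print(tdma_tx_time(L, R, n))    # 0.001 s
```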

MAC protocols
In telecommunication networks the rules are called protocols, i.e. a collection of algorithms, formats (messages to transmit information) and timings (to avoid deadlocks). Protocols governing the access to a common medium are referred to as MAC protocols. Whenever a resource has to be shared among many users, a rule should be used; if everyone follows the rule there will be no ambiguities. Protocols can be divided into:
• Distributed protocols: each participant knows the rules and the rank governs the sequence of acts
• Centralised protocols: each participant knows the rules and the rank is used to identify the master

There is something in common in distributed and centralised controls, i.e. the fact that participants know
the rules and the rank is used to take decisions. Thus, to implement a sharing system we need: a procedure
(the rules) and a decision criterion (the rank). Packet Multiple Access can be done following different
strategies:
• Polling

• Token-based systems (distribute polling)


• Reservation and scheduling

• Random access
In the first three cases the resource is used by a single node at a time, so there are no conflicts in the access to the transmission medium. In the latter case conflicts can arise.

Polling
The master periodically manages the system allowing one station at a time to transmit. In this case no collisions
can arise, but we have a big delay and a waste of bandwidth.

Defining the RTT as the sum of the two propagation delays plus the time t_xc, it is possible to evaluate the coefficient of utilization u of the entire system, which is equal to

u = T / (T + RTT)
To optimize the coefficient, large values of T are necessary, but this has the drawback of increasing the access time for all the other slave nodes. Things are even worse when only one of the slaves has to transmit: in that case, after its transmission, the polling delay of all the others must elapse before it can send again. In general the access time considering n slave nodes is

T_access = (n − 1)(RTT + T)
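Plugging invented numbers into the two formulas above (10 ms of transmission per grant, 1 ms round trip, 8 slaves):

```python
def polling_utilization(T, rtt):
    return T / (T + rtt)                 # u = T / (T + RTT)

def polling_access_time(n, T, rtt):
    return (n - 1) * (rtt + T)           # worst-case wait with n slaves

T, rtt = 10e-3, 1e-3                     # 10 ms grant, 1 ms round trip
print(polling_utilization(T, rtt))       # ≈ 0.909
print(polling_access_time(8, T, rtt))    # ≈ 0.077 s
```

The trade-off is visible directly: a larger T pushes u towards 1 but inflates T_access linearly.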

Token system
With this strategy a token is passed among the stations in a cyclic way (this is why the scheme is also called a virtual ring). The node in possession of the token has the right to transmit. Every station receiving the token can hold it only for a limited time, then it has to release it, passing it to the next station. It can happen that the token, after being released by a station, is not recognised by the following one; in this case transmission stops for all the stations of the virtual ring, so a time-out is set and a mechanism to recover the token must be implemented.

Reservation and Scheduling


In this case the nodes send a reservation request to the master. The master then sends to each requesting node
a transmission grant, specifying when and how long it can use the common medium. This type of access scheme
is rarely employed in packet switching networks.

Random access
In case of random access the resource can be tentatively used by several nodes at a time, so there are conflicts
in the access to the transmission medium. There are many types of random access: Pure Random Access
(ALOHA), Slotted Random Access (S-ALOHA), Carrier Sense Multiple Access (CSMA) and Arbitrated Access
(CANbus). Let’s analyse them separately.

Random Access (ALOHA)


In this case collisions may arise in two different ways. The former is a collision in the middle: A sends a packet but B starts sending because it has not yet received A's signal. The latter arises when, even though a packet is being received, station B does not check and transmits in turn. In ALOHA each packet can collide with two other packets, placed before or after it (but overlapping), as for node i in Figure 12: a transmission occurs with probability p, each packet has two collision positions, and since the stations are not synchronised collisions can occur. If we consider the performance of ALOHA we have N nodes with

Figure 11: ALOHA

Figure 12: ALOHA collision due to overlapping

a probability p for each node of transmitting with a fixed packet length. For a single node we have that:

P(success, single) = P(TX) · P(no other TX in [t0 − 1, t0]) · P(no other TX in [t0, t0 + 1]) = p(1 − p)^(N−1) (1 − p)^(N−1)

Thus considering N independent nodes

P(success, all) = N p (1 − p)^(N−1) (1 − p)^(N−1)

Finally, for a very large number of nodes, i.e. for N → ∞, the maximum of P(success, all) tends to

1/(2e) ≈ 0.18
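This limit can be checked numerically (a quick sketch; the maximiser p = 1/(2N − 1) follows from setting the derivative of p(1 − p)^(2(N−1)) to zero):

```python
import math

def aloha_success(N, p):
    # one node transmits, no other node transmits in the two vulnerable
    # intervals before and during the packet: N * p * (1-p)^(2(N-1))
    return N * p * (1 - p) ** (2 * (N - 1))

N = 10_000
p_opt = 1 / (2 * N - 1)            # maximiser for finite N
peak = aloha_success(N, p_opt)
print(peak, 1 / (2 * math.e))      # both ≈ 0.184
```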

Slotted Random Access (S-ALOHA)


Whenever a node has a packet ready to be sent, the packet is transmitted at the beginning of the next slot, without caring about other simultaneous transmissions from other nodes in the same slot. Of course collisions may occur and colliding packets are destroyed. In this case the success probability increases up to 36%. Time is divided into slots (i.e. time periods of identical length) and packet transmission can start only at the beginning of a slot (Figure 13). It is evident that a collision can occur only when other nodes are trying to occupy

Figure 13: S-ALOHA


the same slot; no collision due to partial overlapping can occur. In Figure 13 collisions are marked with (C), successful transmissions with (S) and empty slots with (E). If we consider the performance of S-ALOHA with a network of N nodes, each node transmitting with probability p (fixed packet length), we have, for a single node:

P(success, single) = P(TX) · P(no other TX in the same slot) = p(1 − p)^(N−1)

Thus considering N independent nodes

P(success, all) = N p (1 − p)^(N−1)

If N is very large (i.e. N → ∞), the maximum of P(success, all), obtained for p = 1/N, tends to

e^(−1) ≈ 0.36

The plot in Figure 14 holds when N is very large. The part of the plot we are most interested in is the linear region, in which the ratio S/G is highest.

Figure 14: Slotted ALOHA versus Pure ALOHA
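The slotted-ALOHA limit can be checked with the same kind of numeric sketch as for pure ALOHA:

```python
import math

def s_aloha_success(N, p):
    # one node transmits and no other node picks the same slot
    return N * p * (1 - p) ** (N - 1)

N = 10_000
peak = s_aloha_success(N, 1 / N)   # p = 1/N maximises the expression
print(peak, 1 / math.e)            # both ≈ 0.368
```

Halving the vulnerable period (one slot instead of two packet times) is exactly what doubles the throughput from 1/(2e) to 1/e.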

Exponential back-off To avoid congestion collapse, the nodes that have collided should refrain from transmitting the same packet again too soon. They extract a random delay after which they can reattempt, e.g. a uniformly distributed number of slots in [1, K] in S-ALOHA. The average delay (i.e. the value of K) should be doubled at every unsuccessful transmission attempt of the same packet (exponential back-off), the collision probability being P = 1/K.
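A sketch of the doubling rule (the initial window of 2 slots and the cap at 1024 are invented values, not from the notes):

```python
import random

def backoff_delay(attempt, k0=2, k_max=1024, rng=random):
    # the window doubles at every failed attempt of the same packet,
    # up to a cap; the retry delay is uniform in [1, K] slots
    K = min(k0 * 2 ** (attempt - 1), k_max)
    return rng.randint(1, K)

rng = random.Random(0)             # seeded only for reproducibility
print([backoff_delay(a, rng=rng) for a in range(1, 6)])
# five delays drawn from windows of size 2, 4, 8, 16, 32
```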

Carrier Sense Multiple Access


Whenever a node has a packet ready to be sent, the packet is transmitted when the channel is sensed idle. Collisions can still occur and colliding packets are destroyed. The success probability depends both on distance and on packet length. After having sensed the channel, if it is free the node transmits the packet immediately, otherwise the packet transmission is delayed. There are three delay methods in CSMA:
• Persistent CSMA (1-persistent): retry immediately when the channel is free again (better in lightly loaded networks)
• Non-persistent CSMA (0-persistent): retry after a random delay (better in highly loaded networks)
• P-persistent CSMA: retry immediately with probability p, retry after a random delay with probability (1 − p)
Collisions in CSMA occur due to propagation delay, and during a collision the packets are lost. The distance plays a fundamental role, being directly proportional to the collision probability. Packet length is also of paramount importance: the longer the packet, the lower the number of contentions (i.e. the lower the number of collisions). The throughput of CSMA is inversely proportional to the ratio

t_p / T_TX

where t_p is the propagation delay and T_TX is the packet transmission time.
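Numerically, with assumed typical values (10 Mb/s rate, 12 000-bit packets, signal propagation at 2·10^8 m/s), the ratio stays tiny on a LAN and grows unusable over long links:

```python
def csma_a(distance_m, prop_speed=2e8, packet_bits=12_000, rate_bps=10e6):
    t_p = distance_m / prop_speed    # propagation delay
    t_tx = packet_bits / rate_bps    # packet transmission time
    return t_p / t_tx                # small ratio -> CSMA works well

print(csma_a(200))                   # short LAN: ≈ 0.00083
print(csma_a(200_000))               # 200 km link: ≈ 0.83, many collisions
```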

CSMA/CD (Collision Detection)


It enhances CSMA because it detects collisions and stops the packet transmission when a collision is detected,
thus saving time and reducing the time lost in collisions. Collision detection is easy in a wired LAN, by measuring
the DC component of the data signal or by listening to one's own transmitted packets, while it is more difficult
in a wireless LAN (hidden terminal problem). After a short period (jamming period) the packet transmission is
aborted and collided packets are rescheduled. The collision duration is shortened with respect to pure CSMA.

Arbitration It is another way to handle collisions. Using arbitration one and only one of the colliding nodes
is allowed to transmit its packet. Arbitration uses some feature at physical layer to discriminate between binary
signals (dominant and recessive), but it requires full-duplex capabilities (receiving while transmitting).

Figure 15: CSMA/CD Collision detection

3 - Local Area Network


In the 80’s IEEE standards were published for physical layer, media access, logical link and Inter-networking
(i.e. a large LAN comprising many small LAN networks). The 802.11 IEEE standard is the Wi-Fi standard.
As previously said, the MAC is a sub-part of the Data Link layer, with the Logical Link Control as the other
sub-part. Every layer has an address; the MAC address identifies the Network Interface Card (NIC) and is fixed
and unique for every machine. All cards must be different, otherwise the LAN would not work, since it would be
impossible to distinguish their packets. MAC addresses can be:
• Unicast: send only to an identified address (NIC)
• Broadcast: send to everyone (never used in Internet)
• Multicast: send to a group of addresses, e.g. video conferences. It is a more complex system, which must
be dynamic since, if we consider N users, the number of possible groups is 2N (when N is large it would be
very difficult to statically assign 2N multicast addresses, so a dynamic definition is needed)
We have two multicast modes: solicitation, i.e. request for service to a multicast group, and advertisement, i.e.
periodical broadcast of the information concerning a multicast group.

Packet reception
When receiving a packet, the NIC checks its correctness (CRC check); if the packet is correct, it is accepted
only when its destination MAC address matches the one registered in the NIC. This is the normal mode. Another
way of receiving is promiscuous mode, which consists of accepting all incoming packets regardless of their MAC
destination addresses. This mode was intended for observing packets in a LAN for testing purposes, but it can
also be used for malicious activities.
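The acceptance rule described above can be sketched as follows (function and parameter names are illustrative):

```python
def accept_frame(frame_dst, nic_addr, promiscuous=False, crc_ok=True):
    """Decide whether a NIC delivers a received frame upward.
    The broadcast address ff:ff:ff:ff:ff:ff is always accepted."""
    if not crc_ok:
        return False          # corrupted frames are always dropped
    if promiscuous:
        return True           # testing/sniffing mode: accept everything
    return frame_dst in (nic_addr, 'ff:ff:ff:ff:ff:ff')
```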

Ethernet
It is a protocol invented in the 70s: a 1-persistent CSMA/CD with bus topology. Collisions can occur due to
the distance among simultaneously transmitting nodes and to the protocol persistence parameter. When a collision
is detected, all transmitting nodes start sending the jamming sequence and then become silent. All colliding
nodes retry to transmit after a random time (statistical contention resolution). A correctly received packet is
not acknowledged. Packets are called "frames". At the beginning there is a preamble that allows the receiver to
synchronize its sampling frequency to that of the transmitter. The LAN is built on a bus (actually several buses
are present, thanks to the presence of a switch, an active device). Hubs (layer 1 devices) receive electrical
signals from their inputs, mix the incoming signals, amplify them and send them out. Switches (layer 2 devices)
receive Ethernet packets from their inputs, store the incoming packets, forward them to the appropriate output
port and send packets on each output port according to a First Come First Serve (FCFS) policy. In this case
collisions do not occur and the available bandwidth is shared between switch and terminal. Switches are useful
to avoid collisions in a multi-segment network: collisions may arise only inside a branch of the network, i.e. on
the common bus before the switch. For this reason switches work better than hubs, but of course their cost is
higher.

Ethernet now
100 Mb/s Ethernet is the standard for LANs at home and is being replaced by 1 Gb/s Ethernet in offices. Many
network services rely on its speed (e.g. Network Attached Storage, multimedia stream players and Internet TV
sets).

4 - On-vehicle data networks


ECUs need to cooperate to optimise vehicle control. On the one hand, sensors should send their measurements to
the ECUs; on the other hand, controllers and actuators are far from each other. Diagnostics is easier when
diagnostic information is available at a single point (the EOBD connector). Vehicular networks can be divided into:

Figure 16: Example of vehicular networks

• CAN-Controller Area Network used for vehicle control and ECU-to-ECU interconnection

• LIN-Local Interconnect Network with a physical span of about one meter is used for actuator control and
ECU-to-actuator interconnection
• MOST-Media Oriented Systems Transport used for entertainment, personal communication and navigation
(GPS, Display, Speaker interconnection)
A vehicular network should be reliable, inexpensive, durable (for the whole vehicle life) and versatile. Real-time
support is needed when the required bandwidth is of the same order of magnitude as the available network
bandwidth. In safety-related applications reliability is fundamental: the network should be able to survive
failures of a single wire or failures of one or more ECUs (when a node fails, the rest of the network still has
to be able to work properly). In non-safety-related applications the network can fail, but each ECU should be
able to operate in stand-alone mode (the requirements are less stringent). In this case there is no more global
optimization/control, the performance of the vehicle is limited and some comfort or entertainment services may
be unavailable. To avoid a network crash in case of a single node/link failure the bus must be passive (NO active
devices such as switches) and nodes should automatically disconnect in case of failure or power down (without
interrupting the bus). The durability should be at least ten years. Thermal and mechanical stress and voltage
variations can significantly reduce the life of electronic components, so the design should consider all these
issues. Very Large Scale Integration (VLSI) chips increase reliability and durability (instead of using many
small chips). Moreover, surface soldering increases vibration endurance, and specially designed automotive
electronics are used. The network should be versatile (i.e. general purpose) for any type of data, allowing
add-ons and simple operation. Some applications require a limited delivery delay of data, e.g. from a sensor to
an ECU, from an ECU to an actuator or from ECU to ECU. To obtain real-time communication, real-time and
priority-based protocols are employed and the network is underused. The cost of any component is very important,
the total cost of a vehicle being the sum of the cost of its parts (and there are thousands of them in a car).
Electric parts can only seldom reduce the cost of a vehicle, but often they are the only possible way to provide
some services, e.g. engine control, brake control, comfort and active safety. The use of microcontrollers is
very common. The number of chips is reduced to lower the failure probability, integrating many functions and
interfaces in a single, small-size chip. They have an integrated CAN interface. Reliability

depends on the number of chips on an electronic board, not on their complexity. VLSI includes a lot of functions
in a single chip, reducing the chip count and increasing the system reliability. Chips are tested after production:
if a failure must occur, it occurs then; otherwise, if the test is successful, the chip will show a stable
behaviour.

5 - CAN bus
It was originally presented by Bosch about 25 years ago. The maximum data rate is 1 Mb/s. The low bit rate is
not an issue, since only small messages must be transmitted. The bus topology is passive, with copper cables.
There are two versions: CAN 1.2 and CAN 2.0, the latter backward compatible with CAN 1.2. However, backward
compatibility is not a major issue, since when a closed network is designed the same CAN standard is chosen for
all nodes. Backward compatibility is only necessary because different ECUs, designed by different producers,
might not be CAN 2.0 compatible. The CAN stack is composed of three layers:
• Logical Link Control (LLC)
• Medium access control (MAC)
• Physical
The Data Link layer is divided in two sub-layers: LLC provides a common interface to the upper layers while
MAC implements the access protocol to the transmission medium. These sub-layers are required whenever
there is a shared transmission medium. The three layers of the CAN network architecture provide a set of
functions. Some of them are peculiar to this architecture such as acceptance filtering, overload notification,
stuffing/destuffing, error signalling and MAC (arbitration).

Figure 17: CAN architecture

Physical layer
It manages the transfer of bits on the shared medium (passive copper bus):
• Bit encoding: it defines how to transmit 0s and 1s on the channel, i.e. the waveform associated with each of
them. It adapts the bits to the physical characteristics of the channel
• Timing: defines the sequence of events related to the transmission of every single bit
• Synchronisation (between transmitter and receiver): specifies the maximum timing error required for a
correct reception
The bits are transmitted using a Non Return to Zero (NRZ) coding scheme: when a 1 must be transmitted, a high
voltage value is put on the channel; vice versa, when a 0 must be transmitted, a 0 V value is sent (in practice,
a kind of square wave).
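A minimal sketch of this encoding (the voltage levels are illustrative):

```python
def nrz_encode(bits, v_high=5.0, v_low=0.0):
    """Map each bit to a constant voltage level for one bit period:
    no return to zero between consecutive identical bits, hence
    long runs produce a flat signal with no transitions."""
    return [v_high if b else v_low for b in bits]
```

The flat signal produced by long runs of identical bits is exactly what makes the bit stuffing described in the next section necessary.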

Synchronization
CAN receivers should recover the bit period, which is set by the transmitter. Crystal resonators have limited
(but not negligible) timing errors, so the estimated timing is checked against the transitions of the received
signal. Synchronization happens during High-to-Low and Low-to-High transitions: if the transmitted value stays
stable for a long time, synchronization becomes very poor. Timing should be corrected within a maximum number of
oscillations (of the crystal resonator) to avoid errors in the reception of bits. Thus, a transition is required
at least every n bits, but physical entities cannot control the bits they are asked to send by MAC entities. The
main issue is that the transmitter changes, and the receiver must synchronise with the transmitter clock and keep
itself synchronised during the whole transmission period. Forcing a transition after a fixed number of bits is
called bit stuffing: a stuff bit is inserted after a sequence of 5 consecutive identical bits, and it is dropped
at the receiver because it does not carry information. The drawback is some wasted bandwidth due to the added
bits. As long as no errors occur, the scheme works very well.
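A sketch of stuffing and destuffing over a list of bits (not a full CAN implementation; real controllers also exclude some fixed-form fields, such as the CRC delimiter and EOF, from stuffing):

```python
def stuff(bits):
    """Insert a complementary stuff bit after every run of 5
    consecutive identical bits, to guarantee signal transitions."""
    out, prev, run = [], None, 0
    for b in bits:
        out.append(b)
        if b == prev:
            run += 1
        else:
            prev, run = b, 1
        if run == 5:
            out.append(1 - b)          # stuff bit: complement of the run
            prev, run = 1 - b, 1       # it counts toward subsequent runs
    return out

def destuff(bits):
    """Drop the stuff bit that follows every run of 5 identical bits."""
    out, prev, run, i = [], None, 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        if b == prev:
            run += 1
        else:
            prev, run = b, 1
        if run == 5:
            i += 1                     # the next received bit is a stuff bit
            if i < len(bits):
                prev, run = bits[i], 1
        i += 1
    return out
```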

Timing
For appropriate timing, the maximum propagation delay must be smaller than the bit period: Td < 1 µs. In 1 µs
the signal covers about 200 meters, so it is possible to synchronize bit by bit. The synchronization segment takes

Figure 18: Timing phases


into account the timing error between any transmitter-receiver pair. At the end of this phase the incoming signal
is stable if there is a single node transmitting on the bus. Whenever two or more nodes are simultaneously
transmitting on the bus, a collision occurs. At the end of the propagation delay phase the incoming signal is
stable in any case. During the phase buffer, the actual value of the bit is evaluated in the middle of this phase.

Collisions
Being a random access network, when a station has something to transmit, it transmits; hence a collision may
arise if several stations start transmitting at the same time. However, even if collisions cannot be avoided,
they can be recognised. Collisions can be used to manage the access to the shared medium, either by trying again
after a random delay (e.g. Ethernet or Wi-Fi), or by arbitration. Since the propagation delay is very short,
the collision is recognised instantaneously, at the first colliding bit. Through arbitration, a collision may be
resolved either by a centralised arbiter (i.e. a special node) that decides which node is the winner, or by a
distributed arbitration algorithm that identifies the winner using a dominance criterion. CAN uses the latter
approach, and the criterion is bit dominance: when a dominant and a recessive bit are simultaneously transmitted,
the received bit is always the dominant one. In this way one signal is still transmitted correctly, i.e. the one
with the most dominant bits; this is possible thanks to the bit-by-bit synchronisation. The actual value of the
dominant bit depends on the transmitter driver and on the medium. With electric signals, a low voltage level
dominates over a high voltage level when open-collector drivers are used. Optical signals instead exploit the
presence of photons dominating over their absence, with LASER or LED transmitters.
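The bit-dominance mechanism can be sketched as a wired-AND bus where 0 is dominant, as in CAN (names and data structures are illustrative; identifiers are bit lists of equal length):

```python
def bus_level(transmitted_bits, dominant=0):
    """Wired-AND bus: the dominant level wins whenever at least one
    node drives it (in CAN, 0 is dominant and 1 recessive)."""
    return dominant if dominant in transmitted_bits else 1 - dominant

def arbitrate(identifiers):
    """Bitwise arbitration: a node drops out as soon as it sends a
    recessive bit but reads the dominant level back from the bus.
    Returns the index of the winner (the lowest identifier)."""
    contenders = list(range(len(identifiers)))
    for pos in range(len(identifiers[0])):
        level = bus_level([identifiers[i][pos] for i in contenders])
        contenders = [i for i in contenders if identifiers[i][pos] == level]
    return contenders[0]
```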

Medium Access Control (MAC)


It exploits the services of the physical layer, i.e. transmission and reception of bits. It recognizes the Start
Of Frame (SOF), i.e. the beginning of a PDU, by observing the bits coming from the physical layer. The MAC layer
PDUs are referred to as frames:
• DATA: carries information from source to destination
• REMOTE: requests the transmission of a data frame from the destination

• ERROR: informs all nodes that the source identified an error on the bus
• OVERLOAD: reserves the bus for some time; thereafter the source will send a frame

DATA frame structure
The Start Of Frame (SOF) identifies the beginning of a frame. It is a single dominant bit. Non-transmitting
nodes recognise that at least one node is transmitting, and the bus status becomes busy for all nodes. Only nodes
that have sent a SOF in the previous bit period can go on with their transmissions, while the other nodes should
listen to the bus (to receive the transmitted frame). In the Arbitration Field, whenever two or more nodes start

Figure 19: DATA frame structure


transmitting at the same time, a collision occurs and an arbitration must then take place. The bits sent in the
arbitration period are the frame identifier; each frame type and sender must have a different identifier, and the
winner is the frame with dominant bits at the beginning of the identifier. The arbitration field has a standard
and an extended format, indicated through a bit called IDE, which is dominant in the standard format and recessive
in the extended one. Typically, the standard format is employed for frequent messages, while the extended format
is usually used for less frequent operations such as diagnostics. At least one of the

(a) Standard format (b) Extended format

Figure 20: Arbitration field


first 7 bits of the identifier should be dominant. The first 11 bits of the identifier (in both the standard and
the extended format) define the base priority. The following 18 bits (in the extended format) define the extended
priority. The Data field has a maximum length of 8 bytes.

ERROR and OVERLOAD frame structure


The ERROR frame is used to notify all nodes that an error has been detected. The error is signalled with six
consecutive dominant bits, thus destroying the information on the bus. In particular, a node detecting a bus error

(a) ERROR frame (b) OVERLOAD frame

sends an Active Error Flag (AEF, i.e. the 6 consecutive dominant bits). This destroys the current frame. The AEF
is detected as a framing error by the other nodes, which in turn send an ERROR frame. After sending the flag,
a node sends up to 6 recessive bits; when it receives a recessive bit, indicating that the bus error has been
detected by all nodes, it sends 8 more recessive bits (Error Delimiter). The errors that can occur on the bus
are: bit errors (e.g. a recessive bit is sent but a dominant one is received); stuff errors, if after 5 identical
bits the sixth one is not complemented; form errors, when a fixed-value bit in the frame has a wrong value; ACK
errors, if there is no ACK delimiter in the ACK field; and CRC errors, when the CRC computed by the receiver is
different from the CRC in the frame (computed by the sender). The OVERLOAD frame sends an overload flag of
6 dominant bits (like the error flag). Nodes behave in the same way as before. The bus is busy until the end of
the Overload frame.
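The CRC check mentioned above can be sketched with the bit-serial CRC-15 algorithm used by CAN (generator polynomial x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1): the receiver recomputes the CRC over the received bits and compares it with the transmitted one.

```python
def crc15(bits, poly=0x4599):
    """CAN CRC-15 computed bit by bit. poly holds the generator
    polynomial without its leading x^15 term."""
    reg = 0
    for b in bits:
        feedback = b ^ ((reg >> 14) & 1)   # incoming bit XOR register MSB
        reg = (reg << 1) & 0x7FFF          # shift, keep 15 bits
        if feedback:
            reg ^= poly
    return reg
```

A useful property for testing: feeding a message followed by its own CRC bits leaves a zero remainder, which is exactly how the receiver detects a CRC error (non-zero remainder) on a corrupted frame.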

6 - LIN-Local Interconnect Network
While CAN is used to coordinate different ECUs, LIN is typically employed to coordinate the activities of an
ECU with sensors and actuators. The physical span of LIN network is typically even smaller than the one of
CAN. The features of LIN network are:
• single master with multiple slaves
• self synchronisation without a quartz or ceramics resonator in the slave nodes
• deterministic signal transmission with signal propagation time computable in advance
• low cost single-wire implementation
• speed up to 20 kbit/s (even smaller than the CAN bit rate)
• reconfigurable
• transport layer and diagnostic support
The physical topology of a LIN network is still a bus. As said before, the LIN network exploits a polling
scheme. The network is a cluster with one master node and several slave nodes. The master node contains the
master task as well as a slave task, while all other slave nodes contain a slave task only. When the master node
has to send data, it sends a header to its own slave task which, in turn, sends the information to the actuators.

Frames A frame consists of a header (provided by the master task) and a response (provided by a slave
task). The header contains a frame identifier, which uniquely defines the purpose of the frame. The slave task
appointed for providing the response associated with the frame identifier transmits it. The slave tasks interested
in these data read the response, verify the checksum and use the data.

(a) Master-slave (b) Frame structure

The LIN network shows several benefits, such as:
• System flexibility: nodes can be added to the LIN cluster without requiring hardware or software changes
in other slave nodes (only the master node must be updated)
• Message routing: the content of a message is defined by the frame identifier
• Multicast/Broadcast: any number of nodes can simultaneously receive and act upon a single frame
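The checksum that slave tasks verify can be sketched as follows, assuming the classic LIN checksum (an inverted 8-bit sum with carry over the data bytes; enhanced-checksum frames also include the protected identifier, which is omitted here):

```python
def lin_checksum(data):
    """Classic LIN checksum: 8-bit sum of the data bytes with the
    carry folded back in, then inverted."""
    s = 0
    for byte in data:
        s += byte
        s = (s & 0xFF) + (s >> 8)   # carry wrap-around
    return (~s) & 0xFF
```

On reception, adding the checksum to the carry-folded sum of the data bytes must yield 0xFF, otherwise the frame is discarded.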

Figure 21: LIN design workflow

7 - FlexRay
It is an automotive-oriented protocol that enhances reliability by using a dual-channel bus (two passive buses).
It was developed by the FlexRay Consortium in 2006 to govern on-board automotive computing, and it is designed to
be faster and more reliable than CAN and TTP, but it is also more expensive. It uses unshielded twisted-pair
cabling to connect nodes together. Dual-channel configurations offer enhanced fault tolerance and/or increased
bandwidth.

Figure 1: Dual channel bus

With respect to a CAN network, it offers a bit rate of 10 Mbit/s, a predictable latency and fault tolerance. Since it is a deterministic-access
network, no collisions arise. One of the things that distinguishes FlexRay, CAN and LIN from more traditional
networks such as Ethernet is their topology, or network layout. FlexRay supports simple multi-drop passive con-
nections as well as active star connections for more complex networks. Depending on a vehicle's layout and level
of FlexRay usage, selecting the right topology helps designers optimize cost, performance, and reliability for a
given design.

Multi-drop Bus FlexRay is commonly used in a simple multi-drop bus topology that features a single
network cable run that connects multiple ECUs together. This is the same topology used by CAN and LIN and
is familiar to OEMs, making it a popular topology in first-generation FlexRay vehicles. Each ECU can ”branch”
up to a small distance from the core ”trunk” of the bus. The ends of the network have termination resistors
installed that eliminate problems with signal reflections. Because FlexRay operates at high frequencies, up to
10 Mbit/s compared to CAN's 1 Mbit/s, FlexRay designers must take care to correctly terminate and lay out
networks to avoid signal-integrity problems. The multi-drop format also fits nicely with vehicle harnesses that
commonly share a similar type of layout, simplifying installation and reducing wiring throughout the vehicle.

Star Topology The FlexRay standard supports "star" configurations, which consist of individual links connected
to a central active node. This node is functionally similar to a hub found in PC Ethernet networks. The active
star configuration makes it possible to run FlexRay networks over longer distances or to segment the network in
a way that makes it more reliable should a portion of the network fail. If one of the branches of the star is
cut or shorted, the other legs continue functioning. Since long runs of wire tend to pick up more environmental
noise, such as electromagnetic emissions from large electric motors, using multiple legs reduces the amount of
exposed wire for a segment and can help increase noise immunity.

Hybrid Network The bus and star topologies can be combined to form a hybrid topology. Future FlexRay
networks will likely consist of hybrid networks to take advantage of the ease-of-use and cost advantages of the
bus topology while applying the performance and reliability of star networks where needed in a vehicle.

(a) Multi-Drop (b) Star (c) Hybrid

Figure 2: Networks layouts

Protocol Description
The FlexRay protocol is a time-triggered protocol that provides options for deterministic data arriving in a
predictable time frame (down to the microsecond). It accomplishes this hybrid of static and dynamic frames with a
pre-set communication cycle that provides a pre-defined space for static and dynamic data. While CAN nodes only
need to know the correct baud rate to communicate, nodes on a FlexRay network must know how all the pieces of the
network are configured in order to communicate. FlexRay manages multiple nodes with a Time Division Multiple
Access (TDMA) scheme. Every FlexRay node is synchronized to the same clock, and each node waits for its turn to
write on the bus. Because the timing is consistent in a TDMA scheme, FlexRay is able to guarantee the determinism
of data delivery to the nodes on the network. A periodic Communication Cycle defines the time reference for the
whole network; its duration is usually around 1 to 5 ms. The MAC layer
has two access modes:
has two access modes:

• Static TDMA: each station has a reserved slot, which stays reserved even if the station has nothing to
transmit.
• Dynamic mini-slotting access scheme: it allows the stations to change their sending rate. While the static
segment is fixed, this segment, on the contrary, has no a priori reservation.
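A sketch of one pass over the static segment, where each slot is pre-assigned to an ECU and an empty queue yields a null frame (data structures and names are illustrative):

```python
def static_segment(assignments, queues):
    """One pass over the static segment: slot i belongs to one ECU,
    which transmits its pending frame or a null frame if it has
    nothing queued (the slot is reserved either way)."""
    sent = []
    for slot, ecu in enumerate(assignments):
        frame = queues.get(ecu) or 'NULL'   # TDMA: the slot is never reassigned
        sent.append((slot, ecu, frame))
    return sent
```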

Figure 3: Communication Cycle

FlexRay controllers actively synchronize themselves and adjust their local clocks, so that the macrotick occurs
at the same point in time on every node across the network. While configurable for a particular network,
macroticks are often 1 microsecond long. Because the macrotick is synchronized, data that rely on it are also
synchronized. Macroticks are essential for the correct functioning of the network and act directly on the cycle
layer.

Figure 4: Macrotick use

Static segment
The static segment is the space in the cycle dedicated to scheduling a number of time-triggered frames. The
segment is broken up into slots, each slot containing a reserved frame of data. When each slot occurs in time,
the reserved ECU has the opportunity to transmit its data in that slot. Once that time passes, the ECU must
wait until the next cycle to transmit its data in that slot. Because the exact point in time in the cycle is
known, the data are deterministic and programs know exactly how old the data are. If the ECU has nothing to
transmit, a null frame is sent. The number of slots is not fixed in advance, but at least one slot for each
station is reserved. The slot quantity is changed using particular algorithms that consider traffic shifts and
relocate slots.

Dynamic segment
To accommodate a wide variety of data without slowing down the FlexRay cycle with an excessive number of
static slots, the dynamic segment allows occasionally transmitted data. The segment has a fixed length, so there
is a limit on the amount of data that can be placed in the dynamic segment per cycle. To prioritize the
data, mini-slots are pre-assigned to each frame of data that is eligible for transmission in the dynamic segment.
A mini-slot is typically one macrotick (a microsecond) long. Higher-priority data receive a mini-slot closer to
the beginning of the dynamic segment, and in this way they are the first to be transmitted. Each frame has its
own ID, and the frame with ID = m can transmit when the m-th mini-slot begins. If all stations are perfectly
synchronised, only the owner of m can transmit; when slot m is allocated, it adapts its length. After that,
m + 1, m + 2, ... can use the bus.
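The mini-slot mechanism can be sketched as follows (a simplified model: the frame-ID counter advances by one per idle mini-slot, while a transmission consumes several mini-slots from the fixed segment budget, so low-priority frames can be squeezed out of the cycle):

```python
def dynamic_segment(pending, total_minislots):
    """Mini-slot arbitration sketch. `pending` maps a frame ID to its
    length in mini-slots; the segment budget is fixed per cycle."""
    sent, m, used = [], 1, 0
    while used < total_minislots:
        if m in pending and used + pending[m] <= total_minislots:
            used += pending[m]   # the slot expands over several mini-slots
            sent.append(m)
        else:
            used += 1            # idle mini-slot: just let time pass
        m += 1                   # counter advances to the next frame ID
    return sent
```

With a 6-mini-slot budget, a long frame on ID 2 can leave no room for ID 4, illustrating the priority given to lower IDs.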

Figure 5: Static segment

Figure 6: Dynamic segment

Symbol window
Within the symbol window a single symbol (alarm symbol) may be sent and it is used for special cycles, such as
cold-start cycles. Arbitration among different senders is not provided by the protocol for the symbol window.
If arbitration among multiple senders is required for the symbol window, it has to be performed by means of a
higher-level protocol. Symbols are patterns of bits (NOT messages) used to implement basic control functions:
• Pattern 1: Collision Avoidance Symbol (CAS) and Media Access Test Symbol (MTS)
• Pattern 2: Wakeup Symbol (WUS)

Network time
Synchronization is very important, and it is possible because the network is small. The timing error must be
smaller than half of the mini-slot time to ensure proper operation. Medium access is based on counters (slot
counter, mini-slot counter), which are incremented at the beginning of the corresponding slot or mini-slot. The
FlexRay protocol uses a distributed clock synchronization mechanism in which each node individually synchronizes
itself to the cluster by observing the timing of the sync frames transmitted by other nodes. A fault-tolerant
algorithm is used. The network also includes an idle time whose length is known to the ECUs; the ECUs make use
of this idle time to make adjustments for any drift that may have occurred during the previous cycle.

Frame format
Each slot of a static or dynamic segment contains a FlexRay frame. The frame is divided into three segments:
Header, Payload, and Trailer.

Header It is made of 5 bytes. In particular, the frame ID has a length of 11 bits and is unique to each node;
information about the packet address is contained in this field too. The payload length is expressed on 7 bits
and counts only even numbers of bytes for the payload segment, since it can occupy up to 254 bytes. The Header
CRC is used to detect errors during the transfer. The Cycle Count contains the value of a counter that advances
incrementally each time a Communication Cycle starts.

Figure 7: Frame format

Payload The payload contains the actual data transferred by the frame. The length of the FlexRay payload
or data frame is up to 127 words (254 bytes), which is over 30 times greater compared to CAN.

Trailer The trailer contains a 24-bit CRC (cyclic redundancy check), transmitted as three bytes, to detect errors.
