
NETWORK DELAYS

Introduction
Network delay is an important design and performance characteristic of a computer network or telecommunications network. The delay of a network specifies how long it takes for a bit of data to travel across the network from one node or endpoint to another. It is typically measured in multiples or fractions of seconds. Delay may differ slightly, depending on the location of the specific pair of communicating nodes. Although users only care about the total delay of a network, engineers need to perform precise measurements. Thus, engineers usually report both the maximum and average delay, and they divide the delay into several parts:

Processing delay - time routers take to process the packet header
Queuing delay - time the packet spends in routing queues
Transmission delay - time it takes to push the packet's bits onto the link
Propagation delay - time for a signal to reach its destination

There is a certain minimum level of delay, determined by the time it takes to transmit a packet serially through a link. To this is added a more variable level of delay due to network congestion. IP network delays can range from just a few milliseconds to several hundred milliseconds.
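As a rough illustration of how these components combine, the end-to-end delay over a path is the sum of the four components at every hop. The sketch below uses made-up per-hop values, not measurements:

```python
# Sum of per-hop delays: processing + queuing + transmission + propagation.
# All values are hypothetical examples, in seconds.
hops = [
    # (processing, queuing, transmission, propagation)
    (2e-6, 1.0e-3, 0.12e-3, 5e-3),
    (2e-6, 4.0e-3, 0.12e-3, 10e-3),
    (2e-6, 0.5e-3, 0.12e-3, 2e-3),
]

total = sum(proc + queue + trans + prop for proc, queue, trans, prop in hops)
print(f"end-to-end delay: {total * 1e3:.2f} ms")  # ~22.87 ms
```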

Processing delay

In a network based on packet switching, processing delay is the time it takes routers to process the packet header, and it is a key component of network delay. While processing a packet, a router may check for bit-level errors that occurred during transmission and determine the packet's next destination. Processing delays in high-speed routers are typically on the order of microseconds or less. After this nodal processing, the router directs the packet to the queue, where further delay can occur (queuing delay).

In the past, the processing delay was often ignored as insignificant compared to the other forms of network delay. However, in some systems the processing delay can be quite large, especially where routers perform complex encryption algorithms and examine or modify packet content. Deep packet inspection, done by some networks, examines packet content for security, legal, or other reasons; it can cause very large delays and is therefore only done at selected inspection points. Routers performing network address translation also have higher-than-normal processing delay, because they need to examine and modify both incoming and outgoing packets.

Queuing delay

In telecommunication and computer engineering, the queuing delay (or queueing delay) is the time a job waits in a queue until it can be executed. It is a key component of network delay.

This term is most often used in reference to routers. When packets arrive at a router, they have to be processed and transmitted. A router can only process one packet at a time. If packets arrive faster than the router can process them (such as in a burst transmission), the router puts them into the queue (also called the buffer) until it can get around to transmitting them. The maximum queuing delay is proportional to the buffer size: the longer the line of packets waiting to be transmitted, the longer the average waiting time. However, this is much preferable to a shorter buffer, which would result in ignored ("dropped") packets and, in turn, much longer overall transmission times. When a packet is dropped during network congestion, its queuing delay can be considered infinite. The retransmission of such packets causes significant overall delay because all forms of delay are incurred more than once. If the network congestion continues, the packet may be dropped many times. Many protocols, such as TCP, will "throttle back" their sending and wait for the network to clear up.
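A minimal sketch of this drop-tail buffering behavior, as a toy model rather than any particular router's implementation: packets wait in FIFO order, and arrivals to a full buffer are dropped.

```python
from collections import deque

class DropTailQueue:
    """Toy router output buffer: FIFO service, arrivals to a full buffer dropped."""

    def __init__(self, capacity):
        self.buffer = deque()
        self.capacity = capacity
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.buffer) >= self.capacity:
            self.dropped += 1          # buffer full: drop the arriving packet
            return False
        self.buffer.append(packet)     # otherwise wait in line (queuing delay)
        return True

    def dequeue(self):
        return self.buffer.popleft() if self.buffer else None

q = DropTailQueue(capacity=3)
for pkt in range(5):                   # a 5-packet burst arrives at once
    q.enqueue(pkt)
print(f"queued: {len(q.buffer)}, dropped: {q.dropped}")  # queued: 3, dropped: 2
```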

In wireless communication, queuing is also an important tool for handoff: it decreases the probability that a call is forcibly terminated because no voice channel is available at a base station. The basic idea is that there is a time gap between the signal level at which handoff is triggered and the minimum signal strength needed to maintain the call; a queued call can be handed over to a free voice channel during this interval. In Kendall's notation, the M/M/1/K queuing model, where K is the size of the buffer, may be used to analyze the queuing delay in a specific system.
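As a worked illustration of that model (the arrival rate, service rate and capacity below are assumed values), the standard M/M/1/K steady-state probabilities yield the blocking probability and, via Little's law, the mean queuing delay:

```python
def mm1k_queuing_delay(lam, mu, K):
    """Mean queuing delay in an M/M/1/K queue (requires lam != mu).
    lam: arrival rate, mu: service rate, K: system capacity."""
    rho = lam / mu
    # Steady-state probabilities: p_n = (1 - rho) * rho**n / (1 - rho**(K + 1))
    norm = (1 - rho ** (K + 1)) / (1 - rho)
    p = [rho ** n / norm for n in range(K + 1)]
    L = sum(n * p_n for n, p_n in enumerate(p))  # mean number in system
    lam_eff = lam * (1 - p[K])                   # arrivals that are not blocked
    W = L / lam_eff                              # Little's law: mean time in system
    return W - 1 / mu                            # subtract the mean service time

# Example: 800 packets/s offered to a 1000 packets/s link with room for 10 packets
print(f"mean queuing delay: {mm1k_queuing_delay(800, 1000, 10) * 1e3:.2f} ms")
```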

Transmission delay

In a network based on packet switching, transmission delay (or store-and-forward delay) is the amount of time required to push all of the packet's bits into the wire. In other words, this is the delay caused by the data rate of the link. Transmission delay is a function of the packet's length and has nothing to do with the distance between the two nodes. The delay is proportional to the packet's length in bits and is given by the following formula:

DT = N / R

where DT is the transmission delay, N is the number of bits, and R is the rate of transmission (say, in bits per second).

Most packet-switched networks use store-and-forward transmission at the input of the link. A switch using store-and-forward transmission will receive (save) the entire packet into the buffer and check it for CRC errors or other problems before sending the first bit of the packet onto the outbound link. Thus, store-and-forward packet switches introduce a store-and-forward delay at the input to each link along the packet's route.
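For example (packet size and link rate assumed for illustration), a 1500-byte packet pushed onto a 100 Mbit/s link incurs a transmission delay of 120 microseconds, regardless of how long the link is:

```python
def transmission_delay(packet_bits, rate_bps):
    """DT = N / R: time to push all of the packet's bits onto the link."""
    return packet_bits / rate_bps

N = 1500 * 8   # a 1500-byte packet, in bits
R = 100e6      # a 100 Mbit/s link
print(f"transmission delay: {transmission_delay(N, R) * 1e6:.0f} us")  # 120 us
```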

Propagation delay

Propagation delay is a technical term that can have a different meaning depending on the context. It can relate to networking, electronics or physics. In general it is the length of time taken for the quantity of interest to reach its destination. In computer networks, propagation delay is the amount of time it takes for the head of the signal to travel from the sender to the receiver over a medium. It can be computed as the ratio between the link length and the propagation speed over the specific medium.

Propagation delay = d / s

where d is the distance and s is the wave propagation speed. In wireless communication, s = c, i.e. the speed of light. In copper wire, the speed s generally ranges from 0.59c to 0.77c. This delay is a major obstacle in the development of high-speed computers and is called the interconnect bottleneck in IC systems.
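A short worked example of the formula (the 1000 km distance is assumed; 0.66c falls within the copper range quoted above):

```python
C = 3e8  # speed of light in vacuum, m/s

def propagation_delay(distance_m, speed_mps):
    """d / s: time for the head of the signal to traverse the medium."""
    return distance_m / speed_mps

d = 1_000_000  # a 1000 km link
print(f"radio (s = c):      {propagation_delay(d, C) * 1e3:.2f} ms")         # ~3.33 ms
print(f"copper (s = 0.66c): {propagation_delay(d, 0.66 * C) * 1e3:.2f} ms")  # ~5.05 ms
```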

In electronics, digital circuits and digital electronics, the propagation delay, or gate delay, is the length of time starting from when the input to a logic gate becomes stable and valid to the time that the output of that logic gate is stable and valid. Often this refers to the time required for the output to go from 10% to 90% of its final output level when the input changes. Reducing gate delays in digital circuits allows them to process data at a faster rate and improves overall performance. The difference in propagation delays of logic elements is the major contributor to glitches in asynchronous circuits as a result of race conditions. The principle of logical effort utilizes propagation delays to compare designs implementing the same logical statement.

Propagation delay increases with operating temperature, marginal supply voltage, and increased output load capacitance. The latter is the largest contributor to the increase of propagation delay: if the output of a logic gate is connected to a long trace or used to drive many other gates (high fan-out), the propagation delay increases substantially.

Wires have an approximate propagation delay of 1 ns for every 6 in of length. Logic gates can have propagation delays ranging from more than 10 ns down to the picosecond range, depending on the technology being used.

In physics, particularly in electromagnetism, the propagation delay is the length of time it takes for a signal to travel to its destination. For example, in the case of an electric signal, it is the time taken for the signal to travel through a wire. See also velocity of propagation.

Network congestion

In data networking and queueing theory, network congestion occurs when a link or node is carrying so much data that its quality of service deteriorates. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either only to small increases in network throughput, or to an actual reduction in network throughput. Network protocols which use aggressive retransmissions to compensate for packet loss tend to keep systems in a state of network congestion even after the initial load has been reduced to a level which would not normally have induced network congestion. Thus, networks using these protocols can exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.

Modern networks use congestion control and congestion avoidance techniques to try to avoid congestion collapse. These include: exponential backoff in protocols such as 802.11's CSMA/CA and the original Ethernet; window reduction in TCP; and fair queueing in devices such as routers. Another method to avoid the negative effects of network congestion is implementing priority schemes, so that some packets are transmitted with higher priority than others. Priority schemes do not solve network congestion by themselves, but they help to alleviate the effects of congestion for some services. An example of this is 802.1p. A third method to avoid network congestion is the explicit allocation of network resources to specific flows.
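A minimal sketch of the exponential backoff idea (the slot time and retry cap below are illustrative, not the exact 802.11 or Ethernet parameters): after the n-th collision, a sender waits a random number of slots drawn from a range that doubles with each attempt.

```python
import random

def backoff_delay(attempt, slot_time=51.2e-6, max_exponent=10):
    """Binary exponential backoff: after the n-th collision, wait a random
    number of slots in [0, 2**n - 1], with the exponent capped."""
    n = min(attempt, max_exponent)
    slots = random.randint(0, 2 ** n - 1)
    return slots * slot_time

for attempt in range(1, 6):
    print(f"attempt {attempt}: wait {backoff_delay(attempt) * 1e6:.1f} us")
```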

Congestive collapse

Congestive collapse (or congestion collapse) is a condition that a packet-switched computer network can reach when little or no useful communication is happening due to congestion. Congestion collapse generally occurs at choke points in the network, where the total incoming bandwidth to a node exceeds the outgoing bandwidth. Connection points between a local area network and a wide area network are the most likely choke points. A DSL modem is the most common small-network example, with between 10 and 1000 Mbit/s of incoming bandwidth and at most 8 Mbit/s of outgoing bandwidth. When a network is in such a condition, it has settled (under overload) into a stable state where traffic demand is high but little useful throughput is available; there are high levels of packet delay and loss (caused by routers discarding packets because their output queues are too full), and general quality of service is extremely poor.

Congestion control concerns controlling traffic entry into a telecommunications network so as to avoid congestive collapse, by attempting to avoid oversubscription of any of the processing or link capabilities of the intermediate nodes and networks, and by taking resource-reducing steps such as reducing the rate of sending packets. It should not be confused with flow control, which prevents the sender from overwhelming the receiver.
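One concrete flavor of such rate reduction is the additive-increase/multiplicative-decrease rule behind TCP's window reduction; the sketch below is a simplified illustration, not a full TCP implementation:

```python
def aimd(events, cwnd=1.0, increase=1.0, decrease=0.5):
    """Additive increase, multiplicative decrease on a congestion window.
    events: sequence of 'ack' (success) or 'loss' (congestion signal)."""
    trace = []
    for event in events:
        if event == "ack":
            cwnd += increase                   # gently probe for bandwidth
        else:
            cwnd = max(1.0, cwnd * decrease)   # back off sharply on loss
        trace.append(cwnd)
    return trace

print(aimd(["ack", "ack", "ack", "loss", "ack", "ack"]))
# [2.0, 3.0, 4.0, 2.0, 3.0, 4.0]
```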

Classification of congestion control algorithms

There are many ways to classify congestion control algorithms:

By the type and amount of feedback received from the network: loss; delay; single-bit or multi-bit explicit signals.
By incremental deployability on the current Internet: only the sender needs modification; sender and receiver need modification; only the router needs modification; sender, receiver and routers need modification.
By the aspect of performance it aims to improve: high bandwidth-delay product networks; lossy links; fairness; advantage to short flows; variable-rate links.
By the fairness criterion it uses: max-min; proportional; "minimum potential delay".

Conclusion
Delay may differ slightly depending on the location of the specific pair of communicating nodes. Although users care only about the total delay of a network, engineers need to perform precise measurements. Thus, engineers usually report both the maximum and average delay, divided into its processing, queuing, transmission and propagation components.
