Computer Network Notes: Elementary Data Link Protocols (A Utopian Simplex Protocol, Simplex Stop-and-Wait Protocol for a Noisy Channel), Sliding Window Protocols (A One-Bit Sliding Window Protocol), and the Medium Access Control Sublayer
ELEMENTARY DATA LINK PROTOCOLS
- To start with, we assume that the physical layer, data link layer, and network layer are independent processes that communicate by passing messages back and forth.
- The physical layer process and some of the data link layer process run on dedicated hardware called a NIC (Network Interface Card); alternatively, all three processes may be offloaded to dedicated hardware called a network accelerator
- The rest of the link layer process and the network layer process run on the main CPU as part of the operating system, with the software for the link layer process often taking the form of a device driver
- Another key assumption is that machine A wants to send a long stream of data to machine B, using a reliable, connection-oriented service. Later, we will consider the case where B also wants to send data to A simultaneously. A is assumed to have an infinite supply of data ready to send and never has to wait for data to be produced. Instead, when A’s data link layer asks for data, the network layer is always able to comply immediately.
- As far as the data link layer is concerned, the packet passed across the inter- face to it from the network layer is pure data, whose every bit is to be delivered to the destination’s network layer. The fact that the destination’s network layer may interpret part of the packet as a header is of no concern to the data link layer.
- The procedure wait_for_event only returns when something has happened (e.g., a frame has arrived). Upon return, the variable event tells what happened. The set of possible events differs for the various protocols to be described and will be defined separately for each protocol.
- When a frame arrives at the receiver, the checksum is recomputed. If the checksum in the frame is incorrect (i.e., there was a transmission error), the data link layer is so informed (event cksum_err). If the inbound frame arrived undamaged, the data link layer is also informed (event frame_arrival) so that it can acquire the frame for inspection using from_physical_layer.
- A frame is composed of four fields: kind, seq, ack, and info, the first three of which contain control information and the last of which may contain actual data to be transferred. These control fields are collectively called the frame header.
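As a concrete illustration, the frame described above can be written as a C structure in the style of the protocol.h header that the listings below include. The exact type names (seq_nr, packet, frame_kind) and the packet size are assumptions made for this sketch, not definitions given in these notes.

#define MAX_PKT 1024                      /* assumed maximum packet size in bytes */

typedef unsigned int seq_nr;              /* sequence or acknowledgement number */
typedef struct { unsigned char data[MAX_PKT]; } packet;        /* data from the network layer */
typedef enum { data_frame, ack_frame, nak_frame } frame_kind;  /* what kind of frame this is */

typedef struct {                          /* frames are transported in this layer */
    frame_kind kind;                      /* header: frame type */
    seq_nr seq;                           /* header: sequence number */
    seq_nr ack;                           /* header: acknowledgement number */
    packet info;                          /* the actual data being carried */
} frame;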
A Utopian Simplex Protocol :
- Data are transmitted in one direction only. Both the transmitting and receiving network layers are always ready. Processing time can be ignored. Infinite buffer space is available. And best of all, the communication channel between the data link layers never damages or loses frames
- The protocol consists of two distinct procedures, a sender and a receiver. The sender runs in the data link layer of the source machine, and the receiver runs in the data link layer of the destination machine. No sequence numbers or acknowledgements are used here, so MAX_SEQ is not needed.
- The sender is in an infinite while loop just pumping data out onto the line as fast as it can. The body of the loop consists of three actions: go fetch a packet from the (always obliging) network layer, construct an outbound frame using the variable s, and send the frame on its way.
- The receiver is equally simple. Initially, it waits for something to happen, the only possibility being the arrival of an undamaged frame. Eventually, the frame arrives and the procedure wait_for_event returns, with event set to frame_arrival (which is ignored anyway)
/* Protocol 1 (Utopia) provides for data transmission in one direction only, from sender to
receiver. The communication channel is assumed to be error free and the receiver is assumed to
be able to process all the input infinitely quickly. Consequently, the sender just sits in a loop
pumping data out onto the line as fast as it can. */
typedef enum {frame_arrival} event_type;
#include "protocol.h"

void sender1(void)
{
  frame s;                           /* buffer for an outbound frame */
  packet buffer;                     /* buffer for an outbound packet */

  while (true) {
       from_network_layer(&buffer);  /* go get something to send */
       s.info = buffer;              /* copy it into s for transmission */
       to_physical_layer(&s);        /* send it on its way */
  }
}

void receiver1(void)
{
  frame r;
  event_type event;                  /* filled in by wait, but not used here */

  while (true) {
       wait_for_event(&event);       /* only possibility is frame_arrival */
       from_physical_layer(&r);      /* go get the inbound frame */
       to_network_layer(&r.info);    /* pass the data to the network layer */
  }
}
The utopia protocol is unrealistic because it does not handle either flow control or error correction. Its processing is close to that of an unacknowledged connectionless service that relies on higher layers to solve these problems, though even an unacknowledged connectionless service would do some error detection.
A Simplex Stop-and-Wait Protocol for an Error-Free Channel
- Now we will tackle the problem of preventing the sender from flooding the receiver with frames faster than the latter is able to process them. This situation can easily happen in practice so being able to prevent it is of great importance.
- One solution is to build the receiver to be powerful enough to process a continuous stream of back-to-back frames. It must have sufficient buffering and processing abilities to run at the line rate and must be able to pass the frames that are received to the network layer quickly enough.
- A more general solution to this problem is to have the receiver provide feedback to the sender. After having passed a packet to its network layer, the receiver sends a little dummy frame back to the sender which, in effect, gives the sender permission to transmit the next frame.
- Protocols in which the sender sends one frame and then waits for an acknowledgement before proceeding are called stop-and-wait.
- As in protocol 1, the sender starts out by fetching a packet from the network layer, using it to construct a frame, and sending it on its way. The only difference between receiver1 and receiver2 is that after delivering a packet to the network layer, receiver2 sends an acknowledgement frame back to the sender before entering the wait loop again.
Simplex Stop-and-Wait Protocol for a Noisy Channel :
- The normal situation now is a communication channel that makes errors: frames may be either damaged or lost completely. However, we assume that if a frame is damaged in transit, the receiver hardware will detect this when it computes the checksum.
- If the frame is damaged in such a way that the checksum is nevertheless correct (an unlikely occurrence), this protocol (and all other protocols) can fail (i.e., deliver an incorrect packet to the network layer).
- At first glance it might seem that a variation of protocol 2 would work: adding a timer. The sender could send a frame, but the receiver would only send an acknowledgement frame if the data were correctly received.
- If a damaged frame arrived at the receiver, it would be discarded. After a while the sender would time out and send the frame again. This process would be repeated until the frame finally arrived intact.
/* Protocol 2 (Stop-and-wait) also provides for a one-directional flow of data from sender to receiver.
The communication channel is once again assumed to be error
free, as in protocol 1. However, this time the receiver has only a finite buffer capacity and a finite
processing speed, so the protocol must explicitly prevent the sender from flooding the receiver
with data faster than it can be handled. */
typedef enum {frame_arrival} event_type;
#include "protocol.h"

void sender2(void)
{
  frame s;                           /* buffer for an outbound frame */
  packet buffer;                     /* buffer for an outbound packet */
  event_type event;                  /* frame_arrival is the only possibility */

  while (true) {
       from_network_layer(&buffer);  /* go get something to send */
       s.info = buffer;              /* copy it into s for transmission */
       to_physical_layer(&s);        /* send the frame on its way */
       wait_for_event(&event);       /* do not proceed until given the go ahead */
  }
}

void receiver2(void)
{
  frame r, s;                        /* buffers for the inbound and ack frames */
  event_type event;                  /* frame_arrival is the only possibility */

  while (true) {
       wait_for_event(&event);       /* only possibility is frame_arrival */
       from_physical_layer(&r);      /* go get the inbound frame */
       to_network_layer(&r.info);    /* pass the data to the network layer */
       to_physical_layer(&s);        /* send a dummy frame to awaken the sender */
  }
}
- The network layer on machine A gives a series of packets to its data link layer, which must ensure that an identical series of packets is delivered to the network layer on machine B by its data link layer.
- In particular, the network layer on B has no way of knowing that a packet has been lost or duplicated, so the data link layer must guarantee that no combination of transmission errors, however unlikely, can cause a duplicate packet to be delivered to a network layer.
Consider the following scenario:
1. The network layer on A gives packet 1 to its data link layer. The packet is correctly received at B and passed to the network layer on B. B sends an acknowledgement frame back to A.
2. The acknowledgement frame gets lost completely. It just never arrives at all. Life would be a great deal simpler if the channel mangled and lost only data frames and not control frames, but sad to say, the channel is not very discriminating.
3. The data link layer on A eventually times out. Not having received an acknowledgement, it (incorrectly) assumes that its data frame was lost or damaged and sends the frame containing packet 1 again.
4. The duplicate frame also arrives intact at the data link layer on B and is unwittingly passed to the network layer there. If A is sending a file to B, part of the file will be duplicated (i.e., the copy of the file made by B will be incorrect and the error will not have been detected). In other words, the protocol will fail.
- Protocols in which the sender waits for a positive acknowledgement before advancing to the next data item are often called ARQ (Automatic Repeat reQuest) or PAR (Positive Acknowledgement with Retransmission). Like protocol 2, this one also transmits data only in one direction.
- Protocol 3 differs from its predecessors in that both sender and receiver have a variable whose value is remembered while the data link layer is in the wait state. The sender remembers the sequence number of the next frame to send in next_frame_to_send; the receiver remembers the sequence number of the next frame expected in frame_expected.
- After transmitting a frame and starting the timer, the sender waits for something exciting to happen. Only three possibilities exist: an acknowledgement frame arrives undamaged, a damaged acknowledgement frame staggers in, or the timer expires. If a valid acknowledgement comes in, the sender fetches the next packet from its network layer and puts it in the buffer, overwriting the previous packet.
- When a valid frame arrives at the receiver, its sequence number is checked to see if it is a duplicate. If not, it is accepted, passed to the network layer, and an acknowledgement is generated.
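A sketch of protocol 3 in the same style as the listings above is shown below. It assumes the helper procedures declared in protocol.h (start_timer, stop_timer, inc, and the cksum_err and timeout events); treat it as an illustration of the description, not a verbatim listing.

typedef enum {frame_arrival, cksum_err, timeout} event_type;
#include "protocol.h"

void sender3(void)
{
  seq_nr next_frame_to_send = 0;          /* sequence number of next outgoing frame */
  frame s;
  packet buffer;
  event_type event;

  from_network_layer(&buffer);            /* fetch the first packet */
  while (true) {
       s.info = buffer;                   /* construct a frame for transmission */
       s.seq = next_frame_to_send;        /* insert sequence number in frame */
       to_physical_layer(&s);             /* send it on its way */
       start_timer(s.seq);                /* if the answer takes too long, time out */
       wait_for_event(&event);
       if (event == frame_arrival) {
            from_physical_layer(&s);      /* get the acknowledgement */
            if (s.ack == next_frame_to_send) {
                 stop_timer(s.ack);            /* turn the timer off */
                 from_network_layer(&buffer);  /* fetch the next packet, overwriting the old one */
                 inc(next_frame_to_send);      /* advance the sequence number */
            }
       }
  }
}

void receiver3(void)
{
  seq_nr frame_expected = 0;
  frame r, s;
  event_type event;

  while (true) {
       wait_for_event(&event);
       if (event == frame_arrival) {           /* a valid frame has arrived */
            from_physical_layer(&r);           /* go get the newly arrived frame */
            if (r.seq == frame_expected) {     /* not a duplicate: accept it */
                 to_network_layer(&r.info);    /* pass the data to the network layer */
                 inc(frame_expected);          /* next time expect the other sequence number */
            }
            s.ack = 1 - frame_expected;        /* tell which frame is being acknowledged */
            to_physical_layer(&s);             /* send the acknowledgement */
       }
  }
}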
SLIDING WINDOW PROTOCOLS :
- In the previous protocols, data frames were transmitted in one direction only. In most practical situations, there is a need to transmit data in both directions. One way to achieve full-duplex transmission is to run two instances of one of the previous protocols, each using a separate link for simplex data traffic (in different directions). Each link is then comprised of a ‘‘forward’’ channel (for data) and a ‘‘reverse’’ channel (for acknowledgements).
- A better idea is to use the same link for data in both directions. After all, in protocols 2 and 3 it was already being used to transmit frames both ways, and the reverse channel normally has the same capacity as the forward channel.
- Although interleaving data and control frames on the same link is a big improvement over having two separate physical links, yet another improvement is possible. When a data frame arrives, instead of immediately sending a separate control frame, the receiver restrains itself and waits until the network layer passes it the next packet.
- The acknowledgement is attached to the outgoing data frame (using the ack field in the frame header). In effect, the acknowledgement gets a free ride on the next outgoing data frame. The technique of temporarily delaying outgoing acknowledgements so that they can be hooked onto the next outgoing data frame is known as piggybacking.
- The ack field in the frame header costs only a few bits, whereas a separate frame would need a header, the acknowledgement, and a checksum.
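A minimal sketch of what piggybacking looks like in code, reusing the frame fields and the MAX_SEQ constant from the earlier listings; the variable names are assumptions in that same style.

frame s;
s.info = buffer;                                       /* next packet from the network layer */
s.seq  = next_frame_to_send;                           /* sequence number of this data frame */
s.ack  = (frame_expected + MAX_SEQ) % (MAX_SEQ + 1);   /* piggybacked ack: last frame received in order */
to_physical_layer(&s);                                 /* data and acknowledgement travel in one frame */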
- The next three protocols are bidirectional protocols that belong to a class called sliding window protocols. The three differ among themselves in terms of efficiency, complexity, and buffer requirements, as discussed later. In these, as in all sliding window protocols, each outbound frame contains a sequence number, ranging from 0 up to some maximum.
- The essence of all sliding window protocols is that at any instant of time, the sender maintains a set of sequence numbers corresponding to frames it is permitted to send. These frames are said to fall within the sending window. Similarly, the receiver also maintains a receiving window corresponding to the set of frames it is permitted to accept. The sender’s window and the receiver’s window need not have the same lower and upper limits or even have the same size.
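The window membership test that these protocols rely on can be written as a small helper. The sketch below assumes the seq_nr type used earlier and the convention that sequence numbers wrap around modulo MAX_SEQ + 1; the name between follows the textbook-style listings.

static bool between(seq_nr a, seq_nr b, seq_nr c)
{
  /* Return true if a <= b < c circularly (i.e., b falls inside the window); false otherwise. */
  return ((a <= b) && (b < c)) || ((c < a) && (a <= b)) || ((b < c) && (c < a));
}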
A One-Bit Sliding Window Protocol :
- Before tackling the general case, let us examine a sliding window protocol with a window size of 1. Such a protocol uses stop-and-wait since the sender transmits a frame and waits for its acknowledgement before sending the next one.
- Like the others, it starts out by defining some variables. The variable next_frame_to_send tells which frame the sender is trying to send. Similarly, frame_expected tells which frame the receiver is expecting. In both cases, 0 and 1 are the only possibilities.
- Under normal circumstances, one of the two data link layers goes first and transmits the first frame. In other words, only one of the data link layer programs should contain the to_physical_layer and start_timer procedure calls outside the main loop. The starting machine fetches the first packet from its network layer, builds a frame from it, and sends it.
- The acknowledgement field contains the number of the last frame received without error. If this number agrees with the sequence number of the frame the sender is trying to send, the sender knows it is done with the frame stored in buffer and can fetch the next packet from its network layer.
- When the first valid frame arrives at computer B, it will be accepted and frame_expected will be set to 1. All the subsequent frames received will be rejected because B is now expecting frames with sequence number 1, not 0. Furthermore, since all the duplicates will have ack = 1 and B is still waiting for an acknowledgement of 0, B will not go and fetch a new packet from its network layer.
- After every rejected duplicate comes in, B will send A a frame containing seq = 0 and ack = 0. Eventually, one of these will arrive correctly at A, causing A to begin sending the next packet. No combination of lost frames or premature timeouts can cause the protocol to deliver duplicate packets to either network layer, to skip a packet, or to deadlock.
- In part (a), the normal operation of the protocol is shown. In (b) the peculiarity is illustrated. If B waits for A’s first frame before sending one of its own, the sequence is as shown in (a), and every frame is accepted.
- If A and B simultaneously initiate communication, their first frames cross, and the data link layers then get into situation (b). In (a) each frame arrival brings a new packet for the network layer; there are no duplicates. In (b) half of the frames contain duplicates, even though there are no transmission errors. Similar situations can occur as a result of premature timeouts, even when one side clearly starts first. In fact, if multiple premature timeouts occur, frames may be sent three or more times, wasting valuable bandwidth.
A Protocol Using Go-Back-N
- So far we have made the tacit assumption that the transmission time required for a frame to arrive at the receiver plus the transmission time for the acknowledgement to come back is negligible. Sometimes this assumption is clearly false. In these situations the long round-trip time can have important implications for the efficiency of the bandwidth utilization.
- The problem described here can be viewed as a consequence of the rule requiring a sender to wait for an acknowledgement before sending another frame. If we relax that restriction, much better efficiency can be achieved. Basically, the solution lies in allowing the sender to transmit up to w frames before blocking, instead of just 1.
- To find an appropriate value for w we need to know how many frames can fit inside the channel as they propagate from sender to receiver. This capacity is determined by the bandwidth in bits/sec multiplied by the one-way transit time, or the bandwidth-delay product of the link. We can divide this quantity by the number of bits in a frame to express it as a number of frames. Call this quantity BD.
- For smaller window sizes, the utilization of the link will be less than 100% since the sender will be blocked sometimes. We can write the utilization as the fraction of time that the sender is not blocked:
link utilization ≤ w / (1 + 2BD)
- This value is an upper bound because it does not allow for any frame processing time and treats the acknowledgement frame as having zero length, since it is usually short. The equation shows the need for having a large window w whenever the bandwidth-delay product is large.
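A small, self-contained example of the bound above; the 50 kbps bandwidth, 250 ms one-way delay, and 1000-bit frames are illustrative assumptions, not figures from these notes.

#include <stdio.h>

int main(void)
{
    double bandwidth = 50e3;           /* link bandwidth in bits/sec (assumed) */
    double one_way_delay = 0.250;      /* one-way propagation delay in seconds (assumed) */
    double frame_bits = 1000.0;        /* frame length in bits (assumed) */
    double bd = bandwidth * one_way_delay / frame_bits;   /* BD = 12.5 frames in flight */

    int windows[] = {1, 26};           /* stop-and-wait vs. a window big enough to fill the pipe */
    for (int i = 0; i < 2; i++) {
        double u = windows[i] / (1.0 + 2.0 * bd);          /* utilization <= w / (1 + 2BD) */
        if (u > 1.0) u = 1.0;                              /* cannot exceed 100% */
        printf("w = %2d  ->  link utilization <= %.0f%%\n", windows[i], 100.0 * u);
    }
    return 0;                          /* prints about 4% for w = 1 and 100% for w = 26 */
}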
- This technique of keeping multiple frames in flight is an example of pipelining. Pipelining frames over an unreliable communication channel raises some serious issues. First, what happens if a frame in the middle of a long stream is damaged or lost? Large numbers of succeeding frames will arrive at the receiver before the sender even finds out that anything is wrong.
- One option, called go-back-n, is for the receiver simply to discard all subsequent frames, sending no acknowledgements for the discarded frames. This strategy corresponds to a receive window of size 1. In other words, the data link layer refuses to accept any frame except the next one it must give to the network layer. If the sender’s window fills up before the timer runs out, the pipeline will begin to empty.
- We see go-back-n for the case in which the receiver’s window is 1. Frames 0 and 1 are correctly received and acknowledged. Frame 2, however, is damaged or lost. The sender, unaware of this problem, continues to send frames until the timer for frame 2 expires. Then it backs up to frame 2 and starts over with it, sending 2, 3, 4, etc. all over again.
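The retransmission step just described can be sketched as the timeout action of a go-back-n sender. The names ack_expected, nbuffered, buffer[], and send_data are assumptions in the style of the textbook's pipelined protocol, where send_data builds a frame with a piggybacked ack and starts its timer.

/* On a timeout, back up to the oldest unacknowledged frame and resend every
   frame that is still outstanding, in order. */
next_frame_to_send = ack_expected;                           /* start retransmitting here */
for (seq_nr i = 1; i <= nbuffered; i++) {
    send_data(next_frame_to_send, frame_expected, buffer);   /* resend one outstanding frame */
    inc(next_frame_to_send);                                 /* advance the sequence number circularly */
}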
- The other general strategy for handling errors when frames are pipelined is called selective repeat. When it is used, a bad frame that is received is discarded, but any good frames received after it are accepted and buffered.
- Two basic approaches are available for dealing with errors in the presence of pipelining: go-back-n and selective repeat, both described here.
- When the sender times out, only the oldest unacknowledged frame is retransmitted. If that frame arrives correctly, the receiver can deliver to the network layer, in sequence, all the frames it has buffered. Selective repeat corresponds to a receiver window larger than 1.
- Selective repeat is often combined with having the receiver send a negative acknowledgement (NAK) when it detects an error, for example, when it receives a frame with a checksum error or one out of sequence. NAKs stimulate retransmission before the corresponding timer expires and thus improve performance.
- When the retransmission of frame 2 (triggered by the NAK) arrives, the data link layer now has 2, 3, 4, and 5 and can pass all of them to the network layer in the correct order. It can also acknowledge all frames up to and including 5. If the NAK should get lost, eventually the sender will time out for frame 2 and send it (and only it) of its own accord, but that may be quite a while later.
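The buffering behaviour of selective repeat can be sketched as the receiver's frame-arrival path. The arrived[] flags, in_buf[] slots, NR_BUFS buffer count, and the too_far upper window edge are assumptions in the style of the textbook's selective repeat protocol; between is the window test shown earlier.

from_physical_layer(&r);                                 /* fetch the incoming frame */
if (between(frame_expected, r.seq, too_far) && arrived[r.seq % NR_BUFS] == false) {
    arrived[r.seq % NR_BUFS] = true;                     /* mark this buffer slot as full */
    in_buf[r.seq % NR_BUFS] = r.info;                    /* accept the frame even if out of order */
    while (arrived[frame_expected % NR_BUFS]) {          /* deliver any run of in-order frames */
        to_network_layer(&in_buf[frame_expected % NR_BUFS]);
        arrived[frame_expected % NR_BUFS] = false;       /* free the slot */
        inc(frame_expected);                             /* advance lower edge of receive window */
        inc(too_far);                                    /* advance upper edge of receive window */
    }
}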
- When the network layer has a packet it wants to send, it can cause a network_layer_ready event to happen. However, to enforce the flow control limit on the sender window, i.e., the number of unacknowledged frames that may be outstanding at any time, the data link layer must be able to keep the network layer from bothering it with more work. The library procedures enable_network_layer and disable_network_layer do this job.
- The maximum number of frames that may be outstanding at any instant is MAX_SEQ, not MAX_SEQ + 1, even though there are MAX_SEQ + 1 distinct sequence numbers (0 through MAX_SEQ).
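How the window limit translates into flow control on the network layer can be sketched in a few lines, assuming a counter nbuffered of currently outstanding frames alongside the enable_network_layer and disable_network_layer procedures mentioned above.

if (nbuffered < MAX_SEQ)
    enable_network_layer();      /* window has room: allow network_layer_ready events */
else
    disable_network_layer();     /* window full: stop accepting new packets for now */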
THE MEDIUM ACCESS CONTROL SUBLAYER :
- In any broadcast network, the key issue is how to determine who gets to use the channel when there is competition for it. To make this point, consider a conference call in which six people, on six different telephones, are all connected so that each one can hear and talk to all the others.
- When only a single channel is available, it is much harder to determine who should go next. Many protocols for solving the problem are known. They form the contents of this chapter. In the literature, broadcast channels are sometimes referred to as multiaccess channels or random access channels.
- The protocols used to determine who goes next on a multiaccess channel belong to a sublayer of the data link layer called the MAC (Medium Access Control) sublayer. The MAC sublayer is especially important in LANs, particularly wireless ones, because wireless is naturally a broadcast channel.
- WANs, in contrast, use point-to-point links, except for satellite networks. Because multiaccess channels and LANs are so closely related, in this chapter we will discuss LANs in general, including a few issues that are not strictly part of the MAC sublayer, but the main subject here will be control of the channel.
- Technically, the MAC sublayer is the bottom part of the data link layer, so logically we should have studied it before examining all the point-to-point protocols in Chap. 3. Nevertheless, for most people, it is easier to understand protocols involving multiple parties after two-party protocols are well understood.
THE CHANNEL ALLOCATION PROBLEM :
- The central theme of this chapter is how to allocate a single broadcast channel among competing users. The channel might be a portion of the wireless spectrum in a geographic region, or a single wire or optical fiber to which multiple nodes are connected.
- In both cases, the channel connects each user to all other users and any user who makes full use of the channel interferes with other users who also wish to use the channel.
- We will first look at the shortcomings of static allocation schemes for bursty traffic. Then, we will lay out the key assumptions used to model the dynamic schemes that we examine in the following sections.
Static Channel Allocation
- The traditional way of allocating a single channel, such as a telephone trunk, among multiple competing users is to chop up its capacity by using one of the multiplexing schemes we described in Sec. 2.5, such as FDM (Frequency Division Multiplexing).
- When there is only a small and constant number of users, each of which has a steady stream or a heavy load of traffic, this division is a simple and efficient allocation mechanism. A wireless example is FM radio stations. Each station gets a portion of the FM band and uses it most of the time to broadcast its signal.
- When the number of senders is large and varying or the traffic is bursty, FDM presents some problems. If the spectrum is cut up into N regions and fewer than N users are currently interested in communicating, a large piece of valuable spectrum will be wasted.
- The poor performance of static FDM can easily be seen with a simple queueing theory calculation. Let us start by finding the mean time delay, T, to send a frame onto a channel of capacity C bps. We assume that the frames arrive randomly with an average arrival rate of λ frames/sec, and that the frames vary in length with an average length of 1/μ bits. With these parameters, the service rate of the channel is μC frames/sec. A standard queueing theory result is T = 1/(μC − λ).
- The mean delay for the divided channel is N times worse than if all the frames were somehow magically arranged orderly in a big central queue. Since none of the traditional static channel allocation methods work well at all with bursty traffic, we will now explore dynamic methods.
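The "N times worse" claim follows directly from the delay formula above: splitting the channel into N FDM subchannels gives each one capacity C/N and an average arrival rate of λ/N. A short restatement of the standard argument, using the same symbols (λ, μ, C):

T = \frac{1}{\mu C - \lambda},
\qquad
T_{\mathrm{FDM}} = \frac{1}{\mu (C/N) - (\lambda/N)}
                 = \frac{N}{\mu C - \lambda} = N\,T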