Data Link Layer Design Issues, Error Control, Flow Control, Framing, and Error Detection & Correction Codes in Computer Networks
DATA LINK LAYER DESIGN ISSUES
The data link layer uses the services of the physical layer to send and receive bits over communication channels. It has a number of functions, including:
- Providing a well-defined service interface to the network layer.
- Dealing with transmission errors.
- Regulating the flow of data so that slow receivers are not swamped by fast senders.
- To accomplish these goals, the data link layer takes the packets it gets from the network layer and encapsulates them into frames for transmission.
- Each frame contains a frame header, a payload field for holding the packet, and a frame trailer. Frame management forms the heart of what the data link layer does.
- Many of these issues, such as error control and flow control, also appear in higher layers, but they often show up in their simplest and purest forms in the data link layer, making this a good place to examine them in detail.
Services Provided to the Network Layer
- The function of the data link layer is to provide services to the network layer.
- The principal service is transferring data from the network layer on the source machine to the network layer on the destination machine.
- On the source machine is an entity, call it a process, in the network layer that hands some bits to the data link layer for transmission to the destination.
- The job of the data link layer is to transmit the bits to the destination machine so they can be handed over to the network layer there. Three commonly provided services are:
- Unacknowledged connectionless service.
- Acknowledged connectionless service.
- Acknowledged connection-oriented service.
Framing
- Breaking up the bit stream into frames is more difficult than it at first appears. A good design must make it easy for a receiver to find the start of new frames while using little of the channel bandwidth. We will look at four methods:
  (a) Byte count.
  (b) Flag bytes with byte stuffing.
  (c) Flag bits with bit stuffing.
  (d) Physical layer coding violations.
- The usual approach is for the data link layer to break up the bit stream into discrete frames, compute a short token called a checksum for each frame, and include the checksum in the frame when it is transmitted. (Checksum algorithms will be discussed later in this chapter.)
- When a frame arrives at the destination, the checksum is recomputed. If the newly computed checksum is different from the one contained in the frame, the data link layer knows that an error has occurred and takes steps to deal with it (e.g., discarding the bad frame and possibly also sending back an error report).
- The first framing method uses a field in the header to specify the number of bytes in the frame. When the data link layer at the destination sees the byte count, it knows how many bytes follow and hence where the end of the frame is. This technique is shown in Fig.(a)
- The trouble with this algorithm is that the count can be garbled by a transmission error. For example, if the byte count of 5 in the second frame of Fig.(b) becomes a 7 due to a single bit flip, the destination will get out of synchronization. It will then be unable to locate the correct start of the next frame.
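As a rough sketch of how a receiver could use the byte count, consider the code below. The frame format is an assumption for illustration only: a one-byte count that includes the count byte itself, followed by the payload.

```python
def parse_frames(stream: bytes) -> list[bytes]:
    """Split a byte stream into frames, assuming each frame begins with a
    one-byte count that includes the count byte itself (illustrative format)."""
    frames = []
    i = 0
    while i < len(stream):
        count = stream[i]                        # byte count for this frame
        frames.append(stream[i + 1:i + count])   # the payload follows the count
        i += count                               # jump to the start of the next frame
    return frames

# Three frames with byte counts 5, 5, and 8.
stream = bytes([5, 1, 2, 3, 4,  5, 6, 7, 8, 9,  8, 0, 1, 2, 3, 4, 5, 6])
print(parse_frames(stream))
# If a bit flip turns the second count (5) into 7, every later boundary is
# computed from the wrong place and the receiver loses synchronization.
```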
- The second framing method gets around the problem of resynchronization after an error by having each frame start and end with special bytes. Often the same byte, called a flag byte, is used as both the starting and ending delimiter.
- The data link layer on the receiving end removes the escape bytes before giving the data to the network layer. This technique is called byte stuffing.
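A minimal sketch of this simplified byte-stuffing scheme is shown below. The FLAG and ESC byte values match the ones PPP uses, but the escaping rule here is the simplified insert-an-escape-byte version from the text, not PPP's exact encoding.

```python
FLAG = 0x7E   # frame delimiter (the value PPP uses; illustrative here)
ESC  = 0x7D   # escape byte

def byte_stuff(payload: bytes) -> bytes:
    """Delimit the payload with FLAG bytes, inserting an ESC before any
    accidental FLAG or ESC inside the data."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)            # stuff an escape byte
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_destuff(frame: bytes) -> bytes:
    """Drop the delimiters and remove the escape bytes to recover the payload."""
    out, escaped = bytearray(), False
    for b in frame[1:-1]:              # skip the FLAG at each end
        if not escaped and b == ESC:
            escaped = True             # the next byte is ordinary data
            continue
        out.append(b)
        escaped = False
    return bytes(out)

data = bytes([0x41, 0x7E, 0x42])       # payload that happens to contain a FLAG
assert byte_destuff(byte_stuff(data)) == data
```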
- The byte-stuffing scheme depicted is a slight simplification of the one used in PPP (Point-to-Point Protocol), which is used to carry packets over communications links. Framing can also be done at the bit level, so frames can contain an arbitrary number of bits made up of units of any size. Bit-level framing was developed for the once very popular HDLC (High-level Data Link Control) protocol.
- This bit stuffing is analogous to byte stuffing, in which an escape byte is stuffed into the outgoing character stream before a flag byte in the data.
- When the receiver sees five consecutive incoming 1 bits, followed by a 0 bit, it automatically destuffs (i.e., deletes) the 0 bit. Just as byte stuffing is completely transparent to the network layer in both computers, so is bit stuffing. If the user data contain the flag pattern, 01111110, this flag is transmitted as 011111010 but stored in the receiver’s memory as 01111110. Figure gives an example of bit stuffing.
Figure: Bit stuffing. (a) The original data. (b) The data as they appear on the line. (c) The data as they are stored in the receiver’s memory after destuffing.
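The bit-stuffing rule above can be sketched directly; the helpers below operate on strings of '0'/'1' characters purely for readability.

```python
def bit_stuff(bits: str) -> str:
    """After every run of five consecutive 1s, insert a 0 so the data can
    never contain the flag pattern 01111110."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")            # the stuffed bit
            run = 0
    return "".join(out)

def bit_destuff(bits: str) -> str:
    """Delete the 0 that follows any run of five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            i += 1                     # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

original = "011011111111111111110010"  # data with a long run of 1s
assert bit_destuff(bit_stuff(original)) == original
```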
Error Control
- The usual way to ensure reliable delivery is to provide the sender with some feedback about what is happening at the other end of the line. Typically, the protocol calls for the receiver to send back special control frames bearing positive or negative acknowledgements about the incoming frames.
- If the sender receives a positive acknowledgement about a frame, it knows the frame has arrived safely. On the other hand, a negative acknowledgement means that something has gone wrong and the frame must be transmitted again.
- An additional complication comes from the possibility that hardware troubles may cause a frame to vanish completely (e.g., in a noise burst). In this case, the receiver will not react at all, since it has no reason to react. Similarly, if the acknowledgement frame is lost, the sender will not know how to proceed.
- It should be clear that a protocol in which the sender transmits a frame and then waits for an acknowledgement, positive or negative, would hang forever if a frame were ever lost due to, for example, malfunctioning hardware or a faulty communication channel.
- This possibility is dealt with by introducing timers into the data link layer. When the sender transmits a frame, it generally also starts a timer. If either the frame or the acknowledgement is lost, the timer will go off, alerting the sender to a potential problem.
- The obvious solution is to just transmit the frame again. However, when frames may be transmitted multiple times there is a danger that the receiver will accept the same frame two or more times and pass it to the network layer more than once. To prevent this from happening, it is generally necessary to assign sequence numbers to outgoing frames, so that the receiver can distinguish retransmissions from originals.
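Putting the acknowledgement, timer, and sequence-number ideas together, a stop-and-wait sender might look roughly like the sketch below. The helpers `send_frame` and `recv_ack` stand in for the underlying (unreliable) channel, and the timeout value is an arbitrary placeholder.

```python
import time

TIMEOUT = 0.5   # seconds; an arbitrary placeholder value

def send_reliably(payloads, send_frame, recv_ack):
    """Stop-and-wait sender sketch: one outstanding frame at a time, a timer,
    and a 1-bit sequence number so the receiver can discard duplicates.
    recv_ack(deadline) is assumed to return the acknowledged sequence number,
    or None if nothing arrives before the deadline."""
    seq = 0
    for payload in payloads:
        while True:
            send_frame(seq, payload)                 # (re)transmit the frame
            ack = recv_ack(time.time() + TIMEOUT)    # wait for an ACK or a timeout
            if ack == seq:                           # positive acknowledgement
                break                                # move on to the next frame
            # timeout, lost ACK, or wrong ACK: retransmit the same frame
        seq ^= 1                                     # alternate the sequence bit
```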
Flow Control
- The design issue that occurs in the data link layer (and higher layers as well) is what to do with a sender that systematically wants to transmit frames faster than the receiver can accept them. This situation can occur when the sender is running on a fast, powerful computer and the receiver is running on a slow, low-end machine.
- A common situation is when a smart phone requests a Web page from a far more powerful server, which then turns on the fire hose and blasts the data at the poor helpless phone until it is completely swamped. Even if the transmission is error free, the receiver may be unable to handle the frames as fast as they arrive and will lose some.
- Clearly, something has to be done to prevent this situation. Two approaches are commonly used. In the first one, feedback-based flow control, the receiver sends back information to the sender giving it permission to send more data, or at least telling the sender how the receiver is doing.
- In the second one, rate-based flow control, the protocol has a built-in mechanism that limits the rate at which senders may transmit data, without using feedback from the receiver.
- For example, hardware implementations of the link layer, such as NICs (Network Interface Cards), are sometimes said to run at ‘‘wire speed,’’ meaning that they can handle frames as fast as they can arrive on the link. Any overruns are then not a link problem, so they are handled by higher layers.
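A toy illustration of the first (feedback-based) approach above: the receiver hands out explicit permissions (call them credits) and the sender blocks whenever it holds none. The names and the in-process queue are purely illustrative, not part of any real protocol.

```python
from queue import Queue

credits = Queue()   # stands in for the permission frames a receiver would send back

def receiver_grants(n: int) -> None:
    """The receiver issues n more transmission permissions."""
    for _ in range(n):
        credits.put(1)

def sender(frames, transmit) -> None:
    """The sender waits for a credit before each frame, so it can never
    outrun what the receiver has agreed to accept."""
    for frame in frames:
        credits.get()          # blocks until permission is available
        transmit(frame)
```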
Error Detection and Correction
- Optical fibers in telecommunications networks have tiny error rates, so transmission errors are a rare occurrence. But other channels, especially wireless links and aging local loops, have error rates that are orders of magnitude larger. For these links, transmission errors are the norm. They cannot be avoided at a reasonable expense or cost in terms of performance. The conclusion is that transmission errors are here to stay.
- One strategy is to include enough redundant information to enable the receiver to deduce what the transmitted data must have been. The other is to include only enough redundancy to allow the receiver to deduce that an error has occurred (but not which error) and have it request a retransmission. The former strategy uses error-correcting codes and the latter uses error-detecting codes. The use of error-correcting codes is often referred to as FEC (Forward Error Correction).
- Each of these techniques occupies a different ecological niche. On channels that are highly reliable, such as fiber, it is cheaper to use an error-detecting code and just retransmit the occasional block found to be faulty. FEC is used on noisy channels because retransmissions are just as likely to be in error as the first transmission.
- A key consideration for these codes is the type of errors that are likely to occur. Neither error-correcting codes nor error-detecting codes can handle all possible errors since the redundant bits that offer protection are as likely to be received in error as the data bits (which can compromise their protection).
- Other types of errors also exist. Sometimes, the location of an error will be known, perhaps because the physical layer received an analog signal that was far from the expected value for a 0 or 1 and declared the bit to be lost. This situation is called an erasure channel.
- It is easier to correct errors in erasure channels than in channels that flip bits because even if the value of the bit has been lost, at least we know which bit is in error. Error-correcting codes are also seen in the physical layer, particularly for noisy channels, and in higher layers, particularly for real-time media and content distribution. Error-detecting codes are commonly used in the link, network, and transport layers.
Error-Correcting Codes
- Hamming codes.
- Binary convolutional codes.
- Reed-Solomon codes.
- Low-Density Parity Check codes.
- All of these codes add redundancy to the information that is sent. A frame consists of m data (i.e., message) bits and r redundant (i.e., check) bits. In a block code, the r check bits are computed solely as a function of the m data bits with which they are associated, as though the m bits were looked up in a large table to find their corresponding r check bits.
- In a systematic code, the m data bits are sent directly, along with the check bits, rather than being encoded themselves before they are sent.
- In a linear code, the r check bits are computed as a linear function of the m data bits. Exclusive OR (XOR) or modulo 2 addition is a popular choice. This means that encoding can be done with operations such as matrix multiplications or simple logic circuits.
- Let the total length of a block be n (i.e., n = m + r). We will describe this as an (n, m) code. An n-bit unit containing data and check bits is referred to as an n-bit codeword.
- The code rate, or simply rate, is the fraction of the codeword that carries information that is not redundant, or m/n. The rates used in practice vary widely.
- To understand how errors can be handled, it is necessary to first look closely at what an error really is. Given any two codewords that may be transmitted or received—say, 10001001 and 10110001—it is possible to determine how many corresponding bits differ. In this case, 3 bits differ. To determine how many bits differ, just XOR the two codewords and count the number of 1 bits in the result
For example:

        10001001
    XOR 10110001
    ------------
        00111000
- The number of bit positions in which two codewords differ is called the Hamming distance (Hamming, 1950). Its significance is that if two codewords are a Hamming distance d apart, it will require d single-bit errors to convert one into the other.
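In code, the Hamming distance is exactly the XOR-and-count operation just described:

```python
def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions in which two codewords differ:
    XOR them and count the 1 bits in the result."""
    return bin(a ^ b).count("1")

print(hamming_distance(0b10001001, 0b10110001))   # -> 3
```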
- In Hamming codes the bits of the codeword are numbered consecutively, starting with bit 1 at the left end, bit 2 to its immediate right, and so on. The bits that are powers of 2 (1, 2, 4, 8, 16, etc.) are check bits. The rest (3, 5, 6, 7, 9, etc.) are filled up with the m data bits. This pattern is shown for an (11,7) Hamming code with 7 data bits and 4 check bits
- If the check results are not all zero, however, an error has been detected. The set of check results forms the error syndrome that is used to pinpoint and correct the error. In Fig., a single-bit error occurred on the channel so the check results are 0, 1, 0, and 1 for k = 8, 4, 2, and 1, respectively.
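A small sketch of an (11,7) Hamming encoder and syndrome check, following the numbering convention above (bit 1 at the left, check bits at the power-of-two positions). The data bits used in the example are arbitrary.

```python
def hamming_encode(data_bits):
    """Place 7 data bits at positions 3, 5, 6, 7, 9, 10, 11 of an 11-bit
    codeword and set the check bits at positions 1, 2, 4, 8 so that each
    check group has even parity."""
    code = [0] * 12                       # index 0 unused; positions 1..11
    for pos, bit in zip([3, 5, 6, 7, 9, 10, 11], data_bits):
        code[pos] = bit
    for p in (1, 2, 4, 8):                # check bit p covers every position
        code[p] = sum(code[i] for i in range(1, 12) if i & p) % 2   # whose number includes p
    return code[1:]

def hamming_syndrome(codeword):
    """Recompute the checks on a received 11-bit codeword. A non-zero result
    is the position of the single bit in error."""
    code = [0] + list(codeword)
    return sum(p for p in (1, 2, 4, 8)
               if sum(code[i] for i in range(1, 12) if i & p) % 2)

cw = hamming_encode([1, 0, 1, 1, 0, 1, 0])
cw[4] ^= 1                                # flip bit 5 (numbering from 1) on the "channel"
print(hamming_syndrome(cw))               # -> 5, pointing at the flipped bit
```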
- Hamming distances are valuable for understanding block codes, and Hamming codes are used in error-correcting memory. However, most networks use stronger codes. The second code we will look at is a convolutional code. This code is the only one we will cover that is not a block code.
- In a convolutional code, an encoder processes a sequence of input bits and generates a sequence of output bits. There is no natural message size or encoding boundary as in a block code. The output depends on the current and previous input bits. That is, the encoder has memory. The number of previous bits on which the output depends is called the constraint length of the code.
- Extensions of the Viterbi algorithm can work with these uncertainties to provide stronger error correction. This approach of working with the uncertainty of a bit is called soft-decision decoding. Conversely, deciding whether each bit is a 0 or a 1 before subsequent error correction is called hard-decision decoding.
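As a sketch, a rate-1/2 convolutional encoder with constraint length 3 fits in a few lines. The generator taps 111 and 101 are a common textbook choice and are an assumption here, not something specified in the text.

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder: each input bit produces two output
    bits that depend on the current bit and the two previous ones (the
    encoder's memory), i.e., constraint length 3."""
    state = 0                                     # the two most recent input bits
    out = []
    for b in bits:
        reg = (b << 2) | state                    # current bit plus the memory
        out.append(bin(reg & g1).count("1") % 2)  # parity over the taps of g1
        out.append(bin(reg & g2).count("1") % 2)  # parity over the taps of g2
        state = reg >> 1                          # shift: drop the oldest bit
    return out

print(conv_encode([1, 0, 1, 1]))   # 8 output bits for 4 input bits
```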
- The third kind of error-correcting code we will describe is the Reed-Solomon code. Like Hamming codes, Reed-Solomon codes are linear block codes, and they are often systematic too.
- Reed-Solomon codes are widely used in practice because of their strong error-correction properties, particularly for burst errors. They are used for DSL, data over cable, satellite communications, and perhaps most ubiquitously on CDs, DVDs, and Blu-ray discs.
- Reed-Solomon codes are often used in combination with other codes such as a convolutional code. The thinking is as follows. Convolutional codes are effective at handling isolated bit errors, but they will fail, likely with a burst of errors, if there are too many errors in the received bit stream. The Reed-Solomon decoding can then mop up the error bursts, a task at which it is very good.
- The final error-correcting code we will cover is the LDPC (Low-Density Parity Check) code. LDPC codes are linear block codes that were invented by Robert Gallager in his doctoral thesis (Gallager, 1962). They were promptly forgotten, only to be reinvented in 1995 when advances in computing power had made them practical.
- In an LDPC code, each output bit is formed from only a fraction of the input bits. This leads to a matrix representation of the code that has a low density of 1s, hence the name for the code. The received codewords are decoded with an approximation algorithm that iteratively improves on a best fit of the received data to a legal codeword. This corrects errors.
- LDPC codes are practical for large block sizes and have excellent error correction abilities that outperform many other codes (including the ones we have looked at) in practice.
Error-Detecting Codes
- Parity.
- Checksums.
- Cyclic Redundancy Checks (CRCs).
Figure: Part of the interleaving example (characters of ‘‘Network’’), as transmitted and as received after a burst error:

| Character | Transmitted | Received |
|-----------|-------------|----------|
| N         | 1001110     | 1001110  |
| e         | 1100101     | 1100011  |
| t         | 1110100     | 1101100  |
| w         | 1110111     | 1110111  |
- There is something else we can do that provides better protection against burst errors: we can compute the parity bits over the data in a different order than the order in which the data bits are transmitted. Doing so is called interleaving. Interleaving is a general technique to convert a code that detects (or corrects) isolated errors into a code that detects (or corrects) burst errors.
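A minimal sketch of the column-wise (interleaved) parity computation, using the four characters from the table above; the helper name is illustrative.

```python
def column_parities(rows, width=7):
    """One even-parity bit per column (bit position) across a block of
    width-bit characters, instead of one parity bit per character."""
    return [sum((r >> i) & 1 for r in rows) % 2
            for i in range(width - 1, -1, -1)]    # leftmost column first

block = [0b1001110, 0b1100101, 0b1110100, 0b1110111]   # 'N', 'e', 't', 'w'
print(column_parities(block))
# The rows are still sent in the usual order, but because parity was computed
# column-wise, a burst of up to 7 consecutive bad bits hits each column at most
# once, so the per-column parity bits detect it.
```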
- In Fig. 3-8, when a burst error of length n = 7 occurs, the bits that are in error are spread across different columns. The second kind of error-detecting code, the checksum, is closely related to groups of parity bits. The word ‘‘checksum’’ is often used to mean a group of check bits associated with a message, regardless of how they are calculated. A group of parity bits is one example of a checksum.
- However, there are other, stronger checksums based on a running sum of the data bits of the message. The checksum is usually placed at the end of the message, as the complement of the sum function. This way, errors may be detected by summing the entire received codeword, both data bits and checksum. A better choice is Fletcher’s checksum (Fletcher, 1982).
- It includes a positional component, adding the product of the data and its position to the running sum. This provides stronger detection of changes in the position of data. Although the two preceding schemes may sometimes be adequate at higher layers, in practice, a third and stronger kind of error-detecting code is in widespread use at the link layer: the CRC (Cyclic Redundancy Check), also known as a polynomial code.
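A sketch of Fletcher’s checksum (the common 16-bit, modulo-255 variant). The second running sum is what gives each byte a position-dependent weight, so transpositions change the result.

```python
def fletcher16(data: bytes) -> int:
    """Fletcher's checksum keeps two running sums: lo adds each data byte,
    and hi adds the running value of lo, which weights every byte by its
    position in the message."""
    lo = hi = 0
    for b in data:
        lo = (lo + b) % 255
        hi = (hi + lo) % 255
    return (hi << 8) | lo

print(hex(fletcher16(b"Network")))
print(hex(fletcher16(b"Netwrok")))   # swapping two bytes changes the checksum
```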
- Polynomial arithmetic is done modulo 2, according to the rules of algebraic field theory. It does not have carries for addition or borrows for subtraction. Both addition and subtraction are identical to exclusive OR.
      10011011
    + 11001010
    ----------
      01010001
- Both the high- and low-order bits of the generator must be 1. To compute the CRC for some frame with m bits corresponding to the polynomial M(x), the frame must be longer than the generator polynomial.
- The idea is to append a CRC to the end of the frame in such a way that the polynomial represented by the checksummed frame is divisible by G(x). When the receiver gets the checksummed frame, it tries dividing it by G(x). If there is a remainder, there has been a transmission error.
- The algorithm for computing the CRC is as follows:
  1. Let r be the degree of G(x). Append r zero bits to the low-order end of the frame so it now contains m + r bits and corresponds to the polynomial x^r M(x).
  2. Divide the bit string corresponding to G(x) into the bit string corresponding to x^r M(x), using modulo 2 division.
  3. Subtract the remainder (which is always r or fewer bits) from the bit string corresponding to x^r M(x) using modulo 2 subtraction. The result is the checksummed frame to be transmitted. Call its polynomial T(x).
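The three steps above can be traced with a short modulo-2 division routine. The frame 1101011111 and generator 10011 (that is, x^4 + x + 1) are a common worked example and are used here only as an illustration.

```python
def mod2_div(dividend: str, generator: str) -> str:
    """Modulo-2 long division on bit strings: XOR the generator in wherever
    the leading bit is 1 and return the r-bit remainder (r = degree of G)."""
    bits = list(dividend)
    r = len(generator) - 1
    for i in range(len(dividend) - r):
        if bits[i] == "1":
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))   # modulo-2 subtract
    return "".join(bits[-r:])

frame, gen = "1101011111", "10011"            # generator G(x) = x^4 + x + 1
r = len(gen) - 1
crc = mod2_div(frame + "0" * r, gen)          # steps 1 and 2: append r zeros, divide
transmitted = frame + crc                     # step 3: append the remainder
print(crc)                                    # -> 0010
assert mod2_div(transmitted, gen) == "0" * r  # the receiver finds no remainder
```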