Data Link Layer Protocols

As previously mentioned in the context of network models, the Physical Layer is chiefly tasked with physically transmitting "bits" over a communication medium. Positioned between the Network Layer and the Physical Layer, the Data Link Layer is responsible for delivering the data handed down by the Network Layer as "packets" to the correct device on the network, in accordance with the transmission standards set by the Physical Layer.


In this section, we will delve into the functions of the Data Link Layer and explore its associated protocols.


Data Link Layer Services

To ensure accurate data delivery, the Data Link Layer provides the following services:

1. Identification of Physical Addresses:

  • Identifying the physical addresses of both sending and receiving devices is a crucial service provided by the Data Link Layer.

2. Formatting Packets into Frames:

  • The Data Link Layer formats Network Layer "packets" into frames, attaching physical addresses for proper transmission.

3. Sequencing and Re-sequencing:

  • Frames transmitted out of sequence are managed through sequencing and re-sequencing mechanisms, enhancing data integrity.

4. Error Detection and Media Access Control:

  • The Data Link Layer performs error detection and controls media access, ensuring reliable communication.
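The four services above can be sketched together in a few lines of Python. The frame layout below (6-byte addresses, a 16-bit sequence number, a CRC-32 trailer) is an illustrative assumption, not any particular standard's format:

```python
import struct
import zlib

def build_frame(dst_mac: bytes, src_mac: bytes, seq: int, payload: bytes) -> bytes:
    """Framing: pack physical addresses (services 1-2), a sequence
    number (3), and the payload, then append a CRC-32 trailer for
    error detection (4)."""
    body = struct.pack("!6s6sH", dst_mac, src_mac, seq) + payload
    return body + struct.pack("!I", zlib.crc32(body) & 0xFFFFFFFF)

def check_frame(frame: bytes) -> bool:
    """Receiver side: recompute the CRC over the body and compare it
    with the received trailer."""
    body, trailer = frame[:-4], struct.unpack("!I", frame[-4:])[0]
    return zlib.crc32(body) & 0xFFFFFFFF == trailer

frame = build_frame(b"\xff" * 6, bytes.fromhex("020000000001"), 7, b"hello")
print(check_frame(frame))  # True for an intact frame
```

Flipping any single bit in the frame makes `check_frame` return False, which is exactly the error-detection service described above.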

Data Link Sublayers

The IEEE has categorized the Data Link Layer into two sub-layers:

i. Logical Link Control (LLC) Sublayer (802.2):

  • Manages communications over a single network link, supporting both connection-oriented and connectionless services.
  • Incorporates flow control using ready/not ready codes and sequence control for transmitted frames.
  • Enables independent functioning from underlying technologies, providing versatility to network layer protocols.

 

ii. Media Access Control (MAC) Sublayer (802.3 & 802.5):

  • Maintains unique physical device addresses, known as MAC Addresses, facilitating targeted message transmission.
  • MAC addresses are burned into the Network Interface Card (NIC) during manufacturing.
  • Handles media access technologies, ensuring efficient and organized network communication.
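As a small illustration of MAC addressing, the sketch below parses the familiar colon-separated notation and inspects the two standard flag bits of the first octet; the example addresses are arbitrary:

```python
def parse_mac(text: str) -> bytes:
    """Convert 'aa:bb:cc:dd:ee:ff' notation into the 6 raw bytes
    carried in a frame header."""
    parts = text.split(":")
    if len(parts) != 6:
        raise ValueError("a MAC address has exactly 6 octets")
    return bytes(int(p, 16) for p in parts)

def is_multicast(mac: bytes) -> bool:
    # Least-significant bit of the first octet marks a group address.
    return bool(mac[0] & 0x01)

def is_locally_administered(mac: bytes) -> bool:
    # Second-least-significant bit marks an address assigned locally
    # rather than burned into the NIC by the manufacturer.
    return bool(mac[0] & 0x02)

print(is_multicast(parse_mac("01:00:5e:00:00:fb")))          # True
print(is_locally_administered(parse_mac("02:00:00:00:00:01")))  # True
```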

 



Common Data Link Layer Protocols

1. Ethernet and Token Ring:

  • Ethernet and Token Ring stand out as widely used LAN Layer 2 Protocols, defined by IEEE specifications 802.3 and 802.5, respectively.

2. Media Access Control (MAC) Protocols:

  • IEEE 802.3 and 802.5 standards, defining station access to the media, are categorized as Media Access Control (MAC) protocols. These protocols are integrated into the MAC sublayer of the Data Link Layer.
  • Both Ethernet and Token Ring protocols incorporate another specification in the Data Link Layer known as Logical Link Control (LLC) 802.2.

3. Logical Link Control (LLC) 802.2:

  • IEEE 802.2 is specifically crafted to provide common functions shared by both Ethernet and Token Ring protocols.
  • While 802.3 and 802.5 focus on Data Link functions related to either Ethernet or Token Ring topologies, 802.2 serves as a unifying element designed to harmonize functionalities for both.


Ethernet (IEEE 802.3)

Ethernet, developed by Xerox in the early 1970s, was initially implemented over thick coaxial (thicknet) cable, operating at 10 Mbps. It has become one of the most widely used LAN protocols. The original version of Ethernet, designed to support over 100 computers on a 1 km cable, has evolved into a standard that encompasses three principal categories.


Ethernet / IEEE 802.3 Specifications:

  • Ethernet/IEEE 802.3 operates at 10 Mbps on coaxial and twisted-pair cables. Later extensions of the IEEE 802.3 specification allow for 100 Mbps operation as well.

Types of Ethernet:

Ethernet, a network standard for data communication, utilizes twisted-pair or coaxial cables to connect computers to a network or to the internet. Beyond standard 10 Mbps Ethernet, two faster types are classified by speed:

  1. Fast Ethernet: Designed to compete with protocols like FDDI, it operates at 100 Mbps over twisted pair cables.
  2. Gigabit Ethernet: Operates at 1000 Mbps (1 Gbps) over fiber and twisted-pair cables, designed to connect two or more stations.

Ethernet Properties:

  • Uses 10 Mbps/100 Mbps broadcast bus technology.
  • The transceiver passes all packets from the bus to the host adapter.
  • The host adapter chooses some packets and filters others.
  • Best-effort delivery, where hardware provides no information to the sender about whether the packet was delivered.
  • If the destination machine is powered down, packets will be lost.
  • TCP/IP protocols accommodate best-effort delivery.
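The host adapter's filtering behaviour described above can be sketched as a single predicate. `adapter_accepts` is a hypothetical helper, not a real driver API:

```python
BROADCAST = b"\xff" * 6

def adapter_accepts(frame_dst: bytes, my_mac: bytes) -> bool:
    """Pass a frame up to the host only if it is addressed to this
    station or to everyone; drop everything else silently -- no
    feedback is ever returned to the sender (best-effort delivery)."""
    return frame_dst == my_mac or frame_dst == BROADCAST

my_mac = bytes.fromhex("020000000001")
print(adapter_accepts(BROADCAST, my_mac))                      # True
print(adapter_accepts(bytes.fromhex("020000000099"), my_mac))  # False
```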

Fast Ethernet Goals:

  • Upgrade the data rate to 100 Mbps.
  • Maintain compatibility with Standard Ethernet.
  • Retain the same 48-bit address, frame format, minimum, and maximum frame length.

Gigabit Ethernet:

  • Designed for connecting two or more stations, supporting point-to-point connections.
  • Operates at 1000 Mbps (1 Gbps) over fiber and twisted-pair cables.

Broadcasting:

  • Ethernet operates in a broadcast-based environment, where all stations see all frames on the network. After any transmission, each station must examine every frame to determine its intended recipient. Frames identified for a specific station are then passed to a higher-layer protocol.

 

 

Token Ring / IEEE 802.5

Where Did Token Ring Come From?

Token Ring, developed in the 1970s by IBM, was later standardized by IEEE as IEEE 802.5. Initially, it was the most prevalent network implementation, and while it is currently surpassed by Ethernet in usage, IBM continues to employ Token Ring in its network design.

Token Passing Mechanism:

  • Token passing in Token Ring entails circulating a token or a small frame throughout the network.
  • The device in possession of the token holds the "right-of-way" to transmit information around the ring.

Token Ring and Ethernet represent two distinct approaches to networking. While Ethernet utilizes a bus or star topology with a contention-based access mechanism, Token Ring employs a ring topology with a token-passing access mechanism. In a Token Ring network, devices are organized in a physical ring or star-wired ring, and the token circulates in a predictable order. This ensures orderly access to the network, preventing collisions and optimizing data transmission.

Despite its historical significance and once being a dominant technology, the Token Ring has gradually declined in popularity due to the widespread adoption of Ethernet. However, it still finds niche applications, especially in legacy systems where Token Ring infrastructure remains in use.

 

Medium Access Control:

In systems where multiple users share a common channel, conflicts can arise; these scenarios are termed contention or collision. The time during which conflicts may occur is referred to as the contention period. The MAC sublayer of the Data Link Layer is responsible for collision resolution. Contention arises because there are moments when it is not suitable for a station to send data across the shared media.

 

Multiple Access Protocols:

Multiple Access Protocols can be categorized into three groups:



1. Random Access:

  • In random access or contention methods, no station holds superiority over another, and none is assigned control over another. No station permits, or denies permission to, another station to send; each station decides for itself, following the protocol, whether to transmit.

 

2. Controlled Access:

  • In controlled access, stations consult with one another to determine which station has the right to send. A station cannot transmit unless it has been authorized by other stations, introducing a controlled hierarchy.

 

3. Channelization:

  • Channelization is a multiple-access method in which the available bandwidth of a link is shared in time, frequency, or through code among different stations. This method helps allocate specific segments of the channel to individual stations, reducing the likelihood of collisions.

 

Understanding these multiple access protocols is crucial for optimizing network efficiency. Random access is often employed in scenarios where fairness is prioritized, while controlled access provides a more structured and controlled approach. Channelization, on the other hand, allocates dedicated portions of the channel to different stations, minimizing conflicts and enhancing overall network performance.

 

Random Access Protocols

 

Aloha Protocols

The Aloha protocol originated from a project at the University of Hawaii, aiming to facilitate data transmission among computers on different Hawaiian Islands through radio transmissions. Communication primarily occurred between remote stations and a central site known as Menehune, or vice versa. All messages sent to Menehune utilized the same frequency. Upon receiving an intact message, Menehune broadcasted an acknowledgment (ACK) on a distinct outgoing frequency, which was also used for messages from the central site to remote computers. All stations monitored this second frequency for incoming messages.

 

Pure Aloha

Pure Aloha, a fully decentralized and unslotted protocol, operates as a random access protocol with a straightforward implementation. Its guiding principle is simplicity: "When you want to talk, just talk!" Nodes desiring transmission send packets on their broadcast channel without regard for other ongoing transmissions. However, a significant drawback is the lack of knowledge about successful reception. To address this, Pure Aloha incorporates a mechanism where, after transmitting, a node expects an acknowledgment within a finite time. If none is received, the data is retransmitted. While effective in small networks with low loads, this approach falters in larger, high-load networks, prompting the development of Slotted Aloha.


Slotted Aloha

Similar to Pure Aloha but with a different approach to transmissions, Slotted Aloha introduces a delay before sending. It divides the timeline into equal slots, allowing transmissions only at slot boundaries. Assumptions include fixed frame sizes, time divided into slots of L/R seconds (the time needed to transmit one L-bit frame at a rate of R bps), nodes initiating transmissions only at slot beginnings, and synchronized nodes aware of slot start times. Collisions, detected by all nodes before a slot ends, are reduced significantly.

This synchronization minimizes collisions among nodes attempting to transmit simultaneously, leading to improved performance compared to Pure Aloha.
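The improvement can be quantified with the classical throughput formulas (standard results not derived in this text): for an offered load G, Pure Aloha achieves S = G·e^(−2G) while Slotted Aloha achieves S = G·e^(−G), because slotting halves the vulnerable period:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    # S = G * e^(-2G): a frame survives only if no other frame starts
    # within a vulnerable window two frame-times wide.
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    # S = G * e^(-G): slot boundaries halve the vulnerable window.
    return G * math.exp(-G)

print(round(pure_aloha_throughput(0.5), 3))    # peak: 0.184
print(round(slotted_aloha_throughput(1.0), 3)) # peak: 0.368
```

The maxima, about 18.4% and 36.8% of channel capacity, occur at G = 0.5 and G = 1 respectively, which is why Slotted Aloha roughly doubles the usable throughput.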


Carrier Sense Multiple Access (CSMA)

Carrier Sense Multiple Access (CSMA) operates on the principle of "sense before transmit" or "listen before talk." It aims to reduce, though not eliminate, the possibility of collisions during data transmission. The persistence of collision risk arises from propagation delay, as it takes a short amount of time for the first bit of a transmitted frame to reach all stations.

In CSMA, suppose station B starts transmitting at time t. Shortly afterwards, station C senses the medium and perceives it as idle because, at that moment, the first bits from station B have not yet reached it. Unaware of B's transmission, station C also sends a frame. The collision of these two signals destroys both frames.

CSMA encompasses two distinct protocols: CSMA/CD (Carrier Sense Multiple Access with Collision Detection) and CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance). These protocols offer different approaches to managing collisions and enhancing the efficiency of data transmission in network communication.

 

Carrier Sense Multiple Access with Collision Detection (CSMA/CD):

In CSMA/CD, a Local Area Network (LAN) is structured as a shared medium, requiring each device to wait for an appropriate time before transmitting data. Stations agree on common terms and collision-detection measures to ensure effective transmission. The protocol determines which station will transmit at a given time, preventing data corruption on its way to the destination.

Algorithm:

In a CSMA/CD environment, any station in the network can transmit when the network is quiet.

1. Listen Before Transmit:

  • Before sending data, stations actively listen for ongoing traffic on the network.

2. Transmit if Idle:

  • If no other frame is present on the Ethernet, the station proceeds to send its data.

3. Wait in Case of Traffic:

  • If another frame is detected on the Ethernet network, the station patiently waits until it senses no traffic before initiating data transmission.

4. Collision Handling:

  • If two or more stations attempt to transmit simultaneously (collision), they stop, wait for a random amount of time, and reevaluate the network before retransmitting. Back-off algorithms determine the retransmission order for stations involved in the collision, assigning random order numbers to ensure fairness and efficient retransmission.
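The four steps above can be sketched as follows. `channel_idle()` and `transmit()` are hypothetical hooks standing in for the real hardware, and the 51.2 µs slot time is the classic 10 Mbps Ethernet value:

```python
import random
import time

def backoff_delay(attempt: int, slot_time_us: float = 51.2) -> float:
    """Truncated binary exponential back-off: after the n-th collision,
    wait a random number of slot times drawn from 0 .. 2^min(n, 10) - 1."""
    return random.randrange(2 ** min(attempt, 10)) * slot_time_us

def csma_cd_send(channel_idle, transmit, max_attempts: int = 16) -> bool:
    for attempt in range(1, max_attempts + 1):
        while not channel_idle():            # 1 & 3: listen, wait if busy
            time.sleep(0)
        if transmit():                       # 2: send when the wire is quiet
            return True                      #    (returns False on collision)
        time.sleep(backoff_delay(attempt) / 1_000_000)  # 4: random back-off
    return False                             # give up after 16 attempts
```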

 

Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA):

CSMA/CD faces limitations in certain wireless scenarios, particularly in "hidden node" problems. Imagine three nodes—A, B, and C—communicating wirelessly. B can communicate with both A and C, but A and C are beyond each other's range. If A and C attempt simultaneous communication with B, there's a risk of interference, and neither A nor C can detect it. To address this, CSMA/CA, a refined version suitable for wireless applications, was developed.

 

Algorithm:

1. Channel Status Check: When a frame is ready, the transmitting station checks if the channel is idle or busy.

2. Wait for Idle Channel: If the channel is busy, the station waits until it becomes idle.

3. Inter-frame Gap and Transmission: Once the channel is idle, the station waits for an Inter-frame Gap (IFG) time and then sends the frame.

4. Set Timer: After sending the frame, the station sets a timer.

5. Wait for Acknowledgement: The station waits for an acknowledgment from the receiver. If acknowledgment is received before the timer expires, the transmission is marked as successful.

6. Back-off and Retry: If no acknowledgment is received, the station waits for a back-off period and restarts the algorithm, aiming to improve the chances of successful transmission. This back-off mechanism helps avoid collisions and ensures efficient wireless communication.
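A minimal sketch of the six steps, with `channel_idle()`, `transmit()`, and `wait_for_ack()` as hypothetical stand-ins for the real radio interface; the IFG, timeout, and back-off values are illustrative:

```python
import random
import time

def csma_ca_send(channel_idle, transmit, wait_for_ack,
                 ifg: float = 0.001, max_attempts: int = 7) -> bool:
    for _ in range(max_attempts):
        while not channel_idle():           # 1 & 2: wait for an idle channel
            time.sleep(0)
        time.sleep(ifg)                     # 3: inter-frame gap, then send
        transmit()
        if wait_for_ack(timeout=0.05):      # 4 & 5: set timer, wait for ACK
            return True
        time.sleep(random.random() * 0.01)  # 6: back off before retrying
    return False
```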

 

Token Ring Method

In the Token Ring method, a distinctive approach is employed. A free token circulates within a ring when no device has data to send. When a device intends to transmit, it claims the free token by modifying bits in the 802.5 header to indicate token occupancy. Subsequently, the data is inserted into the ring following the token ring header.

 

Algorithm:

The fundamental steps for utilizing Token Ring when there's data to be sent are outlined below:

1. Listen for the Passing Token: Devices actively listen for the circulating token within the ring.

2. Token Availability Check: If the token is currently in use, the device waits for the next passing token.

3. Token Claim and Data Transmission: When the token is free, the device marks it as busy, appends the data, and transmits the data onto the ring.

4. Token Return and Data Removal: After completing a full revolution around the ring, the sender removes the data when the header with the busy token returns to the sender of that frame.

5. Free Token Transmission: The device, having sent its data, transmits a free token to enable another station to send a frame. This process ensures an orderly and efficient circulation of the token for data transmission within the Token Ring network.
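One circulation of the token can be simulated in a few lines. The dictionary of stations and their queued frames is an illustrative stand-in for the physical ring:

```python
def token_ring_round(stations: dict) -> list:
    """One circulation of the token.  `stations` maps name -> queued
    frame (or None, meaning nothing to send)."""
    delivered = []
    for name, frame in stations.items():   # 1 & 2: the token visits each station
        if frame is not None:              # 3: claim the free token and send
            delivered.append((name, frame))
            stations[name] = None          # 4 & 5: remove the frame after a full
    return delivered                       #        revolution, release the token

ring = {"A": "frame-1", "B": None, "C": "frame-2"}
print(token_ring_round(ring))  # [('A', 'frame-1'), ('C', 'frame-2')]
```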

 

Controlled Access Protocols

In controlled access protocols, stations collaborate to determine which station has the authorization to send data. The station seeking to transmit must receive approval from other stations. Three controlled-access methods are discussed:

Reservation

In the reservation method, a station must reserve before transmitting data. Time is divided into intervals, with a reservation frame preceding data frames in each interval. Each station has a dedicated mini slot in the reservation frame, making a reservation when needing to send a data frame. This method ensures orderly data transmission without collisions.
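A single interval can be sketched as follows, with one boolean per station standing in for its mini-slot in the reservation frame:

```python
def reservation_interval(wants_to_send):
    """One interval: each station owns one mini-slot in the reservation
    frame; stations that set their bit then transmit in order, so no
    collision is possible in the data phase."""
    reservation_frame = [1 if w else 0 for w in wants_to_send]
    send_order = [i for i, bit in enumerate(reservation_frame) if bit]
    return reservation_frame, send_order

frame, order = reservation_interval([True, False, True, True, False])
print(frame)  # [1, 0, 1, 1, 0]
print(order)  # [0, 2, 3] -- stations 0, 2 and 3 send, in that order
```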


Polling

Polling is employed in topologies with a primary station and secondary stations. All data exchanges go through the primary device, which controls the link. The primary device determines which device can use the channel at a given time, preventing collisions using poll and select functions. However, a drawback is that if the primary station fails, the system becomes non-functional.

Select and Poll

Select

The select function is used by the primary device when it has data to send. The primary alerts the secondary devices to an upcoming transmission, transmitting a select (SEL) frame. This ensures that the secondary is prepared to receive the data.

 


Poll

The poll function is used by the primary device to solicit transmissions from secondary devices. It asks each device if it has anything to send, and upon receiving a positive response, the primary reads the data and acknowledges its receipt.
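One cycle of the primary station, combining select and poll, might be sketched like this (the frame queues and the event log are illustrative, not a real protocol trace):

```python
def primary_cycle(uplink: dict, downlink: dict) -> list:
    """One cycle of the primary station.  `uplink` maps each secondary
    to the frame it wants to send (or None); `downlink` maps secondaries
    to frames the primary wants to deliver.  Returns an event log."""
    log = []
    for name, frame in downlink.items():       # SELECT: primary has data:
        log.append(("SEL", name))              # alert the secondary first,
        log.append(("DATA", name, frame))      # then transmit the frame
    for name, frame in uplink.items():         # POLL: solicit transmissions
        log.append(("POLL", name))
        if frame is not None:                  # positive response: read the
            log.append(("RECV", name, frame))  # data and acknowledge it
            log.append(("ACK", name))
    return log

print(primary_cycle({"S1": "up-1", "S2": None}, {"S2": "down-1"}))
```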



 

Token Passing

In token passing, stations are organized in a logical ring, with each station having a predecessor and a successor. The right to access the channel is represented by a circulating token. The station holding the token has the authority to send data. When a station has data to send, it waits for the token from its predecessor, sends the data, and then passes the token to the next logical station. Token management is needed to limit how long a station may hold the token, to ensure token integrity (for example, recovering a lost or corrupted token), to assign priorities, and to make low-priority stations release the token to high-priority ones.

 

 

Channelization

Channelization, also known as channel partitioning, is a multiple-access method that involves sharing the available bandwidth of a communication link among different stations. This sharing can occur in three key dimensions: time, frequency, or through code. The purpose of channelization is to efficiently allocate resources and facilitate communication among multiple stations. In this section, we discuss three channelization protocols: FDMA, TDMA, and CDMA.

 

FDMA - Frequency Division Multiple Access

In Frequency Division Multiple Access (FDMA), the available bandwidth is partitioned into distinct frequency bands. Each station is assigned a dedicated frequency band for transmitting its data. This allocation remains constant, ensuring that each station has exclusive access to its designated frequency band throughout the communication.

 


Implementation:

1. Allocation and Band Pass Filters:

  • Each station is assigned a specific frequency band.
  • Stations employ bandpass filters to confine transmitter frequencies within their allocated bands.
  • Guard bands separate the allocated frequency bands to prevent interference between stations.

 

2. Channel Separation:

  • Allocated frequency bands are separated by guard bands, minimizing the risk of interference.
  • The visual representation in the figure illustrates the concept of FDMA, emphasizing the assigned frequency bands for different stations.

 

Key Characteristics:

  • Continuous band usage: FDMA designates a fixed frequency band for the entire communication duration.
  • Well-suited for streaming data, allowing a continuous flow without the need for packetization.

 

Comparison with FDM:

  • Although FDMA and Frequency Division Multiplexing (FDM) seem conceptually similar, distinctions exist.
  • In FDM, low-pass channels are combined, modulated, and create a band-pass signal with shifted bandwidth.
  • FDMA operates at the data-link layer, where each station independently instructs its physical layer to generate a band-pass signal for the allocated frequency band. No physical multiplexer is involved at the physical layer, and signals are automatically band-pass filtered and mixed when transmitted to the common channel.

FDMA's distinct allocation strategy and continuous band usage make it a suitable choice for various communication systems, including cellular telephone networks, as explored further in another chapter.
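The band-allocation arithmetic behind FDMA can be sketched in a few lines; the bandwidth and guard-band figures below are purely illustrative:

```python
def fdma_bands(total_bw_khz: float, n_stations: int, guard_khz: float):
    """Divide the link bandwidth into equal per-station bands separated
    by guard bands; returns (low, high) edges in kHz for each station."""
    usable = total_bw_khz - guard_khz * (n_stations - 1)
    band = usable / n_stations
    edges, low = [], 0.0
    for _ in range(n_stations):
        edges.append((low, low + band))
        low += band + guard_khz   # skip the guard band to the next edge
    return edges

for lo, hi in fdma_bands(1000, 4, 10):
    print(f"{lo:7.1f} - {hi:7.1f} kHz")   # four 242.5 kHz bands
```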

 

TDMA - Time Division Multiple Access

In Time Division Multiple Access (TDMA), stations collaborate to share the channel's bandwidth in sequential time slots. Each station is assigned a dedicated time slot, allowing exclusive transmission during that period. The core concept of TDMA is illustrated in the figure below.

The primary challenge in TDMA is achieving synchronization among stations. Each station must know the start and location of its designated time slot. Propagation delays, particularly in expansive network setups, make precise synchronization difficult. To address this, guard times and synchronization bits (often referred to as preamble bits) are inserted, facilitating alignment and coordination.

It's crucial to highlight the distinction between TDMA and Time-Division Multiplexing (TDM). TDM operates at the physical layer, combining data from slower channels and transmitting them using a faster channel, employing a physical multiplexer for interleaving. In contrast, TDMA is a data-link layer access method. Each station instructs its physical layer to utilize the allocated time slot, eliminating the need for a physical multiplexer at the physical layer.
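The slot arithmetic behind TDMA can be sketched as follows (guard times and preamble bits are omitted for simplicity, and all figures are illustrative):

```python
def tdma_slot(now_us: int, n_stations: int, slot_us: int, station: int):
    """Return (which station owns the channel at time `now_us`, start
    time of `station`'s next slot).  Slots repeat in frames of
    n_stations consecutive slots."""
    owner = (now_us // slot_us) % n_stations
    frame_us = slot_us * n_stations
    next_start = (now_us // frame_us) * frame_us + station * slot_us
    if next_start <= now_us:
        next_start += frame_us   # this frame's slot has passed; wait a frame
    return owner, next_start

print(tdma_slot(now_us=250, n_stations=4, slot_us=100, station=1))  # (2, 500)
```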


CDMA - Code Division Multiple Access  

Code Division Multiple Access (CDMA) originated decades ago and has become feasible with recent advancements in electronic technology. CDMA stands apart from FDMA and TDMA as it utilizes a unique approach where a single channel encompasses the entire bandwidth of the link, enabling all stations to transmit simultaneously without time-sharing.

To understand CDMA, consider the analogy of communication with different codes. Imagine a large room with diverse conversations happening simultaneously. Two people can converse privately in English, while another pair communicates in Chinese, and so forth. Despite the multiple conversations in the common space, each pair uses a distinct code or language, allowing for simultaneous and independent communication.

In CDMA, each station employs a unique code to differentiate its transmission from others, effectively utilizing the shared channel. This distinctive coding scheme contributes to CDMA's ability to support simultaneous transmissions without the need for time-sharing, making it a robust and efficient multiple-access method.
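The coding idea can be demonstrated with a tiny example using four orthogonal chip sequences (a 4×4 Walsh table, the standard textbook construction):

```python
# Orthogonal chip sequences: each station is assigned one row.
CHIPS = [
    [+1, +1, +1, +1],
    [+1, -1, +1, -1],
    [+1, +1, -1, -1],
    [+1, -1, -1, +1],
]

def cdma_transmit(bits):
    """Each station multiplies its data bit (+1, -1, or 0 if silent)
    by its chip sequence; the channel carries the element-wise sum."""
    signal = [0] * len(CHIPS[0])
    for station, bit in enumerate(bits):
        for i, chip in enumerate(CHIPS[station]):
            signal[i] += bit * chip
    return signal

def cdma_receive(signal, station):
    """Recover one station's bit: inner product with that station's
    chips, divided by the code length."""
    return sum(s * c for s, c in zip(signal, CHIPS[station])) // len(signal)

# Stations 0, 1 and 3 send +1, -1, +1 while station 2 stays silent.
signal = cdma_transmit([+1, -1, 0, +1])
print([cdma_receive(signal, s) for s in range(4)])  # [1, -1, 0, 1]
```

Because the rows are orthogonal, each inner product cancels every other station's contribution, so all the bits are recovered even though the transmissions overlapped completely.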

 
