Problems with Existing Congestion Control Schemes

 

Several congestion control schemes are in use, or have been proposed, for networks with Internet-like architectures. There is not space in this technical report to give a complete survey of these schemes; however, a few of the main examples have been selected for examination.

Most congestion control schemes can be classified as one, or a combination, of a small number of basic methods.

Let us look at some examples of these schemes.

Transport Layer Design

The Transport layer, as the generator of a traffic flow, is the essential ``source'' of that flow. Unfortunately, while it is responsible for the flow of traffic that may cause network congestion, it is divorced from the Network layer, and is unable to see directly the effect of its traffic on the intermediate nodes between it and the traffic's destination.

The Transport layer is only able to determine that there is network congestion in two ways: explicitly, if the Network layer informs it of congestion, or implicitly, through delayed acknowledgments and the loss of data in transit.

Delayed acknowledgments and loss of data do not specifically imply network congestion. They may be caused by packet corruption, or by temporary traffic instability as new flows of data are created. Therefore, transport protocols that rely on the second method to deduce the existence of congestion may not detect congestion when it exists, and may believe that congestion exists when it doesn't.

Transport protocols which perform end-to-end reliability must retransmit data when it is lost in transit. In general, retransmissions are prompted either by the arrival of a negative acknowledgment, or by the expiry of a timer set to detect the non-arrival of any acknowledgment. Neither of these events specifically implies the loss of data. Transport protocols which use these events to prompt data retransmission may retransmit data unnecessarily, burdening the network with redundant transmissions and increasing the chance of congestion.

TCP, as a transport protocol which has all of the above properties, also has all of the deficiencies described above. Two of its largest problems are its use of a Go-Back-N sliding window and its use of a retransmission scheme based on round-trip time estimation.

TCP's Sliding Window

 

TCP does not implement any congestion control or congestion avoidance schemes; instead it uses a Go-Back-N sliding window for end-to-end flow control [Postel 81]. TCP's window mechanism has been found to be fraught with problems and deficiencies, and there have been several proposals to remedy them, such as avoidance of the ``Silly Window Syndrome'' [Clark 82], ``Small Packet Avoidance'' [Nagle 84] and ``Slow Start'' [Jacobson 88]. Nearly all implementations now include these extensions, and TCP's window operation is well-behaved. However, TCP's sliding window management is now anything but elegant.

Another problem with a Go-Back-N sliding window mechanism is that if data at point N is lost, the source must retransmit all data from N onwards, even if the destination has already received much of this data. This causes unnecessary network traffic.
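As an illustration of this behaviour, the following C sketch shows the retransmission step of a hypothetical Go-Back-N sender. The structure and function names are assumptions made for this example, and are not drawn from any actual TCP implementation.

  #include <stdio.h>

  /* Illustrative sketch of a Go-Back-N sender's behaviour on a
   * retransmission timeout.  The names and types used here are
   * assumptions for this example, not taken from any TCP source.   */

  struct segment {
      unsigned long seq;      /* sequence number of the first byte  */
      int           len;      /* number of data bytes               */
  };

  /* Stand-in for handing a segment to the Network layer.           */
  static void send_segment(const struct segment *s)
  {
      printf("sending bytes %lu to %lu\n", s->seq, s->seq + s->len - 1);
  }

  /* On timer expiry the sender assumes the oldest unacknowledged
   * segment was lost and resends it and every segment after it,
   * even though the destination may already hold most of that data. */
  static void go_back_n_timeout(const struct segment *outstanding, int count)
  {
      int i;

      for (i = 0; i < count; i++)
          send_segment(&outstanding[i]);   /* i > 0: possibly redundant */
  }

  int main(void)
  {
      /* Only the first segment was lost, yet all four are resent.  */
      struct segment window[4] = {
          { 1000, 512 }, { 1512, 512 }, { 2024, 512 }, { 2536, 512 }
      };

      go_back_n_timeout(window, 4);
      return 0;
  }
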

Retransmission Timer Deficiencies

TCP uses the expiry of a timer, set to detect the non-arrival of an acknowledgment, to prompt data retransmission. The dilemma is that if the timer is set too short, it expires before an acknowledgment can arrive, and data is retransmitted unnecessarily. If it is set too long, the protocol reacts too slowly to the loss of data, reducing the rate of data transmission and the throughput.

The acknowledgment for transmitted data should return after one round-trip time. Collective experience has shown that the round-trip time is anything but constant; it varies with the load on the network and with short- and long-term congestion within the network. Estimating the round-trip time has been shown to be very difficult.

The original TCP round-trip time estimation scheme was simple and worked well when the Arpanet was small. As the network size increased, round-trip time variance increased and the estimation scheme could not cope with this high variance. Jacobson [Jacobson 88] proposed a new round-trip time estimation scheme which could cope with high round-trip time variance, and this is now the recommended scheme for TCP.
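In essence, Jacobson's estimator maintains both a smoothed round-trip time and a smoothed mean deviation, and derives the retransmission timeout from the two. The following C sketch shows the idea; the gain constants of 1/8 and 1/4 and the factor of four applied to the deviation are those in common use, while the scaled integer arithmetic and initialisation found in real implementations are omitted.

  #include <math.h>

  /* Sketch of Jacobson's round-trip time estimator, in floating
   * point for clarity; real implementations use scaled integers
   * and must also handle initialisation of the first sample.      */

  struct rtt_state {
      double srtt;     /* smoothed round-trip time estimate        */
      double rttvar;   /* smoothed mean deviation of that estimate */
  };

  /* Fold one round-trip time measurement into the estimate and
   * return the retransmission timeout for subsequent segments.    */
  double rtt_update(struct rtt_state *st, double measured)
  {
      double err = measured - st->srtt;

      st->srtt   += err / 8.0;                       /* gain g = 1/8 */
      st->rttvar += (fabs(err) - st->rttvar) / 4.0;  /* gain h = 1/4 */

      /* The timeout tracks both the mean and the variance, so it
       * grows when the round-trip time becomes unstable.           */
      return st->srtt + 4.0 * st->rttvar;
  }
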

Karn [Karn & Partridge 87] added to Jacobson's work by showing an ambiguity in round-trip time estimation due to acknowledgments for packets that have been retransmitted. His algorithm for removing the ambiguity from round-trip time estimates is found in all major implementations of TCP.

Despite these solutions, transport protocols like TCP are unable to deduce the round-trip time accurately under conditions of large variance. This leads to network congestion and/or poor throughput, and can cause further round-trip time variance.

ICMP Source Quench

 

The ICMP Source Quench is the only congestion control mechanism in the Network layer of the TCP/IP protocol suite, and both routers and hosts play a part in it. When a router believes itself to be congested, it sends `source quench' packets back to the sources of the packets causing the congestion. On receipt of these packets, a host throttles its data rate so as to relieve the congestion. It can then slowly increase its data rate again, until source quench packets begin to arrive once more.
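Purely as an illustration, a host's reaction might resemble the following sketch. The halving of the rate and the slow linear recovery are arbitrary policies invented for this example, and are not part of the protocol specification.

  /* One conceivable host reaction to ICMP Source Quench messages.
   * The halving and the linear recovery step are arbitrary choices
   * for illustration; the specification mandates neither.          */

  struct send_rate {
      double pkts_per_sec;   /* current transmission rate            */
      double ceiling;        /* rate the application wishes to use   */
  };

  /* Called when a source quench arrives for this traffic flow.     */
  void on_source_quench(struct send_rate *r)
  {
      r->pkts_per_sec /= 2.0;            /* throttle back sharply    */
  }

  /* Called periodically while no source quench messages arrive.    */
  void on_quiet_period(struct send_rate *r)
  {
      r->pkts_per_sec += 1.0;            /* creep back up slowly     */
      if (r->pkts_per_sec > r->ceiling)
          r->pkts_per_sec = r->ceiling;
  }
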

Source Quench's effectiveness as a congestion control mechanism is flawed by two problems. Firstly, it is not clearly specified how a router or destination decides that it is congested, to whom it should send source quench messages, how often it should send them, and when it should stop. Secondly, it is not clearly specified how a host should react to a source quench message: by how much it should reduce its output rate, whether it should inform the upper layers of the network stack, and how it should increase its output rate again in the absence of source quench messages.

Even worse, a router or destination may be congested and yet send no source quench messages. Sources in receipt of a source quench message cannot determine whether a router or the destination dropped the datagram that prompted the message, and do not know to what rate they should reduce their transmission to prevent further source quench messages.

Since the original specification of the Source Quench mechanism, several further papers have been produced to improve the mechanism's performance and to tighten its specification ([Nagle 84] and [Braden & Postel 87]). One serious problem, raised by Nagle in [Nagle 84], is that hosts which receive source quench messages may not inform the corresponding Transport layer of the source quench. This effectively renders Source Quench useless in some instances.

Some of the deficiencies in Source Quench were recognised in RFC 1009 - Requirements for Internet Gateways [Braden & Postel 87], and while it mandated certain parts of the Source Quench mechanism, it still left others non-mandatory, with suggested behaviour given. The document notes that ``... the Source Quench mechanism is known to be an imperfect means for Internet congestion control, and research towards more effective means is in progress ...''.

The Dec Bit Mechanism

The Dec Bit mechanism, also known as the Congestion Indication mechanism, is a congestion avoidance mechanism developed for the Digital Network Architecture at DEC [Jain et al 87], and has since been specified as the congestion avoidance mechanism for the ISO TP4 and CLNP transport and network protocols.

In Dec Bit, all network packets carry a single bit, the `Congestion Experienced Bit', in their headers. Sources transmit packets with the bit set to zero; if a packet passes through a router that believes itself to be congested, the router sets the bit to one. Acknowledgment packets from the destination return the received Congestion Experienced Bit to the source. If the bit is set, the source knows there was some congestion along the path to the destination, and takes remedial action. In DECNET, a source reacts to the Congestion Experienced Bit by either increasing its window size by one packet (if the bit is not set), or decreasing the window size by 12.5% (if it is set). Once the window size changes, new Congestion Experienced Bits are ignored for one round-trip time, to allow the traffic's effect on the network to be observed and returned to the source via later Congestion Experienced Bits.
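The window adjustment just described can be sketched as follows. The one-round-trip freeze is reduced to a simple flag, and the structure and field names are assumptions made for this example rather than DECNET's actual data structures.

  /* Sketch of the DECNET window adjustment described above.  The
   * one-round-trip freeze is reduced to a flag that some timer
   * (not shown) clears one round-trip time after each change.      */

  struct decbit_state {
      double window;     /* current window size, in packets          */
      int    frozen;     /* non-zero: ignore bits for one round trip */
  };

  /* Called for each acknowledgment carrying the returned
   * Congestion Experienced Bit.                                     */
  void decbit_on_ack(struct decbit_state *s, int congestion_bit)
  {
      if (s->frozen)              /* wait for the previous change to  */
          return;                 /* take effect before reacting      */

      if (congestion_bit)
          s->window *= 0.875;     /* decrease the window by 12.5%     */
      else
          s->window += 1.0;       /* increase the window by one       */

      s->frozen = 1;              /* cleared one round trip later     */
  }
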

Jain et al. show that Dec Bit is robust and brings the network to a stable operating point with minimal oscillations in the sources' window sizes.


