The control of congestion in connectionless packet-switched wide-area networks is a major problem, especially in such networks as the Internet, which is experiencing an exponential growth in users and network traffic. This thesis outlines a rate-based framework for congestion control in these networks, examines the requirements of the framework, and describes a number of control mechanisms which meet the framework's requirements.
The first section (Chapters 1 to 3) describes the problem of congestion in these networks, and reviews the current methods employed to deal with congestion, plus other solutions that are described in the literature. The second section (Chapters 4 to 7) outlines my proposed rate-based congestion control framework, and describes its ramifications and parameters. Finally, the third section (Chapters 8 to 11) tests the framework in several network simulations, compares it against some existing methods, and presents and summarises the experimental results.
Most computer networks suffer from network congestion to some extent. This thesis limits its scope to congestion in connectionless packet-switched wide-area networks, and in particular, to networks with the same architecture as the Internet.
The network architecture under consideration has the following characteristics:
All the links on a network are joined together by routers. These forward packets arriving on incoming links to the appropriate outgoing links, to ensure that the packet is routed to its intended destination. Figure 1 shows the basic architecture of a router.
The router is connected to a number of incoming links and a number of outgoing links. These two numbers may differ, although they are usually equal. In most situations, input and output links are paired to form either a full-duplex channel, where data can flow in both directions simultaneously, or a half-duplex channel, where data can flow in only one direction at a time.
Incoming packets are buffered in the input buffers. Once in the buffer, they are selected by the Packet Selection Function to be passed to the Routing Function. The Routing Function determines which outgoing link will carry the packet closer to its destination: this is done with a routing table which, as described earlier, is semi-static. When the correct link for the packet is found, the Routing Function passes the packet to the Packet Dropping Function for queueing on the output buffer for the link. When the packet reaches the head of the queue, it is transmitted over the link to the next router, or to the final destination.
The Packet Selection Function can choose any of the packets queued in the input buffers for routing by the Routing Function. Normally this would be done in a First-In First-Out method, but other selection methods may be useful in a congested environment.
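The packet path through these three functions can be sketched as follows. The class, the routing-table shape, and the drop-tail policy in `enqueue_output` are illustrative assumptions for this sketch, not a description of any particular router implementation.

```python
from collections import deque

class Router:
    """Minimal sketch of the router architecture described above.

    Packet fields and routing-table layout are hypothetical.
    """

    def __init__(self, routing_table, output_capacity=8):
        self.routing_table = routing_table   # destination -> outgoing link
        self.input_buffer = deque()          # FIFO packet selection
        self.output_buffers = {}             # link -> output queue
        self.output_capacity = output_capacity

    def receive(self, packet):
        self.input_buffer.append(packet)

    def select_packet(self):
        """Packet Selection Function: plain First-In First-Out here."""
        return self.input_buffer.popleft() if self.input_buffer else None

    def route(self, packet):
        """Routing Function: look up the outgoing link for the destination."""
        return self.routing_table[packet["dst"]]

    def enqueue_output(self, link, packet):
        """Packet Dropping Function: a simple drop-tail policy here."""
        queue = self.output_buffers.setdefault(link, deque())
        if len(queue) >= self.output_capacity:
            return False                     # buffer full: packet discarded
        queue.append(packet)
        return True

    def forward_one(self):
        """Select, route, and queue one packet; None if no packet waits."""
        packet = self.select_packet()
        if packet is None:
            return None
        link = self.route(packet)
        return link, self.enqueue_output(link, packet)
```

A non-FIFO Packet Selection Function, of the kind suggested for congested environments, would replace `select_packet` with a policy that inspects the queued packets before choosing one.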
There are two fundamental bottlenecks in the given router architecture.
Firstly, there is a minimum time needed for the router to decode the network header of an incoming packet, determine the route for the packet, and pass the packet to an outgoing link for transmission. There is also a delay while the packet is transmitted on the outgoing link: on a full-duplex link this may be just the packet's transmission time, while on a half-duplex link there may be an extra delay while the router waits for the link to become clear. These delays form one bottleneck.
The second bottleneck is contention for the outgoing links: the router must be prepared to buffer output packets, to prevent them from being lost while it waits for an outgoing link to become clear. Together, the two bottlenecks imply that the router must also buffer incoming packets, to prevent them from being lost if they arrive faster than the router can process them.
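The per-packet delay at the first bottleneck can be sketched as processing time plus transmission time. The function and its parameter values are an illustrative model, not measurements from any real router.

```python
def packet_service_time(packet_bits, link_bps, processing_s,
                        half_duplex_wait_s=0.0):
    """Time to process and transmit one packet (illustrative model).

    processing_s:       header decode + route lookup + internal transfer
    half_duplex_wait_s: extra wait for a half-duplex link to become clear
                        (zero on a full-duplex link)
    """
    transmission_s = packet_bits / link_bps
    return processing_s + half_duplex_wait_s + transmission_s

# e.g. a 12,000-bit packet on a 1 Mbit/s full-duplex link with
# 1 ms of processing takes 0.001 + 0.012 = 0.013 s to service.
```

Any sustained arrival rate above one packet per service time must be absorbed by the buffers, which motivates the buffering discussed next.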
By definition, the router's input and output buffers are finite. If a buffer becomes full, then no more packets can be queued in the buffer, and the router has no choice but to discard them. This causes data loss in the data flow between a source and destination, and usually causes the source to retransmit the data.
Although a router cannot queue a packet if the corresponding output buffer is full, it can choose either to discard the unqueued packet, or to discard a packet already in the output queue and then queue the unqueued packet. This choice is performed by the Packet Dropping Function. No such choice exists for the input buffers, as a packet does not reside in the router's internal memory until it has been queued in an input buffer. Thus, packets may be lost on input, and the router has no control over which packets are lost.
Finally, the router's buffers may share a common memory space, or they may have individual memory spaces. With the former, no buffer can become full until the router's entire memory space is allocated to buffers, at which point all buffers become full. With the latter, any buffer's utilisation has no influence on any other buffer's utilisation.
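The consequence of a shared memory space can be sketched as a common pool of slots that every buffer draws from; once the pool is exhausted, every buffer is effectively full at once. The class below is an illustrative assumption, not a real router's memory manager.

```python
class SharedBufferPool:
    """All buffers draw from one common memory space: no individual
    queue can grow once the total allocation reaches the pool size."""

    def __init__(self, total_slots):
        self.total_slots = total_slots
        self.used = 0

    def try_enqueue(self, queue, packet):
        if self.used >= self.total_slots:
            return False         # pool exhausted: every buffer is full
        queue.append(packet)
        self.used += 1
        return True

    def dequeue(self, queue):
        packet = queue.pop(0)
        self.used -= 1           # freed memory is returned to the pool
        return packet
```

With individual memory spaces, each queue would instead carry its own capacity check, as in the drop-policy sketch above, and one queue filling would leave the others unaffected.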
A network is congested when one or more network components must discard packets due to lack of buffer space. Given the above architecture, it is possible to see how network congestion can occur. A source of data flow on the network cannot reserve bandwidth across the network to its data's destination, and it is therefore unable to determine what rate of data flow can be sustained between it and the destination.
If a source transmits data at a rate too high to be sustained between it and the destination, one or more routers will begin to queue the packets in their buffers. If the queueing continues, the buffers will become full and packets from the source will be discarded, causing data loss. If the source is attempting to guarantee transmission reliability, the result is retransmission of data and increased transmission time between the source and the destination. Figure 2 from [Jain & Ramakrishnan 88] demonstrates the problem of congestion.
As the load (rate of data transmitted) through the network increases, the throughput (rate of data reaching the destination) increases linearly. However, as the load reaches the network's capacity, the buffers in the routers begin to fill. This increases the response time (time for data to traverse the network between source and destination) and lowers the throughput.
Once the routers' buffers begin to overflow, packet loss occurs. Increases in load beyond this point increase the probability of packet loss. Under extreme load, response time approaches infinity and the throughput approaches zero; this is the point of congestion collapse. This point is known as the cliff due to the extreme drop in throughput. Figure 2 also shows a plot of power, defined as the ratio of throughput to response time. The power peaks at the knee of the figure.
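The knee and cliff can be illustrated numerically from the definition of power as the ratio of throughput to response time. The operating points below are purely made-up values shaped like the curves in Figure 2, not data from any experiment.

```python
def power(throughput, response_time):
    """Power, as defined above: throughput divided by response time."""
    return throughput / response_time

# Hypothetical (load, throughput, response_time) operating points:
# below the knee, throughput rises with little queueing delay;
# past the knee, queues build and response time grows;
# past the cliff, throughput collapses.
points = [
    (0.20, 0.20, 1.0),
    (0.50, 0.50, 1.2),
    (0.80, 0.78, 1.6),    # near the knee
    (0.95, 0.80, 6.0),    # approaching the cliff
    (1.20, 0.30, 40.0),   # congestion collapse
]

for load, throughput, response_time in points:
    print(f"load={load:.2f}  power={power(throughput, response_time):.3f}")
```

On these illustrative values, power peaks at the third point and then falls by two orders of magnitude, which is the shape that motivates operating a network at the knee.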
Network congestion control schemes can be divided into two categories. Congestion avoidance schemes attempt to keep the network operating at the knee of Figure 2, holding the load to a level where response time is low and power is optimised. Congestion recovery schemes attempt to keep the network operating to the left of the cliff in Figure 2, keeping the network functioning by preventing the loss of data due to congestion.
Jain notes in [Jain & Ramakrishnan 88] that ``Without congestion recovery a network may cease operating, whereas networks have been operating without congestion avoidance for a long time.'' Clearly, although congestion recovery prevents network collapse, it is better to operate the network at the point where power is maximised.
Network congestion is affected by the following factors:
The scope of this thesis is constrained to the protocols and procedures used in the Transport and Network Layers of packet sources, routers and packet destinations of connectionless packet-switched wide-area networks. Therefore, the first three factors fall outside the scope of this thesis, and I will give them little or no coverage.
The next two chapters will review network congestion control schemes which have either been proposed or implemented. Following the thesis scope, only techniques operating at the Transport and Network Layers will be considered. The advantages and disadvantages of each mechanism will be discussed.
Given the deficiencies I will identify with many existing congestion control schemes, and the advantages of rate-based control schemes that I will highlight, I will then present a rate-based congestion control framework which is designed for connectionless packet-switched networks.