Controlling Queueing Delays
Routers use a buffer to store the packets that have arrived but have not yet been retransmitted on their output links. These buffers play an important role in combination with TCP’s congestion control scheme, since TCP uses packet losses to detect congestion. To manage their buffers, routers rely on a buffer acceptance algorithm. The simplest buffer acceptance algorithm is to discard arriving packets as soon as the buffer is full. This algorithm is easy to implement, but simulations and measurements have shown that it does not always provide good performance with TCP.
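This simplest policy is often called tail drop. A minimal sketch in Python (the class name and the packet-count capacity are illustrative, not taken from any router implementation):

```python
from collections import deque

class TailDropBuffer:
    """Tail drop: accept arriving packets until the buffer is full,
    then silently discard every new arrival. Capacity is in packets."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            return False  # buffer full: the arriving packet is dropped
        self.queue.append(packet)
        return True       # packet accepted and queued

    def dequeue(self):
        # Forward the oldest buffered packet, if any (FIFO order).
        return self.queue.popleft() if self.queue else None

buf = TailDropBuffer(capacity=2)
print(buf.enqueue("p1"), buf.enqueue("p2"), buf.enqueue("p3"))  # True True False
print(buf.dequeue())  # p1
```

Note that drops only happen in bursts once the buffer is already full, which is exactly the behavior that interacts poorly with TCP's loss-based congestion control.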
In the 1990s, various buffer acceptance algorithms were proposed to overcome this problem. Random Early Detection (RED) probabilistically drops packets when the average buffer occupancy becomes too high. RED has been implemented on routers and was strongly recommended by the IETF in RFC 2309. However, as of this writing, RED is still not widely deployed. One reason is that RED uses many parameters and is difficult to configure and tune correctly (see the references listed on http://www.icir.org/floyd/red.html).
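RED's drop decision can be sketched as follows: it maintains an exponentially weighted moving average of the queue occupancy and drops arrivals with a probability that grows linearly between two thresholds. The threshold and weight values below are illustrative placeholders, precisely the kind of parameters that make RED hard to tune:

```python
import random

class REDQueue:
    """Sketch of RED's drop decision. min_th, max_th, max_p and w are
    illustrative values, not recommended settings."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, w=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.w = max_p, w
        self.avg = 0.0   # EWMA of the queue occupancy
        self.qlen = 0    # instantaneous queue occupancy

    def on_arrival(self):
        """Return True if the arriving packet is dropped."""
        # Update the moving average of the queue length.
        self.avg = (1 - self.w) * self.avg + self.w * self.qlen
        if self.avg < self.min_th:
            drop = False          # low average occupancy: always accept
        elif self.avg >= self.max_th:
            drop = True           # high average occupancy: always drop
        else:
            # Drop probability grows linearly between the two thresholds.
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            drop = random.random() < p
        if not drop:
            self.qlen += 1
        return drop
```

Because drops start early and are spread out probabilistically, RED signals congestion to some TCP flows before the buffer overflows, rather than dropping whole bursts as tail drop does.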
In a recent paper published in ACM Queue, Kathleen Nichols and Van Jacobson propose a new Active Queue Management algorithm, CoDel (Controlled Delay). Instead of looking at queue length, CoDel measures the time each packet spends waiting in the buffer, and its control law is driven by the minimum queueing delay observed over a sliding interval. An implementation for Linux-based routers seems to be in progress. Maybe it’s time to revisit buffer acceptance algorithms again…
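The core idea can be sketched as follows. This is a deliberately simplified illustration of CoDel's congestion test, not the full control law (the real algorithm also schedules successive drops at an increasing rate); the 5 ms target and 100 ms interval are the reference values discussed in the paper:

```python
from collections import deque

TARGET = 0.005    # 5 ms: acceptable standing queueing delay
INTERVAL = 0.100  # 100 ms: window over which the delay must stay above TARGET

class CoDelSketch:
    """Simplified sketch of CoDel's idea: judge congestion by per-packet
    sojourn time, and only react when the delay has stayed above TARGET
    for a whole INTERVAL (i.e. the *minimum* delay over the window is high)."""

    def __init__(self):
        self.queue = deque()
        self.first_above_time = None  # deadline set when delay first exceeds TARGET

    def enqueue(self, packet, now):
        # Timestamp each packet on arrival so its sojourn time can be measured.
        self.queue.append((packet, now))

    def dequeue(self, now):
        """Return the next packet to forward, or None if it was dropped."""
        if not self.queue:
            self.first_above_time = None
            return None
        packet, t_arrival = self.queue.popleft()
        sojourn = now - t_arrival  # time this packet waited in the buffer
        if sojourn < TARGET:
            # Delay dipped below target: the queue is draining, reset.
            self.first_above_time = None
        elif self.first_above_time is None:
            # Delay just exceeded target: arm a deadline one INTERVAL away.
            self.first_above_time = now + INTERVAL
        elif now >= self.first_above_time:
            # Delay stayed above target for a full INTERVAL: standing
            # queue detected, drop this packet to signal congestion.
            return None
        return packet
```

The key contrast with RED is that the controlled quantity is delay, which needs no per-link tuning, rather than queue length, whose "right" value depends on link speed.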