Wednesday, November 25, 2015

Paper Review #9: Controlling Queue Delay

The paper presented an innovative approach to AQM (Active Queue Management) called CoDel (Controlled Delay), offered as part of the solution to the persistently full buffer problem, also known as bufferbloat, that we have been encountering for the past three decades. AQM has been actively researched for the past two decades but has not been widely deployed because of implementation difficulties and general misunderstandings about Internet packet loss and queue dynamics.

Bufferbloat is defined as the standing queue that results from a mismatch between the window size and the pipe size (the bandwidth-delay product). This queue creates large delays but no improvement in throughput. It is hard to address because the right window size is difficult to estimate: the bottleneck bandwidth and round-trip time change constantly.
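To make the window/pipe mismatch concrete, here is a small back-of-the-envelope calculation (the numbers are illustrative, not taken from the paper): any window beyond the bandwidth-delay product sits in the bottleneck buffer as a standing queue, adding delay but no throughput.

```python
# Illustrative arithmetic (assumed numbers): a sender whose window exceeds
# the bandwidth-delay product keeps the excess parked in the bottleneck
# buffer as a standing queue.
bottleneck_bw = 1e6 / 8        # 1 Mb/s bottleneck, in bytes per second
rtt = 0.1                      # 100 ms round-trip time
bdp = bottleneck_bw * rtt      # pipe size: 12,500 bytes fit "in flight"

window = 50_000                       # sender's window, in bytes
standing_queue = window - bdp         # 37,500 bytes that never drain
queueing_delay = standing_queue / bottleneck_bw
print(f"{queueing_delay:.2f} s of added delay")  # 0.30 s, zero extra throughput
```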

The following are the three major improvements that distinguish CoDel from prior AQMs:
- Use of the minimum rather than the average as the queue measure
- Simplified single-state-variable tracking of that minimum
- Use of packet sojourn time through the queue rather than queue length

These features make CoDel well suited to managing modern packet buffers.
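To illustrate those three features, here is a highly simplified Python sketch of CoDel's control law as I understand it from the paper. The constants follow the paper's suggestions (a 5 ms target delay and a 100 ms interval, roughly a worst-case RTT); the class and attribute names are my own.

```python
import math

TARGET = 0.005     # acceptable standing-queue delay, in seconds
INTERVAL = 0.100   # window over which the minimum sojourn time is tracked

class CoDel:
    def __init__(self):
        self.first_above_time = 0.0  # when sojourn time first stayed above TARGET
        self.dropping = False        # are we in the dropping state?
        self.drop_next = 0.0         # time of the next scheduled drop
        self.count = 0               # drops since entering the dropping state

    def should_drop(self, sojourn_time, now):
        """Decide at dequeue time whether to drop this packet.

        sojourn_time = now minus the packet's enqueue timestamp, i.e. the
        time the packet actually spent in the queue (CoDel's queue measure).
        """
        if sojourn_time < TARGET:
            # Delay is acceptable again: reset and leave the dropping state.
            self.first_above_time = 0.0
            self.dropping = False
            return False
        if self.first_above_time == 0.0:
            # Delay just crossed TARGET. Waiting one INTERVAL before acting
            # is how the *minimum* is tracked with a single state variable:
            # any packet seeing low delay in the meantime resets the clock.
            self.first_above_time = now + INTERVAL
            return False
        if not self.dropping and now >= self.first_above_time:
            # Minimum sojourn time exceeded TARGET for a full INTERVAL:
            # enter the dropping state.
            self.dropping = True
            self.count = 0
            self.drop_next = now
        if self.dropping and now >= self.drop_next:
            # Drop, then schedule the next drop sooner: spacing shrinks as
            # INTERVAL / sqrt(count) until delay falls back below TARGET.
            self.count += 1
            self.drop_next = now + INTERVAL / math.sqrt(self.count)
            return True
        return False
```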

Cheap memory, a "more is better" mentality, and dynamically varying path characteristics all contribute to the persistence of these delays, which can greatly impact Internet usage and hinder the growth of new applications. As research on bufferbloat and AQM is still ongoing, I agree that a full solution has to include additional incentives for service providers so that buffer management can be widely deployed.

Reference: 

K. Nichols (Pollere Inc.) and V. Jacobson (PARC), Controlling Queue Delay, 2012

Monday, November 23, 2015

Paper Review #8: A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single-Node Case

The paper presented the combined use of a packet service discipline based on Generalized Processor Sharing and Leaky Bucket rate control to provide flexible, efficient and fair use of the links.

As explained in the paper, Generalized Processor Sharing (GPS) is a natural generalization of uniform processor sharing. It is an idealized scheduling discipline that aims for fairness in sharing service capacity. However, it does not transmit packets as discrete entities: it assumes the server can serve multiple sessions simultaneously and that traffic is infinitely divisible. The paper therefore proposes an alternative called Packet-by-Packet Generalized Processor Sharing (PGPS), a simple packet-by-packet transmission scheme that is an excellent approximation of GPS even when packets are of variable length. Admission rates are controlled through Leaky Buckets, and PGPS combined with Leaky Bucket admission control was said to guarantee such a flexible environment.
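To make this concrete, below is a minimal Python sketch of the two mechanisms working together, under my own simplifying assumptions: a Leaky Bucket that admits a session's packets only while it has tokens, and a PGPS server that transmits whole packets in increasing order of their virtual finish times. The virtual-time bookkeeping here is deliberately crude (the paper tracks GPS virtual time exactly), and all names and parameter choices are illustrative.

```python
import heapq

class LeakyBucket:
    """Admission control: a session holds at most sigma tokens, replenished
    at rate rho; a packet of length L consumes L tokens to be admitted."""
    def __init__(self, sigma, rho):
        self.sigma, self.rho = sigma, rho
        self.tokens = sigma
        self.last = 0.0

    def admit(self, length, now):
        self.tokens = min(self.sigma, self.tokens + self.rho * (now - self.last))
        self.last = now
        if length <= self.tokens:
            self.tokens -= length
            return True
        return False  # non-conforming: the packet must wait

class PGPSServer:
    """Packet-by-Packet GPS: always transmit the queued packet that would
    finish first under the idealized (fluid) GPS discipline."""
    def __init__(self):
        self.virtual_time = 0.0  # crude stand-in for GPS virtual time
        self.last_finish = {}    # session -> finish time of its last packet
        self.heap = []           # (finish_time, tie-breaker, session, length)
        self.seq = 0

    def enqueue(self, session, length, phi):
        # F = max(V, F_prev) + L / phi: a session with weight phi receives
        # a share of the link proportional to phi, whatever its packet sizes.
        start = max(self.virtual_time, self.last_finish.get(session, 0.0))
        finish = start + length / phi
        self.last_finish[session] = finish
        heapq.heappush(self.heap, (finish, self.seq, session, length))
        self.seq += 1

    def dequeue(self):
        if not self.heap:
            return None
        finish, _, session, length = heapq.heappop(self.heap)
        self.virtual_time = finish  # simplified virtual-time advance
        return session, length
```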


I had a hard time reading the paper because it focuses heavily on the technical details. Still, I can say it proposes a new approach to effective flow control in the network, offering flexibility to users without compromising the fairness of the scheme.

Reference:

A. K. Parekh and R. G. Gallager, A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single-Node Case, 1993

Paper Review #7: Random Early Detection Gateways for Congestion Avoidance

In the current Internet, congestion is detected by the TCP transport protocol only when a packet is dropped. The paper presented a new mechanism in which congestion is detected at the gateway itself. The Random Early Detection (RED) gateway is proposed to maintain high throughput and low delay in the network. The paper also discussed other congestion avoidance gateways, described simple simulations, and gave the specific details of the algorithms used to calculate the average queue size and the packet-marking probability for RED gateways.

In a nutshell, the main goal of the RED gateway is to provide congestion avoidance by controlling the average queue size. It drops packets when the average queue size exceeds a preset threshold, providing an upper bound on the average delay at the gateway. Because the gateway can monitor the size of the queue over time, it is the appropriate agent to detect incipient congestion: it has a unified view of all the sources contributing to the congestion, and so it is able to decide which sources to notify. The RED gateway is designed to avoid bias against bursty traffic and to avoid global synchronization, which results from notifying all connections to reduce their windows at the same time; both are important for maintaining high throughput in the network. It chooses a particular connection's packet to drop with a probability roughly proportional to that connection's share of the bandwidth at the gateway.
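For concreteness, here is a minimal Python sketch of the two calculations the paper details: the exponentially weighted moving average of the queue size and the packet-marking probability. The parameter names (w_q, min_th, max_th, max_p) follow the paper, but the specific values below are only illustrative.

```python
import random

W_Q = 0.002      # EWMA weight for the average queue size
MIN_TH = 5.0     # lower threshold (packets): no drops below this average
MAX_TH = 15.0    # upper threshold: drop every packet above this average
MAX_P = 0.02     # maximum marking probability as avg approaches MAX_TH

class REDGateway:
    def __init__(self):
        self.avg = 0.0    # average queue size (low-pass filtered)
        self.count = 0    # packets since the last drop/mark

    def on_arrival(self, queue_len):
        """Return True if the arriving packet should be dropped or marked."""
        # Low-pass filter the instantaneous queue size so short bursts
        # do not trigger drops (avoiding bias against bursty traffic).
        self.avg = (1 - W_Q) * self.avg + W_Q * queue_len

        if self.avg < MIN_TH:
            self.count = 0
            return False
        if self.avg >= MAX_TH:
            self.count = 0
            return True

        # Marking probability grows linearly between the thresholds...
        p_b = MAX_P * (self.avg - MIN_TH) / (MAX_TH - MIN_TH)
        # ...and is spread over arrivals so drops are roughly evenly spaced,
        # reducing the global synchronization the paper warns about.
        self.count += 1
        p_a = p_b / max(1.0 - self.count * p_b, 1e-9)
        if random.random() < p_a:
            self.count = 0
            return True
        return False
```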


Overall, it is a simple gateway algorithm that can be implemented in today's systems. This is the first paper I have read that presents a congestion avoidance algorithm at a layer other than the transport layer. I hope the further studies cited in its recommendations are carried out so that RED can be gradually deployed in the Internet.

Reference:

S. Floyd and V. Jacobson, Random Early Detection Gateways for Congestion Avoidance, 1993