Wednesday, November 25, 2015

Paper Review #9: Controlling Queue Delay

The paper presented an innovative approach to Active Queue Management (AQM) called CoDel (Controlled Delay), offered as part of the solution to the persistently full buffer problem, also known as bufferbloat, that we have been encountering for the past three decades. AQM has been actively researched for the past two decades but has not been widely deployed because of implementation difficulties and general misunderstandings about Internet packet loss and queue dynamics.

Bufferbloat is the standing queue that results from a mismatch between the window size and the pipe size. This queue creates large delays but no improvement in throughput. It is hard to address because the right window size is difficult to estimate, as the bottleneck bandwidth and round-trip time change constantly.

The following are the three major improvements that distinguish CoDel from prior AQMs:
- Using minimum rather than average as the queue measure
- Simplified single-state variable tracking of minimum
- Use of packet sojourn time through the queue

Together, these features make CoDel well suited to managing modern packet buffers.
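
To make the mechanism concrete, here is a minimal Python sketch of CoDel-style dequeue logic based on the three features above. The constants, class name, and simplified control law are my own illustrative choices and not the paper's reference implementation; treat it as a sketch of the idea, not a faithful port.

```python
import time
from collections import deque

# Illustrative constants: TARGET is the acceptable standing queue delay,
# INTERVAL is the window over which the minimum sojourn time is judged.
TARGET = 0.005      # 5 ms
INTERVAL = 0.100    # 100 ms

class CoDelQueue:
    def __init__(self):
        self.q = deque()             # holds (enqueue_time, packet) pairs
        self.first_above_time = 0.0  # when delay first stayed above TARGET
        self.dropping = False        # are we in the dropping state?
        self.count = 0               # drops since entering the dropping state
        self.drop_next = 0.0         # time of the next scheduled drop

    def enqueue(self, packet):
        # Timestamp on arrival so sojourn time can be measured at dequeue.
        self.q.append((time.monotonic(), packet))

    def dequeue(self):
        now = time.monotonic()
        while self.q:
            enq_time, packet = self.q.popleft()
            sojourn = now - enq_time            # time this packet spent queued

            if sojourn < TARGET or not self.q:
                # Minimum delay is acceptable (or the queue is draining):
                # leave the dropping state and deliver the packet.
                self.first_above_time = 0.0
                self.dropping = False
                self.count = 0
                return packet

            if not self.dropping:
                if self.first_above_time == 0.0:
                    # Delay just went above TARGET: start the grace interval.
                    self.first_above_time = now + INTERVAL
                    return packet
                if now < self.first_above_time:
                    return packet               # still within the grace interval
                # Delay stayed above TARGET for a full INTERVAL: start dropping.
                self.dropping = True
                self.count = 1
                self.drop_next = now + INTERVAL / (self.count ** 0.5)
                continue                        # drop this packet, try the next
            if now >= self.drop_next:
                # Control law: drop again and shrink the interval as 1/sqrt(count).
                self.count += 1
                self.drop_next = now + INTERVAL / (self.count ** 0.5)
                continue                        # drop this packet, try the next
            return packet
        # Queue is empty.
        self.dropping = False
        return None
```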

Cheap memory, a "more is better" mentality, and dynamically varying path characteristics all contribute to the persistence of these delays, which can greatly degrade Internet usage and hinder the growth of new applications. As research on bufferbloat and AQM is still ongoing, I agree that a full solution has to include additional incentives for service providers so that buffer management can be widely deployed.

Reference: 

K. Nichols (Pollere Inc.) and V. Jacobson (PARC), "Controlling Queue Delay", 2012

Monday, November 23, 2015

Paper Review #8: A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single-Node Case

The paper presented the combined use of a packet service discipline based on Generalized Processor Sharing and leaky bucket rate control to provide flexible, efficient, and fair use of network links.

As explained in the paper, Generalized Processor Sharing (GPS) is a natural generalization of uniform processor sharing. It is an idealized scheduling discipline that aims for fairness in sharing service capacity. However, it does not transmit packets as entities: it assumes the server can serve multiple sessions simultaneously and that traffic is infinitely divisible. The authors therefore propose Packet-by-Packet Generalized Processor Sharing (PGPS), a simple packet-by-packet transmission scheme that is an excellent approximation of GPS even when packets are of variable length. Admission of traffic is regulated through leaky buckets, and PGPS combined with leaky bucket admission control was said to guarantee a flexible yet fair environment.
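
As a rough illustration of the PGPS idea, here is a minimal Python sketch that orders packets by a simplified virtual finish time. The class and variable names are mine, and the virtual-time update is deliberately simplified (in the paper the virtual clock advances at a rate that depends on the set of backlogged sessions), so this shows only the ordering rule, not a faithful implementation.

```python
import heapq

class PGPSServer:
    def __init__(self, weights):
        self.phi = weights                             # session -> weight
        self.last_finish = {s: 0.0 for s in weights}   # last virtual finish per session
        self.heap = []                                 # (finish, seq, session, length)
        self.virtual_time = 0.0
        self.seq = 0                                   # tie-breaker for equal finish times

    def arrive(self, session, length):
        # Virtual finish time: start at max(current virtual time, previous
        # finish for this session), then add length scaled by 1/weight.
        start = max(self.virtual_time, self.last_finish[session])
        finish = start + length / self.phi[session]
        self.last_finish[session] = finish
        heapq.heappush(self.heap, (finish, self.seq, session, length))
        self.seq += 1

    def transmit_next(self):
        # PGPS transmits the packet that would finish earliest under GPS.
        if not self.heap:
            return None
        finish, _, session, length = heapq.heappop(self.heap)
        self.virtual_time = finish                     # simplified virtual-clock update
        return session, length

# Example: session "a" has twice the weight of "b", so its packet finishes
# earlier in virtual time and is transmitted first when both are backlogged.
srv = PGPSServer({"a": 2.0, "b": 1.0})
srv.arrive("a", 1000)
srv.arrive("b", 1000)
print(srv.transmit_next())   # ('a', 1000)
print(srv.transmit_next())   # ('b', 1000)
```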


I had a hard time reading the paper because it focuses heavily on the mathematical details. Still, I can say that it proposes a new approach to providing effective flow control in the network, offering flexibility to users without compromising the fairness of the scheme.

Reference:

A. K. Parekh and R. G. Gallager, "A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single-Node Case", 1993

Paper Review #7: Random Early Detection Gateways for Congestion Avoidance

In the current Internet, congestion is detected by the TCP transport protocol only when a packet is dropped. The paper presented a new mechanism in which congestion is detected in the gateway itself. Random Early Detection (RED) gateways are proposed to maintain high throughput and low delay in the network. The paper also discussed other congestion avoidance gateways, described simple simulations, and gave specific details of the algorithms used to calculate the average queue size and the packet-marking probability for RED gateways.

In a nutshell, the main goal of a RED gateway is to provide congestion avoidance by controlling the average queue size. It marks or drops packets when the average queue size exceeds a preset threshold, providing an upper bound on the average delay at the gateway. Because the gateway can monitor the size of the queue over time, it is the appropriate agent to detect incipient congestion; it has a unified view of all the sources contributing to congestion and can therefore decide which sources to notify. The RED gateway is designed to avoid bias against bursty traffic and to avoid global synchronization, in which all connections are notified to reduce their windows at the same time; both are important to maintaining high throughput in the network. It chooses a particular connection to notify using a probability that is roughly proportional to that connection's share of the bandwidth at the gateway.
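
To illustrate the core calculation, here is a minimal Python sketch of the RED marking decision. The parameter values (the averaging weight and the two thresholds) are illustrative, and the paper's refinements, such as scaling the probability by the count of packets since the last mark and handling idle periods in the average, are omitted.

```python
import random

# Illustrative RED parameters (the paper discusses how to tune these).
W_Q = 0.002      # weight for the exponential moving average
MIN_TH = 5.0     # minimum threshold (packets)
MAX_TH = 15.0    # maximum threshold (packets)
MAX_P = 0.02     # maximum marking probability

avg = 0.0        # running average queue size

def on_packet_arrival(current_queue_size):
    """Return True if this arriving packet should be marked/dropped."""
    global avg
    # Exponentially weighted moving average of the instantaneous queue size.
    avg = (1 - W_Q) * avg + W_Q * current_queue_size

    if avg < MIN_TH:
        return False                      # below threshold: never mark
    if avg >= MAX_TH:
        return True                       # above threshold: always mark
    # Between thresholds: mark with probability rising linearly with avg.
    p_b = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p_b
```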


Overall, it is a simple gateway algorithm that can be implemented in the systems we have today. This is the first paper I have read that presents a congestion avoidance algorithm at a layer other than the transport layer. I hope the further studies cited in its recommendations can be carried out so it can be deployed gradually in the Internet.

Reference:

S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance", 1993

Sunday, August 30, 2015

Paper Review #6: Interdomain Internet Routing

The paper explained how routing between different domains in the Internet happens. It tackled how Internet Service Providers (ISPs) cooperate and exchange routing information in order to provide global connectivity and earn money from their customers. Since Internet service is provided by a large number of commercial enterprises that compete with each other, the Internet routing infrastructure operates in an environment of "competitive cooperation".

The routing architecture is divided into autonomous systems (ASes) that exchange reachability information. An AS is owned and administered by a single commercial entity and implements some set of policies in deciding how to route packets to the rest of the Internet. Two prevalent forms of AS relationships were mentioned: provider-customer (transit) and peering. Transit relationships generate revenue while peering usually does not. The paper also explained how ISPs decide which routes to export (using route filters) and which to import (by ranking routes); their routing policies basically boil down to whatever helps them earn or save money.
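
As a concrete illustration of the export side, here is a minimal Python sketch of the usual provider-style export rule under the relationships above. The function name and the three-way classification of neighbors are my own simplification of the policy described in the notes.

```python
def should_export(route_learned_from, exporting_to):
    """Both arguments are one of 'customer', 'peer', or 'provider'.

    Rule of thumb: routes learned from customers are advertised to everyone
    (they earn transit revenue); routes learned from peers or providers are
    advertised only to customers, since carrying them elsewhere costs money.
    """
    if route_learned_from == "customer":
        return True
    return exporting_to == "customer"

# Example: a route learned from a peer is not re-advertised to another peer.
print(should_export("peer", "peer"))       # False
print(should_export("peer", "customer"))   # True
```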

These routing policies are realized through the Border Gateway Protocol (BGP), which is designed to scale, to enforce different policies, and to support cooperation under competitive circumstances. The paper explained how the protocol works, its two types of sessions (eBGP and iBGP), the key BGP attributes (NEXT HOP, ASPATH, Local Preference, and the Multi-Exit Discriminator (MED)), and the fragility of the system, which can lead to anomalies and disruption of connectivity. These problems are usually caused by misconfiguration, malice, and slow convergence. The author cited some interesting examples of hijacking routes by mistake or for profit, and of spam sent from hijacked prefixes, which can be diagnosed by logging BGP announcements so that one can trace where hijacked routes came from.
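
To make the import side concrete, here is a minimal Python sketch of a simplified BGP-like route ranking over the attributes mentioned above. Real BGP applies several further tie-breaking steps (eBGP over iBGP, IGP cost to the next hop, router ID) that are omitted here, and the data structure is my own illustration rather than any router's configuration model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Route:
    prefix: str
    as_path: List[int]
    local_pref: int
    med: int

def best_route(candidates: List[Route]) -> Route:
    # Prefer higher LOCAL_PREF, then shorter AS_PATH, then lower MED.
    return min(
        candidates,
        key=lambda r: (-r.local_pref, len(r.as_path), r.med),
    )

# Example: two announcements for the same prefix from different neighbors.
routes = [
    Route("192.0.2.0/24", as_path=[65001, 65002], local_pref=100, med=10),
    Route("192.0.2.0/24", as_path=[65003], local_pref=100, med=50),
]
print(best_route(routes))   # the shorter AS_PATH wins when LOCAL_PREF ties
```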

The paper is easy to follow, especially in its explanation of BGP. However, the actual operation is extremely complex because it has to cater to different types of policies and sizes of Internet providers. As the main consideration for routing centers on individual ISPs' interests, I think the configuration has to be very flexible to handle the exchange of route announcements.

Reference:

H. Balakrishnan, "Interdomain Internet Routing", 2001-2009

Sunday, August 23, 2015

Paper Review #5: Understanding BGP Misconfiguration

The paper presented a quantitative study of BGP misconfiguration: the frequency of its occurrence, possible causes, its overall impact on Internet connectivity, and ways to prevent these incidents. The Border Gateway Protocol (BGP), the Internet's inter-domain routing protocol, is a crucial part of the overall reliability of the Internet, and misconfigurations in BGP may result in excessive routing load, connectivity disruption, and policy violations.

The authors analyzed BGP updates over a period of 21 days from 23 vantage points across a diverse set of ISPs, and validated the results by emailing the operators involved in the incidents. Two kinds of globally visible BGP misconfiguration were considered: origin misconfiguration (caused by initialization bugs, reliance on upstream filtering, old configurations, redistribution, communities, hijacks, forgotten filters, or incorrect summaries) and export misconfiguration (caused by prefix-based configuration or a bad ACL or route map). These causes are categorized into slips (errors in the execution of a correct plan) and mistakes (errors in the plan itself, i.e., design mistakes).

Based on their study, the authors found that the Internet is surprisingly robust to most misconfigurations: connectivity is affected in only 4% of the misconfigured announcements, or 13% of the misconfiguration incidents. However, the effect on routing load is quite significant.

To reduce the Internet's vulnerability to accidental errors, the authors proposed solutions such as improved user interface design, high-level configuration languages, database consistency checks, and deployment of protocol extensions such as S-BGP.

As the primary goal is to minimize human error in a large distributed system, I think redesigning the system to limit the need for operator interaction will help avoid these misconfigurations. Automated monitoring can also be added.


Reference: 

R. Mahajan, D. Wetherall, T. Anderson, “Understanding BGP Misconfiguration”, August 2002

Saturday, August 15, 2015

Paper Review #4: Rethinking the design of the Internet: The end to end arguments vs. the brave new world

The paper presented an assessment of the Internet and the forces that are pushing change in the Internet today: a greater call for stable and reliable operation; new, sophisticated consumer-oriented applications; the motivation of Internet Service Providers (ISPs) to enhance their service in order to gain an advantage over their competitors; the rise of third-party involvement; the proliferation of less sophisticated users; and new forms of computing and communication that call for new software structures.

Much of the Internet's design has been influenced by the end-to-end argument, which assures accurate and reliable transfer of information across the network. It suggests that application-specific functions should not be built into the lower levels of the system (the core of the network). This principle has preserved the flexibility, generality, and openness of the Internet, enabling the introduction of new applications.

However, the changing uses of the Internet may demand adjustments to its original design principle. The early Internet consisted of a group of mutually trusting users attached to a transparent network. Today's "brave new world" consists of a much larger user base, each party pursuing its own objectives. The loss of trust at different layers is one of the most fundamental changes, resulting in the involvement of third parties to mediate between end users; yet another question is how to identify trustworthy third parties. Also, the increasing involvement of governments and ISPs presents a greater challenge as new mechanisms for enhancing and restricting the use of the Internet are implemented in the core of the network.

In another paper by Clark on the design philosophy of the early Internet, privacy and security were not among the fundamental goals behind the architecture. The new requirements that are now emerging force us to rethink the original design. It may be more complex than what was anticipated back then, but that only proves the great impact of the Internet on our society.


Reference:

M. S. Blumenthal and D. D. Clark, "Rethinking the design of the Internet: The end-to-end arguments vs. the brave new world", 2000
D. D. Clark, "The Design Philosophy of the DARPA Internet Protocols", 1988

Friday, August 14, 2015

Paper Review #3: Architectural Principles of the Internet

The paper presented an overview of the principles that influence the evolution of the Internet. These are collective observations of the Internet community that may serve as a basic foundation and guide in designing and evaluating new protocols to this day.

It also explained the principle of constant change. So many remarkable things are happening on the Internet, and because of these fast-paced advancements, some principles that were held sacred back then are being deprecated today, indicating the need to postulate principles that are adaptive to change.

The next part relayed that there is no single Internet architecture, only a tradition that the community believes in, perhaps because the architecture gradually changed from its early design as new requirements arose to pave the way for new applications. This highlights the contribution of the "end to end argument": functions that can only be performed correctly by the end systems themselves are left to the end systems, allowing new innovations without adding complexity in the network.

Some general design, naming, addressing, and security issues were also raised. The design must be scalable, support diverse types of hardware, and be cost-effective, simple, and modular. Names must follow a simple structure, addresses must be unique, and it is the end users' responsibility to protect their own privacy.

Even though the Internet architecture has changed greatly from its modest beginnings to what it is now, the principles observed from its evolution will remain useful in guiding the community to take the Internet further.


Reference:

B. Carpenter, RFC 1958: "Architectural Principles of the Internet", 1996

Paper Review #2: The Design Philosophy of the DARPA Internet Protocols

This paper presented the philosophy behind the development of the Internet architecture by explaining the original objectives of the Defense Advanced Research Projects Agency (DARPA) Internet protocols, the relation of these objectives to one another, and the important features of the protocols.

The fundamental goal was to develop an effective technique for multiplexed utilization of existing interconnected networks. An effective interconnection must achieve the following second-level goals, listed in order of importance:
- continue communication despite the loss of networks or gateways, achieved through "fate-sharing" and leading to the introduction of the datagram
- support a variety of types of service, which resulted in layering the architecture into TCP and IP
- incorporate and utilize a wide variety of network technologies
- permit distributed management of resources
- be cost effective
- allow host attachment with a low level of effort
- be accountable

One interesting point made by the paper is that the designers were working with a specific set and ordering of goals in mind. As the architecture was designed to operate in a military context, the protocol focused more on survivability of the communication service than on accountability. This set of priorities strongly influenced design decisions within the Internet architecture; an architecture intended for commercial deployment would clearly place the goals in a different order.

The datagram has served the most important goals of this design very well. It removes connection state from the packet switches, letting the Internet be reconstituted after a failure without concern about state. It provides a basic building block from which a variety of types of service can be implemented. And it represents the minimum network service assumption, allowing it to run over a wide variety of networks. However, not all goals were properly addressed, due to the lack of available tools back then and because they were further down the priority list.

The implementation of this design is an entirely different matter and would require intensive engineering to come up with protocols that meet the specifications. Would it have been better to support separate networks for different services instead of creating a single network for all of them? The single-network approach has drawbacks, especially on the performance side, though the paper explains that it was built for the needs of that time. All in all, it effectively advanced the evolution of the Internet toward what it is today.

Reference:

D. D. Clark, "The Design Philosophy of the DARPA Internet Protocols", 1988

Paper Review #1: A Protocol for Packet Network Intercommunication

The paper presented a protocol that supports sharing of resources among different packet switching networks, as well as mechanisms to handle issues that may arise in implementing the protocol.

A packet switching network is composed of hosts, a set of packet switches, and a collection of communication media that interconnect the packet switches. The paper elaborated on how communication between hosts in different networks happens and how data is routed from one network to another.

Two main concepts discussed in the paper play a major role in how the protocol resolves the issues that arise in interconnecting existing networks.

To overcome differences in the implementation of existing networks without imposing a uniform standard, the concept of the gateway was introduced. A gateway is an interface responsible for passing data between networks. It may split a large packet into two or more packets, but it is not responsible for reassembling them; it only sees to it that the packets meet the requirements of the target network.
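
As a small illustration of this splitting, here is a minimal Python sketch of gateway-style fragmentation: a packet too large for the next network is split into pieces that fit, and reassembly is left to the destination. The function name and the size limit are illustrative, not taken from the paper.

```python
def fragment(payload: bytes, mtu: int):
    """Split payload into fragments of at most mtu bytes, tagging each
    with its byte offset so the destination can reassemble them in order."""
    return [(offset, payload[offset:offset + mtu])
            for offset in range(0, len(payload), mtu)]

# Example: a 1200-byte packet crossing a network with a 512-byte limit.
pieces = fragment(b"x" * 1200, 512)
print([(off, len(data)) for off, data in pieces])
# [(0, 512), (512, 512), (1024, 176)]
```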

The next concept is the Transmission Control Program (TCP). Processes that want to communicate present messages to TCP for transmission, and TCP delivers these messages as packets through the network. Because packets may take different paths through the packet switches, the TCP on the receiving end is responsible for reconstructing them and delivering the original message to its destination process. Each packet carries a sequence number that determines the relative location of its text in the message under reconstruction, which handles out-of-order arrival. End-of-segment (ES) and end-of-message (EM) flags let the destination TCP detect the presence of a checksum for a given segment and determine whether the message has completely arrived. The source TCP waits for an acknowledgement from the destination TCP to determine whether retransmission is needed, and a window strategy is also used to aid in duplicate detection.
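
To show the reassembly idea described above, here is a minimal Python sketch of receiver-side reconstruction by sequence number. The class name and byte-offset sequencing are my own illustrative choices rather than the paper's exact mechanism.

```python
class Reassembler:
    def __init__(self):
        self.expected_seq = 0       # next byte offset we are waiting for
        self.buffer = {}            # out-of-order segments keyed by sequence number
        self.message = bytearray()  # reconstructed message so far

    def receive(self, seq, data):
        """Accept a segment (seq, data), ignoring data already delivered."""
        if seq < self.expected_seq:
            return  # duplicate or retransmission of delivered data: drop it
        self.buffer[seq] = data
        # Deliver as many in-order segments as we now can.
        while self.expected_seq in self.buffer:
            segment = self.buffer.pop(self.expected_seq)
            self.message.extend(segment)
            self.expected_seq += len(segment)

# Example: segments arrive out of order but the message is rebuilt correctly.
r = Reassembler()
r.receive(5, b"world")
r.receive(0, b"hello")
print(r.message)   # bytearray(b'helloworld')
```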

With these mechanisms, issues such as differing packet sizes across networks, transmission failures, sequencing, and flow and error control can be addressed. The authors describe a sound protocol for transmitting data between existing networks. The design focuses heavily on reliability, so it might not be suitable for types of service that require fast communication between two hosts.

Reference:

V. G. Cerf and R. E. Kahn, "A Protocol for Packet Network Intercommunication", 1974