Sunday, August 30, 2015

Paper Review #6: Interdomain Internet Routing

The paper explained how routing between different domains of the Internet happens. It tackled how Internet Service Providers (ISPs) cooperate and exchange routing information in order to provide global connectivity and earn revenue from their customers. Since Internet service is provided by a large number of commercial enterprises that compete with one another, the Internet routing infrastructure operates in an environment of "competitive cooperation".

The routing architecture is divided into autonomous systems (ASes) that exchange reachability information. An AS is owned and administered by a single commercial entity and implements some set of policies in deciding how to route its packets to the rest of the Internet. Two prevalent forms of AS relationships were mentioned: provider-customer (transit) and peering. Transit relationships generate revenue, while peering usually does not. The paper also explained how ISPs decide which routes to export (using route filters) and which to import (by ranking routes). The main consideration behind these routing policies basically boils down to what helps an ISP earn or save money.
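To make that export/import logic concrete, here is a minimal sketch of the money-driven policies the paper describes. The relationship labels and helper functions are my own illustration, not configuration taken from any real router:

```python
# A rough sketch of the export/import policies described in the paper.
# Relationship labels and function names are illustrative, not real router config.

CUSTOMER, PEER, PROVIDER = "customer", "peer", "provider"

def should_export(route_learned_from: str, advertise_to: str) -> bool:
    """Export filter: routes learned from customers are advertised to everyone
    (they bring in revenue); routes learned from peers or providers are
    advertised only to customers (forwarding them elsewhere would mean
    carrying someone else's traffic for free)."""
    if route_learned_from == CUSTOMER:
        return True
    return advertise_to == CUSTOMER

def import_preference(route_learned_from: str) -> int:
    """Import ranking: prefer customer routes (they earn money), then peer
    routes (settlement-free), then provider routes (they cost money)."""
    return {CUSTOMER: 3, PEER: 2, PROVIDER: 1}[route_learned_from]

# Example: a route learned from a peer is exported to a customer but not to another peer.
assert should_export(PEER, CUSTOMER) and not should_export(PEER, PEER)
```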

These routing policies are realized through the Border Gateway Protocol (BGP). It is designed to scale, to enforce different policies, and to support cooperation under competitive circumstances. The paper explained how the protocol works, its two types of sessions (eBGP and iBGP), the key BGP attributes (NEXT HOP, ASPATH, Local Preference, and the Multi-Exit Discriminator (MED)), and the fragility of the system, which can lead to anomalies and disruptions of connectivity. These are usually caused by misconfiguration, malice, and slow convergence. The paper cited some interesting examples of routes being hijacked by mistake or for profit, and of spam sent from hijacked prefixes, problems that can be avoided or traced by logging BGP announcements so one can determine where hijacked routes came from.
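The attribute-based ranking can be sketched roughly as follows. This is a simplified toy of the selection order the paper walks through (higher Local Preference first, then shorter AS path, then lower MED); the Route class and its fields are assumptions for illustration, and real BGP applies several more tie-breaking rules and compares MED only between routes from the same neighboring AS:

```python
# A simplified sketch of BGP route selection: higher LOCAL PREF wins,
# then shorter AS path, then lower MED. Fields are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class Route:
    prefix: str
    as_path: List[int]      # sequence of ASes the route traverses
    local_pref: int = 100   # set on import, typically encodes customer/peer/provider
    med: int = 0            # Multi-Exit Discriminator
    next_hop: str = ""

def best_route(candidates: List[Route]) -> Route:
    # max() with a tuple key: prefer high local_pref, short path, low MED
    return max(candidates, key=lambda r: (r.local_pref, -len(r.as_path), -r.med))

r1 = Route("192.0.2.0/24", as_path=[64500, 64510], local_pref=200, med=10)
r2 = Route("192.0.2.0/24", as_path=[64501], local_pref=100, med=0)
print(best_route([r1, r2]).as_path)   # [64500, 64510]: LOCAL PREF outranks path length
```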

The paper is easy to follow, especially in its explanation of the BGP concepts. However, actual operation is extremely complex because BGP has to cater to different kinds of policies and to Internet providers of different sizes. Since the main consideration for routing centers on individual ISPs' interests, I think the configuration has to be very flexible to handle the exchange of route announcements.

Reference:

Hari Balakrishnan, "Interdomain Internet Routing", 2001-2009

Sunday, August 23, 2015

Paper Review #5: Understanding BGP Misconfiguration

The paper presented a quantitative study of BGP misconfiguration: how frequently it occurs, its possible causes, its overall impact on Internet connectivity, and different ways to prevent such incidents. The Border Gateway Protocol (BGP), the Internet's inter-domain routing protocol, is a crucial part of the overall reliability of the Internet. Misconfigurations in BGP may result in excessive routing load, disruptions of connectivity, and policy violations.

The authors analyzed BGP updates over a period of 21 days from 23 vantage points across a diverse set of ISPs, and validated the results by emailing the operators involved in the incidents. Two kinds of globally visible BGP misconfiguration were considered: origin misconfiguration (caused by initialization bugs, reliance on upstream filtering, old configurations, redistribution, communities, hijacks, forgotten filters, or incorrect summaries) and export misconfiguration (caused by prefix-based configuration or a bad ACL or route map). These causes are further categorized into slips (errors in the execution of a correct plan) and mistakes (errors in the plan itself).
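As a rough illustration (not the authors' actual methodology), an origin misconfiguration of the kind studied here could be flagged by comparing the origin AS at the end of an announcement's AS path with the AS registered for that prefix. The registry contents and update format below are hypothetical; the paper's actual analysis also relied on the short lifetime of new routes and on operator surveys to confirm incidents:

```python
# Hedged sketch: flag announcements whose origin AS differs from the
# AS registered for the prefix. Registry and update format are made up.
registered_origin = {
    "198.51.100.0/24": 64500,   # hypothetical prefix -> legitimate origin AS
    "203.0.113.0/24": 64501,
}

def check_origin(prefix: str, as_path: list) -> str:
    origin_as = as_path[-1]                  # last AS in the path originated the route
    expected = registered_origin.get(prefix)
    if expected is None:
        return "unknown prefix"
    if origin_as != expected:
        return f"possible origin misconfiguration: AS{origin_as} announced {prefix}"
    return "ok"

print(check_origin("198.51.100.0/24", [64510, 64502]))  # flags AS64502
```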

Based on their study, the authors found that the Internet is surprisingly robust to most misconfigurations. Connectivity was affected in only 4% of the misconfigured announcements, or 13% of the misconfiguration incidents. However, the effect on routing load is quite significant.

To reduce the Internet's vulnerability to accidental errors, the authors proposed solutions such as improved user-interface design, high-level language design, database consistency, and the deployment of protocol extensions such as S-BGP.

As the primary goal is to minimize human error in a large distributed system, I think redesigning the system to limit the need for operator interaction will likely help avoid these misconfigurations. Automated monitoring could also be added.


Reference: 

R. Mahajan, D. Wetherall, T. Anderson, “Understanding BGP Misconfiguration”, August 2002

Saturday, August 15, 2015

Paper Review #4: Rethinking the design of the Internet: The end to end arguments vs. the brave new world

The paper presented an assessment of the Internet and the forces that are pushing it to change today: a greater call for stable and reliable operation; new, sophisticated, consumer-based applications; the motivation of Internet Service Providers (ISPs) to enhance their services in order to gain an advantage over competitors; the rise of third-party involvement; the proliferation of less sophisticated users; and new forms of computing and communication that call for new software structures.

Much of the Internet's design has been influenced by the end-to-end argument, which holds that functions such as accurate and reliable transfer of information are best implemented at the endpoints, and that specific application-level functions should not be built into the lower levels of the system (the core of the network). This principle has preserved the flexibility, generality, and openness of the Internet, enabling the introduction of new applications.

However, the changing uses of the Internet may demand adjustments to its original design principles. The early Internet consisted of a group of mutually trusting users attached to a transparent network. Today's brave new world consists of a much larger user base, each member seeking to attend to their own objectives. The loss of trust at different layers is one of the most fundamental changes, resulting in the involvement of third parties to mediate between end users. Yet another question is how to identify trustworthy third parties. Also, the increasing involvement of governments and ISPs presents a greater challenge, as new mechanisms for enhancing and restricting the use of the Internet are implemented in the core of the network.

In another paper by Clark on the design philosophy of the early Internet, privacy and security were not among the fundamental goals behind the Internet architecture. These newly emerging requirements force us to rethink the original design. It may be more complex than what was anticipated back then, but that only proves the great impact the Internet has had on our society.


Reference:

David D. Clark, "Rethinking the Design of the Internet: The End-to-End Arguments vs. the Brave New World", 2000
David D. Clark, "The Design Philosophy of the DARPA Internet Protocols", 1988

Friday, August 14, 2015

Paper Review #3: Architectural Principles of the Internet

The paper presented an overview of the principles that influence the evolution of the Internet. These are the collective observations of the Internet community, and they can serve as a basic foundation and guide in designing and evaluating new protocols to this day.

It also explained the principle of constant change. So many remarkable things are happening on the Internet that, because of these fast-paced advancements, some principles considered sacred back then are being deprecated today, indicating the need to postulate principles that are adaptive to change.

The next part relayed that there is no single Internet architecture, only a tradition that the community believes in. Perhaps this is because, starting from the early design, the architecture gradually changed as new requirements arose to pave the way for new applications. This highlights the contribution of the "end-to-end argument": functions that can only be performed correctly by the end systems themselves are left to those end systems, allowing new innovations without adding complexity to the network.

Some general design, naming, addressing, and security issues were also raised. A design must be scalable, support diverse types of hardware, and be cost-effective, simple, and modular. Naming must follow a simple structure, and addresses must be unique. Finally, it is the end users' responsibility to protect their own privacy.

Even though the Internet architecture has changed greatly from its modest beginnings to what it is now, these principles observed from its evolution will always be useful in guiding the community in taking the Internet further.


Reference:

B. Carpenter, "Architectural Principles of the Internet", RFC 1958, 1996

Paper Review #2: The Design Philosophy of the DARPA Internet Protocols

This paper presented the philosophy behind the development of the Internet architecture by explaining the original objectives of the Defense Advanced Research Projects Agency (DARPA) Internet protocols, the relation of these objectives to one another, and the important features of the protocols.

The fundamental goal was to develop an effective technique for multiplexed utilization of existing interconnected networks. An effective interconnection must achieve the following goals, listed in order of importance: continue communication despite the loss of networks or gateways (through "fate-sharing", which led to the introduction of the datagram); support a variety of types of service (which led to layering the architecture into TCP and IP); incorporate and utilize a wide variety of network technologies; permit distributed management of its resources; be cost-effective; allow host attachment with a low level of effort; and be accountable.

One interesting point made by the paper is that the designers were working with a specific set and ordering of goals in mind. As the architecture was designed to operate in a military context, it focused more on the survivability of the communication service than on accountability. This set of priorities strongly influenced the design decisions within the Internet architecture; an architecture intended for commercial deployment would clearly place the goals in a different order.

The datagram has served very well in meeting the most important goals of this design. Because packet switches handle it statelessly, the Internet can be reconstituted after a failure without concern about lost state. It provides a basic building block out of which a variety of types of service can be implemented. And it represents the minimum network service assumption, allowing a wide variety of networks to be utilized. However, not all goals were properly addressed, partly due to the lack of available tools at the time and partly because they were further down the priority list.
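As a toy illustration of why the stateless datagram supports fate-sharing, the sketch below shows a forwarder that keeps only a routing table and no per-conversation state, so losing the switch loses nothing that the communicating end hosts depend on. The table entries and the crude prefix-matching helper are invented for the example:

```python
# Toy stateless forwarder: each datagram is handled independently via a
# routing-table lookup; no per-connection state lives in the network.
routing_table = {
    "10.0.0.0/8": "link-A",
    "192.168.0.0/16": "link-B",
}

def matches(addr: str, prefix: str) -> bool:
    # crude whole-octet prefix match, good enough for this sketch
    net, bits = prefix.split("/")
    keep = int(bits) // 8
    return addr.split(".")[:keep] == net.split(".")[:keep]

def forward(datagram: dict) -> str:
    """Look up the destination and send the datagram on; nothing about the
    conversation is remembered here, so a crash loses no connection state."""
    for prefix, link in routing_table.items():
        if matches(datagram["dst"], prefix):
            return link
    return "default-link"

print(forward({"dst": "10.1.2.3", "payload": b"..."}))   # -> "link-A"
```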

The implementation of this design is an entirely different matter and would require intensive engineering to come up with protocols that meet the specifications. Would it have been better to support separate networks for different services instead of creating a single network for all of them? The single-network approach presents drawbacks, especially on the performance side, though it was explained that the design was constructed based on the needs of that time. All in all, it effectively advanced the evolution of the Internet towards what it is today.

Reference:

David D. Clark, "The Design Philosophy of the DARPA Internet Protocols", 1988

Paper Review #1: A Protocol for Packet Network Intercommunication

The paper presented a protocol that supports the sharing of resources among different packet-switching networks, as well as mechanisms to handle issues that may arise in implementing it.

A packet-switching network is composed of hosts, a set of packet switches, and a collection of communication media that interconnect the packet switches. The paper elaborated on how communication between hosts in different networks happens, as well as how data is routed from one network to another.

Two main concepts discussed in the paper played a major role in how the protocol resolves the issues that arise in interconnecting existing networks.

To overcome the differences in implementation among existing networks without imposing a uniform practice, the concept of a gateway was introduced. A gateway is an interface responsible for passing data between networks. It may split a large packet into two or more smaller packets, but it is not responsible for reassembling them. The gateway ensures that these packets meet the requirements of its target network.
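A small sketch of that splitting behaviour, with invented sizes and packet fields: a packet larger than the next network's maximum size is cut into offset-tagged pieces, and putting them back together is left to the destination rather than the gateway.

```python
# Hedged sketch of gateway fragmentation; the packet format is made up.
def fragment(payload: bytes, max_size: int) -> list:
    """Split payload into fragments no larger than max_size, tagging each with
    its offset so the destination (not the gateway) can reassemble the message."""
    fragments = []
    for offset in range(0, len(payload), max_size):
        piece = payload[offset:offset + max_size]
        fragments.append({"offset": offset, "data": piece,
                          "last": offset + max_size >= len(payload)})
    return fragments

# A 1000-byte packet crossing a network whose maximum packet size is 400 bytes:
parts = fragment(b"x" * 1000, 400)
print([(p["offset"], len(p["data"]), p["last"]) for p in parts])
# [(0, 400, False), (400, 400, False), (800, 200, True)]
```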

The second concept was the Transmission Control Program (TCP). Processes that want to communicate present messages to TCP for transmission, and TCP delivers these messages as packets through the network. Since these packets may propagate through different packet switches, the TCP at the receiving end is responsible for reconstructing them and delivering the original message to its destination process. Each packet carries a sequence number that determines the relative location of its text in the message under reconstruction; this handles the out-of-order arrival of packets. End-of-segment (ES) and end-of-message (EM) flags are used by the destination TCP to detect the presence of a checksum for a given segment and to determine whether the message has completely arrived. The source TCP waits for an acknowledgement from the destination TCP to decide whether retransmission of a packet is needed. A window strategy is also used to aid in duplicate detection.
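The receiver-side idea can be sketched as follows, with a deliberately simplified packet layout (a sequence number used as a byte offset plus an end-of-message flag): the destination reorders what has arrived and only hands the message up once there are no gaps; otherwise it waits for retransmission. This is an illustration of the principle, not the paper's actual packet format.

```python
# Hedged sketch of receiver-side reassembly using sequence numbers and an
# end-of-message (EM) flag; the packet layout is simplified for illustration.
def reassemble(packets: list):
    """Return the full message if every byte up to the EM-flagged packet is
    present; otherwise return None (keep waiting / let the sender retransmit)."""
    received = sorted(packets, key=lambda p: p["seq"])   # fix out-of-order arrival
    message, expected = b"", 0
    for p in received:
        if p["seq"] != expected:        # a gap: some packet is still missing
            return None
        message += p["data"]
        expected += len(p["data"])
        if p["em"]:                     # end of message reached with no gaps
            return message
    return None                         # EM packet not seen yet

pkts = [{"seq": 5, "data": b"world", "em": True},
        {"seq": 0, "data": b"hello", "em": False}]
print(reassemble(pkts))   # b'helloworld'
```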

With these mechanisms, issues such as differing packet sizes among networks, transmission failures, sequencing, and flow and error control can be addressed. The authors described a good protocol for transmitting data between existing networks. The design focuses heavily on reliability, so it might not be suitable for types of service that require fast communication between two hosts.

Reference:

Vinton G. Cerf and Robert E. Kahn, "A Protocol for Packet Network Intercommunication", 1974